Unix time[a] is a date and time representation widely used in computing. It measures time by the number of seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the beginning of the Unix epoch, less adjustments made due to leap seconds.
Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases.
Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number which represents the number of seconds that have passed since 00:00:00 UTC on Thursday, 1 January 1970. The second layer encodes that number as a sequence of bits or decimal digits.
Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the Earth in relation to the Sun. TAI, in which every day is precisely 86400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation, at a rate that has historically averaged roughly one second every year and a half. Unix time also treats every day as exactly 86400 seconds long, but it remains tied to UTC, so it cannot represent leap seconds uniquely: each leap second shares its timestamp with the second that immediately precedes or follows it.
Encoding time as a number
Unix time is a single signed number that increments every second, which makes it easier for computers to store and manipulate than conventional date systems. Interpreter programs can then convert it to a human-readable format.
The Unix epoch is the time 00:00:00 UTC on 1 January 1970. There is a problem with this definition, in that UTC did not exist in its current form until 1972; this issue is discussed below. For brevity, the remainder of this section uses ISO 8601 date and time format, in which the Unix epoch is 1970-01-01T00:00:00Z.
The Unix time number is zero at the Unix epoch and increases by exactly 86400 per day since the epoch. Thus 2004-09-16T00:00:00Z, 12677 days after the epoch, is represented by the Unix time number 12677 × 86400 = 1095292800. This can be extended backwards from the epoch too, using negative numbers; thus 1957-10-04T00:00:00Z, 4472 days before the epoch, is represented by the Unix time number −4472 × 86400 = −386380800. This applies within days as well; the time number at any given time of a day is the number of seconds that has passed since the midnight starting that day added to the time number of that midnight.
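The arithmetic above can be sketched directly; `unix_time_at_midnight` is a hypothetical helper name, and `86400` is simply the number of seconds in a normal day:

```python
SECONDS_PER_DAY = 86400

def unix_time_at_midnight(days_since_epoch):
    """Unix time number at 00:00:00 UTC, given a (possibly negative)
    whole number of days relative to 1970-01-01."""
    return days_since_epoch * SECONDS_PER_DAY

print(unix_time_at_midnight(12677))   # 2004-09-16T00:00:00Z -> 1095292800
print(unix_time_at_midnight(-4472))   # 1957-10-04T00:00:00Z -> -386380800
```

Negative day counts extend the scheme backwards from the epoch, exactly as in the 1957 example.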
Sometimes, Unix time is mistakenly referred to as Epoch time, because Unix time is based on an epoch and because of a common misunderstanding that the Unix epoch is the only epoch (often called "the Epoch").
The above scheme means that on a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of the day used in the examples above, the time representations progress as follows:
[Table: corresponding TAI, UTC, and Unix time values across midnight, 16 to 17 September 2004]
When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:
[Table: corresponding TAI, UTC, and Unix time values across the leap second, 31 December 1998 to 1 January 1999]
Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or to the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused; instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.
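A quick way to see this ambiguity: Python's `calendar.timegm` applies the POSIX-style day-count arithmetic directly, so the leap second at the end of 2016 and the following midnight yield the same number. This is a sketch of the ambiguity, not a leap-second-aware library:

```python
import calendar

# POSIX-style conversion: both instants map to the same Unix time number.
leap = calendar.timegm((2016, 12, 31, 23, 59, 60))      # 2016-12-31T23:59:60Z
next_midnight = calendar.timegm((2017, 1, 1, 0, 0, 0))  # 2017-01-01T00:00:00Z
print(leap, next_midnight)  # the same value twice
```

`timegm` accepts a seconds field of 60 because it performs pure arithmetic rather than validating the calendar fields.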
A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.
When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.
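The correction can be sketched with a deliberately truncated leap-second table; a real application would use the full published list, and `elapsed_seconds` is a hypothetical helper name:

```python
# Unix times at whose boundary a positive leap second was inserted.
# Deliberately truncated to one entry; the real table has many.
LEAP_SECOND_BOUNDARIES = [1483228800]  # 2017-01-01T00:00:00Z

def elapsed_seconds(t_start, t_end):
    """True elapsed seconds between two Unix times, adding back one
    second for each positive leap second inside the interval."""
    naive = t_end - t_start
    inserted = sum(1 for b in LEAP_SECOND_BOUNDARIES if t_start < b <= t_end)
    return naive + inserted

# The 'Unix day' ending at the 2016-12-31 leap second really lasted 86401 s.
print(elapsed_seconds(1483228800 - 86400, 1483228800))  # 86401
```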
A Unix time number is easily converted back into a UTC time by taking the quotient and remainder of the Unix time number divided by 86400. The quotient is the number of days since the epoch, and the remainder is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
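The quotient-and-remainder decomposition can be sketched with Python's `divmod`; splitting the seconds-in-day further into hours, minutes and seconds is ordinary base-60 arithmetic (`split_unix_time` is a hypothetical helper name):

```python
def split_unix_time(t):
    """Decompose a Unix time number into (days since epoch, h, m, s).
    Python's floor division also handles negative (pre-epoch) times."""
    days, seconds_in_day = divmod(t, 86400)
    hours, rem = divmod(seconds_in_day, 3600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

print(split_unix_time(1095292800))  # (12677, 0, 0, 0) -> 2004-09-16T00:00:00Z
```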
Non-synchronous Network Time Protocol-based variant
Commonly a Mills-style Unix clock is implemented with leap second handling that is not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described in Mills' paper. This is what happens across a positive leap second:
[Table: corresponding TAI, UTC, leap-second state, and Unix clock values across the leap second, 31 December 1998 to 1 January 1999]
This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.
In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.
The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
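The decoding for the hypothetical synchronous clock described above can be sketched as follows. The state names follow the `ntp_adjtime` convention (TIME_OK, TIME_INS, TIME_WAIT), but the function itself is an illustration of the idea, not an actual NTP API:

```python
def decode(clock_value, state):
    """Decode a (count, state) pair from a hypothetical synchronous clock.
    Returns (unix_time_number, is_leap_second)."""
    if state == "TIME_INS":
        # The inserted second 23:59:60: the count already shows the next
        # day's value, but UTC is still inside the leap second.
        return clock_value, True
    # TIME_OK / TIME_WAIT: the count is an ordinary time number; during
    # TIME_WAIT it is the repeat of the value seen during the leap.
    return clock_value, False

print(decode(915148800, "TIME_INS"))   # during the 1998-12-31 leap second
print(decode(915148800, "TIME_WAIT"))  # the repeated second just after it
```

The point of the sketch is that the same count appears twice, and only the state variable distinguishes the two UTC seconds.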
Variant that counts leap seconds
Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds; some Linux systems are configured this way. Time kept in this fashion is sometimes referred to as "TAI" (although time stamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that don't count leap seconds).
Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have.
In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based time stamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see section UTC basis below).
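The table lookup can be sketched with a deliberately truncated table; a real conversion needs the full published list, and the offset is undefined before 1972. The boundary values below match the tz database's "right" configuration quoted in the references:

```python
# (posix_time at which the offset takes effect, TAI - UTC in seconds)
# Deliberately truncated to the first two entries.
TAI_MINUS_UTC = [
    (63072000, 10),   # 1972-01-01: TAI - UTC = 10 s
    (78796800, 11),   # 1972-07-01: after the first leap second
]

def tai_style_from_posix(t):
    """Convert a POSIX time number to the leap-second-counting variant
    (seconds since 1970-01-01T00:00:10 TAI), for covered dates."""
    offset = 0
    for boundary, tai_minus_utc in TAI_MINUS_UTC:
        if t >= boundary:
            offset = tai_minus_utc - 10  # the epoch already absorbs 10 s
    return t + offset

print(tai_style_from_posix(78796800))  # 78796801: one leap second counted
```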
This system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system, in which the epoch was 1970-01-01T00:00:00 TAI rather than 1970-01-01T00:00:10 TAI, was proposed for inclusion in ISO C's <time.h>, but only the UTC part was accepted in 2011. A tai_clock does, however, exist in C++20.
Representing the number
A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant.
The time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. Being 32 bits, it covers a range of about 136 years in total. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 03:14:07 UTC on 2038-01-19, this representation will overflow in what is known as the year 2038 problem.
In some newer operating systems,
time_t has been widened to 64 bits. This expands the representable range by approximately 292 billion years in each direction, over twenty times the present age of the universe in either direction.
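The ranges quoted above follow directly from the integer widths, as a back-of-the-envelope sketch ignoring calendar details shows:

```python
SECONDS_PER_YEAR = 365.2425 * 86400  # mean Gregorian year

for bits in (32, 64):
    max_t = 2 ** (bits - 1) - 1      # largest signed value of that width
    years = max_t / SECONDS_PER_YEAR
    print(f"{bits}-bit signed time_t: about {years:,.0f} years each way")
```

A 32-bit signed count reaches about 68 years each way from 1970 (hence 1901 and 2038); a 64-bit count reaches roughly 292 billion years each way.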
There was originally some controversy over whether the Unix
time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for
time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit
time_t, though older releases used a signed type.
The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the
<time.h> header file. The ISO C standard states that
time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires
time_t to be an integer type, but does not mandate that it be signed or unsigned.
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a
time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in
struct timeval) or billionths (in
struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
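The timespec-style split can be mimicked in any language by dividing a nanosecond count; `to_timespec` is a hypothetical helper name mirroring the two fields of struct timespec:

```python
def to_timespec(nanoseconds_since_epoch):
    """Split a nanosecond count into (tv_sec, tv_nsec): the integral
    Unix time and the fractional part in billionths."""
    tv_sec, tv_nsec = divmod(nanoseconds_since_epoch, 1_000_000_000)
    return tv_sec, tv_nsec

print(to_timespec(1095292800_123456789))  # (1095292800, 123456789)
```

The millionths-based struct timeval split is identical with a divisor of 1,000,000.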
The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.
The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time.
The meaning of Unix time values below +63072000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use.
As of 2009, the possibility of ending the use of leap seconds in civil time is being considered. A likely means to execute this change is to define a new time scale, called International Time, that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same.
The earliest versions of Unix time used a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. The value 60 Hz still appears in some software interfaces as a result. As late as November 1971, Unix was still counting time in sixtieths of a second since an epoch of 1 January 1971, a year later than the epoch currently used. This timestamp could only represent a range of about 2.5 years, forcing the epoch and the units counted to be redefined; the Third Edition UNIX manual specifies an epoch of 1 January 1972. Early definitions of Unix time also lacked time zones.
The current epoch of 00:00:00 UTC on 1 January 1970 was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The unit was changed from sixtieths of a second to whole seconds in order to avoid short-term overflow.
When POSIX.1 was written, the question arose of how to precisely define
time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch (at the expense of complexity in conversions with civil time) or become a representation of civil time (at the expense of inconsistency around leap seconds). Computer clocks of the era were not set sufficiently precisely to form a precedent one way or the other.
The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and treated 2100 as a leap year.
The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.
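The two rules can be compared directly. The pre-2001 definition used only the divide-by-4 term; the 2001 edition added the century corrections. The following is a transcription of the POSIX.1-2001 "Seconds Since the Epoch" formula (integer division throughout), shown as a sketch with a hypothetical function name:

```python
def posix_seconds(tm_year, tm_yday, tm_hour=0, tm_min=0, tm_sec=0):
    """POSIX.1-2001 'Seconds Since the Epoch' formula.
    tm_year is years since 1900; tm_yday is the day of year from 0."""
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400      # the only term pre-2001
            - ((tm_year - 1) // 100) * 86400     # century correction (2001)
            + ((tm_year + 299) // 400) * 86400)  # 400-year correction (2001)

# 2004-09-16T00:00:00Z: year 104 since 1900, day-of-year index 259
print(posix_seconds(104, 259))  # 1095292800
```

Without the last two terms, years such as 2100 would incorrectly contribute a leap day.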
Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.
Range of representable times
Unix time by design does not require a specific storage size, but most common implementations of Unix time use a signed integer with the word size of the underlying computer. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, many programs using Unix time store it in signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2³¹ − 1, and the minimum value is −2³¹, making it impossible to represent dates before 20:45:52 UTC on December 13, 1901 or after 03:14:07 UTC on January 19, 2038. The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901. The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901.
64-bit storage of Unix time is generally assumed not to have issues with date range limitations, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, around 42 times the current estimated age of the universe.
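The wraparound behind the Year 2038 problem can be sketched by reducing a count into a signed 32-bit value, as a hypothetical overflowing implementation would. Python integers do not overflow, so the reduction is written out explicitly:

```python
def as_signed_32bit(t):
    """Interpret a Unix time number as a wrapped signed 32-bit integer."""
    return (t + 2**31) % 2**32 - 2**31

# One second past the 32-bit maximum wraps to the 1901 minimum.
print(as_signed_32bit(2**31 - 1))  #  2147483647 -> 2038-01-19T03:14:07Z
print(as_signed_32bit(2**31))      # -2147483648 -> 1901-12-13T20:45:52Z
```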
Unix time is not the only standard for time that counts away from an epoch. On Windows, the
FILETIME type stores time as a count of 100-nanosecond intervals that have elapsed since 0:00 GMT on January 1, 1601; it is used to store time stamps for files, and appears in some protocols used primarily, but not exclusively, on Windows computers, such as the Active Directory Time Service and the Server Message Block protocols. The Network Time Protocol, used to coordinate time between computers, uses an epoch of January 1, 1900, counted in an unsigned 32-bit integer for seconds and another unsigned 32-bit integer for fractional seconds; the seconds counter rolls over[b] every 2³² seconds (about once every 136 years).
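Conversion between the FILETIME and Unix epochs is a fixed-offset calculation: the two epochs are 11644473600 seconds apart, and FILETIME ticks are 100 ns. A sketch, with hypothetical helper names:

```python
EPOCH_DIFF_SECONDS = 11_644_473_600  # 1601-01-01 to 1970-01-01, both 0:00
TICKS_PER_SECOND = 10_000_000        # FILETIME counts 100 ns intervals

def filetime_from_unix(t):
    """Convert a Unix time number to a Windows FILETIME tick count."""
    return (t + EPOCH_DIFF_SECONDS) * TICKS_PER_SECOND

def unix_from_filetime(ticks):
    """Convert a FILETIME tick count back to a Unix time number."""
    return ticks // TICKS_PER_SECOND - EPOCH_DIFF_SECONDS

print(filetime_from_unix(0))  # tick count at the Unix epoch
```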
Many applications and programming languages provide methods for storing time with an explicit timezone. There are also a number of time format standards which exist to be readable by both humans and computers, such as ISO 8601.
Notable events in Unix time
Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing
time_t values in decimal. Among some groups, round binary numbers are also celebrated, such as 2³⁰ (1073741824), which occurred at 13:37:04 UTC on Saturday, 10 January 2004.
The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
- At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format[c] (1973-10-17) within the digits of Unix time (119731017) took place.
- At 01:46:40 UTC on Sunday, 9 September 2001, the Unix billennium (Unix time number 1000000000) was celebrated. The name billennium is a portmanteau of billion and millennium. Some programs which stored timestamps using a text representation encountered sorting errors: in a text sort, times after the turnover, beginning with the digit 1, erroneously sorted before earlier times beginning with the digit 9. Affected programs included the popular Usenet reader KNode and e-mail client KMail, part of the KDE desktop environment. Such bugs were generally cosmetic in nature and quickly fixed once problems became apparent. The problem also affected many Filtrix document-format filters provided with Linux versions of WordPerfect; a patch was created by the user community to solve this problem, since Corel no longer sold or supported that version of the program.
- At 23:31:30 UTC on Friday, 13 February 2009, the decimal representation of Unix time reached 1234567890 seconds. Google celebrated this with a Google doodle. Parties and other celebrations were held around the world, among various technical subcultures, to celebrate the 1234567890th second.
- At 09:06:49 UTC on Friday, 16 June 2034, the decimal digits of the Unix time value will match the current year, month, date and hour (2034061609).
- At 03:14:08 UTC on Tuesday, 19 January 2038, 32-bit versions of the Unix timestamp will cease to work, as the value will overflow the largest value that can be held in a signed 32-bit number (7FFFFFFF in hexadecimal, or 2147483647). Before this moment, software using 32-bit time stamps will need to adopt a new convention for time stamps, and file formats using 32-bit time stamps will need to be changed to support larger time stamps or a different epoch. If unchanged, the next second will be incorrectly interpreted as 20:45:52 UTC on Friday, 13 December 1901. This is referred to as the Year 2038 problem.
- At 06:28:15 UTC on Sunday, 7 February 2106, Unix time will reach FFFFFFFF in hexadecimal (4294967295) seconds, the maximum attainable for systems that hold the time in 32-bit unsigned integers. For some of these systems, the next second will be incorrectly interpreted as 00:00:00 UTC on Thursday, 1 January 1970. Other systems may experience an overflow error with unpredictable outcomes.
- At 15:30:08 UTC on Sunday, 4 December 292277026596, Unix time will overflow the largest value that can be held in a signed 64-bit number. This duration is nearly 22 times the estimated current age of the universe, which is 1.37×10¹⁰ (13.7 billion) years.
In popular culture
Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".
- ^ Unix time is also known as "Epoch time", "POSIX time", "seconds since the Epoch", "Unix timestamp" or "UNIX Epoch time".
- ^ Because NTP is designed to coordinate time between computers which are expected to be close to each other in time, the time can be safely rolled over without creating ambiguity.
- ^ cited retroactively since ISO 8601 was published in 1988.
- ^ a b Farhad, Manjoo (8 September 2001). "Unix Tick Tocks to a Billion". Wired. ISSN 1059-1028. Archived from the original on 11 September 2022. Retrieved 16 October 2022.
- ^ "The Open Group Base Specifications Issue 7, Rationale: Base Definitions, section A.4 General Concepts". The Open Group. Archived from the original on 15 November 2017. Retrieved 9 September 2019.
- ^ a b c d e "The Open Group Base Specifications Issue 7, section 4.16 Seconds Since the Epoch". The Open Group. Archived from the original on 22 December 2017. Retrieved 22 January 2017.
- ^ Matthew, Neil; Stones, Richard (2008). "The Linux Environment". Beginning Linux Programming. Indianapolis, Indiana, US: Wiley. p. 148. ISBN 978-0-470-14762-7.
- ^ Apple File System Reference (PDF), p. 57, retrieved 19 October 2022,
This timestamp is represented as the number of nanoseconds since January 1, 1970 at 0:00 UTC, disregarding leap seconds
- ^ "time — Time access and conversions", Python documentation, archived from the original on 22 July 2022, retrieved 25 July 2022
- ^ "Date and Time Functions", MySQL 8.0 Reference Manual, retrieved 19 October 2022
- ^ "Epoch Converter - Unix Timestamp Converter". Epoch Converter. Archived from the original on 12 January 2020. Retrieved 12 January 2020.
- ^ "Handling timestamps using Format Date in Shortcuts". Apple Inc. Archived from the original on 19 June 2019. Retrieved 19 June 2019.
- ^ "RAW date format in CSV exported reports". International Business Machines Corporation (IBM). 22 April 2016. Archived from the original on 19 June 2019. Retrieved 19 June 2019.
- ^ "TIMESTAMP BY (Azure Stream Analytics)". Microsoft Corporation. Archived from the original on 19 June 2019. Retrieved 19 June 2019.
- ^ Mills, David L. (12 May 2012). "The NTP Timescale and Leap Seconds". eecis.udel.edu. Archived from the original on 15 May 2012. Retrieved 21 August 2017.
- ^ "Precision timekeeping". Sources for time zone and daylight saving time data. Archived from the original on 16 October 2017. Retrieved 30 May 2022.
The tz code and data support leap seconds via an optional "right" configuration where a computer's internal time_t integer clock counts every TAI second, as opposed to the default "posix" configuration where the internal clock ignores leap seconds. The two configurations agree for timestamps starting with 1972-01-01 00:00:00 UTC (time_t 63 072 000) and diverge for timestamps starting with time_t 78 796 800, which corresponds to the first leap second 1972-06-30 23:59:60 UTC in the "right" configuration, and to 1972-07-01 00:00:00 UTC in the "posix" configuration.
- ^ a b "Time Scales". Network Time Protocol Wiki. 24 July 2019. Archived from the original on 12 January 2020. Retrieved 12 January 2020.
- ^ Markus Kuhn. "Modernized API for ISO C". www.cl.cam.ac.uk. Archived from the original on 26 September 2020. Retrieved 31 August 2020.
- ^ "timespec". NetBSD Manual Pages. 12 April 2011. Archived from the original on 10 August 2019. Retrieved 5 July 2019.
- ^ "time.h(0P)". Linux manual page. Archived from the original on 27 June 2019. Retrieved 5 July 2019.
- ^ McCarthy, D. D.; Seidelmann, P. K. (2009). TIME—From Earth Rotation to Atomic Physics. Weinheim: Wiley–VCH Verlag GmbH & Co. KGaA. p. 232. ISBN 978-3-527-40780-4.
- ^ Unix Programmer's Manual (PDF) (1st ed.). 3 November 1971. Archived (PDF) from the original on 5 March 2022. Retrieved 28 March 2012.
time returns the time since 00:00:00, Jan. 1, 1971, measured in sixtieths of a second.
- ^ Unix Programmer's Manual (PDF) (3rd ed.). 15 March 1972. Retrieved 11 February 2023.
time returns the time since 00:00:00, Jan. 1, 1972, measured in sixtieths of a second.
- ^ a b c Rochkind, Mark (2004). Advanced UNIX Programming (2nd ed.). Addison-Wesley. pp. 56–63. ISBN 978-0-13-141154-8.
- ^ "FILETIME (minwinbase.h) - Win32 apps". Microsoft Learn. Microsoft. Retrieved 9 March 2023.
- ^ "File Times - Win32 apps". Microsoft Learn. Microsoft. Retrieved 9 March 2023.
- ^ "How to convert date/time attributes in Active Directory to standard time format". Microsoft Learn. Microsoft. Retrieved 20 October 2022.
- ^ W. Richard Stevens; Bill Fenner; Andrew M. Rudoff (2004). UNIX Network Programming. Addison-Wesley Professional. pp. 582–. ISBN 978-0-13-141155-5. Archived from the original on 30 March 2019. Retrieved 16 October 2016.
- ^ "datetime — Basic date and time types". Python Standard Library Reference. Python Software Foundation. Retrieved 20 October 2022.
Attributes: year, month, day, hour, minute, second, microsecond, and tzinfo.
- ^ a b Tweney, Dylan (12 February 2009). "Unix Lovers to Party Like It's 1234567890". Wired. Archived from the original on 29 March 2014. Retrieved 12 March 2017.
- ^ "Slashdot | date +%s Turning 1111111111". 17 March 2005. Archived from the original on 12 January 2020. Retrieved 12 January 2020.[unreliable source?]
- ^ "Unix time facts & trivia – Unix Time . Info". Archived from the original on 27 October 2017.
- ^ "UNIX Approaches Ripe Old Age of One Billion". Electromagnetic.net. Archived from the original on 13 April 2013. Retrieved 6 December 2012.
- ^ Neumann, Peter G. (15 October 2001). "The Risks Digest Volume 21: Issue 69". Catless.ncl.ac.uk. Archived from the original on 22 October 2015. Retrieved 6 December 2012.
- ^ "Technical Problems". linuxmafia.com. Archived from the original on 11 October 2012. Retrieved 21 August 2017.
- ^ nixCraft. "Humor: On Feb, Friday 13, 2009 Unix time Will Be 1234567890". Cyberciti.biz. Retrieved 6 December 2012.
- ^ "Google 1234567890 Logo". Google Inc. Archived from the original on 11 January 2013. Retrieved 28 January 2013.
- ^ Ahmed, Murad (13 February 2009). "At the third stroke, the Unix time will be 1234567890". The Times. Archived from the original on 14 November 2016. Retrieved 12 January 2020.
- ^ "Countdown to Unix Time 2,034,061,609". Epoch Converter. Retrieved 21 March 2023.
- ^ "Unix Time Stamp.com". UnixTimeStamp.com. Archived from the original on 18 October 2022. Retrieved 12 January 2020.
- ^ Spinellis, Diomidis (7 April 2006). Code Quality: The Open Source Perspective. ISBN 9780321166074. Archived from the original on 18 October 2022. Retrieved 22 November 2020.
- ^ Saxena, Ashutosh; Rawat, Sanjay. Y2K38. IDRBT Working Paper No. 9.
- ^ Mashey, John R. (27 December 2004). "Languages, Levels, Libraries, and Longevity". Queue. 2 (9): 32–38. doi:10.1145/1039511.1039532. S2CID 28911378. Archived from the original on 10 August 2019. Retrieved 12 January 2020.
- Unix Programmer's Manual, first edition
- Personal account of the POSIX decisions by Landon Curt Noll
- chrono-Compatible Low-Level Date Algorithms – algorithms to convert between Gregorian and Julian dates and the number of days since the start of Unix time