
Scientific notation

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 86.176.212.196 (talk) at 02:55, 21 February 2012 (→ Examples). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Scientific notation is a way of writing numbers that are too large or too small to be conveniently written in standard decimal notation. Scientific notation has a number of useful properties and is commonly used in calculators and by scientists, mathematicians, health professionals, and engineers.

In scientific notation, all numbers are written in the form

a × 10^b

(a times ten raised to the power of b), where the exponent b is an integer, and the coefficient a is any real number (however, see normalized notation below), called the significand or mantissa. The term "mantissa" may cause confusion, however, because it can also refer to the fractional part of the common logarithm. If the number is negative then a minus sign precedes a (as in ordinary decimal notation).

Standard decimal notation    Normalized scientific notation
300                          3×10^2
4,000                        4×10^3
−53,000                      −5.3×10^4
6,720,000,000                6.72×10^9
0.000 000 007 51             7.51×10^−9

Normalized notation

Any given number can be written in the form a×10^b in many ways; for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.

In normalized scientific notation, the exponent b is chosen such that the absolute value of a remains at least one but less than ten (1 ≤ |a| < 10). Following this rule, 350 is always written as 3.5×10^2. This form allows easy comparison of two numbers of the same sign in a, as the exponent b gives the number's order of magnitude. In normalized notation, the exponent b is negative for a number with absolute value between 0 and 1 (e.g., negative one half is written as −5×10^−1). The 10 and its exponent are usually omitted when the exponent is 0. Note that 0 itself cannot be written in normalized scientific notation: no significand with 1 ≤ |a| < 10 multiplied by a power of ten can equal zero, so the exponent would be undefined.
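The normalization rule above can be sketched in code. The following Python sketch is illustrative only; the function name is my own, and the guard against floating-point rounding error at exact powers of ten is an implementation detail, not part of the definition:

```python
import math

def normalize(x):
    """Return (a, b) with x == a * 10**b and 1 <= |a| < 10 (assumes x != 0)."""
    b = math.floor(math.log10(abs(x)))
    a = x / 10.0**b
    # log10 can round at exact powers of ten; nudge a back into [1, 10).
    if abs(a) >= 10:
        a, b = a / 10, b + 1
    elif abs(a) < 1:
        a, b = a * 10, b - 1
    return a, b

print(normalize(350))    # (3.5, 2)
print(normalize(1000))   # (1.0, 3)
```

As the text notes, zero has no such representation, so the sketch assumes a nonzero input.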

Normalized scientific form is the typical form of expression of large numbers in many fields, except during intermediate calculations or when an unnormalized form, such as engineering notation, is desired. (Normalized) scientific notation is often called exponential notation, although the latter term is more general and also applies when a is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10 (as in 315 × 2^20).

Engineering notation

Engineering notation differs from normalized scientific notation in that the exponent b is restricted to multiples of 3. Consequently, the absolute value of a is in the range 1 ≤ |a| < 1000, rather than 1 ≤ |a| < 10. Though similar in concept, engineering notation is rarely called scientific notation. This allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometers" or written as 12.5 nm, while its scientific notation counterpart 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight meters".
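The restriction of b to multiples of 3 can be computed directly; a minimal Python sketch (the function name is my own, and floating-point log10 rounding at exact powers of ten is not guarded here):

```python
import math

def engineering(x):
    """Return (a, b) with x == a * 10**b, b a multiple of 3, 1 <= |a| < 1000."""
    b = math.floor(math.log10(abs(x)))
    b -= b % 3   # Python's % is nonnegative, so this is correct for negative b too
    a = x / 10.0**b
    return a, b

print(engineering(40000000))   # (40.0, 6), i.e. 40×10^6
```

For the example in the text, engineering(1.25e-8) yields a significand of 12.5 and an exponent of −9, matching "12.5 nm".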

Significant figures

A significant figure is a digit in a number that adds to its precision. This includes all nonzero digits, zeroes between significant digits, and zeroes indicated to be significant. Leading zeroes, and trailing zeroes that serve only as placeholders, are not significant because they exist only to show the scale of the number. Therefore, 1,230,400 has five significant figures: 1, 2, 3, 0, and 4; the final two zeroes serve only as placeholders and add no precision to the number.

When a number is converted into normalized scientific notation, it is scaled to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are absorbed into the exponent. Following these rules, 1,230,400 becomes 1.2304×10^6.
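Python's standard-library decimal module makes this behavior concrete: Decimal values remember their digits, and normalize() rewrites a value in exponential form, moving the placeholder zeroes into the exponent exactly as described:

```python
from decimal import Decimal

x = Decimal("1230400").normalize()
print(x)             # 1.2304E+6
# The five significant digits survive; the exponent absorbs the placeholders.
print(x.as_tuple())  # DecimalTuple(sign=0, digits=(1, 2, 3, 0, 4), exponent=2)
```

Note that as_tuple() reports the value as 12304 × 10^2, an unnormalized but equivalent representation.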

Ambiguity of the last digit in scientific notation

It is customary in scientific measurements to record all the significant digits from the measurements, and to estimate one additional digit if there is any information at all available to the observer to make an estimate. The resulting number is considered more valuable than it would be without that extra digit, and it is considered a significant digit because it contains some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).

Additional information about precision can be conveyed through additional notations. In some cases, it may be useful to know how exact the final significant digit is. For instance, the accepted value of the unit of elementary charge can properly be expressed as 1.602176487(40)×10^−19 C,[1] which is shorthand for (1.602176487±0.000000040)×10^−19 C.
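The concise parenthesized form can be expanded mechanically: the digits in parentheses give the uncertainty in units of the last quoted decimal place. A small Python sketch (the function name and the restriction to the simple d.ddd(uu) pattern are my own assumptions):

```python
import re

def parse_concise(s):
    """Parse 'd.ddd(uu)' concise notation into (value, uncertainty)."""
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)\)", s)
    whole, frac, unc = m.groups()
    value = float(f"{whole}.{frac}")
    # The parenthesized digits apply to the last places of the fraction.
    uncertainty = int(unc) * 10.0 ** -len(frac)
    return value, uncertainty

v, u = parse_concise("1.602176487(40)")
print(v, u)   # 1.602176487 with an uncertainty of 4e-08
```

Applied to the elementary-charge significand above, this recovers 1.602176487 ± 0.000000040, matching the expanded form.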

E notation

A calculator display showing the Avogadro constant in E notation

Most calculators and many computer programs present very large and very small results in scientific notation. Because superscripted exponents like 10^7 cannot always be conveniently displayed, the letter E or e is often used to represent "times ten raised to the power of" (which would be written as "× 10^b") and is followed by the value of the exponent. Note that in this usage the character e is not related to the mathematical constant e or the exponential function e^x (a confusion that is less likely with capital E); and though it stands for exponent, the notation is usually referred to as (scientific) E notation or (scientific) e notation, rather than (scientific) exponential notation (though the latter also occurs).

Examples and alternatives

  • In the C++, FORTRAN, MATLAB, Perl, Java[2] and Python programming languages, 6.0221418E23 or 6.0221418e23 is equivalent to 6.0221418×10^23. FORTRAN also uses "D" to signify double-precision numbers.[3]
  • The ALGOL 60 programming language uses a subscript-ten "₁₀" character instead of the letter E, for example: 6.0221415₁₀23.[4]
  • The ALGOL 68 programming language has the choice of 4 characters: e, E, \, or ₁₀. By examples: 6.0221415e23, 6.0221415E23, 6.0221415\23 or 6.0221415₁₀23.[5]
  • The Decimal Exponent Symbol is part of "The Unicode Standard 6.0", e.g. 6.0221415⏨23; it was included to accommodate usage in the programming languages ALGOL 60 and ALGOL 68.
  • The TI-83 series and TI-84 Plus series of calculators use a stylized E character to display the decimal exponent and the 10 character to denote an equivalent operator.[7]
  • The Simula programming language requires the use of & (or && for long), for example: 6.0221415&23 (or 6.0221415&&23).[6]

Order of magnitude

Scientific notation also enables simpler order-of-magnitude comparisons. A proton's mass is 0.0000000000000000000000000016726 kg. If written as 1.6726×10^−27 kg, it is easier to compare this mass with that of an electron, given below. The order of magnitude of the ratio of the masses can be obtained by comparing the exponents instead of the more error-prone task of counting the leading zeros. In this case, −27 is larger than −31 and therefore the proton is roughly four orders of magnitude (about 10,000 times) more massive than the electron.
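The comparison of exponents can be checked numerically. A Python sketch using the particle masses from this article (variable names are my own):

```python
import math

m_p = 1.6726e-27    # proton mass, kg (from the text)
m_e = 9.1094e-31    # electron mass, kg (rounded from the Examples section)

# The exponent in normalized notation is floor(log10(x)).
order_p = math.floor(math.log10(m_p))   # -27
order_e = math.floor(math.log10(m_e))   # -31
print(order_p - order_e)                # 4 orders of magnitude
print(round(m_p / m_e))                 # the exact ratio is about 1836
```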

Scientific notation also avoids misunderstandings due to regional differences in certain quantifiers, such as billion, which might indicate either 10^9 or 10^12.

Use of spaces

In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) before and after "×", or in front of "E" or "e", is sometimes omitted, though it is less common to do so before the alphabetical character.[7]

Examples

  • An electron's mass is about 0.00000000000000000000000000000091093822 kg. In scientific notation, this is written 9.1093822×10^−31 kg.
  • The Earth's mass is about 5973600000000000000000000 kg. In scientific notation, this is written 5.9736×10^24 kg.
  • The Earth's circumference is approximately 40000000 m. In scientific notation, this is 4×10^7 m. In engineering notation, this is written 40×10^6 m. In SI writing style, this may be written "40 Mm" (40 megameters).
  • An inch is 25400 micrometers. Describing an inch as 2.5400×10^4 μm unambiguously states that this conversion is correct to the nearest micrometer. An approximated value with only three significant digits would be 2.54×10^4 μm instead. In this example, the number of significant zeros is actually infinite (which is not the case with most scientific measurements, which have a limited degree of precision). It can properly be written with the minimum number of significant zeros used with other numbers in the application (there is no need to have more significant digits than other factors or addends). Or a bar can be written over a single zero, indicating that it repeats forever. The bar symbol is just as valid in scientific notation as it is in decimal notation.

Using scientific notation

Converting

To convert from ordinary decimal notation to scientific notation, move the decimal separator the desired number of places to the left or right, so that the significand will be in the desired range (between 1 and 10 for the normalized form). If the decimal point was moved n places to the left then multiply by 10^n; if it was moved n places to the right then multiply by 10^−n. For example, starting with 1230000, move the decimal point six places to the left, yielding 1.23, and multiply by 10^6 to give the result 1.23×10^6. Similarly, starting with 0.000000456, move the decimal point seven places to the right, yielding 4.56, and multiply by 10^−7 to give the result 4.56×10^−7.

If the decimal separator did not move then the exponent multiplier is logically 10^0, which is correct since 10^0 = 1. However, the exponent part "× 10^0" is normally omitted, so, for example, 1.234×10^0 is just written as 1.234.

To convert from scientific notation to ordinary decimal notation, take the significand and move the decimal separator by the number of places indicated by the exponent—left if the exponent is negative, or right if the exponent is positive. Add leading or trailing zeroes as necessary. For example, given 9.5×10^10, move the decimal point ten places to the right to yield 95,000,000,000.

Conversion between different scientific notation representations of the same number is achieved by performing opposite operations of multiplication or division by a power of ten on the significand and the exponent parts. The decimal separator in the significand is shifted n places to the left (or right), corresponding to division (multiplication) by 10^n, and n is added to (subtracted from) the exponent, corresponding to a canceling multiplication (division) by 10^n. For example:

1.234×10^3 = 12.34×10^2 = 123.4×10^1 = 1234
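The chain of equalities above can be verified numerically; a short Python sketch (the helper name is my own, and a small tolerance is used because floating-point products are inexact):

```python
def value(a, b):
    """Evaluate a × 10^b as a float."""
    return a * 10.0**b

# The four representations of 1234 from the example above.
reps = [(1.234, 3), (12.34, 2), (123.4, 1), (1234.0, 0)]
for a, b in reps:
    print(value(a, b))   # each is 1234, up to rounding error
```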

Basic operations

Given two numbers in scientific notation,

x₀ = a₀×10^b₀

and

x₁ = a₁×10^b₁,

multiplication and division are performed using the rules for operation with exponentials:

x₀x₁ = a₀a₁×10^(b₀+b₁)

and

x₀/x₁ = (a₀/a₁)×10^(b₀−b₁).

Some examples are:

(3×10^4)(2×10^3) = 6×10^7

and

(9×10^5)/(3×10^2) = 3×10^3.

Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can simply be added or subtracted. First, rewrite one of the numbers so that its exponent matches the other's:

x₁ = c×10^b₀, where c = a₁×10^(b₁−b₀).

Next, add or subtract the significands:

x₀ ± x₁ = (a₀ ± c)×10^b₀.

An example:

2.34×10^−5 + 5.67×10^−6 = 2.34×10^−5 + 0.567×10^−5 = 2.907×10^−5
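These rules can be sketched as operations on (significand, exponent) pairs. A minimal Python sketch (the representation as tuples and the function names are my own):

```python
def multiply(x, y):
    """(a0×10^b0)(a1×10^b1) = (a0·a1) × 10^(b0+b1)."""
    (a0, b0), (a1, b1) = x, y
    return a0 * a1, b0 + b1

def add(x, y):
    """Rewrite to a common exponent, then add the significands."""
    (a0, b0), (a1, b1) = x, y
    if b0 < b1:                       # keep the larger exponent
        (a0, b0), (a1, b1) = (a1, b1), (a0, b0)
    return a0 + a1 / 10.0**(b0 - b1), b0

print(multiply((4.0, 5), (2.0, -2)))  # (8.0, 3)
print(add((2.0, 4), (3.0, 3)))        # (2.3, 4)
```

Note that the result is not always normalized: multiplying (5, 0) by (2, 0) gives a significand of 10, which would need renormalizing to 1×10^1.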


Notes and references

  1. ^ NIST value for the elementary charge
  2. ^ http://download.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
  3. ^ http://www.math.hawaii.edu/lab/197/fortran/fort3.htm#double
  4. ^ Report on the Algorithmic Language ALGOL 60, Ed. P. Naur, Copenhagen 1960
  5. ^ "Revised Report on the Algorithmic Language Algol 68". 1973. Retrieved April 30, 2007.
  6. ^ "SIMULA Standard As defined by the SIMULA Standards Group - 3.1 Numbers". 1986. Retrieved October 6, 2009.
  7. ^ Samples of usage of terminology and variants: [1], [2], [3], [4], [5], [6]
