Talk:Rounding

From Wikipedia, the free encyclopedia

Redundant with Significant figures/Rounding?

This page is basically redundant with Significant figures#Rounding, but I feel that this is better written, so warrants merging instead of deleting. ~ Irrel 17:33, 28 September 2005 (UTC)

Please restrain yourself. They are very different subjects. And this page doesn't describe rounding very well. --Cat5nap 05:46, 3 October 2005 (UTC)
The discussion at Talk:Significant figures#Merge is overwhelmingly against the suggested merge of Rounding with Significant figures. I suggest the tags be removed. If anything, I'd consider merging with Truncation. Jimp 13Oct05
  • Well, this got resolved pretty much as I intended (altho the merge went the other way). Regardless, I hardly think putting up a "merge?" template shows a lack of restraint, seeing as it's just a request for input. Actually doing the merge, with no discussion -- now that'd be un-restrained. ~ Irrel 20:20, 13 May 2006 (UTC)

Rounding rule error

"but if the five is succeeded by nothing, odd last reportable digits are decreased by one" Shouldn't this be increased?68.227.80.79 05:44, 13 November 2005 (UTC)

Yes. Thanks for spotting this! -- Jitse Niesen (talk) 12:21, 13 November 2005 (UTC)
Hopefully the revised version of the article got this right too. --Jorge Stolfi (talk) 05:52, 1 August 2009 (UTC)

Rounding negative numbers

How are negative numbers handled? According to the article round(-1.5) = -2, which is wrong, right? round(-1.5) = -1 I believe.

There are many different implementations. — Omegatron 19:59, 30 June 2006 (UTC)
There are many implementations, and very few are covered in this article. Here's an external article which covers the field [1]. Hopefully someone has time to incorporate the information from that article without engaging in copyvio. -Harmil 15:55, 14 July 2006 (UTC)
Negative numbers should be properly covered in the current version. --Jorge Stolfi (talk) 05:52, 1 August 2009 (UTC)

The rounding of ALL numbers requires rounding the absolute value of the number and then restoring the original sign (+ or -). The above answer is "-2", not "-1". If the digit preceding a "5" is ODD, then make it EVEN by adding "1". If the digit preceding a "5" is EVEN, then leave it alone, as there are equal amounts of ODD and EVEN digits in the counting system.

Example; "1.6 - 1.6 = 0", "1.6 + (-1.6) = 0", rounding to the 1's unit is "2.0 + (-2.0) = 0" as |-1.6| is 1.6, rounded gives us "2.0".
Example; "1.4 - 1.4 = 0", "1.4 + (-1.4) = 0", rounding to the 1's unit is "1.0 + (-1.0) = 0" as |-1.4| is 1.4, rounded gives us "1.0". 
Example; "1.5 - 1.5 = 0", "1.5 + (-1.5) = 0", rounding to the 1's unit is "2.0 + (-2.0) = 0" as |-1.5| is 1.5, rounded gives us "2.0".
Example; "2.5 - 2.5 = 0", "2.5 + (-2.5) = 0", rounding to the 1's unit is "2.0 + (-2.0) = 0" as |-2.5| is 2.5, rounded gives us "2.0".

The Rounding of a number can never give a value of "0.0".

The rounding depends on the type of rounding being done. Round to even is normally explained as rounding to the nearest even number, without talking about absolute values or adding anything like that. The numbers 0.5 and -0.5 will both round to 0.0 when using round to even; I don't know why you say what you said about it. Dmcq (talk) 19:29, 9 September 2009 (UTC)
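As a side note, this behaviour is easy to check in Python 3, whose built-in round() happens to implement round half to even (a sketch for illustration only, not part of the discussion above):

```python
# Python 3's round() implements round-half-to-even ("banker's rounding"):
# a tie goes to the nearest even integer, with no absolute-value step,
# so both 0.5 and -0.5 round to 0.
assert round(0.5) == 0
assert round(-0.5) == 0
assert round(1.5) == 2       # nearest even neighbour of 1.5 is 2
assert round(2.5) == 2       # nearest even neighbour of 2.5 is 2
assert round(-1.5) == -2
assert round(-2.5) == -2
print("round-half-to-even checks pass")
```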

Organization

I don't know if it's a good idea to merge with floor function, but they are related. A sort of "family tree" of rounding functions:

  • Floor
  • Ceiling
  • Half round
    • Half round up
      • Where up is +∞
      • Where up is away from zero
    • Half round down
      • Where down is −∞
      • Where down is towards zero
    • Half round even
    • Half round odd
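For concreteness, the tie-breaking variants in this tree could be sketched in Python roughly as follows (the function names are my own, invented for illustration; the half_odd tie test relies on ties being exactly representable in binary):

```python
import math

def half_up_pos_inf(x):
    """Round half up, where "up" means toward +infinity."""
    return math.floor(x + 0.5)

def half_up_away_zero(x):
    """Round half up, where "up" means away from zero."""
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

def half_down_neg_inf(x):
    """Round half down, where "down" means toward -infinity."""
    return math.ceil(x - 0.5)

def half_down_toward_zero(x):
    """Round half down, where "down" means toward zero."""
    return int(math.copysign(math.ceil(abs(x) - 0.5), x))

def half_even(x):
    """Round half to even (Python 3's built-in behaviour)."""
    return round(x)

def half_odd(x):
    """Round half to odd."""
    f = math.floor(x)
    if x - f == 0.5:                  # exact tie
        return f if f % 2 == 1 else f + 1
    return round(x)                   # no tie: nearest integer
```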

Rounding of VAT

The VAT Guide cited states:

Note: The concession in this paragraph to round down amounts of VAT is designed for invoice traders and applies only where the VAT charged to customers and the VAT paid to Customs and Excise is the same. As a general rule, the concession to round down is not appropriate to retailers, who should see paragraph 17.6.

and paragraph 17.6 says that rounding down only is not allowed. So I don't think it is "quite clearly" stated. --81.178.31.210 17:27, 2 September 2006 (UTC)

Can anyone answer this question? There is a slot in the article for this kind of example. --Jorge Stolfi (talk) 05:52, 1 August 2009 (UTC)

Which method introduces the most error?

From the "Round-to-even method" section in this article, as of right now:

"When dealing with large sets of scientific or statistical data, where trends are important, traditional rounding on average biases the data upwards slightly. Over a large set of data, or when many subsequent rounding operations are performed as in digital signal processing, the round-to-even rule tends to reduce the total rounding error, with (on average) an equal portion of numbers rounding up as rounding down."

Huh? Doesn't traditional rounding have an equal portion of numbers rounding up and down? In traditional rounding, numbers between 0 and <5 are rounded to 0, while numbers between 5 and <10 are rounded to 10, if 10 is an increase in the next highest digit of the number being rounded. The difference of (<10 - 5) equals the difference of (<5 - 0), doesn't it? Am I missing something? 4.242.147.47 21:13, 19 September 2006 (UTC)

In four cases (1, 2, 3 and 4) the value is rounded down. In five cases (5,6,7,8,9) the value is rounded up. In one case (0) the value is left unchanged. This may be what you were missing. 194.109.22.149 15:14, 28 September 2006 (UTC)

But "unchanged" is not entirely correct, as there may be further digits after the 0. For example, rounded to one decimal place, 2.304 would round to 2.3; it is not unchanged under the traditional rounding scheme, but rather rounded down, thus making five cases for rounding down and five for rounding up.

If we consider rounding all of 0.00, 0.01, ..., 1.00 (101 numbers) to the nearest one, we get 50 0s and 51 1s. The total amount of down-rounding is -0.01+-0.02+...=\sum_{i=1}^{49} -i/100=-12.25, but the amount of up-rounding is 0.50+0.49+...=\sum_{i=1}^{50} i/100=12.75. The average error is thus \frac{-12.25+12.75}{101}=0.0049505. Importantly, this imbalance remains even if we exclude 1 (where you get 50 numbers going each way), and the average error increases (to precisely half the last digit specified; this is not a coincidence). This is the bias; because it doesn't depend on the granularity of the rounding but rather of the data, it's easy to miss. --Tardis 07:10, 31 October 2006 (UTC)
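Tardis's sums can be checked mechanically with exact rational arithmetic, e.g. with Python's fractions module (a sketch, not part of the article):

```python
import math
from fractions import Fraction

def round_half_up(x):
    # "traditional" rounding: ties always go up
    return math.floor(x + Fraction(1, 2))

xs = [Fraction(i, 100) for i in range(101)]      # 0.00, 0.01, ..., 1.00
errs = [round_half_up(x) - x for x in xs]

down = sum(e for e in errs if e < 0)             # -12.25 = -49/4
up = sum(e for e in errs if e > 0)               # +12.75 = 51/4
mean = sum(errs, Fraction(0)) / len(errs)        # 1/202 = 0.0049505...
```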
The number 1.00 should not appear in this calculation. We're interested in rounding the interval [0.00, 1.00) -- that is, all numbers >= 0.00 and < 1.00 (strictly smaller than 1). Once the number 1.00 is removed from the above paragraph, the imbalance disappears (contrary to what's written above). In reality, there is no imbalance in the regular rounding - the interval [0, 0.5) has the same length as the interval [0.5, 1).
But, like 1.00, 0.00 is also not rounded. So the intervals would be (0, 0.5) and [0.5, 1) ; this results in a bias upwards. —The preceding unsigned comment was added by 85.144.113.76 (talk) 00:06, 20 January 2007 (UTC).
If you look at the calculation a bit more closely, you'll see it sums the difference between the number to be rounded and the result of the rounding. In both the 0.00 and 1.00 cases this is 0, so it doesn't matter if you include them in the summation or not.
Tardis said: "This is the bias; because it doesn't depend on the granularity of the rounding but rather of the data, it's easy to miss." This is the crucial point. If the data is quantised (granular) then the error is related to the size of the quantisation. But for any physical process the numbers involved are normally not quantised hence there is no bias at all. If you repeat the calculation above with a rounding granularity of 0.1 still but a data granularity of 0.0001 (say) then you will see this in action. Thus, the conclusion is that if the data set you are rounding has no quantisation, you should use "0.5 goes up" rounding not bankers rounding. 137.222.40.78 (talk) 13:18, 23 May 2008 (UTC)
If your data have "no quantization" (including no discrete probability accumulations), then you can round 0.5 to 23 if you like. It almost never happens, so what does it matter what you do with it? In the real world, this limit is never accessed. --Tardis (talk) 11:11, 9 November 2008 (UTC)
I was just thinking about this issue too, but am convinced that there is an imbalance in rounding 0.5 to 1. Consider the ten values 0.0, 0.1, 0.2 ... 0.9, 1.0. Each covers a 0.1 range. 0.0 covers -0.05 to 0.05. 0.1 covers 0.05 to 0.15. 1.0 covers 0.95 to 1.05. If you want to round to whole values, you have 0 and 1 available. 0 covers -0.5 to 0.5, and 1 covers 0.5 to 1.5. 0.0 through 0.4 thus clearly convert to 0, and 0.6 through 1.0 clearly convert to 1. 0.5's range is perfectly divided in two by the ranges that 0 and 1 cover. Converting 0.5 to 0 is just as valid as converting it to 1. The only way to decide which is better is to examine the context; I don't see any clear correct choice for all cases, since either direction involves tradeoffs. 72.48.98.66 (talk) 12:27, 20 June 2010 (UTC)
Yes, that's why round to even is used: it means half of them are rounded up and half down. Round to even is used instead of round to odd so you are less likely to get odd digits, in particular 5, at the end in subsequent calculations, which means you do less rounding overall. Dmcq (talk) 12:50, 20 June 2010 (UTC)
To weigh in on this little debate, rounding 0.5 up leads to a more uniform distribution of values in the decimal system as a whole. If you divide numbers into sets of 10, with n representing everything but the least significant digit, n0-n9 is 10 numbers, with the next number being part of the next set (n10, which means that the 1 "carries over"). Since the rounding discussed is concerned with the decimal number system, you have to follow the divisions of that system. In other words, rounding can be viewed as an operation on all possible values of the least significant digit of a number (and viewed recursively for extension), which means that only the values of the least significant digit should be changed; once you change a number in any other place you're introducing a new set (and would have to introduce the full set for consistency). So 5 numbers down (0-4), 5 numbers up (5-9) (notice there's only one "0"). I'd imagine the round-to-even system stems from the issue that the "0" isn't being rounded. Therefore, when dealing with sets of data where exact values are for some reason excluded or less likely than values which are to be rounded, some of the balance needs to be shifted around. The main reason I'm chiming in is that it appears as though "asymmetric" in the article regarding round 0.5 up is used incorrectly, so this should either be cited or removed. -mwhipple —Preceding unsigned comment added by 98.229.243.133 (talk) 18:13, 1 October 2010 (UTC)
And to expand a bit from above, the rounding to even method is probably more appropriate when dealing with less certain sets (i.e. there is an ill-defined, unknown, or otherwise troublesome minimum or maximum). —Preceding unsigned comment added by 98.229.243.133 (talk) 18:22, 1 October 2010 (UTC)
Oh...and rounding to even would be a convenient way to offset rounding issues caused by the problems with floating point representation in computers. —Preceding unsigned comment added by 98.229.243.133 (talk) 19:07, 1 October 2010 (UTC)
Disregard most the above, aside from the bit about the misuse of asymmetric. I was too caught up with the number system I lost sight of the values. Rounding is changing values and therefore the significant set is the inexact numbers (and the cumulative change accrued). The argument below is a bit out of context (though does highlight some of my poor wording) —Preceding unsigned comment added by 98.229.243.133 (talk) 22:31, 1 October 2010 (UTC)
The numbers are considered to be rounded to the precision and cover a range slightly below as well as above the exact number. So 0.5 means anything in the range 0.45 to 0.55, it does not mean from 0.5 to 0.6. Your initial argument about 0.0 .. 0.9 is wrong, the ones from -0.05 to 0.05 should go to 0.0, it is not rounding 0.0 to 0.1 to 0.0 Dmcq (talk) 19:28, 1 October 2010 (UTC)

I am amazed that any serious scientist could argue that there are either x+1 or x-1 numbers between integers in a base 10 system. When fully discrete, there are an equal amount of numbers that go to the lower integer as to the higher integer. 2.0, 2.1, 2.2, 2.3 and 2.4 go to 2. 2.5, 2.6, 2.7, 2.8 and 2.9 go to 3. 5 each. The same would have been true from 1.0 to 1.9 and then from 3.0 to 3.9. Every single integer will have 10 numbers that can result in that rounding. The tie-breaking rule is silly, and as engineers have known for years, wrong. If you want the true answer, you never analyze rounded numbers! You must use the actual values to at least one significant digit beyond what you are trying to analyze. This is confusing kids in school today, unnecessarily. Go back to what we have known for years, and can be proven to be correct. And never, ever analyze rounded numbers to the same significant digit if you know the data set is more discrete. — Preceding unsigned comment added by 24.237.80.31 (talk) 04:24, 3 November 2013 (UTC)

This depends on the model and the context. You can assume a uniform distribution in the interval [0,1], in which case the midpoint 0.5 appears with a null probability, so that under this assumption, the choice for half-points doesn't matter. In practice, this is not true because one has specific inputs and algorithms, but then, the best choice depends on the actual distribution of the inputs (either true inputs of a program or intermediate results). Vincent Lefèvre (talk) 13:37, 20 April 2015 (UTC)

Another method?

There seems to be at least one other method; rounding up or down with probability given by nearness.

It is given in javascript by
R = Math.round(X + Math.random() - 0.5)

It has the possible advantage that the average value of R for a large set of roundings of constant X is X itself.

82.163.24.100 11:51, 29 January 2007 (UTC)
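The claimed unbiasedness is straightforward to check empirically; here is the same idea transcribed into Python (a sketch; the sample size and tolerance below are arbitrary):

```python
import math
import random

def stochastic_round(x, rng):
    # round up with probability equal to the fractional part of x,
    # so the expected value of stochastic_round(x) is x itself
    f = math.floor(x)
    return f + (1 if rng.random() < x - f else 0)

rng = random.Random(12345)          # fixed seed for reproducibility
n = 100_000
mean = sum(stochastic_round(23.17, rng) for _ in range(n)) / n
# mean lands close to 23.17, rather than at round(23.17) == 23
```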

This is called "dithering". Added a section on it. --Jorge Stolfi (talk) 05:41, 1 August 2009 (UTC)
The dithering section says "This is equivalent to rounding y + s to the nearest integer, where s is a random number uniformly distributed between 0 and 1". I'm not a mathematician, so I'm probably missing something, but that seems to round your example to 23 with probability .33 and to 24 with probability .67 instead of .83 and .17. Should be "rounding y+s toward 0" ? (for y>=0 anyway) M frankied (talk) 18:31, 9 August 2010 (UTC)
Yes it should have said rounding down instead of rounding to nearest, I've updated the text. Dmcq (talk) 18:42, 9 August 2010 (UTC)

Why are halfway-values rounded away from zero?

Can somebody please tell me, why is it - and why does it make sense - that we round, say, 1.5 to 2 and not to 1? Here's my thinking: 1.1, 1.2, 1.3 and 1.4 go to 1 - 1.6, 1.7, 1.8, 1.9 go to 2. So 1.5 is of course right in the middle. Why should we presume it's more 2ish than 1ish, if it's equally close? Do you think it's just because teachers are nice, and when they do grade averages, they want to give you a little boost, rather than cut you down? Is there some philosophical or mathematical justification? Wikidea 23:38, 6 February 2008 (UTC)


You forgot 1.0. —Preceding unsigned comment added by 207.245.46.103 (talk) 19:08, 7 February 2008 (UTC)

I also left out 2.0, surely? Wikidea 19:41, 7 February 2008 (UTC)
I do not see a good reason. A convention is useful, but it could have been the other way around.--Patrick (talk) 23:46, 7 February 2008 (UTC)
That's exactly what I'm thinking too! So maybe it's true: Maths teachers were just being nice in marking up students' average test results! They could've had the convention the other way indeed. Wikidea 00:01, 8 February 2008 (UTC)
Or vendors invented it for finding the price of half a unit. But it is more likely that they round up all broken values, not only those halfway.--Patrick (talk) 00:22, 8 February 2008 (UTC)

Refer to the discussion above, the "Which method introduces the most error?" question. I guess I retract my 1.0 statement. —Preceding unsigned comment added by 207.245.46.103 (talk) 18:30, 8 February 2008 (UTC)

Hopefully the current version of the article makes it clear that it is just an arbitrary choice (and some choice is needed). All the best, --Jorge Stolfi (talk) 05:47, 1 August 2009 (UTC)

Problem with decimal rounding of binary fractions

The floating point unit on the common PC works with IEEE-754 floating binary point numbers. I have not seen addressed in this page or this talk page the problem of rounding binary fraction numbers to a specified number of decimal fraction digits. The problem results from the fact that variables with binary point fractions cannot, in general, exactly represent decimal fraction numbers. One can see this from the following table, which is an attempt to represent the numbers from 1.23 to 1.33 in 0.005 increments.

ExactFrac   Approx.   Closest IEEE-754 (64-bit floating binary point number)
1230/1000    1.230    1.229999999999999982236431605997495353221893310546875
1235/1000    1.235    1.2350000000000000976996261670137755572795867919921875
1240/1000    1.240    1.2399999999999999911182158029987476766109466552734375
1245/1000    1.245    1.24500000000000010658141036401502788066864013671875
1250/1000    1.250    1.25
1255/1000    1.255    1.25499999999999989341858963598497211933135986328125
1260/1000    1.260    1.2600000000000000088817841970012523233890533447265625
1265/1000    1.265    1.2649999999999999023003738329862244427204132080078125
1270/1000    1.270    1.270000000000000017763568394002504646778106689453125
1275/1000    1.275    1.274999999999999911182158029987476766109466552734375
1280/1000    1.280    1.2800000000000000266453525910037569701671600341796875
1285/1000    1.285    1.2849999999999999200639422269887290894985198974609375
1290/1000    1.290    1.29000000000000003552713678800500929355621337890625
1295/1000    1.295    1.2949999999999999289457264239899814128875732421875
1300/1000    1.300    1.3000000000000000444089209850062616169452667236328125
1305/1000    1.305    1.3049999999999999378275106209912337362766265869140625
1310/1000    1.310    1.310000000000000053290705182007513940334320068359375
1315/1000    1.315    1.314999999999999946709294817992486059665679931640625
1320/1000    1.320    1.3200000000000000621724893790087662637233734130859375
1325/1000    1.325    1.3249999999999999555910790149937383830547332763671875
1330/1000    1.330    1.3300000000000000710542735760100185871124267578125

The exact result of doing the division indicated in the first column is shown as the long number in the third column. The intended result is in the second column. The thing to note is that only one of the quotients is exact; the others are only approximate. Thus when we are trying to round our numbers up or down, we cannot use rules based on simple greater-than or less-than comparisons against an exact decimal fraction value, because, in general, these decimal fraction values have no exact representation as binary fractions. JohnHD7 (talk) 22:08, 28 June 2008 (UTC)
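For anyone wanting to reproduce the third column: Python's decimal module displays the exact binary value stored in an IEEE-754 double when a float is converted to Decimal (a sketch for illustration):

```python
from decimal import Decimal

# Converting a float to Decimal exposes the exact stored binary value.
exact = Decimal(1.235)
print(exact)   # 1.2350000000000000976996261670137755572795867919921875

# Only dyadic rationals like 1.25 are representable exactly:
assert Decimal(1.25) == Decimal("1.25")
assert Decimal(1.235) != Decimal("1.235")

# This is why naive decimal rounding of binary floats can surprise:
# 2.675 is stored slightly below 2.675, so it rounds down to 2.67.
assert round(2.675, 2) == 2.67
```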

This isn't unique to decimal fractions and is covered reasonably well at floating point. Floating point operations have finite precision, which leads to rounding errors, which can stack. If you need your code to generate a certain number of accurate decimal (or binary, etc.) places, or to satisfy some other accuracy criterion, you should check that the worst-case deviation your code could generate stays within limits. Shinobu (talk) 07:18, 8 November 2008 (UTC)

Round to even

I've seen two sources that claim that it's correct to round (e.g.) 2.459 to 2.4 rather than 2.5, though they both give the same bogus logic: that 2.45 might as well be rounded to 2.4 as to 2.5, and so this also applies to 2.459, even though this is a different number and is obviously closer to 2.5. But are we sure this isn't standard practice in some crazy field or other? Evercat (talk) 20:51, 17 September 2008 (UTC)

That isn't a valid rounding method, as the result can deviate significantly more than half an integral step from the original. If it's actually in use somewhere the users should get a rap on the knuckles. Perhaps, if its use were to cause a big scandal, it might be appropriate to document that on Wikipedia, but in the meantime it lacks notability. Shinobu (talk) 07:07, 8 November 2008 (UTC)
Agreed. Those sources are probably just plain wrong. --Jorge Stolfi (talk) 05:44, 1 August 2009 (UTC)

Rounding 2.50001 to nearest integer

Please comment: rounding 2.50001 to the nearest integer. Is it 2 or 3? Redding7 (talk) 01:42, 5 January 2010 (UTC)

Definitely 3 (distance from 2.50001 to 2 is 0.50001, to 3 is 0.49999, so the latter is nearest). --Jorge Stolfi (talk) 03:02, 5 January 2010 (UTC)

Consistency

Please edit ‘Rounding functions in programming languages’ for naming consistency. I'm not sure which names you prefer, so I'll let you do it, but please be consistent. Shinobu (talk) 07:07, 8 November 2008 (UTC)

Needs fixing

The section "Common method" does not explain negative numbers. The first example of negative just says "one rounds up as well" but shows different behavior for the case of 5, otherwise the same as positive. The second example is supposed to be the opposite but shows the same thing! It calls it "down" instead of "up", but gives the same answer. —Długosz (talk) 19:55, 18 February 2009 (UTC)

The section has been thoroughly rewritten. Please check if the problem has been fixed. --Jorge Stolfi (talk) 05:42, 1 August 2009 (UTC)

Use of "-0" by meteorologists

The article claims that

"Some meteorologists may write "-0" to indicate a temerature between 0.0 and −0.5 degrees (exclusive) that was rounded to integer. This notation is used when the negative sign is concidered important, no matter how small is the magnitude; for example, when giving temperatures in the Celsius scale, where below zero indicates freezing."

I am bothered by this paragraph because (1) evidence should be provided that meteorologists actually do this; it may well be an original invention by the editor. Also (2) tallying days with negative temperature seems a rather unscientific idea, since even small errors in measurement could lead to a large error in the result. If the station's thermometer says -0.4C or -0.1C, it is not certain that street puddles are freezing. On the other hand, if the thermometer says +0.1C or +0.4C, the puddles may be freezing nonetheless. To compare the harshness of weather, one should use a more robust statistic, such as the mean temperature. Therefore, there does not seem to be a good excuse for preserving the minus sign. All the best, --Jorge Stolfi (talk) 06:03, 1 August 2009 (UTC)


Another 'Bankers' or 'Accounting' Rounding Method?

I recall reading about a method (I believe on Wikipedia) some months and/or years ago, allegedly used in some banking systems so as not to accrue (or lose) pennies or cents over time. Amounts were rounded as follows, if I recall correctly:

$XX.XXY was rounded 'up' if Y was odd, and 'down' if Y was even. (5 of the 10 options going in each direction)

Eg:

$10.012 was rounded to $10.00
$10.013 was rounded to $10.01
$10.014 was rounded to $10.00
$10.015 was rounded to $10.01
$10.016 was rounded to $10.00
$10.017 was rounded to $10.01

I forget how it handled negative cases... The 'bankers round' (round half to even) given does perform one of the same purposes of symmetric rounding, but in a different way, potentially with a bias that's relevant to finances. I'm wondering if the article is missing this method (and if so, whether it's worth someone who knows the details accurately adding it), if it's documented elsewhere on Wikipedia, or if it's even been used at all, or was simply wrong in my source (possibly Wikipedia, iirc). The advantage of it seemed to basically be 'random' with even distribution, in the sense that things are not moving to the closest number at the required level of precision, where identifiable, but with repeatable results. FleckerMan (talk) 01:03, 18 August 2009 (UTC)
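Taking the rule literally as stated above ('up' if Y is odd, 'down' if Y is even), a sketch for positive amounts might look like the following; note that this reading yields whole-cent results that don't match all of the worked figures above, which may reflect the admitted uncertainty of the recollection:

```python
from decimal import Decimal, ROUND_CEILING, ROUND_FLOOR

def parity_round_to_cents(amount):
    """Round a positive Decimal with 3 decimal places to whole cents:
    up if the dropped digit Y is odd, down if Y is even (hypothetical rule)."""
    y = int(amount * 1000) % 10                   # the digit being dropped
    mode = ROUND_CEILING if y % 2 else ROUND_FLOOR
    return amount.quantize(Decimal("0.01"), rounding=mode)
```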

Indeed it seems to be a different rounding method. It should be added to the "Basic rounding to integer" section, but recast as rounding XXX.Y to integer. However, this method does not round to the nearest valid value. If I were a client of such a bank, I would feel cheated (even though the method is fair in the long run). 8-) --Jorge Stolfi (talk) 17:29, 18 August 2009 (UTC)

"Rounding in language X" section[edit]

I deleted the long list of rounding functions in language X, Y, Z, ... It was not useful to readers (programmers will look it up in the language's article, not here), it was waaaaay too long (and still far from complete), and extremely repetitive (most languages now simply provide the four basic IEEE rounding functions -- trunc,round,ceil,floor). The lists were not discarded but merely moved to the respective language articles (phew!). All the best, --Jorge Stolfi (talk) 22:31, 6 December 2009 (UTC)

Looks much better without them. I've been wondering for a while what to do with such sections in a lot of articles. I think perhaps just some standard library implementation should be mentioned, certainly the lists of languages is excessive. Dmcq (talk) 23:22, 6 December 2009 (UTC)
The list of basic IEEE rounding functions is of course made of 4 functions, because in most cases it forgets to speak about negative numbers. This is where rounding up or down is differentiated from rounding towards or away from zero. Then for the "round" function there are also 4 rounding modes, which are available for all floating-point operations, independently of these rounding functions: they govern how ties are rounded: up or down? Here again there are the same 4 modes, plus the 2 additional modes round-to-nearest-even and round-to-nearest-odd... IEEE does not have any randomized rounding modes, and no support for dithering with variable biases (the only biases supported are 0 for the floor and ceil functions and the related "truncate" function (round toward zero), and 0.5 for the round function). IEEE also does not have round-to-infinity (round away from zero) or round-to-odd, as they have no practical use....
However the "round to nearest integer, and round ties towards zero" has mathematical applications (notably when computing the shortest continuous fractions of any rational number) because of its symetry: the round to nearest property allows the continuous fractions to be reduced to the smallest form (with less terms), and the symetry allows a negative rational to have exactly the same terms (in absolute value) in the continuous fraction as the opposite positive rational. With less terms (caused by rounding to nearest), it also allows the continuous fractions to converge twice faster per term (so the number of terms is effectively divided by 2 on average when expressing any real number as continuous fractions: this is especially important when optimizing the speed of numeric computation of irrational functions, but also, it ensures a better numeric stability of the result when working with floatting point numbers (with limited precision), producing more exact numbers and monotonous approximations (if we use a limited number of terms when computing the continuous fraction approximants, starting by the last term)... verdy_p (talk) 01:54, 12 August 2010 (UTC)
There seems to be a lot of stuff that I don't remember seeing anywhere else here. I think I need to stick in a number of citations needed in the article. Dmcq (talk) 19:58, 12 August 2010 (UTC)

Round to even, again

I was considering adding this:

The advantage of "round to even" (if the radix is not a multiple of 4) is that it prevents the problem of halves from propagating. Consider 23.55: with a "round to odd" rule this would become 23.5 and then 23, though 24 would be nearer. Instead, 23.55 is rounded to 23.6 and then to 24.

This is explained in The Art of Computer Programming chapter 4, I think, which I'd cite if my copy were not hidden in a box somewhere. —Tamfang (talk) 03:04, 19 July 2010 (UTC)
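The 23.55 example can be reproduced with Python's decimal module; the round-half-to-odd helper below is hand-rolled, since the module does not provide that mode:

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_HALF_EVEN

def half_odd(x, q):
    """Round x to a multiple of q, ties to the odd multiple (not in stdlib)."""
    n = (x / q).to_integral_value(rounding=ROUND_FLOOR)
    r = x / q - n
    if r > Decimal("0.5") or (r == Decimal("0.5") and n % 2 == 0):
        n += 1
    return n * q

x = Decimal("23.55")

# Round half to even: the second rounding never sees a new tie.
step = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)         # 23.6
even_twice = step.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)  # 24

# Round half to odd: 23.55 -> 23.5 -> 23, though 24 is nearer to 23.55.
odd_twice = half_odd(half_odd(x, Decimal("0.1")), Decimal("1"))     # 23
```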

Yes, and this is why "round ties to odd" is not part of the standard IEEE-754 rounding modes (so it has no native support in x87 FPUs); it also explains why "round ties to even" is the default mode for floating point in ANSI C, C++, Java, and many other languages supporting or using floating-point types (including JavaScript, even though JavaScript does not mandate any minimum precision for numbers, except that they are handled as if they were in decimal with infinite precision, as expressed in their source text representation). verdy_p (talk) 02:00, 12 August 2010 (UTC)
The reason is more that if you add two random numbers of about the same magnitude, you're less likely to need to round the sum if they are the results of round to even, so you need fewer rounding operations overall. Dmcq (talk) 09:09, 12 August 2010 (UTC)
You'll still need rounding, because such an addition of even numbers only delays the problem one step before a new rounding of the least significant bit is needed. On average this just halves the residual error created by accumulated values, i.e. it adds one bit of precision to the sum. If you're accumulating lots of values, this single bit will rapidly become insignificant compared to the overall error remaining in the sum. So the assertion that "it prevents the problem of halves from propagating" is false. It just reduces the problem by 50%.
There's another model, "round to odd" that better preserves the magnitude, and so that it better prevents this propagation of errors (but also does not prevent it completely). Unfortunately it is not the default rounding mode.
In all cases, there's simply no way to prevent the propagation of errors with any isolated rounding mode, unless roundoff errors are accumulated separately from the main accumulated values, so that they can finally be taken into account in the sum when they reach some threshold.
The effect of this separation of errors is exactly equivalent to computing the sum with an accumulator with a higher precision (i.e. the precision of the final sum to return, and the precision of the error accumulator), so it is just simpler to compute the sum directly with this extra precision.
For example when rounding 64-bit double elements into a 32-bit float, sum the original values into a 64-bit double, and use that high-precision sum to finally round the sum to a 32-bit float: this also explains why C and C++ are automatically promoting float values to double within expressions, and rounding only the result of the full expression back into 32-bit float, if the expression has the float semantics (i.e. was not explicitly promoted to double); in all other cases, the compiler will warn the programmer about possible loss of precision.
However, this automatic promotion without intermediate rounding to float precision will not occur if the function is compiled with the "strict floating point" semantics, where rounding occurs after each operation, even if this creates larger roundoff errors in the final result of expressions. The "relaxed" and "strictfp" computing modes also exist in Java. Generally it is a bad idea to use the "strictfp" mode, unless you want strict portability with systems that lack support for 64-bit double-precision computation (such systems are now extremely rare).
In fact, C and C++ can also compute expressions by implicitly promoting 32-bit float and 64-bit double values to a more precise long double (typically 80 bits wide on x87 systems) within expressions. The effective rounding to 32 or 64 bits will only occur when storing the result into a float or double variable, or when passing it as a parameter to a function, a constructor or a method. Here again, the "strictfp" computing mode can force the compiler to round all intermediate long double values to their semantic 32-bit or 64-bit type.
In C, C++ and Java (and probably also in many other languages supporting the IEEE 754 standard), the "strictfp" computing mode is not the default and must be specified explicitly, either in the source code with additional modifier keywords in the function declaration or in a statement block declaration (or sometimes within a parenthesized subexpression), or by using a specific compiler option. The compiled code will be less efficient due to the additional rounding operations that will be compiled and executed at runtime. verdy_p (talk) 23:47, 12 August 2010 (UTC)
Would you please stop just asserting things and put in some citations instead. Loads of waffling does not make what you say any better. Dmcq (talk) 07:43, 13 August 2010 (UTC)

The "7-up" rule?[edit]

I was taught that "round half to even" is also called the "7-up" rule. Does anybody else know about this? 216.58.55.247 (talk) 00:36, 26 December 2010 (UTC)

Why would it be called that? Oli Filth(talk|contribs) 00:52, 26 December 2010 (UTC)
No, never heard of it. Sounds like some teacher's idea. I'd guess the idea is that 7-up is written with a point after the 7, and 7.5 would round up to 8 as 7 is odd. Just my guess. Dmcq (talk) 10:09, 26 December 2010 (UTC)

Chopped dithering and scaled rounding[edit]

I've cut down the sections on dithering and scaled rounding. The dithering is better dealt with elsewhere. The scaled rounding had no citations and was long and rambling and dragged in floating point for no good reason. Dmcq (talk) 23:30, 10 December 2011 (UTC)

Rounding with an odd radix[edit]

With particular reference to Rounding § Double rounding, it seems significant that when an odd radix is used, AFAICT multiple rounding is stable provided that we start with finite precision (actually a broader class of values that excludes only a limited set of recurring values). Surely work has been published on this, and it would be relevant for mentioning in this article? — Quondum 06:17, 20 November 2012 (UTC)

Hmm. It looks to me as though there is little to nothing published on this. Pity. —Quondum 19:21, 23 April 2015 (UTC)
The reason may be that an odd radix is never used in practice (except radix 3 on the Setun in the past). Vincent Lefèvre (talk) 22:08, 23 April 2015 (UTC)
Maybe it is not all that bad; Google searching on "Setun" and "rounding" pops up some results. Consider [2], Balanced ternary ("Donald Knuth has pointed out that truncation and rounding are the same operation in balanced ternary"), [3] ("ideal rounding is achieved simply by truncation") – this may be sufficient reference for this article. We'd have to make some obvious inferences, such as that repeated ideal rounding is still ideal rounding. We could restrict the comment to the sourced ternary case, though it should apply unchanged to any odd radix. —Quondum 23:05, 23 April 2015 (UTC)

Von Neumann rounding and sticky rounding[edit]

I wonder whether a paragraph should be added on Von Neumann rounding and sticky rounding, or this is regarded as original research (not really used in practice except internally, in particular no public API, no standard, no standard terminology...). In short: Von Neumann rounding (introduced by A. W. Burks, H. H. Goldstine, and J. von Neumann, Preliminary discussion of the logical design of an electronic computing instrument, 1963, taken from a report to the U.S. Army Ordnance Department, 1946) consists in replacing the least significant bit of the truncation by a 1 (in the binary representation); the goal was to get a statistically unbiased rounding (as claimed in this paper) without carry propagation. In On the precision attainable with various floating-point number systems, 1972, Brent suggested not to do that for exactly representable results. This corresponds to the sticky rounding mode (term used by J. S. Moore, T. Lynch, and M. Kaufmann, A Mechanically Checked Proof of the Correctness of the Kernel of the AMD5K86™ Floating-Point Division Algorithm, 1996), a.k.a. rounding to odd (term used by S. Boldo and G. Melquiond, Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd, 2006); it can be used to avoid the double-rounding problem. Vincent Lefèvre (talk) 13:58, 20 April 2015 (UTC)

I thought it was standard to have a rounding bit and a sticky bit in descriptions of how it was all done in IEEE. Is that what you mean? That would go into some IEEE article I'd have thought. Dmcq (talk) 15:34, 20 April 2015 (UTC)
The rounding bit and sticky bit are well-known notions (they are only used by implementers, though, i.e. there is nothing about them in the IEEE 754 standard, for instance). However sticky rounding can be seen as a way to include a useful part of the rounding bit and sticky bit information in a return value (mostly internally, before a second rounding in the target precision), and it is much less used. Vincent Lefèvre (talk) 21:21, 20 April 2015 (UTC)