Talk:ITP method

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Peter.schild (talk | contribs) at 11:41, 29 December 2022 (→‎Tuning of n_0). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Articles for creation (C-class): This article was reviewed by member(s) of WikiProject Articles for creation, and was accepted from this draft on 4 January 2021 by reviewer Eumat114 (talk · contribs).
WikiProject Mathematics (C-class, Low-priority): This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. It has been rated as C-class on Wikipedia's content assessment scale and as Low-priority on the project's priority scale.

How to tune kappa parameters?

One of the weak spots in this article is how to tune the parameters, particularly κ₁. The original paper discusses the question a bit, but doesn't itself give any clear guidance. After a bit of experimentation (mostly on inverse arc length problems), I'm pretty confident that κ₂ = 2 is a reasonable default choice, suitable in most cases unless experiment or analysis strongly suggests a different value. But κ₁ is a bit of a different question. The current example in this article uses a value of κ₁ = 0.1, which is the same numerical value as the examples in the paper, but I think that fact is a bit misleading. The examples in the paper have b − a = 2, while the example here has b − a = 1, and there is scaling involved. If κ₂ = 2, then I think the heuristic that captures the intent of the paper is κ₁ = 0.2/(b − a), and this is what I have documented in the kurbo implementation. Also note, if the example were to adopt this recommendation, then n₀ = 0 would work with fast convergence; I think you can say that you need more "slack" as provided by n₀ when κ₁ is lowballed.
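The heuristic above can be made concrete in code. Below is a minimal Python sketch of the ITP loop as described in the article, with κ₁ defaulting to 0.2/(b − a) and κ₂ = 2; the function name, signature, and defaults are illustrative assumptions, not kurbo's actual API.

```python
import math

def itp_root(f, a, b, eps=1e-10, k1=None, k2=2.0, n0=1):
    """Minimal sketch of the ITP method (interpolate, truncate, project).

    k1 defaults to the 0.2 / (b - a) heuristic discussed above; the
    name and defaults here are illustrative, not kurbo's actual API.
    Assumes f(a) and f(b) have opposite signs.
    """
    ya, yb = f(a), f(b)
    if (ya > 0) == (yb > 0):
        raise ValueError("f(a) and f(b) must bracket a root")
    if k1 is None:
        k1 = 0.2 / (b - a)
    n_half = math.ceil(math.log2((b - a) / (2 * eps)))
    n_max = n_half + n0  # worst-case iteration count
    j = 0
    while b - a > 2 * eps:
        x_mid = (a + b) / 2
        r = eps * 2 ** (n_max - j) - (b - a) / 2
        delta = k1 * (b - a) ** k2
        # Interpolation: regula falsi (secant) estimate
        x_f = (yb * a - ya * b) / (yb - ya)
        # Truncation: nudge the estimate toward the midpoint by delta
        sigma = 1 if x_mid >= x_f else -1
        x_t = x_f + sigma * delta if delta <= abs(x_mid - x_f) else x_mid
        # Projection: keep the estimate within r of the midpoint,
        # preserving the bisection worst case
        x_itp = x_t if abs(x_t - x_mid) <= r else x_mid - sigma * r
        y = f(x_itp)
        if y == 0:
            return x_itp
        if (y > 0) == (yb > 0):
            b, yb = x_itp, y
        else:
            a, ya = x_itp, y
        j += 1
    return (a + b) / 2
```

For example, `itp_root(lambda x: x**3 - x - 2, 1.0, 2.0)` converges to the real root of x³ − x − 2, approximately 1.5214.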

On that last point, I feel like I might be skating a little close to the edge of WP:NOR here. I personally consider an open source project such as kurbo a reasonably reliable source, but can see how others may feel differently. Ideally there would be some survey paper or textbook chapter that we could cite, but, partly due to the technique being so new, there's precious little out there. I'm going to shamelessly self-promote in the interest of providing useful resources to readers, but open to hearing better ways to handle this. Raph Levien (talk) 16:19, 5 April 2021 (UTC)[reply]

Raph Levien, I have added a small note to link to your library (I *feel* that's compliant with NOR). Cheers 🦀🍺. --Artoria2e5 🌉 12:56, 4 July 2021 (UTC)[reply]
Oh, while we are at it, I feel the description has a gap in the description of n₀. The paper (and this Wikipedia article) only mentions that it's ≥ 0, while your documentation seems to suggest a smaller useful range of [0, 1], further constrained by the optimization to use usize? --Artoria2e5 🌉 13:13, 4 July 2021 (UTC)[reply]

Observations from tuning of n₀ and epsilon

I have conducted a Monte Carlo parametric study of epsilon and n₀, using quasi-random monotonic curves of the form y = x^((1/c)−1) and y = 1 − (1−x)^(c/(1−c)), where c is a randomized constant in [0, 1], on the interval x_a = 0 to x_b = 1. Simulations were conducted in floating point in Visual Basic (machine epsilon = 1E-15). My findings are as follows:
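For concreteness, the curve family above can be generated like this (a Python sketch; the vertical shift y0 is a hypothetical addition so that each curve crosses zero inside the bracket, since the placement of the roots is not spelled out above):

```python
import random

def make_test_curve(rng=random.random):
    """One quasi-random monotonic curve from the family above.

    c is a randomized constant in (0, 1); each call picks one of the
    two monotonic forms at random.  The shift y0 is hypothetical, so
    that f(0) < 0 < f(1) and the curve brackets a root on [0, 1].
    """
    c = min(max(rng(), 1e-9), 1 - 1e-9)  # clamp away from 0 and 1
    y0 = rng()  # hypothetical shift so that f(0) < 0 < f(1)
    if rng() < 0.5:
        return lambda x: x ** ((1 / c) - 1) - y0
    return lambda x: 1 - (1 - x) ** (c / (1 - c)) - y0
```

Both forms map [0, 1] onto [0, 1] monotonically, so after the shift the bracket [x_a, x_b] = [0, 1] always contains exactly one root.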

Tuning of epsilon

  • As epsilon is reduced, the number of iterations increases and precision improves. However, this effect levels out for small values of epsilon <= 1E-10.
  • For large values of epsilon (1E-9 to 0.1), there is little benefit to this algorithm compared to basic Bisection, both in terms of number of iterations and precision of the estimated root.
  • For small values of epsilon (1E-16 to 1E-10), there is negligible difference in performance, both in terms of number of iterations and precision of the estimated root. It levels out, with a median of 8 iterations.
  • Therefore I recommend setting epsilon equal to the machine epsilon (1E-15 in my case), as there are no drawbacks compared to using lower precision.
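As a point of reference, the quoted machine epsilon of 1E-15 is the same order of magnitude as the IEEE 754 double-precision epsilon, which most languages expose directly (a Python illustration; the study itself used Visual Basic):

```python
import sys

# IEEE 754 double-precision machine epsilon is 2**-52
print(sys.float_info.epsilon)  # → 2.220446049250313e-16
```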

Tuning of n₀

  • n₀ = 0 is identical to Bisection.
  • n₀ = 1 halves the number of iterations compared to Bisection, and improves the precision of the estimated root (median 100000 times more accurate, and 3rd quartile 30 times more accurate), assuming small epsilon <= 1E-10.
  • Increasing n₀ further (i.e. n₀ = 2 and higher) leads to further improvements in precision (median and Q3 statistics), with the added benefit of reducing the mean number of iterations (the median and Q3 remain static). The effect levels out exponentially, with little improvement beyond say n₀ = 6 to 8.
  • The maximum possible number of iterations increases with n₀: max iterations = n_1/2 + n₀, where n_1/2 = ⌈log₂((b − a)/(2·epsilon))⌉ is the bisection bound. However, the likelihood of this happening decreases for higher n₀, so the mean number of iterations actually decreases for higher n₀, but the effect levels out exponentially over say n₀ = 6 to 8, as mentioned above.
  • Thus I recommend n₀ in the range 6 to 8.
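The worst-case bound discussed in the bullets above can be illustrated numerically. This Python sketch computes max iterations = n_1/2 + n₀, where n_1/2 = ⌈log₂((b − a)/(2·epsilon))⌉ is the bisection bound (the function name is illustrative):

```python
import math

def itp_max_iterations(a, b, eps, n0):
    # Bisection needs n_1/2 = ceil(log2((b - a) / (2 * eps))) steps
    # to shrink [a, b] to width 2 * eps; ITP may spend up to n0 more.
    n_half = math.ceil(math.log2((b - a) / (2 * eps)))
    return n_half + n0

# On [0, 1] with n0 = 1, tightening epsilon from 1e-10 to 1e-15 only
# raises the worst case from 34 to 50 iterations: the bound grows by
# roughly 3.3 iterations per extra decimal digit of precision.
print(itp_max_iterations(0.0, 1.0, 1e-10, 1))  # → 34
print(itp_max_iterations(0.0, 1.0, 1e-15, 1))  # → 50
```

This slow growth of the worst case, combined with the early-termination behaviour of the interpolation step, is consistent with the observation that tightening epsilon is essentially free.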