User talk:Sadinova

Welcome!

Hello, Sadinova, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Unfortunately, one or more of your recent edits to the page Bellman's lost in a forest problem did not conform to Wikipedia's verifiability policy, and may have been removed. Wikipedia articles should refer only to facts and interpretations verified in reliable, reputable print or online sources or in other reliable media. Always provide a reliable source for quotations and for any material that is likely to be challenged, or it may be removed. Wikipedia also has a related policy against including original research in articles.

If you are stuck and looking for help, please see the guide for citing sources or come to The Teahouse, where experienced Wikipedians can answer any queries you have!

I hope you enjoy editing here and being a Wikipedian! Please sign your name on talk pages using four tildes (~~~~); this will automatically produce your name and the date. If you need personal help, ask me on my talk page, or ask a question on your talk page. Again, welcome. — MarkH21 (talk) 10:42, 17 January 2023 (UTC)

Prime counting function

Indeed, asymptotic analysis does corroborate your claim that the formula varies as x/(2 log x). But this analysis is misleading and is predicated on the assumption that the error term is bigger than what is stated. The growth rate of the last term before the error term would have to be taken as the error term for your analysis to make any reasonable sense. However, as it appears, one can clearly see that the main term is the whole chunk before the error term. The utility of this formula could be challenged, but it certainly cannot be ruled out. What, then, is the utility of the one existing in the literature, involving the Chebyshev function, if the latter is hard to compute? Greathouse removed this edit and claims it is invalid on the basis of some basic asymptotic analysis. However, applying his analysis to the partial sum of the logarithm over the set of integers less than a given threshold x would imply that this sum is asymptotic to x log x. This shows that one cannot use this analysis to judge the estimates in my edit. I would entreat Greathouse to look into this paper and not make any premature, superficial judgment about its correctness and utility.

(Sadinova (talk) 11:28, 24 March 2023 (UTC))
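
For what it's worth, the asymptotic behaviour of that partial sum of the logarithm is easy to probe numerically. A minimal sketch (the thresholds are arbitrary, and the sum is assumed to run over the integers 2 ≤ n ≤ x):

```python
# Compare the partial sum of log n over 2 <= n <= x with x*log(x) and with
# x*log(x) - x, at a few arbitrary thresholds.
import math

for x in (10**3, 10**5, 10**7):
    s = sum(math.log(n) for n in range(2, x + 1))
    print(x, s / (x * math.log(x)), s / (x * math.log(x) - x))
```

By Stirling's formula the sum equals x log x - x + O(log x), so the first ratio approaches 1 only at a rate of about 1/log x, while the second is already close to 1 at small thresholds.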

Formula

I'm moving this discussion from Talk:Prime-counting function because there's broad consensus that the formula doesn't belong in that article, but you and I may still wish to discuss it.

Your original formula, as posted to that article,

is incorrect; in fact

so it's off by (roughly) a factor of 2. Your corrected formula, offered at the Talk page,

seems to be in the right neighborhood. Let's see if we can work this out explicitly.

For simplicity, let's call the sums respectively. , of course, is just .

absorbing all the small terms into the O(1). We'll also need Stirling, in the form
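Presumably the log-factorial form is the one intended; the standard statement, taken here as an assumption, is

\[
  \log n! \;=\; \sum_{k=1}^{n} \log k \;=\; n\log n - n + \tfrac{1}{2}\log(2\pi n) + O\!\left(\tfrac{1}{n}\right),
\]

which pins the sum of the logarithms down to within a vanishing error.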

Then we have

and so

Nearly there! Now your formula is

Whew! So the error really is O(1), but it becomes -1/4 + O(1/log x) when x is an even integer. So if you drop the 1/4 from your formula, it looks right!

Feel free to double-check at your leisure.

CRGreathouse (t | c) 00:15, 6 April 2023 (UTC)

Dear Greathouse,
Content can only be included in articles when it is backed by a broad consensus of editors. Feel free to move it if it is not appropriate for the article. To remind you, the first version posted on the article page was not incorrect; I suppose you did not take ample time to read it. In the sum, the second constraint, that n is an odd composite, was not included but was only mentioned at the end of the full statement of the result. You were so engrossed in the mathematical formula that you barely read the remaining statements, I suppose.
(Sadinova (talk) 13:06, 19 April 2023 (UTC))
Dear Greathouse,
The error term has to be the way it is, even though there is a thin line between O(1) and the stated error. In fact, the stated error implies that the error is O(1), but the latter does not necessarily imply the stated error. You have just confirmed this in the even case, so combining all cases yields the stated error. But I do not think it makes any difference.
One cannot speculate on the utility of this formula beyond the fact that one can use it to obtain the best estimates for the prime counting function by applying stronger estimates for the Chebyshev theta function guaranteed by the prime number theorem, at the cost of a main term different from the traditional main term Li(x). One can also use it to obtain sharp estimates for certain integrals of the prime counting function and the Chebyshev theta function. It seems you have not read the source of the contribution in any great detail. Quite a lot of your analysis has already been worked out in http://pubs.sciepub.com/tjant/7/1/1/
Check out Corollary 4.2, Theorem 4.7 and Theorem 4.8. With this, I see no reason why the utility could be challenged. You can reach out to me if you have any further issues you wish to discuss.
(Sadinova (talk) 14:34, 20 April 2023 (UTC))
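
As a footnote on the Chebyshev-function route discussed above: the classical partial-summation identity pi(x) = theta(x)/log x + integral from 2 to x of theta(t)/(t (log t)^2) dt, where theta(x) is the sum of log p over primes p <= x, can be checked numerically. A minimal sketch (not taken from the cited paper; the threshold 10^6 is arbitrary):

```python
# Check the partial-summation identity
#   pi(x) = theta(x)/log(x) + integral_2^x theta(t) / (t * log(t)^2) dt
# numerically. Since theta is a step function, the integral is computed exactly
# piecewise, using the antiderivative -1/log(t) of 1/(t * log(t)^2).
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = b"\x00" * len(range(p * p, n + 1, p))
    return [i for i in range(2, n + 1) if sieve[i]]

x = 10**6  # arbitrary threshold
ps = primes_up_to(x)
pi_x = len(ps)
theta_x = sum(math.log(p) for p in ps)

integral = 0.0
theta = 0.0
for p, q in zip(ps, ps[1:] + [x]):  # theta(t) is constant on [p, q)
    theta += math.log(p)
    integral += theta * (1.0 / math.log(p) - 1.0 / math.log(q))

print(pi_x)                              # 78498
print(theta_x / math.log(x) + integral)  # agrees with pi_x up to rounding error
```

The identity itself is exact (it follows from Abel summation), so the two printed values differ only by floating-point error.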