User talk:Nageh

Classical block codes are usually implemented using hard-decision algorithms

Hi Nageh, I think the reference you added clarifies things. I was thinking about block codes in general, but it's true that your statement was about classical block codes, for which I agree that hard-decision was common. My mistake. Itusg15q4user (talk) 15:37, 9 October 2009 (UTC)[reply]

Hi - I've gone back and copy-edited the article to change all "analog" to "analogues", including links that I had not originally edited. However, I didn't change Template:Modulation techniques or Template:Audio broadcasting, which are also used in the article and use the American spelling. The WikiCleaner just changes the redirects back to the current page names, which, in this case, happen to be the American spelling versions at the moment. Since both spelling versions were previously used in the article, I really couldn't tell that the British spelling would be the dominant version, since my perspective is from the American side. I've also gone back and changed the spelt-out acronyms to have capital letters, including links that were not originally edited by me, so that they match capitalizations throughout the article. Also, I defined a few acronyms, so that non-technical readers will know what they mean.

Could use your help with a few of the links needing disambiguation (e.g., carrier-to-noise ... threshold [disambiguation needed] and quadrature [disambiguation needed] ... -mixed), thanks. --Funandtrvl (talk) 21:31, 18 November 2009 (UTC)[reply]

Thanks again for fixing the disambigs! --Funandtrvl (talk) 20:15, 19 November 2009 (UTC)[reply]

Concatenated error correction codes

The page is still being worked on. There may be minor errors, but on the whole the text you saw was an attempt to make the article more accurate and readable.

My technical background for this is adequate

A lot of the articles on DVB / ATSC and Compact Disc / DVD error correction are in paper form, so often you have to go by what you remember. DVB ECC and ATSC ECC are not the same, so I had to hedge my text a little. DVB-T2 is a totally different can of worms vs DVB-T, so even differentiating between these formats' ECC is a minefield.

I am trying to fix and improve this at the moment.

Minor misreadings you made

  • similar to NICAM? Why do you refer to an audio compression standard? Interleaving is a standard technique in error-correction coding, and your reference is totally misplaced! The text applies to the standardized CCSDS randomizer ... that is practically a twin to the NICAM randomizer. NICAM and CCSDS interleavers do have a lot in common too, but only the short ones... (see the interleaver sketch after this list)
  • For example, Voyager initially used concatenated convolutional and Golay codes for the planetary missions; the Golay code allowed the images to be sent 3x faster than RSV, but after the primary mission was over the RSV code was made mandatory -- however, finding the proper papers and articles to cite for this is hard
  • And DVB-T does indeed use code concatenation with RS codes. News to me, as I was not 100% certain; however -- DVB-T2 does not, and DVB-S2 may not either. DVB vs DVB2 are different creatures when you ignore the standardized video resolution layers.
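Since interleaving keeps coming up in this thread, here is a minimal sketch of a plain block interleaver, the textbook device for spreading a burst across several outer-code codewords. The depth of 4 and the toy data are arbitrary choices for illustration and are not taken from NICAM, CCSDS, or any DVB standard.

<syntaxhighlight lang="python">
def block_interleave(symbols, depth):
    """Fill a depth-row table row by row, transmit it column by column."""
    width = -(-len(symbols) // depth)                 # ceiling division
    padded = symbols + [None] * (depth * width - len(symbols))
    table = [padded[r * width:(r + 1) * width] for r in range(depth)]
    return [table[r][c] for c in range(width) for r in range(depth)]

def block_deinterleave(symbols, depth):
    """Undo block_interleave: fill the table column by column, read row by row."""
    width = -(-len(symbols) // depth)
    table = [[None] * width for _ in range(depth)]
    it = iter(symbols)
    for c in range(width):
        for r in range(depth):
            table[r][c] = next(it)
    return [s for row in table for s in row]

if __name__ == "__main__":
    data = list(range(20))                            # 20 toy symbols: 4 "codewords" of 5
    tx = block_interleave(data, depth=4)
    rx = tx[:8] + ["X"] * 4 + tx[12:]                 # a burst of 4 consecutive channel errors
    print(block_deinterleave(rx, depth=4))            # burst lands as 1 error per codeword
</syntaxhighlight>

After de-interleaving, the four-symbol burst shows up as a single corrupted symbol in each of the four rows, which is exactly what makes the outer code's job easy.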

PS: OFDM is a proposed CCSDS transmission format!

Eyreland (talk) 08:40, 6 February 2010 (UTC)[reply]

Voyager I upgrades, Voyager Program Papers

The Voyager 2 probe additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune.

Yes, true (a lot of what you have put there is news to me, as I was never able to get all these details) -- but Voyager I must have had an upgrade of its ECC system once the cameras were turned off. Where the reference to this activity is in the Voyager Mission Reports eludes me. I don't have any date to go by to prove when the V1 ECC upgrade happened. However, the V1 ECC upgrade must have happened ... it is the tyranny of the link margin, so to speak. Also, identical (but separate) coding for uplink / downlink is cheaper too...
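The RSV scheme quoted above concatenates an outer Reed-Solomon code with an inner convolutional code. As a concrete illustration of the inner stage only, here is a minimal sketch of a rate-1/2, constraint-length-7 convolutional encoder with the generator polynomials 171/133 (octal) usually cited for the CCSDS/Voyager-era standard; the bit-ordering convention and the toy input are my own choices, and nothing here asserts the exact parameters of any particular Voyager mode.

<syntaxhighlight lang="python">
# Sketch of a rate-1/2, K=7 convolutional encoder (inner code of a concatenated
# RSV link). Generators 171/133 octal; tap-ordering conventions vary between
# references, so treat this as illustrative only.
G1, G2 = 0o171, 0o133   # 7-bit generator polynomials
K = 7                   # constraint length

def conv_encode(bits):
    """Return two coded bits per input bit (rate 1/2)."""
    state, out = 0, []
    for b in bits:
        # shift the register; the newest input bit enters at the most significant end
        state = ((state >> 1) | (b << (K - 1))) & ((1 << K) - 1)
        out.append(bin(state & G1).count("1") % 2)   # parity of the G1 taps
        out.append(bin(state & G2).count("1") % 2)   # parity of the G2 taps
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1]          # toy input
    print(conv_encode(data))              # 14 coded bits for 7 data bits
</syntaxhighlight>

On the decode side the order reverses: Viterbi decoding of the inner code first, then de-interleaving, then Reed-Solomon decoding of the outer code.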

Can you restore the section on CC-ECC with less math

Can you restore the section on CC-ECC with the less mathematical explanation?

That section had nothing to do with misinformation at all; it was at best a guide for those less mathematically inclined.

You must remember that most of the US population (and the UK too, etc.) is not that mathematically inclined and would not be helped by the pure math section of Concatenated error correction codes.

I speak from experience, as I am involved with

Getting information on how to decode Voyager Program packets or signals is very difficult, as it is a paper-era, mid-Cold War science programme.

However, understanding the ECC concatenation is equally hard.

This intellectual difficulty should not be imposed on the general public that funded the missions that made these ECC coding standards so obligatory.

Mathematical and Engineering illiteracy hurts mathematicians and engineers right in the pocketbook - so avoid actions that lead to greater levels of redundancy in this profession. If this lot is not getting paid, everyone else's salary is at risk.

Eyreland (talk) 12:25, 6 February 2010 (UTC)[reply]

Hi - I've added the rollback flag. Please review WP:ROLLBACK or ask if you need any help. Pedro :  Chat  20:55, 15 March 2010 (UTC)[reply]

Thanks! Nageh (talk) 21:03, 15 March 2010 (UTC)[reply]

Hi Nageh, I've removed your report from AIV; it's a little too complicated for AIV, where the emphasis is generally on dealing with simple and obvious cases quickly. Your report is better suited for WP:ANI; you need an admin who (a) can spend a little more time looking into this, and (b) knows how to do a rangeblock. I fit (a), but not (b), or I'd do it myself. I suspect if you make that report at WP:ANI, someone will come along who can help. From a review of a few of those IPs, this looks like a reasonable suggestion, except from my limited understanding of IP ranges, I think the range you recommend might be pretty big. --Floquenbeam (talk) 14:21, 18 March 2010 (UTC)[reply]

Hmm ... (Reed Solomon)

Hi Glrx, and thanks for your efforts on Reed-Solomon codes. However, I want to point out that you removed a concise description of how RS codes essentially work, namely by oversampling a polynomial. Even though that statement could have been expanded, it was clear.
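Since this "oversampling a polynomial" view is what got removed, here is a toy sketch of it, using the prime field GF(929) instead of the GF(2^m) fields used in practice (929 and the function names are arbitrary choices of mine): the k message symbols are the coefficients of a degree-(k-1) polynomial, the codeword is that polynomial evaluated at n > k points, and any k error-free values recover the message by Lagrange interpolation.

<syntaxhighlight lang="python">
P = 929  # a small prime; real RS codes use GF(2^m), but a prime field keeps the sketch short

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-order coefficient first) at x, mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def rs_encode(message, n):
    """Original Reed-Solomon view: oversample the message polynomial at n points."""
    return [poly_eval(message, x) for x in range(n)]

def lagrange_recover(points, k):
    """Recover the k coefficients from any k error-free (x, y) samples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    coeffs = [0] * k
    for i in range(k):
        basis = [1]                                    # prod_{j != i} (x - xs[j])
        denom = 1
        for j in range(k):
            if j == i:
                continue
            shifted = [0] + basis                      # multiply basis by x
            for t in range(len(basis)):
                shifted[t] = (shifted[t] - xs[j] * basis[t]) % P
            basis = shifted
            denom = (denom * (xs[i] - xs[j])) % P
        scale = ys[i] * pow(denom, P - 2, P) % P       # ys[i] / denom via Fermat inverse
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * basis[t]) % P
    return coeffs

if __name__ == "__main__":
    message = [17, 42, 311]                            # k = 3 symbols: a degree-2 polynomial
    codeword = rs_encode(message, n=7)
    # any 3 of the 7 transmitted values pin the polynomial down again
    assert lagrange_recover(list(enumerate(codeword))[2:5], k=3) == message
</syntaxhighlight>

Correcting errors (rather than recovering from known-good samples) is of course the hard part, which is where the Berlekamp-Massey machinery mentioned later in this thread comes in.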

I disagree that it was clear. Although RS arrived at their code from an oversampled polynomial viewpoint, that statement is not clear but rather terse. Furthermore, the oversampled view doesn't comport with modern usage. The modern g(x) viewpoint makes s(x) disappear and lets the error correction focus on just n-k syndromes rather than interpolating polynomials. I reworked the introduction to follow the RS development after your comment, and now I'm unhappy with it -- it led me into the same trap that I was trying to fix: describing stuff that distracts. I fell into restating the history. The goal should be to explain the code and give insight into how it works. The modern implementation is the BCH viewpoint and transmits coefficients and not values. Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]

The text you added describes RS codes from the point of view of cyclic codes. Furthermore, what you essentially say is that an error can be detected if the received code word is not divisible by the generator polynomial, which is... trivial from a coding point of view, but does not provide the casual reader with any insight. Furthermore, you lead the reader to believe in a tight connection with CRC codes, while the actual connection is with cyclic codes.

I mentioned the CRC processing to build an analogy. I deleted it, and now I'm sorry I did. It also gives context for the error-correction algorithm using the roots of g(x). Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]
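For what it is worth, the analogy can be made concrete with a few lines of GF(2) arithmetic; the generator below is the common CRC-8 polynomial x^8 + x^2 + x + 1, chosen purely as an example. In both a CRC and the g(x) view of a cyclic code, the transmitted word is built to be divisible by the generator polynomial, and a nonzero remainder at the receiver flags an error; for an RS code the analogous remainder information, taken at the roots of g(x), is what drives the error correction.

<syntaxhighlight lang="python">
G = 0x107   # x^8 + x^2 + x + 1, a common CRC-8 generator, used here only as an example

def gf2_mod(value, gen):
    """Remainder of GF(2) polynomial division (polynomials packed into integers)."""
    glen = gen.bit_length()
    while value.bit_length() >= glen:
        value ^= gen << (value.bit_length() - glen)
    return value

def encode(data):
    """Systematic encoding: append the remainder so the word divides evenly by G."""
    shifted = data << (G.bit_length() - 1)
    return shifted ^ gf2_mod(shifted, G)

if __name__ == "__main__":
    tx = encode(0b1011001110)
    assert gf2_mod(tx, G) == 0              # clean word: zero remainder, no error flagged
    assert gf2_mod(tx ^ (1 << 5), G) != 0   # one flipped bit: nonzero remainder, error detected
</syntaxhighlight>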

Last but not least, it is actually true that RS codes were not implemented in the early 1960s because of their complexity—it _might_ have been possible to actually implement them on some hardware, but nobody did it back then. As far as history tells, RS codes were not implemented until Berlekamp came up with his efficient decoding algorithm together with Massey, after which they were implemented in the Voyager 2 space probe.

I don't understand this comment at all. I deleted a clause that claimed the digital hardware was not advanced enough at the time and left the clause about no practical decoder. The reason the codes were not implemented is that the decoding algorithm was impractical (even on modern hardware) for a large number of errors. If there were a practical decoding algorithm in 1960, there was hardware to do it. Your statement agrees with that assessment, so what does it want? Does it want to keep the inadequate digital technology clause because it may have been possible to implement impractical algorithms in 1960 hardware? Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]


To summarize, the description that you have given is better placed at cyclic codes, and mathematical descriptions, if added, are better placed in the Mathematical formulation section. Cheers, and keep up the work! Nageh (talk) 18:19, 28 March 2010 (UTC)[reply]

RS article in general

My general take is that the article is broken in many ways. The original view is a bit interesting from the historical standpoint, but it is irrelevant to how RS codes are used today. The modern decoder interpolates the connection polynomial, but that is not the interpolation that RS described. Even if the Gegenbauer polynomial comment is correct, it is a pointless Easter egg. The Fourier transform description of how decoding works is a pointless, unreferenced fable that takes the reader out of algebra and into a signal processing point of view -- only to switch back to algebra at the last minute because the signal processing view doesn't really help with decoding or understanding. RS used the Huffman D transform (which we could call a z-Transform), but the insight is really for the formal power series manipulations.

Yes, the intro and the article need more work. I see moving some other sections up, but most moves also imply rework to accommodate the move. In the intro, RS's original m = today's k. I'll be doing more edits, but you can edit, too. Glrx (talk) 15:41, 31 March 2010 (UTC)[reply]

Rewrite / intro

You've put in a lot of work on the whole article. I appreciate how difficult and time-consuming that is.

Actually I am just trying to clean up the mess that was in there, just as you do. :)

I took out the comment about burst errors the first time because it isn't an essential idea of an RS code. The RS code doesn't know it's correcting a burst error.

Lin & Costello is a very notable reference, and it explicitly points out the RS code's ability to correct both random and burst errors. It discusses this from a historical point of view, starting with single and then multiple-burst error detecting/correcting codes (yes, Fire codes are one of them), and then concludes with Reed-Solomon codes, which they characterize as "burst-and-random-error-correcting codes" when viewed over bit streams.
I have also added a footnote citing an important application of RS codes as both random and burst error correcting, namely in concatenation with convolutional codes.
Burst error correction is not an essential idea of RS codes, but a notable property. Whether it is notable enough to mention it in the introduction is arguable, and I won't complain if you want it moved to the article body. But note that my intention was simply to characterize them as burst-and-random-error-correcting codes.
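The burst-and-random point can be made with one line of arithmetic: over GF(2^8) each RS symbol is 8 bits, so a burst of b consecutive bit errors touches at most ⌈b/8⌉ + 1 symbols. A throwaway sketch (burst position and length are arbitrary):

<syntaxhighlight lang="python">
def symbols_hit_by_burst(start_bit, burst_len, bits_per_symbol=8):
    """Indices of the RS symbols touched by a run of consecutive bit errors."""
    first = start_bit // bits_per_symbol
    last = (start_bit + burst_len - 1) // bits_per_symbol
    return list(range(first, last + 1))

if __name__ == "__main__":
    print(symbols_hit_by_burst(start_bit=13, burst_len=25))   # -> [1, 2, 3, 4]
    # A 25-bit burst costs at most 5 symbol errors, while 25 random bit errors
    # could land in 25 different symbols.
</syntaxhighlight>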

Now your introduction is claiming that RS is a nonbinary code, but then it flips around and starts talking about a binary representation.

Please be fair and don't misinterpret what I write. An RS code is a non-binary code because it works over symbols of any finite field. The binary representation refers to the fact that any practical code is constructed over finite fields of characteristic 2, i.e., of size 2^m. That means that each symbol can be represented by a bit string of length m. No mystery here. I am surprised that anybody can misunderstand this.

That introduces confusion, so it is not a good idea. Mentioning that RS codes are used in disk drives would be fine; explaining why RS codes are useful for burst errors requires understanding too much detail.

Which is funny, because that is the reason I gave you for moving a lot of your introductory text to the article body.

To view it another way, in the original paper, R and S discussed both random errors and erasures. They discussed mapping the symbols onto a binary alphabet. They did not discuss burst errors, so burst errors were not an issue in the design of RS codes.

The lead section is not about their original paper, but about an introduction to RS codes in general.
And BTW, there is a big difference between mapping onto and mapping over a binary alphabet. RS codes map symbols over a binary alphabet, or more formally, onto a finite field of characteristic 2 (this is the binary aspect), which means symbol sizes 2^m.
And in regards to erasures, the reason I removed them is because they are not so special. Erasures are located errors, and any error-correcting code can correct erasures. In fact, any MDS code with t check symbols can correct t erasures.
And just reading your edit summary, what does "NB in 2D barcodes, not BEC" mean? I don't understand NB, but BEC is binary erasure channel. Please note that any erasure channel is equivalent to the binary erasure channel.
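For the record, the bound behind "any MDS code with t check symbols can correct t erasures": an [n, k] MDS code has minimum distance n − k + 1 and corrects any combination of e errors and f erasures whenever
:<math>2e + f \le n - k,</math>
so it handles up to n − k erasures alone, or up to ⌊(n − k)/2⌋ errors alone.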

There are two versions of classic RS encodings. If the message is P(x), then one can send P(x) g(x) or one can send the systematic P(x) x^{n-k} + remainder. Either version sends a polynomial evenly divisible by g(x). The more common version (and the one described further down in the article) is the latter, systematic version.

Which I explained in the article, right? The systematic method just reconstructs the generator polynomial; that doesn't change how encoding and decoding work otherwise.
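To make the two encodings concrete without finite-field machinery, here is a toy sketch over the prime field GF(7) with a made-up monic generator; the field, generator, and message are arbitrary choices of mine. It only checks the one property both encodings share and that the decoder relies on: the transmitted polynomial is a multiple of g(x). (Over the characteristic-2 fields used in practice, subtracting the remainder is the same as adding it, which is why the systematic form is usually written with a "+".)

<syntaxhighlight lang="python">
Q = 7   # toy prime field; real RS codes work over GF(2^m)

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):                       # polynomial product mod Q, lowest coefficient first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % Q
    return out

def pdivmod(a, b):                    # quotient and remainder of a / b mod Q
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], Q - 2, Q)        # inverse of the leading coefficient
    for i in range(len(a) - len(b), -1, -1):
        q[i] = (a[i + len(b) - 1] * inv) % Q
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * bj) % Q
    return trim(q), trim(a)

g = [6, 0, 1]                         # made-up monic generator of degree n-k = 2
msg = [3, 1, 4]                       # message polynomial P(x), k = 3

nonsystematic = pmul(msg, g)          # P(x) * g(x)
shifted = [0, 0] + msg                # P(x) * x^(n-k)
_, rem = pdivmod(shifted, g)
rem += [0] * (len(shifted) - len(rem))
systematic = trim([(s - r) % Q for s, r in zip(shifted, rem)])

for word in (nonsystematic, systematic):
    assert pdivmod(word, g)[1] == [0]  # both are evenly divisible by g(x)
</syntaxhighlight>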

I'm watching this page, so add replies here and I will see them. Glrx (talk) 21:00, 2 April 2010 (UTC)[reply]

Done. Nageh (talk) 21:37, 2 April 2010 (UTC)[reply]
PS: I have tried to merge both our views into the lead section. I hope you can agree. Otherwise, let's discuss. :) Nageh (talk) 22:03, 2 April 2010 (UTC)[reply]

Edit collision on Rewrite / intro

You've put in a lot of work on the whole article. I appreciate how difficult and time-consuming that is.

Actually I am just trying to clean up the mess that was in there, just as you do. :)
and I made a bigger mess as I was doing it.

I took out the comment about burst errors the first time because it isn't an essential idea of an RS code. The RS code doesn't know it's correcting a burst error.

Lin & Costello is a very notable reference, and it explicitly points out the RS code's ability to correct both random and burst errors. It discusses this from a historical point of view, starting with single and then multiple-burst error detecting/correcting codes (yes, Fire codes are one of them), and then concludes with Reed-Solomon codes, which they characterize as "burst-and-random-error-correcting codes" when viewed over bit streams.
I have also added a footnote citing an important application of RS codes as both random and burst error correcting, namely in concatenation with convolutional codes.
Burst error correction is not an essential idea of RS codes, but a notable property. Whether it is notable enough to mention it in the introduction is arguable, and I won't complain if you want it moved to the article body. But note that my intention was simply to characterize them as burst-and-random-error-correcting codes.
I think we agree. Burst errors should get prominence in either properties (which should move up) or applications. In the body, interleaving can be mentioned.

Now your introduction is claiming that RS is a nonbinary code, but then it flips around and starts talking about a binary representation.

Please be fair and don't misinterpret what I write. An RS code is a non-binary code because it works over symbols of any finite field. The binary representation refers to the fact that any practical code is constructed over finite fields of characteristic 2, i.e., of size 2^m. That means that each symbol can be represented by a bit string of length m. No mystery here. I am surprised that anybody can misunderstand this.
I'm not disputing the theory but rather the presentation. A casual reader is told nonbinary in one sentence and binary in another. The audience need not be versed in coding theory.

That introduces confusion, so it is not a good idea. Mentioning that RS codes are used in disk drives would be fine; explaining why RS codes are useful for burst errors requires understanding too much detail.

Which is funny, because that is the reason I gave you for moving a lot of your introductory text to the article body.
No dispute there. My intro also got tangled in detail.

To view it another way, in the original paper, R and S discussed both random errors and erasures. They discussed mapping the symbols onto a binary alphabet. They did not discuss burst errors, so burst errors were not an issue in the design of RS codes.

The lead section is not about their original paper, but about an introduction to RS codes in general.
And BTW, there is a big difference between mapping onto and mapping over a binary alphabet. RS codes map symbols over a binary alphabet, or more formally, onto a finite field of characteristic 2 (this is the binary aspect), which means symbol sizes 2^m.
Mea culpa, I was being informal. I used the notion as a restricted alphabet used to build words. R&S used translation of K into a binary alphabet.
And in regards to erasures, the reason I removed them is because they are nothing special. Erasures are located errors, and any error-correcting code can correct erasures. And just reading your edit summary, what does "NB in 2D barcodes, not BEC" mean? I don't understand NB, but BEC is binary erasure channel. Please note that any erasure channel is equivalent to the binary erasure channel.
NB = important. Yes, any error-correcting code can correct erasures, but that buries the fact that RS corrects twice as many erasures as errors. I'm looking at the Wikipedia reader and not someone who knows all the implications of MDS.

There are two versions of classic RS encodings. If the message is P(x), then one can send P(x) g(x) or one can send the systematic P(x) x^{n-k} + remainder. Either version sends a polynomial evenly divisible by g(x). The more common version (and the one described further down in the article) is the latter, systematic version.

Which I explained in the article, right? The systematic method just reconstructs the generator polynomial; that doesn't change how encoding and decoding work otherwise.
I wasn't commenting about theory but rather about presentation that can confuse a reader. If the reader sees that an RS encoding is P(x)g(x) and then sees further down that an RS encoding is something different, he will be confused. It's OK to simplify some things.

I'm watching this page, so add replies here and I will see them. Glrx (talk) 21:00, 2 April 2010 (UTC)[reply]

Done. Nageh (talk) 21:37, 2 April 2010 (UTC)[reply]
gotta go. Thanks. Glrx (talk) 22:18, 2 April 2010 (UTC)[reply]

Unindent

(Unindent) Turns out we mostly agree. Yes, the article has multiple issues, but correcting them takes time. (Lots of time.) I am aware that some parts I started working on (section Classic view) are left unfinished. I may continue when I get to it, but no promise there (feel free to work on it).

I understand and agree that I may not assume knowledge on coding theoretic concepts from the reader. It might just take me a while sometimes to get the text right, which means rewriting by myself or by some other person several times. :)

cheers, Nageh (talk) 22:44, 2 April 2010 (UTC)[reply]

Yes, it takes enormous amounts of time. I was very impressed with the time you put into your edits. I still think t erasures is important in the introduction and shouldn't be buried, but we've both got other things to do. Glrx (talk) 23:34, 2 April 2010 (UTC)[reply]

AfD closing of Susan Scholz

Excuse me, but your AfD closing of Wikipedia:Articles_for_deletion/Susan_Scholz was premature. It should have entered a second round, as it does not conform to the wikipedia policy guidelines of notability. I would appreciate if you would reopen the AfD debate. Thanks, Nageh (talk) 10:33, 7 April 2010 (UTC)[reply]

There is nothing in the deletion policy to support a relisting (see WP:RELIST). The question of whether the article meets notability criteria is a question of fact to be established by consensus on the deletion discussion page. There was no such consensus. Stifle (talk) 10:37, 7 April 2010 (UTC)[reply]
No, there is not, but a proper reaction is something that might be expected from an admin deciding how to proceed on an AfD debate. To me it was obvious that the article was in need of further discussion, and a proper reaction would be to relist the discussion in order to try coming to a consensus. You also ignored an ongoing discussion on her claimed notability according to WP:PROF claim #1. I would again appreciate it if you would reconsider your actions. Thanks, Nageh (talk) 10:42, 7 April 2010 (UTC)[reply]
Admins are bound to operate in accordance with policies and guidelines, and are not entitled to make up new ones on the fly. I'm happy with my no-consensus closure and you're welcome to open a deletion review if you feel it was not correct. Stifle (talk) 10:45, 7 April 2010 (UTC)[reply]

Thank you

Nageh, thank you for your letter. I appreciate the time and thought you gave to what a quick browsing shows to be substantive and constructive writing. After I give it serious study, is this the proper place to reply to you again? Vejlefjord (talk) 21:11, 8 April 2010 (UTC)[reply]

Thank you for the appreciation. Yes, this place is fine for your further comments. Nageh (talk) 21:33, 8 April 2010 (UTC)[reply]


MathJax in Wikipedia

Thank you so much for sharing this. Now let's think about the following.

  • We need an easy way to update to the most recent version. For this, we should prepare a patch for MathJax that enables us to control from the config file the changes you made. This way, as few changes as possible are required in the actual source code.
  • We need to hunt bugs. The most common reason why a page wouldn't render fully seems to be a bug in MediaWiki. In my sandbox you can see that ':' and the math tags do not understand each other (with 'display as latex' turned on), which makes it impossible for MathJax to match the opening and closing dollar signs.
  • Any ideas on how to provide the webfonts on wikipedia so that clients don't need to install anything?
  • We should inform the developers of MathJax and mediawiki.

What do you think? ylloh (talk) 14:05, 13 April 2010 (UTC)[reply]

I'm busy for the rest of the week, but on the week-end I'll try to contact the mediawiki developers about this. ylloh (talk) 09:50, 14 April 2010 (UTC)[reply]

Thanks. Btw, while the number of files for the image fonts is enormous, there are only 21 files for the svg fonts. Btw2, the fonts have received an update on the mathjax svn yesterday. ylloh (talk) 13:45, 14 April 2010 (UTC)[reply]

Wow! That's fast! ylloh (talk) 09:27, 15 April 2010 (UTC)[reply]

I have marked you as a reviewer

I have added the "reviewers" property to your user account. This property is related to the Pending changes system that is currently being tried. This system loosens page protection by allowing anonymous users to make "pending" changes which don't become "live" until they're "reviewed". However, logged-in users always see the very latest version of each page with no delay. A good explanation of the system is given in this image. The system is only being used for pages that would otherwise be protected from editing.

If there are "pending" (unreviewed) edits for a page, they will be apparent in a page's history screen; you do not have to go looking for them. There is, however, a list of all articles with changes awaiting review at Special:OldReviewedPages. Because there are so few pages in the trial so far, the latter list is almost always empty. The list of all pages in the pending review system is at Special:StablePages.

To use the system, you can simply edit the page as you normally would, but you should also mark the latest revision as "reviewed" if you have looked at it to ensure it isn't problematic. Edits should generally be accepted if you wouldn't undo them in normal editing: they don't have obvious vandalism, personal attacks, etc. If an edit is problematic, you can fix it by editing or undoing it, just like normal. You are permitted to mark your own changes as reviewed.

The "reviewers" property does not obligate you to do any additional work, and if you like you can simply ignore it. The expectation is that many users will have this property, so that they can review pending revisions in the course of normal editing. However, if you explicitly want to decline the "reviewer" property, you may ask any administrator to remove it for you at any time. — Carl (CBM · talk) 12:33, 18 June 2010 (UTC) — Carl (CBM · talk) 13:20, 18 June 2010 (UTC)[reply]

RfD

I went ahead and RfDed pre coding. A1 was not the correct speedy tag, and because of its age the correct venue is RfD. I made an entry here. Thanks for the follow up. NativeForeigner Talk/Contribs 19:21, 22 June 2010 (UTC)[reply]


General Number Field Sieve

You recently undid my revision 377392928 on the general number field sieve because it does not comply with Wikipedia's definition of L. I just looked at my "Prime Numbers: A Computational Perspective" book and also my "Development of the Number Field Sieve" book, and they use the same notation I use, so this suggests that the problem was not with my general number field sieve correction, but instead with Wikipedia's L definition. I'm also fairly confident that Lenstra and Pomerance were the ones who standardized this notation, so Wikipedia's big O around the front is non-standard and should be fixed. Let me know if you concur. —Preceding unsigned comment added by Scott contini (talkcontribs) 23:42, 8 August 2010 (UTC)[reply]

UPDATE: You're right that Handbook of Applied Cryptography uses the big O on page 60, Example 2.6.1, and this is not listed as an error on the HAC errata web page. I maintain: (i) the Big O on the outside has no effect and does not belong there -- so this can be interpreted as an error which has not yet been reported to the HAC authors; (ii) I think the notation came from either Lenstra or Pomerance, and they do not use Big O. A few sources: pg 358 of the Encyclopedia of Cryptography and Security (article written by Arjen Lenstra), any article in the Development of the Number Field Sieve book, and Crandall and Pomerance's book. I'm also going to email Arjen Lenstra on this. More news later. Scott contini (talk) 00:57, 10 August 2010 (UTC)[reply]

Some history: Prior to the number field sieve, the L-notation had only one parameter, because the α was always 1/2 for the algorithms they were interested in. I don't yet know when the original L-notation was introduced, but Pomerance used it in his 1982 seminal paper "Analysis and Comparison of some Integer Factoring Algorithms", which can be downloaded from his website. Pomerance did not use any big-O, only little o, and in section 2 he explains properties of the little o that make it evident that big O is not needed (although he does not explicitly say this, it is implied). When the number field sieve was invented, they no longer had the 1/2 in the exponent, so they then added the second parameter to the notation so as to include all subexponential-type algorithms. This was a combined analysis by several people including Pomerance and H. Lenstra. It's all in the Development of the Number Field Sieve book. I'm still awaiting a reply from Arjen Lenstra, but based upon this I can pretty confidently say that Handbook of Applied Cryptography (HAC) and Wikipedia are using non-standard notation, and the big O can be eliminated. Let me know your thoughts. Scott contini (talk) 03:20, 10 August 2010 (UTC)[reply]
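For readers following the thread, the notation in question, in the form without the outer Big O that Scott describes, is
:<math>L_n[\alpha, c] = e^{(c + o(1))\,(\ln n)^{\alpha}\,(\ln \ln n)^{1-\alpha}},</math>
with the original one-parameter version corresponding to α = 1/2; for the general number field sieve, α = 1/3 and c = (64/9)^{1/3}.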

Well, without the Big O it would actually equal Big Theta (Θ). Since the complexity usually refers to the worst case, you'd actually have to use the Big O outside to describe the running time of the algorithm. So either way you'd say it is O(L(...)) or simply L(...), depending on which definition of L you use. Note that the small o inside of L does not take care of that. Nageh (talk) 08:45, 10 August 2010 (UTC)[reply]
Sorry, I have to disagree with you. I also got my reply from Arjen Lenstra which agrees with me. He writes "But O is nonsense if o(1) in exponent". Feel free to email me at scott_contini at yahoo and I will forward his reply to you. But really, the argument is very simple. <math>f(x) = O(g(x))</math> means there is a constant <math>k</math> such that the function <math>f(x)</math> asymptotically converges to no more than <math>k \cdot g(x)</math>. Now replace <math>g(x)</math> with <math>L_n[\alpha, c] = e^{(c+o(1))(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}}</math>. Observe:
:<math>k \cdot e^{(c+o(1))(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}} = e^{(c+o(1))(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha} + \ln k} = L_n[\alpha, c]</math>
because <math>\ln k</math> is <math>o\left((\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}\right)</math> (e.g. <math>\ln k</math> is asymptotically negligible in comparison to <math>(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}</math> -- the former is a constant, the latter is not). Scott contini (talk) 12:35, 10 August 2010 (UTC)[reply]
You misunderstood me. It is clear that the small o takes care of multiplicative, exponential, and additive constants. What it does not take care of are functions with lower (different) complexity, i.e., smaller α. For example, a polynomial (in ln(n)) or a constant is not in the set of functions complying with L with any α>0, assuming the definition without the Big O. However, it is in O(L(...)). So while o(1) will ultimately converge to 0, for any c and α>0 the function L[α,c] is nonetheless a superpolynomial function, and L is thus a set of such superpolynomial functions. Then, the Big O says nothing but that the worst-case complexity is ultimately superpolynomial (but nothing about e.g. average-case complexity). And this is where I see the point: the L notation actually refers to the expected complexity as n tends to infinity (and thus the average complexity). And for this you truly don't need a Big O outside. (And thus the definition of L without O becomes more reasonable.)
Irrespectively, I would very well like to see Lenstra's response as well. I'll send you an email. Thanks! Nageh (talk) 13:47, 10 August 2010 (UTC)[reply]
I see your point now. However, (i) the standard notation does not use big O, and (ii) the running times for the quadratic sieve, number field sieve, and all of these combination-of-congruences algorithms are the actual running times -- not upper bounds. That is, the running time of the number field sieve is indeed (for the <math>c</math> defined in the algorithm):
:<math>L_n\left[\tfrac{1}{3}, c\right] = e^{(c + o(1))(\ln n)^{1/3}(\ln\ln n)^{2/3}}.</math>
So, putting the big O on the outside of this is giving the impression that it is an upper bound when in fact that is the running time for every suitable number that is fed in. In general, if one wants to indicate that it is an upper bound rather than a proved (under certain assumptions that NFS sieved numbers/norms have smoothness probabilities similar to randomly chosen numbers of the same size) running time, then they can add the big O on the outside. That is, they can say <math>O(L_n[\alpha, c])</math>. But if you define the L-notation to have the big O on the outside, then algorithms that have that as their actual run time (not upper bound) are not able to indicate that. Such an example is the number field sieve. So in addition to points (i) and (ii), I add that (iii) the standard definition (which is not the Wikipedia/HAC definition) is more useful. I'm making the same argument on Talk:L-notation. Scott contini (talk) 23:56, 10 August 2010 (UTC)[reply]
Scott, we're concluding along the same lines. I totally agree with you here, as this is what I tried to say in my previous comment. The L in the GNFS refers to the expected complexity, and thus not to an upper bound, so the Big O is inappropriate. From this point of view I also agree that it makes more sense to define the L without the Big O outside, and use O(L(...)) when you truly want to refer to an upper bound.
What I would like to see now are suitable references, both for personal interest and for including at the L notation Wikipedia article. Nageh (talk) 07:45, 11 August 2010 (UTC)[reply]
Thanks. Glad we are in agreement. Scott contini (talk) 12:00, 11 August 2010 (UTC)[reply]

Vejlefjord: report

Nageh, I gave your (April 2010) helpfully specific critique and guidelines re my first Wikipedia try with “Theodicy and the Bible” the serious study it deserved. Motivated by your response (along with Moonriddengirl's interest), I have done a major rewriting that is posted on http://en.wikipedia.org/wiki/User:Vejlefjord with the title “Major rewriting of ‘Theodicy and the Bible’.” Would you be so kind as to look at the rewrite and tell me whether you consider “the way in which it is presented” (to use your words) Wiki-OK? Trying to write Wiki-style is different (and more difficult for me) than my experience with books and journal articles. Thanks, Vejlefjord Vejlefjord (talk) 02:29, 15 August 2010 (UTC)[reply]

Thanks for your invitation to review, I hope to get to it within the next (several) days. From a first quick analysis, the article seems more accessible now, but there are still a couple of issues left, and in part the presentation got a bit bullet-style. Anyway, I intend to come up with some concrete suggestions for further improving the article after reviewing. Nageh (talk) 09:26, 15 August 2010 (UTC)[reply]
I moved the stuff a bit around. The article is now here. I'm not really active on Wikipedia, so your help is much needed. I think the article has potential and it would be a shame if it couldn't be used. Vesal (talk) 12:24, 21 August 2010 (UTC)[reply]
Thanks for the notice and the initial effort. Right, I absolutely intend to get it back into article space at some point. I'm a little bit restricted time wise as well currently, but I will see what I can do. Nageh (talk) 17:06, 21 August 2010 (UTC)

Clipping path service

Unfortunately, the article clipping path service has been redirected to clipping path according to the administrative decision. Well, obviously I respect and honor the decision. In this situation, can I edit the clipping path article by adding content, Sir? Thanks for your consideration. Md Saiful Alam (talk) 03:54, 27 August 2010 (UTC)[reply]

Thank you for editing the clipping path service section in the clipping path article. Md Saiful Alam (talk) 08:31, 27 August 2010 (UTC)[reply]