# User talk:Ylloh

Welcome!

Hello, Ylloh, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful:

I hope you enjoy editing here and being a Wikipedian! Please sign your messages on discussion pages using four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question and then place {{helpme}} before the question on your talk page. Again, welcome! Oleg Alexandrov (talk) 03:00, 24 November 2007 (UTC)

## Style note

And a small note. Per WP:MoS#Sections and headings, one should use lowercase in section headings, like this. Cheers, Oleg Alexandrov (talk) 03:00, 24 November 2007 (UTC)

## Coding theory

Hello and thank you very much for your valuable contributions to the articles on various codes for error correction and detection! The reason that I removed category:Coding theory in Walsh code was simply that if the subject is related to Error detection and correction and not to any other subcategory (such as Data compression), there is no use having the article sorted in both categories. In fact, I think most if not all articles in the Coding theory category should be moved to the appropriate subcategory. What's your opinion on this? Isheden (talk) 12:57, 14 October 2010 (UTC)

Thank you! I don't have a strong opinion on whether logical parent categories should be listed too, and would support your rule of not doing so for now. I'm sure there's a Wikipedia guideline somewhere, though. For me, it was just important to add those articles to some sensible category. ylloh (talk) 13:14, 14 October 2010 (UTC)

Hello. After you moved Walsh code to Walsh-Hadamard code, I moved it from Walsh-Hadamard code to Walsh–Hadamard code. That is the form required by WP:MOS for this sort of thing. (I.e. an en-dash, not a hyphen.) Michael Hardy (talk) 06:41, 30 July 2011 (UTC)

Thanks! ylloh (talk) 07:57, 30 July 2011 (UTC)

## Pseudo-random generator

I was a bit confused for a moment because of the existence of three related articles: pseudorandom generator, pseudorandom number generator, and cryptographically secure pseudorandom number generator. Surprisingly, the first article is never mentioned in the other two. Do you have some good ideas on how the three articles could be better interlinked? Nageh (talk) 18:00, 6 February 2012 (UTC)

Yeah, it seems that some cleanup is necessary here. Unfortunately, the three articles use different language and formalizations. I guess there should be one main article with the common concepts, and the others should link back to that main article. ylloh (talk) 18:54, 6 February 2012 (UTC)

## Hoeffding's inequality

I don't know how it looks with MathJax, but without it your expansion earlier today results in several red error messages beginning "Failed to parse (lexing error): \Pr\Big(\text{$n$ coin tosses...". I don't think you can use $ signs inside <math> tags. Qwfp (talk) 20:31, 21 February 2012 (UTC)

This looked fine in MathJax. I corrected it. Thanks! ylloh (talk)
That was quick! That has indeed fixed it. Thanks, Qwfp (talk) 20:45, 21 February 2012 (UTC)

## expansion rates, again

Hi Ylloh,

I hope you are not mad at me because of my contradicting you so much. I think the root cause for our disagreement is what we think math articles on wikipedia are for.

Completely aside from that, I would like to ask you a favor, since you are at a much better level in graph theory than I am (BTW, you should not call me an "expert" all the time; I'm not). Here is my problem:

As you know, for a d-regular graph, we have the relation

$h_{\text{out}}(G) \leq h(G) \leq d \cdot h_{\text{out}}(G)$

I have the intuition that for any graph, we have

$h_{\text{out}}(G) \leq h(G) \leq \Delta \cdot h_{\text{out}}(G)$

where $\Delta$ is the maximum degree of the graph.

Is that relation wrong? If it is true, I did not find it in any source, and you know how I hate unsourced material... ;-)

I'd be thankful if you could help me on this. --MathsPoetry (talk) 21:44, 28 March 2012 (UTC)

This is in fact true, since every set $S$ has at most $\Delta \cdot |S|$ neighbors. It is a basic fact that people use all the time without explicitly stating it, which is why you are having a hard time finding a source. ylloh (talk) 21:52, 28 March 2012 (UTC)
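The bound can be sketched in a few lines (notation assumed here: $\partial S$ is the set of edges leaving $S$, and $\partial_{\text{out}} S$ is the set of vertices outside $S$ adjacent to $S$):

```latex
% Every edge leaving S ends at an outer-boundary vertex, and each such
% vertex receives at least 1 and at most \Delta of these edges, so
|\partial_{\text{out}} S| \;\le\; |\partial S| \;\le\; \Delta \cdot |\partial_{\text{out}} S| .
% Dividing by |S| and taking the minimum over all S with |S| \le |V|/2:
h_{\text{out}}(G) \;\le\; h(G) \;\le\; \Delta \cdot h_{\text{out}}(G) .
```

For a $d$-regular graph $\Delta = d$, which recovers the sourced relation above.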
Thanks for that explanation, Ylloh. Thank you too for that discussion about the expander rates formula; I hope I was not too harsh with you, which was not my intention. I am finished with the English Wikipedia; I will close my discussion page and user page right now. There's no point in trying to help people who don't want to be helped. No need to answer here, I won't read your answer. --MathsPoetry

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Small-bias sample space, you added a link pointing to the disambiguation page Uniform distribution (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:29, 20 September 2012 (UTC)

## Merger

You seem to have merged Walsh-Hadamard code into Hadamard code with no discussion or explanation. Please explain your reasons for the merger, and why you chose not to discuss it. Deltahedron (talk) 17:39, 21 February 2013 (UTC)

Please see my comment on Talk:Walsh–Hadamard_code. Thanks! ylloh (talk) 19:33, 21 February 2013 (UTC)
I have responded there. Deltahedron (talk) 19:42, 21 February 2013 (UTC)

## Reed–Solomon article - simple encoding procedure

I'm looking at the simple encoding procedure section and I seem to be missing something about the codewords being generated, specifically

$C(x) = x \cdot A$

Assume the message $x = (1, 0, 0, \ldots, 0)$; then $C(x) = (1, 1, 1, \ldots, 1)$, which clearly isn't a multiple of a generator polynomial. The original message can be restored using

$x = C(x) \cdot A^{-1}$

but how is correction performed on such a codeword? Rcgldr (talk) 03:49, 31 March 2013 (UTC)

Reading that section and the next section again, apparently "correction" means trying a huge number of combinations to generate the equivalent of a histogram, and then choosing the most popular result. So I assume that encoding by multiplying by, or taking the remainder modulo, a generator polynomial wasn't done until later, and that since the 1980s (or perhaps earlier) almost all implementations have used the modulo method: $(\text{original message}) \cdot x^t \bmod (\text{generator polynomial})$. Rcgldr (talk) 23:08, 1 April 2013 (UTC)
You are right that the method $x = C(x) \cdot A^{-1}$ only works for received words that have not been corrupted. A simple (but inefficient) decoder that does correct errors up to half the minimum distance is as follows: for all possible messages, check whether the corresponding codeword is close to the received word; if so, we have found the original message. This is the theoretical decoder mentioned in the Reed–Solomon article and is very slow. The efficient methods mentioned in the article also work in the "simple encoding procedure" setting, but their description becomes a bit different since the encoding is a bit different. ylloh (talk) 15:10, 2 April 2013 (UTC)
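The theoretical decoder described above can be sketched in a few lines. This is an illustrative toy, not the article's construction: the field GF(7), the evaluation points, and all names below are assumptions. Encoding evaluates the message polynomial at fixed points (the "simple encoding procedure"), and decoding exhaustively searches all messages for the nearest codeword.

```python
# Toy evaluation-style Reed-Solomon code over GF(7) with the brute-force
# "theoretical decoder". Parameters and names are illustrative assumptions.
from itertools import product

P = 7                      # prime, so arithmetic mod P is a field, GF(7)
K = 2                      # message length: polynomials of degree < K
POINTS = [1, 2, 3, 4, 5]   # n = 5 distinct evaluation points
                           # minimum distance d = n - K + 1 = 4,
                           # so up to (d-1)//2 = 1 error is correctable

def encode(msg):
    """Evaluate the message polynomial sum_i msg[i] * a^i at each point."""
    return [sum(c * pow(a, i, P) for i, c in enumerate(msg)) % P
            for a in POINTS]

def hamming(u, v):
    """Number of positions where two words differ."""
    return sum(x != y for x, y in zip(u, v))

def decode(received):
    """Theoretical decoder: try every message, keep the nearest codeword."""
    return min(product(range(P), repeat=K),
               key=lambda m: hamming(encode(m), received))

msg = [3, 5]
word = encode(msg)              # [1, 6, 4, 2, 0]
word[0] = (word[0] + 1) % P     # corrupt one symbol
assert list(decode(word)) == msg
```

The search inspects all $|M| = P^K = 49$ candidate messages, which is exactly why this decoder is only of theoretical interest; efficient algorithms avoid the exhaustive search.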

## Notation Used for Codes

Thanks for leading me to the page Block_code. I think the definition of block codes presented there is not the most general one. The most general (and most widely accepted) definition of a block code is an injective map $C : M \to \Sigma^n$, where $M$ is the message set; such a code is represented as an $(n, |M|, d)$ code. Of course, $M$ can be $\Sigma^k$. From my understanding, when speaking in general, the term 'message length' does not make sense. When the set $M$ is closed under linear operations and the map is linear, the code is linear. An alternative definition of a code is the image set of $C$, in which case linearity just means that this set should be linear.

You get the definition that you mention from the definition in Block_code by setting $\Sigma=M$ and $k=1$. I'm not sure which definition is most widely accepted; that would require some literature evidence. In theoretical computer science, I have never seen any definition other than the one in Block_code. ylloh (talk) 13:39, 24 April 2013 (UTC)
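For concreteness, the reduction as stated reads (a sketch, using the notation of the two definitions above):

```latex
% The Block_code definition with \Sigma = M and k = 1 specializes as
C : \Sigma^k \to \Sigma^n
\quad\longrightarrow\quad
C : M \to M^n ,
% which is the general definition C : M \to \Sigma^n in the special
% case where the alphabet \Sigma equals the message set M.
```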