User:Eltwarg

From Wikipedia, the free encyclopedia

Hello

I am a guy from central Europe, having a good time with my family, my exciting hobbies, and my job.

What I like on Wikipedia

From time to time I search on Google for some information... and usually there is a Wikipedia link in the list of results, right on the first page. In such a case I go there - it is usually the most comprehensive source of information, with a wide net of related topics easily accessible.

It is explained somewhere on the Wikipedia pages how it can work, but I still could hardly believe it really works so perfectly. I was really surprised how my contribution to a math topic "self-improved" within a few days into a form that, on the one hand, was hard to call my own text, but on the other hand was something I was proud of! That is GREAT.

I do not contribute to Wikipedia much - the main reason is that whenever I feel (for a while) that I could share my knowledge on some topic, high-quality information is already published, and I usually decide it is better not to touch it with my dirty fingers ;-)

When editing pages I very much like the TeX support for math (and symbols in general). Thanks to Michael Hardy, who kicked me into using it properly.

Wikipedia is simply something I would not have expected could "happen" - especially after the painful experience of the overall devastation of the internet's cultural value in recent years...

What I do not like on Wikipedia

It is great on the natural sciences, weaker in the humanities, and still very poor in everyday-life areas, e.g. cooking, fishing, ... and what is worse, I miss information related to applied science here (e.g. industrial processes).

I understand that these things are less public. But I am also afraid the main reason is that people from industry, or people with a lot of hobbies, simply do not find the time to write for Wikipedia...

In general, it seems to be becoming a playground for non-human perfectionists (especially in the math section ;-). Probably I should take it all much more seriously - the problem could be that I am too "fun-oriented" (with a little son, you must be ;).

Anyway, I think that less "academic" paragraphs - even if they occur in more "academic" topics - should not be deleted just because they are not perfect. I think Wikipedia should preserve diversity of opinions as well as diversity of forms... Do not be so obsessive! ;-)

Finally, I detect too much "American mentality" in it (which matches what I have written above) and too little of other mentalities...

Some things I have not found a good place for yet...

The distribution of the average value of a vector of k random variables with the continuous standard uniform distribution is very nice for presenting convolution in probability theory, and also for the CLT.

The probability density function of a random variable X with the standard uniform distribution is:


  f(x)=\left\{\begin{matrix}
  1 & for\ x \in [0, 1), \\  \\
  0 & otherwise  
  \end{matrix}\right.

Note: Some people are very solicitous about the values at the limits 0 and 1: should they both equal 0, or 1, or even 1/2?! Let me say that for the probability of the examined event this does not matter at all... The values you choose should just be nice (i.e. one of those listed above, and if one is 1/2 then the other should be the same). You should choose the combination that best fits the context of the more complex problem you are solving with the distribution.

Let's have k random variables like this. What is the probability density function f_k of their average value? The question shows how we go from an "unreliable" uniformly distributed event to something "much more predictable" when "doing the same thing" several times (if we get the chance) and every try counts (with the same weight)... In real life we can sometimes choose the best result, but let that not be the case here ;-) Let's also assume our events are independent (in real life we most often at least learn over time).

Let's start with 2 events. We can use the convolution of two probability density functions here. It represents the probability density function of the sum of the two variables (for the sum of k variables in general we will denote it \phi_k). Let's denote our basic probability density function (pdf) f_1, as it is the pdf of the "average value" of one random variable.

From the definition of convolution we have:

(f * g )(x) = \int f(\xi) g(x - \xi)\, d\xi.

In our very special case we get:

\phi_2(x) = (f_1 * f_1 )(x) = \int_{-\infty}^{+\infty} f_1(\xi) f_1(x - \xi)\, d\xi.
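If you want to see this integral in action before evaluating it analytically, a crude Riemann sum is already enough. The following is a stdlib-only Python sketch (the grid size N and the sample points are arbitrary choices for illustration, not part of the derivation):

```python
# Riemann-sum approximation of phi_2 = (f_1 * f_1)(x), stdlib only.
N = 1000          # grid points per unit interval (arbitrary choice)
h = 1.0 / N       # grid spacing

def f1(x):
    """pdf of the standard uniform distribution."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def phi2(x):
    """Approximates the convolution integral at x by a Riemann sum."""
    return h * sum(f1(j * h) * f1(x - j * h) for j in range(N))

print(phi2(0.5), phi2(1.5), phi2(2.5))  # roughly 0.5, 0.5, and 0.0
```

The printed values already suggest the piecewise-linear "triangular" shape worked out below.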

It is apparent that we get 0\ for\ x \notin [0,2).
The integration intervals [0, x)\ for\ x \in [0,1) and [x-1, 1)\ for\ x \in [1,2) are those where f_1(\xi) f_1(x - \xi) = 1 \neq 0.
Thus we get:


  \phi_2(x)=\left\{\begin{matrix}
  \int_0^x d\xi,     & = & x     & for\ x \in [0,1), \\  \\
  \int_{x-1}^1 d\xi, & = & 2 - x & for\ x \in [1,2), \\  \\
  0                  & = & 0     & otherwise  
  \end{matrix}\right.
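The closed form can also be checked by simulation: by the formula above, the probability that the sum of two standard uniforms falls in [0.4, 0.6) is \int_{0.4}^{0.6} x\, dx = 0.1. A quick Monte Carlo sketch (sample size and seed are arbitrary choices):

```python
import random

# Monte Carlo check of phi_2; sample size and seed are arbitrary choices.
random.seed(42)
n = 100_000
sums = [random.random() + random.random() for _ in range(n)]

# By the closed form, P(0.4 <= X1 + X2 < 0.6) equals the integral of
# x dx over [0.4, 0.6), i.e. (0.36 - 0.16) / 2 = 0.1.
p_emp = sum(0.4 <= s < 0.6 for s in sums) / n
print(p_emp)  # should be close to 0.1
```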

We see that a regular interval division has been introduced into our formulas, so it is a good time to improve the notation a bit. Let's define \phi_{k, i} in the following way:

\forall{i \in \N, x \in [i, i+1)}:\ \phi_k(x)=\phi_{k,i}(x).

Note that we do not care about the values outside the i-th interval, as those will always be multiplied by 0 or will not be applied at all. So we get:

\phi_{2,0}(x)=x.
\phi_{2,1}(x)=2 - x.

We can also see that:

\forall{k \in \N, i \notin \{0, ... , k-1\}}:\ \phi_{k,i}(x) = 0.

Let's now think about the average of k random variables. From the convolution we can get a general inductive formula for the sum:

\phi_k(x)=(\phi_{k-1} * \phi_1)(x) = \int_{-\infty}^{+\infty} \phi_{k-1}(\xi) \phi_1(x - \xi)\, d\xi.

Note that we use \phi_1 \equiv f_1 (as the sum and the average value of a single variable are the same) just to make the formula more "visually consistent". Again, it is apparent that

\forall{x \notin [0,k)}:\ \phi_{k-1}(\xi) \phi_1(x - \xi) = 0

and

\forall{i \in \N, x \in [i,i+1)}: \phi_1(x - \xi) = 1 \neq 0 \equiv \xi \in [x - 1, x) (\subset [i-1, i+1))

and so \forall{x \in \R} the convolution operates only on two "segments" of \phi_{k-1}, with overlap sub-intervals [x-1,i) and [i,x), where i \in \N \land i \in [x-1,x).

In particular:

\phi_{k,i}(x)=\int_{x-1}^i \phi_{k-1,i-1}(\xi) d\xi + \int_i^x \phi_{k-1,i}(\xi) d\xi.
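This recurrence can be carried out exactly by a small program that represents each \phi_{k,i} as a list of rational polynomial coefficients. The following Python sketch uses exact fractions; the helper names (antideriv, shift, padd, evalp, phi) are mine, not from any library:

```python
from fractions import Fraction

def antideriv(p):
    """Coefficient list of the antiderivative of p, constant term 0."""
    return [Fraction(0)] + [c / (j + 1) for j, c in enumerate(p)]

def shift(p, a):
    """Expand p(x + a) as a polynomial in x (Horner with x + a)."""
    out = []
    for c in reversed(p):
        new = [Fraction(0)] * (len(out) + 1)
        for j, cj in enumerate(out):
            new[j + 1] += cj          # multiply by x
            new[j] += cj * a          # ... plus a
        new[0] += c
        out = new
    return out

def padd(p, q):
    """Coefficient-wise sum of two polynomials."""
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def evalp(p, x):
    """Evaluate p at x by Horner's scheme."""
    r = Fraction(0)
    for c in reversed(p):
        r = r * x + c
    return r

def phi(k):
    """Polynomials phi_{k,i} for i = 0..k-1, via the recurrence
    phi_{k,i}(x) = int_{x-1}^i phi_{k-1,i-1} + int_i^x phi_{k-1,i}."""
    rows = [[Fraction(1)]]                    # phi_{1,0}(x) = 1
    for m in range(2, k + 1):
        prev, cur = rows, []
        for i in range(m):
            acc = [Fraction(0)]
            if 0 <= i - 1 < len(prev):        # int from x-1 to i
                P = antideriv(prev[i - 1])
                acc = padd(acc, [evalp(P, i)])
                acc = padd(acc, [-c for c in shift(P, -1)])  # -P(x-1)
            if i < len(prev):                 # int from i to x
                P = antideriv(prev[i])
                acc = padd(acc, P)
                acc = padd(acc, [-evalp(P, i)])
            cur.append(acc)
        rows = cur
    return rows

print(phi(3)[1])  # phi_{3,1}: -3/2 + 3x - x^2, i.e. 3/4 - (x - 3/2)^2
```

Each entry is a coefficient list [a_0, a_1, ...] in increasing powers of x, so this also produces the a_{k,i,j} mentioned at the end of this article.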

To be precise, we should say that \phi_k represents the probability density function of the sum of our random variables. To get f_k for the average value, we just have to apply the following simple transformation:

f_k(x)=k\phi_k(kx).

which also means that

\forall{x \in [\frac{i}{k}, \frac{i+1}{k})}:\ f_k(x)=f_{k,i}(x)=k\phi_{k,i}(kx).
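As a sanity check of this transformation: for k = 2 it gives f_2(x) = 2\phi_2(2x), i.e. 4x on [0, 1/2) and 4 - 4x on [1/2, 1). A quick simulation of the average of two uniforms agrees (sample size and seed are arbitrary choices):

```python
import random

# Monte Carlo check of f_2(x) = 2 * phi_2(2x); the piecewise form below
# follows from phi_2. Sample size and seed are arbitrary choices.
random.seed(7)
n = 100_000
avgs = [(random.random() + random.random()) / 2 for _ in range(n)]

def f2(x):
    """f_2(x) = 2 * phi_2(2x): 4x on [0, 1/2), 4 - 4x on [1/2, 1)."""
    if 0.0 <= x < 0.5:
        return 4.0 * x
    if 0.5 <= x < 1.0:
        return 4.0 - 4.0 * x
    return 0.0

# P(average < 1/4) = integral of 4x dx over [0, 1/4) = 2 * (1/4)**2 = 1/8
p_emp = sum(a < 0.25 for a in avgs) / n
print(p_emp)  # should be close to 0.125
```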

Subsequent application of the inductive formula gives the following results:


  \begin{matrix}
   \phi_{k,i}(x) & i & 0 & 1 & 2 & 3 \\
   k \\
   1             &   & 1 \\
   2             &   & x & 2-x \\
   3             &   & \frac{x^2}{2} & \frac{3}{4}-(x-\frac{3}{2})^2 & \frac{(3-x)^2}{2} \\
   4             &   & \frac{x^3}{6} & \frac{2}{3}-2x+2x^2-\frac{x^3}{2} & \frac{2}{3}-2(4-x)+2(4-x)^2-\frac{(4-x)^3}{2} & \frac{(4-x)^3}{6}\\
  \end{matrix}.

We see that

\phi_{k,i} = \sum_{j=0}^{k-1} a_{k,i,j}x^j.

The last task of this article will be to find a formula for a_{k,i,j}...


Some Wikipedia articles I edited

Oh no, I do not think it makes sense to classify articles by author here on Wikipedia. I cannot imagine you would want to read something just because it was written by me. I expect you choose articles where you can find the information you need instead...

Thanks

convert to SVG

Will you please use the program with which you made the image AlphaCentauri AB Trajectory.gif to make a vectorial version, preferably SVG, and after making the new image page on wiki commons, use this tag on your old image page: Template:Vector_version_available. --DynV (talk) 17:22, 24 May 2008 (UTC)