
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 202.177.147.188 (talk) at 15:08, 13 June 2007 (abstract algebra). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.



June 7

LaTeX question

Not sure if it belongs here or in the Computing Reference desk, but how do I label equations in LaTeX? Say I have this:

...(1)
...(2)

How do I get the labels ...(1) and ...(2) in LaTeX? x42bn6 Talk Mess 21:15, 7 June 2007 (UTC)[reply]

Do you mean genuine LaTeX or the hamstrung approximation used for Wikipedia? --KSmrqT 22:11, 7 June 2007 (UTC)[reply]
Genuine, although the latter could become useful in the future. x42bn6 Talk Mess 22:15, 7 June 2007 (UTC)[reply]

Try this kind of stuff:

\begin{equation}
\, 5x+2y=3
\label{Equation1}
\end{equation}%
\begin{equation}
\, 4x+7y=24 \label{MySecondEquation}
\end{equation}%

Of course, you can use these labels later to refer to these equations. Never tried this for wikipedia. deeptrivia (talk) 22:23, 7 June 2007 (UTC)[reply]

I'm using \begin{displaymath} rather than \begin{equation} because I am using lots of chunks of mathematics that don't need referring back, plus I don't want default numbering. I am a LaTeX newcomer, so I don't mind using \begin{equation} if I can remove the default numbering, and stick my own labels in. x42bn6 Talk Mess 22:27, 7 June 2007 (UTC)[reply]

I've always liked:

\begin{eqnarray}
5x + 2y & = & 3\\
4x + 7y & = & 24
\end{eqnarray}

24.8.49.212 01:19, 8 June 2007 (UTC)[reply]

If you do what the anon above suggested, every equation will have a number on it (not always desirable). If you want some to have numbers and some not to have numbers, just append "\nonumber" to the ones you want to remain unlabeled. You must still "label" and "ref" them in the way described by deeptrivia above. –King Bee (τγ) 12:59, 9 June 2007 (UTC)[reply]
  • By the way, to avoid the numbering situation in general, you can use \begin{eqnarray*} and \end{eqnarray*}. It turns off numbering completely. Also, the asterisk has the same effect on \begin and \end {figure} and so forth. Try it out! –King Bee (τγ) 13:16, 13 June 2007 (UTC)[reply]
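Putting the suggestions in this thread together, a minimal genuine-LaTeX sketch (label names are arbitrary) might look like this:

```latex
\documentclass{article}
\begin{document}

\begin{eqnarray}
5x + 2y & = & 3 \label{eq:first} \\
4x + 7y & = & 24 \nonumber
\end{eqnarray}

By equation~(\ref{eq:first}), \ldots

\end{document}
```

The first line gets a number, which \ref{eq:first} picks up later; \nonumber suppresses the number on the second line; and eqnarray* would suppress numbering for the whole display.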


June 8

Inverse Laplace transform by complex analysis

To find the inverse Laplace transform of F(z) via the complex inversion formula, we have to assume |F(z)| ≤ m/|z|^k for all large |z|. Theorem 8.2.1 of J.E. Marsden and M.J. Hoffman, Basic complex analysis, 3rd edition, Freeman, 1999 merely assumes k > 0 and for quotients of polynomials k ≥ 1. Also A. D. Wunsch, Complex variables, 3rd edition, Addison-Wesley, 2005 assumes nothing more than k > 0 but gives a reference to find a proof. The old book R.V. Churchill, Operational Mathematics, 2nd Ed., ACTUALLY assumes k > 1 in the proof of Theorem 5 of its section 63.

Questions: Where can I find a proof for the case k = 1 or better k > 0? Did I overlook something in the above standard textbooks? Thank you in advance. Twma 06:37, 8 June 2007 (UTC)[reply]

Converting horsepower into amperage

How do I convert horsepower into amperage?

I don't think you can. The ampere is a measure of current, while horsepower is a unit of power. --YbborTalk 18:54, 8 June 2007 (UTC)[reply]
You may have been thinking of watts, which is also a unit of power. According to my calculator widget, 1 horsepower is 745.69987 watts. Friday (talk) 18:56, 8 June 2007 (UTC)[reply]
For a known voltage and a purely resistive system, wattage is the product of this and amperage, enabling a numerical connection between amps and HP to be derived. If any reactive elements are present, power is less than volt-amps, of course, and the reactive impedance will depend on the size of capacitive or inductive components in farads or henrys and the frequency of the applied voltage. 86.132.167.202 20:25, 8 June 2007 (UTC)[reply]
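For the purely resistive case just described, the arithmetic is simply I = P/V with the power converted from horsepower to watts. A minimal sketch (function name mine; the watts-per-horsepower figure is the mechanical horsepower quoted above):

```python
HP_TO_WATTS = 745.7  # mechanical horsepower in watts, as noted above

def horsepower_to_amps(hp, volts):
    """Current drawn by a purely resistive load of the given power."""
    watts = hp * HP_TO_WATTS
    return watts / volts

# A 1 hp resistive load on a 120 V circuit draws roughly 6.2 A.
print(round(horsepower_to_amps(1, 120), 1))
```

For reactive loads this gives only the lower bound on the current, as the reply above explains.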


June 9

ARCH and GARCH

Hi. I am a research scholar at a university and currently I am doing a simple study on the volatility of the stock market. While browsing the literature I found that virtually all studies of stock market volatility use a tool called GARCH to model volatility clustering and persistence. Fascinated by the tool, I also used it in my study and discussed it with one of my research guides. All I knew about these tools is that they model time-varying volatility and find whether past variances and disturbance terms affect present volatility or not. As in all other stock markets in the world, the effect was significant in the case of the stock market of my study as well. My research guide asked me what the results of the ARCH and GARCH models mean.

I said "it means the stock market shows the existence of volatility clustering and persistence, and implies that past volatility significantly affects present volatility of returns. Since the sum of the ARCH and GARCH coefficients is less than one, the volatility shock is time-decaying and mean-reverting, which means any volatility shock does not persist indefinitely; it gradually comes down to its long-run mean level at a rate suggested by the sum of the ARCH and GARCH coefficients."

Now he said to me "It sounds all right but what does it tell you about the stock market behavior and what conclusions can you draw from the results of ARCH and GARCH so that it will be useful for the readers, the investors and the policy makers? You should put your results in words in such a way that it is understandable by everybody"

Now I need to know what unique results about my stock returns I can infer from the GARCH tool, and how to interpret them so that they are useful to every reader who, like myself, is not very familiar with econometrics. I KNOW THE QUESTION IS VERY DEMANDING. A comprehensive and non-technical answer would be appreciated. Thanks in advance.--202.79.62.21 08:35, 9 June 2007 (UTC)[reply]

This is not really a question about mathematics. The Google search results for "arch garch volatility"[1] may contain something you can use.  --LambiamTalk 10:20, 9 June 2007 (UTC)[reply]
Appropriate Wikipedia articles include Autoregressive conditional heteroskedasticity and Volatility. Volatility measures how changeable a stock price is in a given time period. "Volatility clustering and persistence" means that highly changeable stocks will continue to be highly changeable; "the volatility shock is time-decaying and mean-reverting" means that periods of high changeability will not last forever: in the long term, things quieten down. --Salix alba (talk) 14:07, 9 June 2007 (UTC)[reply]
Thanks. In fact I have gone through several articles, including the Wikipedia articles regarding this. But the problem is that their interpretation is always statistical and the conclusions are always technical. However, the widespread use of GARCH indicates that its results must tell something important about the behavior of financial time series (e.g. stock returns). Studies show every market's volatility shock is time-decaying and mean-reverting. What can I infer that is "unique" about the market I'm studying? In other words, to a layperson (a market participant), how should the results of ARCH and GARCH be interpreted so that they make sense to him/her? As per my understanding it has something to do with the coefficients alpha and beta (also known as the ARCH and GARCH coefficients) for the widely used GARCH(1,1) model --202.79.62.21 15:38, 9 June 2007 (UTC)[reply]
You have two problems. The first is that you, yourself, must decide what is unique about the market of interest, as measured. The second is that you must remember how to speak plain English, devoid of technical jargon. I'm not sure which is the greater challenge. And the two together may pull your brain in different directions. :-) --KSmrqT 15:52, 9 June 2007 (UTC)[reply]
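One concrete, layperson-friendly number that can be quoted from the GARCH(1,1) coefficients mentioned above is the half-life of a volatility shock: the number of periods until a shock has decayed to half its size, given persistence α + β < 1. A sketch (the example coefficient values below are made up, not from any real estimation):

```python
import math

def shock_half_life(alpha, beta):
    """Periods until a volatility shock decays to half its size,
    assuming a GARCH(1,1) model with persistence alpha + beta < 1."""
    persistence = alpha + beta
    if not 0 < persistence < 1:
        raise ValueError("half-life requires 0 < alpha + beta < 1")
    return math.log(0.5) / math.log(persistence)

# Hypothetical estimates alpha = 0.08, beta = 0.90 give persistence 0.98,
# so a shock takes about 34 periods (e.g. trading days) to decay by half.
print(round(shock_half_life(0.08, 0.90), 1))
```

A statement like "a volatility shock in this market takes about a month and a half to die away by half" is the kind of plain-English conclusion the research guide seems to be asking for.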

Fokker periodicity blocks

What's a basic or easier way to explain what Fokker periodicity blocks are? Guroadrunner 17:43, 9 June 2007 (UTC)[reply]

I'm amazed the Fokker article does not explicitly mention the 31-tone organ he had constructed (still in working condition) and the paper he wrote about it, but only refers indirectly to those through links. Are you interested in this stuff? Perhaps you can help to edit this article into something approximating intelligibility; it ought to be understandable to musicians who have no difficulty grasping equal temperament. After much puzzling I understand the following. I'll restrict myself to the n = 3 case, but it should be easy to infer the general case from that. For integer values p, q and r (no need to restrict this to natural numbers), define C(p,q,r) = 2^−k · 3^p · 5^q · 7^r, where k is the largest integer such that 2^k ≤ 3^p · 5^q · 7^r. It follows that 1 ≤ C(p,q,r) < 2. (Not coincidentally, 3, 5 and 7 are the first three primes after 2.) For example, C(1,5,1) = 65625/65536 = 1.00135... and C(1,2,−4) = 4800/2401 = 1.99916... . These numbers are close to 1 and 2, respectively, which is what we're looking for. The values of p, q and r are, moreover, not very large, which is also desirable. So (1,5,1) and (1,2,−4) could be two of the three unison vectors we need for constructing a block. I have to stop here for now.  --LambiamTalk 22:54, 9 June 2007 (UTC)[reply]
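For anyone who wants to experiment, C(p,q,r) as defined above is easy to compute exactly with rational arithmetic; a small sketch (function name mine):

```python
from fractions import Fraction

def C(p, q, r):
    """2**(-k) * 3**p * 5**q * 7**r, where k is the largest integer
    with 2**k <= 3**p * 5**q * 7**r, so that 1 <= C(p,q,r) < 2."""
    x = Fraction(3)**p * Fraction(5)**q * Fraction(7)**r
    k = 0
    while Fraction(2)**(k + 1) <= x:   # raise k while 2**(k+1) still fits
        k += 1
    while Fraction(2)**k > x:          # lower k (for x < 1)
        k -= 1
    return x / Fraction(2)**k

print(C(1, 5, 1))   # 65625/65536, about 1.00135
print(C(1, 2, -4))  # 4800/2401, about 1.99916
```

Both example values from the post above check out exactly.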

I

I know that the square root of -1 is I, but how can you contemplate this imaginary number? Is it within the bounds of the human brain? Or does it just take a lot of (understatement) thinking outside of the box? I'm 11 and interested about this stuff. Thanks! Gbgg89 17:50, 9 June 2007 (UTC)[reply]

First, let's get the formalities out of the way: The usual notation for √−1 is i (small letter), and for information regarding it you may want to take a look at imaginary unit.
Now, as you no doubt have gathered, this is a very difficult question to answer, and my explanation may not be very good (or even correct). Think of this, however: What is √2? Did you ever pay √2 dollars at a store, or have you ever measured, using a ruler, a length of exactly √2 centimeters (as opposed to, say, around 1.41)? I would think not, and in fact humanity may never even know if √2 has any physical manifestation (if the universe is discrete, then it arguably doesn't). But for mathematicians, and people who use mathematics, it is convenient to assume that there is a number whose square is 2, and they may even have an explanation why such an entity should exist. But it is still just a mathematical abstraction introduced because of its usefulness. The fact that you did not ask the exact same question with 2 instead of −1 is evidence that its usefulness is such that it seems absurd to even question its existence.
Having this in mind, you will see that there is no reason why √−1 should be any different. It, too, is a mathematical abstraction introduced due to its usefulness (which is just as great as that of √2). The motivation for its plausibility may be slightly different, but that doesn't make it "harder to grasp" or on the other side of the bounds of the human brain (as long as you block out the perpetuated brainwash, committed by mathematics teachers for younger ages, that no number squared can ever be negative). -- Meni Rosenfeld (talk) 18:16, 9 June 2007 (UTC)[reply]
Yes, we have a simple explanation for a number, denoted by i or i (or sometimes j), that squares to −1. But first let's dispel the mystique unnecessarily introduced for √2. Draw a square with side length one. The length of its diagonal is √2, by the Pythagorean theorem. When the ancient Greeks realized this number could not be written as a ratio of two integers, like 29/70, it troubled them deeply. But it is hardly a mere abstraction!
The need for a number that squares to −1 also arises naturally. Historically, one of the circumstances that forced the issue was solution of a general cubic equation. Like virtual particles in quantum field theory, complex numbers could appear in the midst of a solution only to disappear at the end.
The simple explanation, like that for √2, is geometric. Stretch out the usual real numbers along a line. With zero and one marked off, we can measure any amount, including √2 and π and their negatives, and so on. Multiplication by −1 has the effect of reversing the line, of rotating it by a half turn (180°) around zero. Multiplication by a square root of −1 works out nicely as a quarter turn (90°), and we can place i one unit above zero, rather than left or right. Thus every complex number has a place, not on a line, but in the complex plane. --KSmrqT 19:34, 9 June 2007 (UTC)[reply]
"Square", "diagonal" etc. are also abstractions, in the sense that if you attempt to actually draw a square, you will obtain a figure with sides more or less equal, with diagonal length of around 1.414 times the "side length".
Of course, I had no intention to confuse the OP. I was merely trying to emphasize that one should not be perplexed by a new mathematical construct, just because it does not have any immediately obvious physical interpretation. -- Meni Rosenfeld (talk) 19:46, 9 June 2007 (UTC)[reply]
I understand what the original poster is saying, that they need to be able to physically visualize something to understand it. Many people learn in this way. Unfortunately, higher mathematics often can't be visualized physically, making the subject difficult to understand for many. The same is also true of advanced particle physics and many other fields. In some cases a "conventionalization" (simplified model) can help, such as the model of electrons in circular, coplanar orbits about a nucleus, versus the reality of wave-probability functions defining the electron shells. In the case of imaginary numbers, thinking of the complex plane versus a number line may help. StuRat 17:54, 11 June 2007 (UTC)[reply]
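The quarter-turn picture in the complex plane is easy to play with in any language that has built-in complex numbers; for instance, in Python (where 1j is the spelling of i), multiplying by i really is a 90° rotation:

```python
# i squared is -1:
print(1j * 1j)       # (-1+0j)

# Multiplying by i rotates a point a quarter turn counterclockwise:
z = 3 + 0j           # a point on the positive real axis
print(z * 1j)        # 3j: now on the positive imaginary axis
print(z * 1j * 1j)   # (-3+0j): a half turn, i.e. multiplication by -1
```

Two quarter turns make a half turn, which is exactly why squaring an imaginary number lands on a negative real number.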
(edit conflict) When multiplying complex numbers, it may be easier to express them as re^(iθ), where r is the distance from zero, e is Euler's number, θ is the angle in radians (there are 180/π degrees in a radian), and i is the square root of negative one. There's a four-dimensional extension of the complex numbers called the quaternions, and the number of dimensions can be doubled any number of times quite simply. Quaternions aren't used often, partially because multiplication isn't commutative with them (a*b doesn't necessarily equal b*a). I'd also like to add that no form of numbers is strictly necessary, and any sufficiently complex mathematical system can be used to represent any other, and therefore represent reality just as well, though one system might make it easier than another.— Daniel 20:07, 9 June 2007 (UTC)[reply]
I have to take issue with your statement that we can double the number of dimensions "simply" as many times as we like. You did note that the quaternions are not commutative; if we double it again, the most likely candidate is the octonions, which are not only non-commutative but not even associative! And what next? All these are normed division algebras (also called composition algebras), and by Hurwitz's theorem there is no real composition algebra of dimension 16. Sure, we can continue the Cayley-Dickson construction arbitrarily, but you start losing properties left and right -- the octonions are the last algebra before you start getting zero divisors. Tesseran 23:10, 11 June 2007 (UTC)[reply]
This is not any more mysterious than the number −1 itself. If all you're used to is numbers for counting sheep, you'll have a hard time grasping −1 sheep, and you may wonder, should you overhear some mathematicians passing by while herding your sheep: How can adding a sheep to some number of sheep result in no sheep? Is it some kind of antisheep? But how can that be? Is −1 within the bounds of the human brain? Well, yes, it is, and so is i. Here is yet another way of thinking about it. It may be a bit above your head, but hopefully you'll get the drift. First, consider the way the integers are constructed from the natural numbers, the rational numbers from the integers, and the reals from Cauchy sequences of rationals (see, respectively, Integer#Construction, Rational number#Formal construction, and Construction of real numbers#Construction from Cauchy sequences). These constructions all follow the same pattern. First, a concrete kind of mathematical objects is constructed, such as pairs of integers. These new objects capture some notion we have that allows us to solve more equations than before, and they provide a concrete representation for that. For example, in natural numbers you cannot solve the equation x + 3 = 1, in integers you cannot solve the equation x × 3 = 2, and in rational numbers you cannot solve the equation x² = 2. However, because of the specifics of the representation, "too much" is captured; for example, viewed as pairs of integers, 2/3 and 14/21 are different, but they represent the same idea: a solution of x × 3 = 2. So they are equivalent, as far as we are concerned. What to do? Well, just form yet another concrete kind of objects, each of which consists of a whole class of the objects we had before. In doing so, all objects that are in some (precisely defined!) way equivalent are lumped together in the same class. These are called equivalence classes, and they are the things we actually want. The technical term for this is quotient set.
If the relevant equivalence relation is chosen the right way, it is a congruence relation for the relevant mathematical operations, which means that they are "automatically" also defined on the new quotient set. So far, so good. The complex numbers can be formed in just the same way. Take the polynomials with real-valued coefficients in one variable X. This gives us a ring. Now we consider two polynomials equivalent if, on polynomial division by the polynomial X² + 1, they give the same remainder. We can then form the quotient set, which in this case is again a ring: the quotient ring. Miraculously, it is even a field. We embed the real numbers as usual: [r] = r. Now consider the class having X as a representative. Let us give it a name: i. What is i²? Well, i² = [X²] = [X² − (X² + 1)] = [−1] = −1. Voilà, we have constructed a concrete kind of mathematical objects, forming a field with the real numbers embedded, that contains an element i such that i² = −1.  --LambiamTalk 21:58, 9 June 2007 (UTC)[reply]
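The quotient-ring construction just described can even be carried out mechanically: represent the class of a polynomial by its remainder a + bX on division by X² + 1, and multiplication modulo X² + 1 reproduces complex arithmetic. A small sketch (representation and names mine):

```python
def mul_mod(p, q):
    """Multiply a + bX and c + dX in R[X]/(X^2 + 1).
    Pairs (a, b) stand for the class of a + bX; X^2 is replaced by -1."""
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = -1
    return (a * c - b * d, a * d + b * c)

i = (0, 1)              # the class of X
print(mul_mod(i, i))    # (-1, 0): i squared is -1
```

The result (a·c − b·d, a·d + b·c) is exactly the textbook rule for multiplying complex numbers, which is the point of the construction.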
Well, this works, but don't forget that the OP is 11, after all. Cthulhu.mythos 09:48, 12 June 2007 (UTC)[reply]
Kids are very smart these days :) -- Meni Rosenfeld (talk) 15:50, 12 June 2007 (UTC)[reply]
Just to see if I can clarify what I think the original poster might have meant: positive real numbers can easily be visualized as lengths of line segments (like the length of a diagonal as described above). A negative number can be visualized as a directed segment of the same length pointing in the opposite direction. The analogy of "imaginary" rays and line segments, though, is harder to visualize. The complex plane partially works, but mainly as showing that imaginary components of complex numbers are akin to a second component of a binary vector. It doesn't visually explain, however, why squaring an imaginary number results in a real number with negative sign. So it's the visualization of lengths and areas and directed vectors that seems to have a hard time explaining what imaginary numbers "look like". I'm thinking that maybe what the poster wants, therefore, is a physical setting using both imaginary and real numbers that reflects how imaginary numbers squared become negative real numbers. Dugwiki 16:43, 11 June 2007 (UTC)[reply]

Thanks for all your help! It really cleared things up! Gbgg89 02:24, 13 June 2007 (UTC)[reply]


June 10

(No new questions on this day) :-(


June 11

Laplace transform for PDEs?

In class my professor mentioned that it is possible to use the Laplace transform to solve PDEs but did not go into details, and as the article doesn't mention this technique (actually I have seen it mentioned in a few sparse places but nothing really specific on how it works), I'll ask: is there another name for the technique? And how does it work? Regards. 75.58.181.204 00:06, 11 June 2007 (UTC)[reply]

Example. (I had trouble making an ordinary wiki link.) —Bromskloss 07:05, 11 June 2007 (UTC)[reply]
I turned the exolink into a standard wikilink. The problem was an anomaly (double space) in the section title at the linkend.  --LambiamTalk 11:22, 11 June 2007 (UTC)[reply]
Thanks. —Bromskloss 12:29, 11 June 2007 (UTC)[reply]
That's an ordinary differential equation, not a partial one. Is the Laplace transform easy to apply to PDE problems too? nadav (talk) 11:15, 11 June 2007 (UTC)[reply]
Yes. You just transform one of the variables and treat the other variables as constants during the transform. Note that derivatives with respect to the other variables kind of "pass through" the transform unaffected (i.e. the transform of the derivative of something with respect to another variable is just the derivative with respect to that variable of the transform of that thing). --Spoon! 15:44, 11 June 2007 (UTC)[reply]
This is what I was missing; thank you very much. 76.195.72.198 22:55, 12 June 2007 (UTC)[reply]
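As an illustration of the remark above (a standard textbook example, not taken from this thread): transforming the heat equation in t, with initial data u(x,0) = f(x), turns the PDE into an ODE in x, since the x-derivatives pass through the transform while the t-derivative picks up the usual sU minus the initial value:

```latex
u_t = u_{xx}, \qquad u(x,0) = f(x)
\quad\longrightarrow\quad
s\,U(x,s) - f(x) = \frac{d^2}{dx^2}\,U(x,s)
```

For each fixed s this is a constant-coefficient ODE in x; one solves it, applies the boundary conditions, and inverts the transform at the end.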
For certain types of PDEs (usually ones where values at the boundaries of the desired solution are known), there is something called the multigrid method, which can be computationally efficient and robust. It works by starting with an estimate of the solution, then evaluating the error (using a Laplacian operator) and adjusting the solution locally to spread that error out across the solution, producing a new estimate that is better than the one you started with (this process is done on various frequency levels and repeated until the error is reduced to an acceptable level). I don't know if this is what you're looking for, but it sounds like the kind of thing you are talking about. I've also heard of things called "rapid Poisson solvers" which are used to a similar end, but I haven't worked with them personally so I can't tell you much about them. - Rainwarrior 04:09, 12 June 2007 (UTC)[reply]

June 12

Differential Equation for Gravity

What is the general solution to the following DE

d²x/dt² = −k/x²

where k > 0

I'm asking because I am trying to solve the equation for Gravity.

202.168.50.40 03:48, 12 June 2007 (UTC)[reply]

Equations like this can be solved by trying x = a·t^b. In this case, I believe this gives one solution to be x = (9k/2)^(1/3) · t^(2/3). -- Meni Rosenfeld (talk) 04:06, 12 June 2007 (UTC)[reply]
Of course, you can use t − t₀ instead of t; but the general solution is, I think, much harder to find, and probably not expressible using elementary functions. -- Meni Rosenfeld (talk) 04:16, 12 June 2007 (UTC)[reply]
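As a numerical sanity check (assuming, per the gravity context of this thread, that the equation being solved is d²x/dt² = −k/x², and that a power-law ansatz x = a·t^(2/3) with a³ = 9k/2 is the claimed particular solution), one can compare a finite-difference second derivative against the right-hand side:

```python
# Check numerically (assumption: the DE is x'' = -k/x**2) that
# x(t) = (9k/2)**(1/3) * t**(2/3) is a particular solution.
k = 1.0  # arbitrary positive constant for the test
a = (9 * k / 2) ** (1 / 3)

def x(t):
    return a * t ** (2 / 3)

t, h = 2.0, 1e-3
# Central-difference approximation to x''(t):
second_deriv = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
print(abs(second_deriv - (-k / x(t) ** 2)) < 1e-6)  # True
```

The residual is tiny, consistent with the ansatz: substituting x = a·t^(2/3) gives x'' = −(2a/9)t^(−4/3) and −k/x² = −(k/a²)t^(−4/3), which agree exactly when a³ = 9k/2.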
To solve an equation of the form

d²x/dt² = u(x),

first find the antiderivative U(x) of the r.h.s. with respect to x (including a constant of integration). If this is not an analytic expression, give up. Otherwise, put v = dx/dt. Then

d(½v²)/dt = v·(dv/dt) = u(x)·(dx/dt) = dU(x)/dt.

Then

½v² = U(x).

Note that if both sides are multiplied by mass m, we can interpret one side as kinetic energy and the other side as mechanical work (integral of force m·u(x) with respect to distance travelled x). So, solving for v,

dx/dt = v = ±√(2U(x)).
With some luck, this too has an analytic solution. It does for this case (see List of integrals of irrational functions, the last section Integrals involving ), but I'm insufficiently motivated to go through the labour, especially as programs like Maxima, Maple and Mathematica should be able to cough up a solution.  --LambiamTalk 08:17, 12 June 2007 (UTC)[reply]
Just a little warning. I hope you are considering an object falling straight toward Earth (or being thrown upward). If you want sideways motion, like the orbit of a satellite, your equation isn't quite correct. —Bromskloss 09:54, 12 June 2007 (UTC)[reply]

Thank you very much for your efforts. I'm just trying to work out motion under gravity in one dimension, which is, as I see it, quite complicated already without bringing in three dimensions or movements of planets and moons. 202.168.50.40 00:25, 13 June 2007 (UTC)[reply]

Eh, how did you get the following

All I can get is

202.168.50.40 00:31, 13 June 2007 (UTC)[reply]
It's a little trick involving the chain rule: d(½v²)/dt = v·(dv/dt) = (dx/dt)·(d²x/dt²). Confusing Manifestation 04:14, 13 June 2007 (UTC)[reply]

Practice Exam

I've spent a considerable amount of time trying to solve this particular problem in a practice exam, so I hope it will not be viewed as trying to get help with 'homework'. Even just how to obtain a solution would be fine.

The question is written thus: The Western section of a football ground has its seating organised in two sections. The upper section has its seating arranged in rows of 19 and tickets in this section cost $18 each. The lower section has seats in rows of 22 and tickets here cost $28. In the Western section of the ground there are 29 more rows in the upper section than the lower, and the overall capacity of this section is 8119. If the game were sold out, how much money would be made from ticket sales for the Western section?

Thanks heaps Mix Lord 08:31, 12 June 2007 (UTC)[reply]

Introduce variables for the cost of a ticket in the upper section and in the lower section, and for the length of a row in each. Now, just draw relationships as expressed in the question to obtain a set of equations. Then solve the equations.

I don't think there's a solution ... if the upper section had x more rows than the lower section, x would have to be a non-negative integer smaller than or equal to 23.

A quick calculation shows that the number of rows is not an integer, but the number of dollars is. Looks like whoever wrote this question did not give it much thought. -- Meni Rosenfeld (talk) 15:48, 12 June 2007 (UTC)[reply]
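The non-integrality is easy to confirm by brute force, taking the problem exactly as stated (upper rows of 19, lower rows of 22, 29 more rows upstairs, capacity 8119):

```python
# Search for integer row counts matching the problem as stated.
solutions = [
    (lower + 29, lower)                       # (upper rows, lower rows)
    for lower in range(8119 // 22 + 1)
    if 19 * (lower + 29) + 22 * lower == 8119
]
print(solutions)  # [] -- no integer solution, as suspected
```

Algebraically: 19(L + 29) + 22L = 8119 gives 41L = 7568, and 7568/41 is not an integer, so the question as printed has no solution.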

(edit conflict) Here's how you can obtain a solution if there is one. Upper section total seats, Tu, = 22 times upper section rows, or Tu = 22·Ru.

Lower section total seats = 19 times lower section rows, or Tl = 19·Rl.

Total capacity: Tu + Tl = 8119, or 22·Ru + 19·Rl = 8119.
If indeed , then , then or . The problem with this is that 7841 is not a multiple of 41. If instead you misread and it is actually , then . But this is a problem since this implies and 8090 needs to be a multiple of 19 (see above)... So looks like there was a typo in the word problem (or they just plumb messed up). If you were able to find integral solutions, then all you need to do is to plug the upper and lower totals into $ + $. Root4(one) 16:09, 12 June 2007 (UTC)[reply]

Nodal Relationships

This question has been hopped over from the Miscellaneous Desk. Tamfang suggested that this may be most closely related to graph theory. Unfortunately, my maths skills aren't good enough to be able to get enough out of the graph theory article to solve my problem. I'm sorry about the length of the question, but I'm trying to make myself as clear as I can (I'm failing quite miserably though). If you still don't understand what I'm getting at, I'll try doing a different set of diagrams to explain (I have plenty of time to wait for an answer). Thanks for any help you can give. In order to try and prevent confusion, I've superscripted the parts that don't really help your understanding. --80.229.152.246 16:16, 12 June 2007 (UTC)[reply]

Question starts here: Sorry about the title, but I don't really have a clue what to call this question. Anyway, say I have a room of 100 people and I ask each of these 100 people to pick 2 friends/people to link with (it doesn't have to be 100, or 2 or anything, but I thought I'd use it as an example). Are computers/algorithms good at sorting out the resulting links into groups of 5 or 6 people, or would it be better to do it by hand? If algorithms are good, are there any simple ones I can use (ones that preferably don't require a degree in calculus or graph theory or anything).

My follow-up starts here: Here's a quick diagram I have created in order to try and illustrate my question better: Image (I have linked to the file rather than embed it because the arrowheads don't seem to show up properly if I embed it) Using these data as an example, I would like to know if there are any nice, easy ways of quickly arranging these nodes (in this case names) into small groups (for this example, 4 members would probably be about right) based on the relationships shown by the diagram (there are no weightings of the relationships, the thickness/length of the line means nothing).

Would it be quicker/easier to do this by hand (bearing in mind I will probably be looking to do this for at most 33 people (more likely to be somewhere around 25)). If my calculations are correct, this will mean there will be a maximum of 66 links to deal with in order to fit into groups of 5/6/7.

Please also bear in mind that my maths knowledge is not that good, although I am prepared to try and learn. Also, it doesn't matter if some of the links are broken quite badly, the only thing I require is that everybody is in a group with at least one person they have directly linked to, or have indirectly linked to by a factor of 1 step (i.e. a person they have directly linked to has directly linked to the other person).

This requirement isn't strictly strict though, I'm quite prepared to break it (and deal with all hell that comes if I do - that's the only downside to doing this sort of thing, you can't please everyone. It's interesting, but you always get people who are annoyed. Hopefully this method will cause fewer complaints, or rather complaints that will cancel each other out, than last time)

One more thing. The two direct links going out from a node is strict. This cannot/will not/should not/has absolutely no chance of being changed. In other words, each person must choose TWO and only TWO links with people. No more, no less. (Also, God knows why I'm trying to sort this out now, I don't even have any data, this sorting out is being done about 8 months before it needs to be done, and I don't even know if I am actually doing it! Oh well, at least it is interesting) --80.229.152.246 16:16, 12 June 2007 (UTC)[reply]

Data clustering is the article to look at. There is software which can do this for you; TouchGraph might do the job (Google and look for the SourceForge page). --Salix alba (talk) 18:53, 12 June 2007 (UTC)[reply]
I'm not sure how data clustering really applies to this; that usually deals with finding similar items, trends, etc., not with discrete relationships between individual items. I think this is a graph problem, and I think the question you are trying to solve is something like "what is the collection of groups with the strongest connections between group members?", am I correct? What you first need is a way to determine what makes a strong group. Perhaps look at all the members in the group and how they are connected: if there is a two-way connection between two members, make it 0 points; if only one way, make it 1; and if no connection, make it 2, or something like this. This gives you a number that you can use to evaluate a group; if the number is smaller, it is a stronger group. Next, you want to try combinations of groups until you find the strongest one. If you were to try every combination of about 30 people, I guess it would be about 9×10^19, which isn't possible to do.
Instead, I would suggest a heuristic algorithm (something that guesses what would be good) of a greedy variety: maybe start with the 30 people as 30 unconnected groups, and then at each step (of 30 steps): figure out adding which person to which group would give the lowest total score for all groups together (and once a group is full, remove it from the possibilities). Each step would require at most 30*30 evaluations of the groups scores (and usually a lot less), which would be a breeze for a computer, so it would be pretty fast to calculate. It would probably give a pretty good result, though it wouldn't be guaranteed to be the best one. To improve this you could even add a bit of randomness to the selection (e.g. when the scores are equal, select from the options at random) and run it many times and select the outcome with the best score (this too could be automated by a computer). - Rainwarrior 19:25, 12 June 2007 (UTC)[reply]
Wow! Thanks for that answer. Not only is it really useful and interesting, but I can understand it all. Looks like the answer came at the right time as well - I was just beginning to calculate whether or not it would be possible to brute force it, but as you have pointed out, that would give about 9×10^19 possibilities. After thinking about it for a while, I have decided that I will do the real one by hand (there's far too much other information that I know, beyond the two links per person that the questionnaire will actually collect, that would influence the groupings). However, I am very intrigued by this method of solving the problem and I will be sure to look into it. Thanks very much for pointing it out. --80.229.152.246 19:52, 12 June 2007 (UTC)[reply]
Oh, just one more point. With the greedy solution, wouldn't the result depend on which person you started with? It's not a problem if it does, I just want to make sure I understand it properly. --80.229.152.246 19:58, 12 June 2007 (UTC)[reply]
Well, with the one I was suggesting, you would see what the score would be if you combined any group with any other available group (a group might be just 1 person, or you might be combining a group of 2 with a group of 3, or whatever), so at each step you'd be checking them all to see which is the best merge to make. So, no, the order in which you make these checks wouldn't really matter. However, because at any step there could be several choices with the same score, which one you choose can affect the outcome. The algorithm I suggested is just a "good guess" kind of thing, though; sometimes passing up what looks like a really good merge at this step could work out better several steps later, and a greedy algorithm can't see that. As an alternative you could probably make an intelligent search of all the possibilities (maybe look at the Branch and bound article; I haven't read it, but that is the applicable term) that, if the data were well behaved enough, could potentially find the optimal solution in a reasonable amount of time, but only if there is a way to reject bad solutions early on (and this is often possible). There are ways to narrow the search from the 10^19 brute-force possibilities down to something that's doable: for instance, if you can figure out the best and worst possible scores that could follow from a particular choice, you can throw that choice away (along with all choices that would have followed it) if its best is worse than the worst of another case you already know of. It's more complicated to work out such best/worst bounds, but it's another way to approach the problem which could potentially give better results. It depends on the situation, though, and sometimes it still can't narrow things down to a reasonable number of possibilities. - Rainwarrior 20:36, 12 June 2007 (UTC)[reply]
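The early-rejection idea above can be illustrated with a toy branch-and-bound search (my sketch, not the poster's code). It uses the cost accumulated so far as a very crude lower bound, abandoning any partial assignment that already matches or exceeds the best complete solution found; a real implementation would want a tighter bound:

```python
def pair_cost(a, b, likes):
    # 0 for a mutual link, 1 for one-way, 2 for none (lower is better).
    return 2 - ((a, b) in likes) - ((b, a) in likes)

def best_partition(people, likes, size):
    """Partition people into groups of `size`, pruning branches whose
    cost so far already meets or exceeds the best known solution."""
    best = {"cost": float("inf"), "groups": None}

    def search(remaining, groups, cost):
        if cost >= best["cost"]:  # bound: this branch cannot improve
            return
        if not remaining:
            best["cost"] = cost
            best["groups"] = [list(g) for g in groups]
            return
        p, rest = remaining[0], remaining[1:]
        for g in groups:
            if len(g) < size:
                added = sum(pair_cost(p, m, likes) for m in g)
                g.append(p)
                search(rest, groups, cost + added)
                g.pop()
                if not g:  # empty groups are interchangeable; try only one
                    break

    search(list(people), [[] for _ in range(len(people) // size)], 0)
    return best["groups"], best["cost"]
```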
Thanks for that, it looks like I was right to ask. Looks like I was looking at the problem from a local level rather than a global one. Thanks very much for the extra info. --80.229.152.246 20:58, 12 June 2007 (UTC)[reply]
(ec) It's quite possible for there to be several equally good solutions, in which case even an optimal algorithm (for whatever meaning of "optimal" you'd like) could produce different results depending on initial choices. For example, in your sample graph, if Ian and Fred are in different groups, which group should Simon be assigned to? How about Mike, if Henry and Emily are not in the same group? (Ps. Though it probably won't help you with this problem, you might be interested in the stable marriage problem, which is a well known conceptually similar problem. It's notorious for having a simple, elegant and easily computable solution which a) is highly asymmetric despite the problem being symmetric, and b) does not generalize even to very trivial modifications of the problem, some of which are known to be intrinsically hard.) —Ilmari Karonen (talk) 21:00, 12 June 2007 (UTC)[reply]
Thanks for that as well! It looks like my problem could be treated as a special case of the stable marriage problem (if I have understood it correctly). It could be interesting to see how the results I get by doing it by hand differ from the algorithmically generated groups. I suspect the difference would be bigger than I would predict, because I know much more about the people concerned than the data the computer would be given (some of which is just not practical to make machine-readable). The only problem I can see with the algorithmic solutions is defining what a 'good' solution would be. I don't think there is enough data to ensure that it will work quite as I want it to. Anyway, I'll try and give it a shot. EDIT: Oops, looks like I didn't fully register the last sentence. Looks like it could be a bad idea to try a generalisation... --80.229.152.246 21:06, 12 June 2007 (UTC)[reply]

Thanks very much for all the answers everyone. They were all much more detailed than I was expecting. I wish I could give you all some kind of award/reward (I can only think of Barnstars, but I don't think I'm allowed to give them out...). You've definitely opened up some new avenues that I hadn't even thought about pursuing. Thanks also for showing me the futility of brute forcing it (I was just about to calculate whether or not it would be worth it). --80.229.152.246 21:06, 12 June 2007 (UTC)[reply]

Just a little follow-up: After having a proper look at the stable marriage problem, it looks like my situation is most like the 'residents problem with couples', which looks like it is NP-complete. I think I'll stick with doing it by hand... --80.229.152.246 21:23, 12 June 2007 (UTC)[reply]

Please note your requirements may be too strong to be satisfiable. If your people happen to form cliques of three, there may be no group of four connected people (link Will to Ian instead of George in your image, and link Simon and George to each other, and you get a non-extendible group of three: Ian–Matt–Will). Also, if links form long chains, you will be unable to find groups of 4 or more people such that every two persons in a group are friends or have a common friend (are linked directly or through one intermediate person). Your image gives an example of such a chain: Mike, Emily and George form a properly linked group, but you can't extend this group to 4 people. The only two candidates are Henry and Will, but Henry is not properly linked to George, and Will is not properly linked to Mike. So the desired grouping does not exist for your data. --CiaPan 05:50, 13 June 2007 (UTC)[reply]

I don't think it's necessarily a bad thing for him to be grouping unconnected people. He just wants to maximize the connections within the final groups. - Rainwarrior 11:39, 13 June 2007 (UTC)[reply]
When a problem is NP-hard, that does not always mean good heuristics can't give you a reasonable result; only that there is no sure way of always getting a guaranteed optimal result in an acceptable amount of time. If the problem is hard for computers, it is often also hard to do a good job by hand, and you may actually get a better result from a program that uses a good heuristic or examines many possibilities.
Here is what I would try by way of an algorithm. I only sketch the idea; the details need fleshing out. Basically, it is a clustering algorithm, with some bells and whistles. For the sake of efficiency you may need to maintain some suitable data structures, as described in Disjoint-set data structure and Heap (data structure), depending on the size of the problem.
Initially, every vertex of the graph forms a singleton cluster. You can form larger clusters by merging two clusters. You may need to break up existing clusters, in which case the pair of smaller clusters is blacklisted as not being allowed to cluster again, to prevent getting stuck in a loop.
There is a (symmetric) matrix M, indexed by clusters, where M_ij is an indication of how good it is (expected to be) to merge clusters i and j. Repeatedly:
  • Find a pair i and j that are clusterable (combined size ≤ 5, not blacklisted) maximizing M_ij.
  • Merge by i := i ∪ j, and set M_ik := M_ik + M_jk for all other clusters k, and likewise for M_ki. Erase or ignore the row M_jk and column M_kj, since j no longer exists as a cluster.
  • If it is impossible to find a clusterable pair before all groups have the desired size, split an existing cluster into two, blacklisting the resulting pair. Choose a split such that merging that pair would have had a low value. It may be best to split some cluster of size n+1 into one of size n and a singleton. Repeat until a clusterable pair is found.
Initially, M_{p},{q} could be set equal to L_pq, the number of edges in the graph between vertices p and q. To improve performance, sort the values of L_pr + L_qr for r ≠ p, q in descending order, giving a sequence v_1 ≥ v_2 ≥ ..., and set M_{p},{q} := L_pq + 0.8×v_1 + 0.5×v_2. This gives an incentive to cluster vertices p and q that have significant growth potential by adding a suitable vertex r. The weights 0.8 and 0.5 are chosen somewhat haphazardly, but should be decreasing.
Finally, you could attempt improvements by exchanges between clusters. If one cluster is A ∪ B and another is C ∪ D, where all components mentioned are mutually disjoint, and A and C have the same size, see if it is an improvement to cluster as A ∪ D and C ∪ B instead. Repeat as long as time allows and improvements are found.  --LambiamTalk 14:47, 13 June 2007 (UTC)[reply]
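The core merge loop of the sketch above might look like the following in Python (this is my reading of it: it omits the blacklist/split step and the growth-potential bonus, and `pair_score` stands in for the L values):

```python
from itertools import combinations

def cluster(pair_score, people, max_size=5):
    """Greedy agglomerative clustering: repeatedly merge the clusterable
    pair with the highest entry in the merge-score matrix M."""
    clusters = [frozenset([p]) for p in people]

    def key(a, b):  # unordered index into M, since M is symmetric
        return frozenset([a, b])

    # Initialise M on singleton clusters from the pairwise scores.
    M = {key(a, b): pair_score(next(iter(a)), next(iter(b)))
         for a, b in combinations(clusters, 2)}
    while True:
        candidates = [(a, b) for a, b in combinations(clusters, 2)
                      if len(a) + len(b) <= max_size]
        if not candidates:
            return clusters
        i, j = max(candidates, key=lambda ab: M[key(ab[0], ab[1])])
        u = i | j
        # The merged row/column is the sum of the two old ones, as described.
        for k in clusters:
            if k != i and k != j:
                M[key(u, k)] = M[key(i, k)] + M[key(j, k)]
        clusters = [c for c in clusters if c != i and c != j] + [u]
```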

June 13

Regression

How would one perform a linear regression of Y on X and X² (simultaneously) using Excel?
Zain Ebrahim 11:06, 13 June 2007 (UTC)[reply]

Be very careful if you are using Excel for something like this. Excel has numerous errors in its statistical functions and Microsoft has a poor track record when it comes to fixing them. See, for example, [2], [3] and [4]. A short quote from the latter: "In its Excel XP release, Microsoft attempted to fix some statistical problems, but it did not do a good job (McCullough and Wilson, 2002). This failure presaged Microsoft’s attempt at a major overhaul with Excel 2003. While it fixed many functions, it failed to fix many others."
If you need accurate results, you would be much better off investing time and perhaps even some money in serious statistical software such as R (free) or the SAS System (not free). If you prefer to use a spreadsheet, Gnumeric has a better track record on accuracy than Excel.[5] If you must use Excel, don't use versions earlier than Excel 2003, and read the relevant parts of this site first: Errors, Faults and Fixes for Excel Statistical Functions and Routines. The LINEST function is Excel's regression workhorse. -- Avenue 13:05, 13 June 2007 (UTC)[reply]
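For what the computation itself involves: regressing Y on X and X² is ordinary least squares with two regressors, X and its square (in Excel this is typically done by putting X² in a helper column and passing both columns to LINEST as the known_x's). Here is a pure-Python sketch of the normal-equations approach, useful for checking a spreadsheet's answer; the function name and data are illustrative:

```python
def quad_fit(xs, ys):
    """Return (a, b, c) minimising sum((a + b*x + c*x^2 - y)^2)."""
    # Normal equations A^T A beta = A^T y for design columns [1, x, x^2].
    rows = [[1.0, x, x * x] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return tuple(m[i][3] / m[i][i] for i in range(3))

# Data generated from y = 1 + 2x + 3x^2; the fit should recover (1, 2, 3).
xs = [0, 1, 2, 3, 4]
ys = [1 + 2 * x + 3 * x * x for x in xs]
print(quad_fit(xs, ys))
```

In R, recommended above, the equivalent is simply `lm(y ~ x + I(x^2))`.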

abstract algebra

If G/Z(G) is cyclic, then show that G is abelian.