
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 213.246.165.17 (talk) at 11:59, 12 September 2014 (→‎What's the full pi number). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


August 24

Hi, I am Er. Mohit Iyer and I am a good mathematician. I am curious about maths and I love to solve problems.

Today, I want to give some idea about how to find the SQUARE OF A NUMBER.

If you want to find the square of 25, that is (25)*(25): first find 5*5 = 25, then find 2*2 = 4, which gives 425. Now find twice the product of the two digits, 2*2*5 = 20, shift it one place (200) and add it to 425: 425 + 200 = 625, which is the required answer. — Preceding unsigned comment added by Er. Mohit Iyer (talkcontribs) 07:46, 5 September 2014 (UTC)[reply]
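(A quick sketch of the identity behind this trick, (10a + b)² = 100a² + 20ab + b², checked in Python; the function name is just illustrative.)

 # Square a two-digit number 10*a + b using the expansion
 # (10a + b)^2 = 100*a^2 + 20*a*b + b^2, as in the 25 example above.
 def square_two_digit(a, b):
     partial = 100 * a * a + b * b   # e.g. 4 and 25 "glued together" -> 425
     cross = 2 * a * b * 10          # e.g. 2*2*5 = 20, shifted one place -> 200
     return partial + cross          # 425 + 200 = 625

 assert square_two_digit(2, 5) == 25 ** 2   # 625
 assert square_two_digit(4, 7) == 47 ** 2   # 2209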

August 29

We have an article on notation for differentiation but no article on notation for integration, which is telling. Why? — Preceding unsigned comment added by 174.3.125.23 (talk) 17:14, 29 August 2014 (UTC)[reply]

Because one is a big enough subject for its own article and the other is a small topic best dealt with as a subsection in Integral. Dmcq (talk) 17:58, 29 August 2014 (UTC)[reply]
I mostly agree with that, but it doesn't really answer the underlying motivation: why are there several conventions for notation of differentiation still in modern use, but only one for integration (at least restricting to functions of a single real variable)? Put another way, why do we still sometimes use Newton's notation for derivatives, but not for integrals? I suspect the answer is that the various options for differentiation have different strengths and weaknesses, while in contrast, the integral notation doesn't have any real downsides. Of course, there are a few different notations for different types of integrals, e.g. path integral, double integral, surface integral, Ito integral etc. In that light, it wouldn't be so strange to have an article that mentions each of these briefly. Checking the articles, the notation is fairly consistent, but sometimes in textbooks the integral symbol gets adorned in different ways, depending on context. SemanticMantis (talk) 21:40, 29 August 2014 (UTC)[reply]
Let's look at the question in another way: . is the inverse of but there is only one way to write this. Differentiation on the other hand has many different ways, but the inverse, integration, has one way. Why?174.3.125.23 (talk) 22:17, 29 August 2014 (UTC)[reply]
Actually there's an article Integral symbol. I just remembered about that as it describes how the Germans and Russians use much more upright versions. Dmcq (talk) 22:18, 29 August 2014 (UTC)[reply]
Don't forget the physicists' habit of writing the d-whatever right after the integral sign as opposed to after the integrand. YohanN7 (talk) 22:19, 29 August 2014 (UTC)[reply]
I'm quite liable to leave it out altogether sometimes ;-) Dmcq (talk) 14:16, 30 August 2014 (UTC)[reply]
This mention makes me think of differential geometry, where the integral does not form a notational pairing with a formal variable of integration (as in Exterior derivative#Stokes' theorem on manifolds); it is only over a region of a manifold. This might be relevant in that while it looks similar, it is a distinct notation. —Quondum 20:54, 30 August 2014 (UTC)[reply]

Ok, let's ask another question. We know that "n" is any number, "b" is any number. Newtonian notation uses "dt". Is "dt" = "dx"? Why?174.3.125.23 (talk) 16:07, 30 August 2014 (UTC)[reply]

I can't quite make out what you are saying but Newtonian notation does not use dx or dt. It assumes a single independent variable, t normally but something else can be assumed instead. For instance ÿ = −y (the dots denoting derivatives with respect to the independent variable) describes simple harmonic motion with time as the independent variable but ẏ = y might describe the exponential function with x as the independent variable - but in mechanics it would just be time again. Dmcq (talk) 17:50, 30 August 2014 (UTC)[reply]

Ok, my situation is at a Math 31 level, which is a grade 12 calculus course in Alberta. I am stuck on the quotient rule. I need a proof. I believe where I was stuck uses Leibniz notation. I think the quotient rule is one multiplied by another, but I don't understand why.174.3.125.23 (talk) 20:13, 31 August 2014 (UTC)[reply]

This is quite different from your original question. Try reading quotient rule and product rule. —Quondum 01:20, 1 September 2014 (UTC)[reply]
That is a poor explanation of my question. Here's another question, why is d over dx?174.3.125.23 (talk) 01:28, 1 September 2014 (UTC)[reply]
The purpose of the reference desk is not to act as a tutoring service, but is primarily to provide references such as I gave you; in particular, you need to be prepared to take the information and links given and extract the information that is relevant to your question. If you cannot frame your questions so that it is clear what information you seek, and especially if you are so dismissive, you can't expect much of a response. You are not demonstrating that you are trying to synthesize the information that you have been given. —Quondum 01:51, 1 September 2014 (UTC)[reply]


August 30

Integers/whole numbers vs decimals

The advantage of using integers instead of decimals would seem obvious to most (9 mm instead of 0.09 cm, 1500 metres instead of 1.5 kilometres). But is a preference for integers/whole numbers over decimals when using SI units an established principle?--Gibson Flying V (talk) 03:22, 30 August 2014 (UTC)[reply]

It is more that people like to use a system where their measurements have a whole number part but are not too big, and to use the largest unit that allows that. 1500 meters is an example where one tries as far as possible to use the same scale for all one's measurements. In athletics one would say 1500 meters but in a car one might say 1.5 kilometers. Dmcq (talk) 07:31, 30 August 2014 (UTC)[reply]
Right, but for whatever reason 9mm and 1500m were chosen. Similarly, drinks are in 700ml bottles, not 0.7l bottles, snacks are in 200g packs, not 0.2kg packs, films are 90 minutes, not 1.5 hours. It seems that where integers can be used, they are, and I was curious to know from those knowledgeable in mathematics if this apparent preference has ever been acknowledged anywhere (or does it just go without saying).--Gibson Flying V (talk) 07:40, 30 August 2014 (UTC)[reply]

Note that 9 mm = 0.9 cm, (not = 0.09 cm). Integers are more elementary and were historically used before fractions, and so an integer number of subunits were preferred to fractions of larger units. The prefix c = 0.01 is usually considered part of the unit, cm = 0.01 m, rather than part of the number, 0.9c = 0.009 . Of course 0.9 cm = 0.9c m. Bo Jacoby (talk) 20:25, 30 August 2014 (UTC).[reply]

  • Medical professionals are taught to avoid working with decimals, particularly when measuring dosages.[1][2][3][4][5]
  • The UK Metric Association's Measurement units style guide says, "Use whole numbers and avoid decimal points if possible - e.g. write 25 mm rather than 2.5 cm."
  • In his book entitled The Fear of Maths: How to Overcome It Steve Chinn opens the chapter entitled "Measuring" with I am sure that most people would rather avoid decimals and fractions. This is the reason we have "pence" rather than "one hundredths of a pound". The metric system allows us to avoid decimals by using a prefix instead of a decimal point. If £1 is the basic unit of money, then 1 metre is now the basic unit of length. The metre is too long for some measurements, so we use prefixes, as in "millimetre", as a way of dealing with fractions of a metre.
  • This article cites the Australian construction industry's standardisation on millimetres for all measurements in 1970 as having saved it 10-15% in construction costs due to the elimination of errors associated with decimals.
That's all I could find so far.--Gibson Flying V (talk) 01:12, 31 August 2014 (UTC)[reply]

Absurd or meaningless rate

I couldn't decide what desk to post this question to. It's kind of a logical/mathematical question but it's also a semantic/linguistic question, so if this is the wrong place to ask this question, please forgive.

Consider the following statements: 1) "I can run fast, up to 10 miles an hour" 2) "I can run at least one mile in at least an hour"

The first statement refers to a maximum possible speed or rate or ratio. But the second statement appears to be absurd or meaningless (I think). Can someone explain to me in a quasi-systematic way *why* the second statement is meaningless.--Jerk of Thrones (talk) 06:51, 30 August 2014 (UTC)[reply]

The Humanities reference desk would probably have been the right place for a question like this.
The first asserts that you can run at that speed for at least a short distance. The second is not meaningless; it says you can run one whole mile but sets no limit on the speed. The apparent meaninglessness comes from the very reasonable expectation that the speaker actually meant something more, otherwise they wouldn't have said so many unnecessary words; that implies they made a mistake in what they said. In English that sort of sentence can easily be the result of a common habit of duplicating a superlative, and one would suppose they just made a mistake and meant "I can run at least one mile in an hour", but there may be some other explanation depending on the circumstances. Dmcq (talk) 07:21, 30 August 2014 (UTC)[reply]
It is absurd because it seems as if it should be a statement about how fast someone can run, but isn't. It could be paraphrased as 'I can run for some unspecified distance of a mile or more - but it will take me an hour or more to do it.' It isn't actually meaningless, just less informative than it first appears. AndyTheGrump (talk) 07:24, 30 August 2014 (UTC)[reply]
I think the odd part is claiming one can move a mile in a period of time without any upper limit. Unless the person is infirm, that should be true of everyone. Of course, just what constitutes "running" is open for debate, but most wouldn't call a mile in an hour to be a run at all, only a slow walk. If you said it as "I can travel at least a mile in at least an hour", then that might be a reasonable statement from somebody with some type of injury, or carrying a heavy load. StuRat (talk) 02:35, 31 August 2014 (UTC)[reply]
Running vs. walking isn't defined by the speed, but by the gait. When walking you have 1-2 feet on the ground at any time, when running you have 0-1. -- Meni Rosenfeld (talk) 07:50, 2 September 2014 (UTC)[reply]
It's defined by both: [6]. There's not much point to using a running gait when moving that slowly. Even joggers move faster than that. StuRat (talk) 14:45, 2 September 2014 (UTC)[reply]
The meaninglessness comes with both the over-generalization of the sentence (mentioned above, effectively weakening the statement to "I can run 1 mile before I die") and the contrast with the listener's expectation ("...in at least one hour? That doesn't help one bit").
Advertisers do this a lot, throwing at the audience a heap of positive-sounding phrases which don't actually synergize. ("Save up to 50%, and more" is the textbook example. It could be 50%, 99%, or only 1%, and due to the illogical structure of their promises, they didn't really lie even if most customers save much less than 50%.)
Some politicians use similar patterns, usually for similar reasons (to suggest, rather than actually make, promises).
Sometimes employed for comedy ("A messy death is the last thing that could happen to you" – literally) or by a "lawful" character who would never lie. TV Tropes calls it a "false reassurance" . - ¡Ouch! (hurt me / more pain) 10:43, 1 September 2014 (UTC)[reply]
That's a good one. Absolutely true but conveying no information. I like those in my speech, like 'If I don't go to sleep I'll never wake up in the morning'. I think there's a term for those but I've forgotten it. Dmcq (talk) 11:07, 1 September 2014 (UTC)[reply]
Sports produce a lot of those, like "The reason we lost is that they scored more points than us." StuRat (talk) 17:07, 5 September 2014 (UTC)[reply]

I read the article Coequalizer, and feel a little bit stupid, because even after repeatedly thinking about it, it evades my grasp.

The article tells me:

In the category of sets, the coequalizer of two functions f, g : X → Y is the quotient of Y by the smallest equivalence relation ~ such that for every x ∈ X, we have f(x) ~ g(x).[1] In particular, if R is an equivalence relation on a set Y, and r1, r2 are the natural projections (R ⊂ Y × Y) → Y, then the coequalizer of r1 and r2 is the quotient set Y/R.

Firstly I have trouble understanding what the smallest equivalence relation is. I assume, it's the finest?

To make a simple example, assume X=Y is the set of real numbers and and . What would be the coequalizer? 77.3.137.128 (talk) 13:08, 30 August 2014 (UTC)[reply]

Yes, smallest means finest. The term smallest is justified by thinking of an equivalence relation as a set of pairs. Then the smallest one with property X is the intersection of all equivalence relations with property X.
Another way to view it is to start with f(x) ~ g(x) for all x ∈ X, then make it reflexive and symmetric and close under transitivity.
Using your example, for every nonnegative , , so we start with for all . Of course, we also add symmetry and reflexivity. Normally we'd need to close under transitivity, but this is already transitive. So now we take the quotient of the reals by this, which gets us a set which can be naturally identified with the nonnegative reals.--80.109.106.3 (talk) 14:38, 30 August 2014 (UTC)[reply]
Excuse me, it really looks like I have some extraordinary mental block on that subject. Please tell me what the morphism of this coequalizer would be. 77.3.137.128 (talk) 14:57, 30 August 2014 (UTC)[reply]
I'm not sure what morphism you're asking for. The equivalence relation from your example is given by if or . We get the coequalizer by taking the quotient of the reals by this, so the coequalizer is the set . The natural identification I mentioned earlier is given by .--80.109.106.3 (talk) 17:04, 30 August 2014 (UTC)[reply]
Thank you so far. I guess my problem is some misunderstanding deep inside my head, probably mixing limits and colimits. At least I now have an example that is not tainted by this fault inside my brain. Thanks. 77.3.137.128 (talk) 20:18, 30 August 2014 (UTC)[reply]
Got it! I finally got my brain bug fixed. Having been trained on resolving equations, my mind was tied on thinking about the domain, but, as the name co-equalizer strongly suggests, we are rather forcing equality on the codomain. Nice koan. 95.112.216.113 (talk) 08:53, 31 August 2014 (UTC)[reply]
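(For anyone who wants to experiment: here is a rough sketch, for finite sets only and with made-up names, of the construction quoted above. It generates the smallest equivalence relation containing f(x) ~ g(x) with union-find and reads off the quotient. The example with f = identity and g = negation is just a finite analogue of identifying x with −x, not necessarily the example discussed above.)

 # Coequalizer of f, g : X -> Y in the category of finite sets:
 # quotient Y by the smallest equivalence relation with f(x) ~ g(x).
 def coequalizer(X, Y, f, g):
     parent = {y: y for y in Y}              # union-find structure over Y

     def find(y):
         while parent[y] != y:
             parent[y] = parent[parent[y]]   # path halving
             y = parent[y]
         return y

     for x in X:
         parent[find(f(x))] = find(g(x))     # force f(x) ~ g(x)

     # Equivalence classes of Y and the quotient map q : Y -> Y/~.
     blocks = {}
     for y in Y:
         blocks.setdefault(find(y), set()).add(y)
     quotient = [frozenset(b) for b in blocks.values()]
     q = {y: next(c for c in quotient if y in c) for y in Y}
     return quotient, q

 X = Y = [-2, -1, 0, 1, 2]
 quotient, q = coequalizer(X, Y, lambda x: x, lambda x: -x)
 print(sorted(sorted(c) for c in quotient))   # [[-2, 2], [-1, 1], [0]]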
{{reflist-talk}} added here for clarity 71.20.250.51 (talk) 11:58, 31 August 2014 (UTC)[reply]

References

  1. ^ Barr, Michael; Wells, Charles (1998). Category theory for computing science (PDF). p. 278. Retrieved 2013-07-25.


August 31

Trilateral symmetry

My question relates to a hypothetical sentient lifeform based on trilateral symmetry. Assume their mathematics to be base-9 (since they have 3 digits on each of their 3 appendages; the only reason humans created the decimal system is that we happened to be created with ten "digits").  —The question is: Are irrational numbers such as π and φ irrational for all base systems –in the sense that they cannot be expressed with a finite set of ordinal digits, (or whatever the proper terminology is)? Does this relate to Commensurability, and would this be applicable to all number-base systems (specifically, base-3 and base-9)?  —I might not be expressing myself clearly, but hopefully you get the idea. A second (tangentially related) question might best be asked on the computing or science desk, but I'll give it a try here: is there such a thing as a trinary computer based on (null, +/-); translated as (0,1,2) or base-3 (?)     ~:71.20.250.51 (talk) 11:08, 31 August 2014 (UTC)[reply]

Actually, humans developed place-value arithmetic three times, with three different bases. The first place-value system was that of the ancient Babylonians, with base 60. The Mayans used base 20. We use so-called Arabic numerals, which were actually invented in India before being adopted by the Arabs, with base 10. The connection of the arithmetic base with evolutionary anatomy would appear to be sort of random. There are still a few vestiges of Babylonian mathematics, such as 60 minutes to a degree and 60 seconds to a minute, reflecting the use of Babylonian mathematics in astronomy and astrology. Except for that specialized use, Babylonian mathematics did not displace the use of non-place-value systems such as Egyptian, Greek, and Roman numerals. It had the advantage (as do Arabic numerals) of permitting calculations with an arbitrary amount of precision. (That is, you can always carry out a long division to as many decimal places or sexagesimal places as you need, which is important for calculating astronomical events.) It had the disadvantage that it was difficult to memorize the addition and multiplication tables.
However, the question about rational, irrational, and transcendental (incommensurable) numbers has already been answered, which is that rationality does not depend on the base. The axiomatic formulation of mathematics, with Peano postulates, Dedekind cuts, etc., does not depend on the base. Robert McClenon (talk) 19:21, 31 August 2014 (UTC)[reply]
The definition of irrational is that such a number cannot be expressed as the ratio of two integers. Since being an integer doesn't depend on base, being irrational does not depend on base. The fact that the decimal expansion of irrationals is infinite without repetition is a theorem. If you go through the proof, you'll see that it can be repeated in whatever integer base you like. So yes, π's expansion is infinite without repetition in base 9.
Since being irrational (and similarly, being rational) does not depend on your base, commensurability does not depend on your base. Otherwise, I don't see much of a way in which it's related.--80.109.106.3 (talk) 12:58, 31 August 2014 (UTC)[reply]
(E.C.) Yes, they are still irrational. An irrational number is one that can't be expressed as a fraction -- or ratio -- of two integers, and this definition is irrespective of base. One consequence of this definition, discussed in Irrational number#Decimal expansions, is that an irrational number cannot be expressed as a terminating or repeating expansion in any natural base (decimal, binary, ternary, whatever), while a rational number can be expressed as a terminating or repeating expansion in every base, although any given rational number may have an infinite but repeating expansion in one base and a terminating one in another. For instance, 1/3 = 0.333333... in base 10 and 0.010101... in base 2, but 0.3 in base 9 and 0.1 in base 3.
For base 3, see our articles Ternary numeral system, Balanced ternary, and Ternary computer. -- ToE 13:09, 31 August 2014 (UTC)[reply]
Thank you, everyone, for your informative replies and links!   ~:71.20.250.51 (talk) 00:16, 1 September 2014 (UTC)[reply]
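(A small sketch, in plain Python with an invented function name, of the long-division point made above: expanding a rational p/q in any integer base either terminates or starts repeating as soon as a remainder recurs.)

 # Expand p/q (with 0 < p < q) in the given base; stop when the expansion
 # terminates or when a remainder repeats (start of the repeating block).
 def expansion(p, q, base, max_digits=20):
     digits, seen, r = [], {}, p
     while r and r not in seen and len(digits) < max_digits:
         seen[r] = len(digits)
         r *= base
         digits.append(r // q)
         r %= q
     repeat_from = seen.get(r)    # None if the expansion terminated
     return digits, repeat_from

 print(expansion(1, 3, 10))   # ([3], 0): "3" repeats from position 0, i.e. 0.333...
 print(expansion(1, 3, 2))    # ([0, 1], 0): "01" repeats, i.e. 0.010101...
 print(expansion(1, 3, 9))    # ([3], None): terminates, 0.3 in base 9
 print(expansion(1, 3, 3))    # ([1], None): terminates, 0.1 in base 3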

Defining a perfect number

Go to Perfect number. It says:

In number theory, a perfect number is a positive integer that is equal to the sum of its proper positive divisors, that is, the sum of its positive divisors excluding the number itself (also known as its aliquot sum). Equivalently, a perfect number is a number that is half the sum of all of its positive divisors (including itself) i.e. σ1(n) = 2n.

It's a provable theorem that the 2 definitions equate. But what I want to know is why the latter definition is preferred by some modern mathematicians. Georgia guy (talk) 13:42, 31 August 2014 (UTC)[reply]

I can't speak for all of those modern mathematicians, but removing an exception from a definition looks well worth the trade of an additional factor somewhere. 95.112.216.113 (talk) 14:22, 31 August 2014 (UTC)[reply]
While I would not think of a uniform exclusion as an exception, there is a pleasing symmetry between:
  • A perfect number is a number for which its positive divisors sum to twice the number, and
  • A perfect number is a number for which the reciprocals of its positive divisors sum to 2.
The second statement becomes rather awkward when the reciprocal of the number itself is omitted. —Quondum 19:15, 1 September 2014 (UTC)[reply]
Probably because mathematicians like to reduce things to other things and annex them as much as possible into theories. So they like to write things with predefined functions, like σ1, which they can define in a natural way by Dirichlet convolution as σ1 = Id * 1, and they similarly like Dirichlet convolution because it is related to Dirichlet series. Definitions with σ1 − Id might look clunkier. John Z (talk) 20:25, 8 September 2014 (UTC)[reply]
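(A throwaway brute-force check in Python, names purely illustrative, that the two definitions quoted above pick out the same numbers: aliquot sum equal to n versus σ1(n) = 2n.)

 def sigma1(n):
     """Sum of all positive divisors of n, including n itself."""
     total, d = 0, 1
     while d * d <= n:
         if n % d == 0:
             total += d + (n // d if d != n // d else 0)
         d += 1
     return total

 def perfect_by_aliquot(n):
     return sigma1(n) - n == n      # proper divisors sum to n

 def perfect_by_sigma(n):
     return sigma1(n) == 2 * n      # sigma_1(n) = 2n

 assert all(perfect_by_aliquot(n) == perfect_by_sigma(n) for n in range(1, 10000))
 print([n for n in range(1, 10000) if perfect_by_sigma(n)])   # [6, 28, 496, 8128]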

Total degree of elementary symmetric polynomials

One can think of the Fibonacci numbers as the number of integer solutions to x1, x2, ..., xn ≥ 0 with x1+x2 ≤ 1, x2+x3 ≤ 1, ..., xn-1+xn ≤ 1, the number of solutions being Fn+2. Define S(n,k) as the number of integer solutions to x1, x2, ..., xn ≥ 0 with x1+x2 ≤ k, x2+x3 ≤ k, ..., xn-1+xn ≤ k. So S(n,0)=1, S(n,1)=Fn+2. (S(n,k) is the value at k of the Ehrhart polynomial of the polytope defined by the first set of inequalities.) I computed S for n and k ≤ 7 and found a matching set of values in OEISA050446, but I don't understand the description of the entry "total degree of n-th-order elementary symmetric polynomials in m variables." Also, some insight on how S(n, k) might be related to the degree of an elementary symmetric polynomial would be appreciated. --RDBury (talk) 19:45, 31 August 2014 (UTC)[reply]
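(If it helps anyone check the match: a brute-force count of S(n, k) straight from the definition above, pure Python, small n and k only. The explicit bound x_i ≤ k is an assumption added here so the n = 1 case stays finite; for n ≥ 2 it already follows from the pairwise constraints.)

 from itertools import product

 def S(n, k):
     """Count integer points x_1..x_n with 0 <= x_i <= k and x_i + x_{i+1} <= k."""
     return sum(
         1
         for x in product(range(k + 1), repeat=n)
         if all(x[i] + x[i + 1] <= k for i in range(n - 1))
     )

 print([S(n, 1) for n in range(1, 8)])                 # [2, 3, 5, 8, 13, 21, 34] = F_{n+2}
 print([[S(n, k) for k in range(5)] for n in range(1, 4)])
 # [[1, 2, 3, 4, 5], [1, 3, 6, 10, 15], [1, 5, 14, 30, 55]]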

September 1

Addition help

Hi guys

How can you do 1+1 WITHOUT a calculator?

tks — Preceding unsigned comment added by 84.26.201.18 (talk) 15:01, 1 September 2014 (UTC)[reply]

Perhaps this from Principia Mathematica might help? ;-) Dmcq (talk) 15:56, 1 September 2014 (UTC)[reply]
If you're asking how to teach children to add, the usual first step is to take one object, say an apple, then add another object, then have them count the total. Repeat this with various objects, and eventually they will understand that if you add 1 + 1 of any objects, you always get 2. StuRat (talk) 16:20, 1 September 2014 (UTC)[reply]
Unless of characteristic 2 YohanN7 (talk) 16:47, 1 September 2014 (UTC)[reply]
Not Unless, but rather Including when: that 2 = 0 for characteristic 2 does not change the validity of 1 + 1 = 2 (2 is normally defined as 1 + 1 in the general case). —Quondum 18:06, 1 September 2014 (UTC)[reply]
I-I don't understand what any of those symbols mean. 181.60.185.140 (talk) 00:05, 5 September 2014 (UTC)[reply]
Nobody said calculating 1+1 was easy. -- Meni Rosenfeld (talk) 12:00, 7 September 2014 (UTC)[reply]
Replying to StuRat. Actually you have taken rather a jump there. Before we get to do 1 + 1 the concept of "counting on" needs to be introduced: what do you get if you count on one place from one? (In technical terms, applying the successor operator S(x) to one, S(1).) Quite a bit of work is needed both educationally and logically to go from counting-on to full addition. --Salix alba (talk): 19:27, 7 September 2014 (UTC)[reply]
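(A toy illustration of that progression, purely pedagogical: define "counting on" as the successor operation and build addition from it by recursion, Peano-style.)

 def successor(x):
     """Counting on by one place: S(x)."""
     return x + 1

 def add(a, b):
     """Addition built from counting on: a + 0 = a, a + S(b) = S(a + b)."""
     return a if b == 0 else successor(add(a, b - 1))

 print(add(1, 1))   # 2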

September 2

September 3

Coequalizer, followup

Via google I can't find examples, and my brain bug that prevents me from seeing things clear is not really dead yet. What do coequalizers in the arrow category of sets look like? 93.132.23.10 (talk) 12:44, 3 September 2014 (UTC)[reply]

If my question was too special or too complex or too difficult, where can I get some reading about the arrow category of sets?
77.3.171.164 (talk) 14:38, 6 September 2014 (UTC)[reply]
And any reading, or examples of, coequalizers in other categories will be welcome, too. 77.3.171.164 (talk) 14:54, 6 September 2014 (UTC)[reply]
Any two functions f : A → B and g : C → D are objects in the arrow category of sets. Let (s1, t1) and (s2, t2) be morphisms from f to g, that is si : A → C and ti : B → D such that t1 ∘ f = g ∘ s1 and t2 ∘ f = g ∘ s2.
Now construct the coequalizers in the category of sets of the pairs of morphisms s1, s2 and t1, t2 as in your previous question, i.e. let ~C and ~D be the smallest equivalence relations on C and D such that s1(a) ~C s2(a) for all a ∈ A and t1(b) ~D t2(b) for all b ∈ B, let QC = C/~C and QD = D/~D, and define qC : C → QC by qC(c) = [c] for all c ∈ C and qD : D → QD by qD(d) = [d] for all d ∈ D, where [c] and [d] denote equivalence classes of c and d.
Now we've got all that notation we can construct the coequalizer of the pair of morphisms (s1, t1), (s2, t2). The hard part is to show that for all c, c′ ∈ C, if c ~C c′ then g(c) ~D g(c′). I'm short of time right now so I won't prove this. It follows that we can define a function h : QC → QD by h([c]) = [g(c)] for all c ∈ C. Then h together with the morphism (qC, qD) from g to h is the coequalizer of (s1, t1), (s2, t2). 121.99.220.36 (talk) 22:47, 6 September 2014 (UTC)[reply]
Thanks a lot. It always strikes me how simple those things turn out to be when I'm still not able to get those ideas without help. So what you do is you construct the coequalizers for source and target separately and then put them together so that this works in the arrow category. I guess it's a good exercise for me to fill in the missing proof. Thanks again. 95.112.218.246 (talk) 09:25, 7 September 2014 (UTC)[reply]
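(A sketch in the same spirit, finite sets only and with invented names, following the recipe above: coequalize the source maps and the target maps separately, then derive the induced map between the quotients. The assert encodes the "hard part" compatibility condition mentioned above.)

 # Coequalizer in the arrow category of (finite) sets.
 # Objects are functions (as dicts); a morphism f -> g is a pair (s, t) with t o f = g o s.
 def set_coequalizer(dom, cod, p, q):
     """Coequalizer of p, q : dom -> cod in Set; returns y -> representative of its class."""
     parent = {y: y for y in cod}
     def find(y):
         while parent[y] != y:
             parent[y] = parent[parent[y]]
             y = parent[y]
         return y
     for x in dom:
         parent[find(p[x])] = find(q[x])
     return {y: find(y) for y in cod}

 def arrow_coequalizer(A, B, C, D, f, g, s1, t1, s2, t2):
     # (f itself only enters via the morphism condition t o f = g o s.)
     qC = set_coequalizer(A, C, s1, s2)     # coequalize the source maps
     qD = set_coequalizer(B, D, t1, t2)     # coequalize the target maps
     h = {}                                 # induced map h([c]) = [g(c)]
     for c in C:
         cls, val = qC[c], qD[g[c]]
         assert h.get(cls, val) == val, "g does not descend to the quotients"
         h[cls] = val
     return h, qC, qD

 # Tiny example (all names made up); (s1,t1) and (s2,t2) really are morphisms f -> g.
 A, B = [0, 1], ['a', 'b']
 C, D = [0, 1, 2], ['p', 'q', 'r']
 f = {0: 'a', 1: 'b'}
 g = {0: 'p', 1: 'q', 2: 'r'}
 s1, s2 = {0: 0, 1: 1}, {0: 1, 1: 2}                    # A -> C
 t1, t2 = {'a': 'p', 'b': 'q'}, {'a': 'q', 'b': 'r'}    # B -> D
 h, qC, qD = arrow_coequalizer(A, B, C, D, f, g, s1, t1, s2, t2)
 print(h)   # {2: 'r'}: both quotients collapse to a single class here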

Why is this surface area approximation for a solid of revolution erroneous?

The task is rather simple: find the surface area over the unit circle centered at the origin of the paraboloid z = x² + y², the solid of revolution obtained by revolving the parabola z = x² around the z-axis. Denote the unit circle as R for the double integral. The correct computation is ∬_R √(1 + 4x² + 4y²) dA = 2π ∫₀¹ √(1 + 4r²) r dr = (π/6)(5√5 − 1) (using polar coordinates). This is also in agreement with the solid of revolution formula A = ∫ 2πx ds (ds is the arc length element of the paraboloid along a radial plane passing through the origin and parallel to the z-axis).

The problem is, why doesn't the following approximation work? Naively one may expect it to be the same, but it is not. Approximate the surface area using the sum of infinitely many cylindrical sections with height dz and circumference 2πx, whose surface area is each 2πx dz. Since z varies from 0 to 1 as x varies from 0 to 1, the erroneous notion was that A = ∫₀¹ 2πx dz = ∫₀¹ 2π√z dz = 4π/3, which is obviously wrong. My hunch is that this cylindrical approximation becomes very poor near the origin, where the paraboloid has a large horizontal component to it.--Jasper Deng (talk) 19:38, 3 September 2014 (UTC)[reply]

Instead of discarding the horizontal component of your paraboloid section to get a cylinder of radius x and height dz, you should "twist" the paraboloid section to get a cylinder of radius x and height ds. Egnau (talk) 20:36, 3 September 2014 (UTC)[reply]
The approximation is not following the surface closely enough. It's analogous to approximating a diagonal with a staircase of vertical and horizontal segments. In that case the approximation can be arbitrarily close to the curve and can be used to approximate the area underneath, but if you try to use it for length it will be incorrect and it doesn't help to use more segments. (This idea was shown to me as a "proof" that √2 = 2.) In order to get an approximation to the length of a curve or the area of a surface, the approximating curve (resp. surface) has to approximate the tangent line (resp. plane) of the original, not just the position. --RDBury (talk) 11:15, 5 September 2014 (UTC)[reply]
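(To see the staircase effect numerically: a small Python comparison, not from the thread, everything here is illustrative. Slicing the surface of revolution of z = x² into thin horizontal bands and summing 2πx·Δz converges to 4π/3 ≈ 4.19, while using the slant length Δs of each band converges to the true area π(5√5 − 1)/6 ≈ 5.33.)

 import math

 N = 100_000
 true_area = math.pi * (5 * math.sqrt(5) - 1) / 6        # ~5.3304

 cyl = band = 0.0
 for i in range(N):
     z0, z1 = i / N, (i + 1) / N
     x0, x1 = math.sqrt(z0), math.sqrt(z1)
     x_mid = 0.5 * (x0 + x1)
     dz = z1 - z0
     ds = math.hypot(x1 - x0, dz)         # slant length of the thin band
     cyl += 2 * math.pi * x_mid * dz      # cylinder: ignores the horizontal run
     band += 2 * math.pi * x_mid * ds     # frustum-like band: follows the surface

 print(cyl)        # ~4.19  (= 4*pi/3, the erroneous value)
 print(band)       # ~5.33  (matches the correct surface area)
 print(true_area)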

September 4

Explanation for the solution of x in this equation.

Well, I've certainly tried everywhere, but it seems the answer is tricky. Can someone explain to me how to solve for x in ln(x) = xy? Thank you very much 181.60.185.140 (talk) 00:03, 5 September 2014 (UTC)[reply]

ln(x) = x * y
ln(x)/x = y
y = g(x) where g(x)=ln(x)/x
x = invg(y)
I am not sure how you can solve it symbolically but you can now solve it numerically

202.177.218.59 (talk) 01:54, 5 September 2014 (UTC)[reply]

I believe you need a special function for that called the Lambert W function. I'll leave you the fun of figuring out how to do it :) Dmcq (talk) 08:08, 5 September 2014 (UTC)[reply]
I've known you have to use this Lambert W function, yet I can't manage to get the equation into the form we^w = z with a right-hand side that doesn't involve x. So, when I use this function, x keeps appearing on both sides, making it unsolvable, yet again. 181.60.185.140 (talk) 16:56, 5 September 2014 (UTC)[reply]
Rewrite your equation to −xy e^(−xy) = −y, then substitute w = −xy. You immediately get w e^w = −y, so w = W(−y) and x = −W(−y)/y. 95.112.218.246 (talk) 09:46, 7 September 2014 (UTC)[reply]

Rewrite the equation

0 = e^(yx) − x.

Expand the exponential,

pn(x) = 1 + (y−1)x + (y²/2)x² + . . . + (y^n/n!)x^n

For a sufficiently big value of n the equation 0=pn(x) is solved numerically by, say, the Durand-Kerner method. Bo Jacoby (talk) 08:18, 5 September 2014 (UTC).[reply]

If a numerical solution is desired, Newton's method works better. (The method you linked to is for polynomial equations, not transcendental equations.) Sławomir Biały (talk) 12:37, 5 September 2014 (UTC)[reply]
For every n the equation 0 = pn(x) is a polynomial equation of degree n. Newton's method does not always converge. (Try Newton on 0 = x² + 1). 14:47, 5 September 2014 (UTC).
Note that y = ln(x)/x has a maximum of y = 1/e at x = e so that there are no real solutions for y > 1/e but two solutions when 0 < y < 1/e (one in the range 1 < x < e and one e < x < ∞). For negative y there is only one real solution (in the range 0 < x < 1) --catslash (talk) 13:48, 5 September 2014 (UTC)[reply]
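(A quick numerical sanity check of the Lambert-W route, assuming SciPy and NumPy are available; x = −W(−y)/y follows from the substitution above, with branch k = 0 giving the solution in 1 < x < e and k = −1 the one with x > e. The value of y is just an example.)

 import numpy as np
 from scipy.special import lambertw

 y = 0.2                                    # any 0 < y < 1/e gives two real solutions
 for k in (0, -1):
     x = float(np.real(-lambertw(-y, k) / y))
     print(k, x, np.log(x) - x * y)         # residual of ln(x) - x*y should be ~0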

Linear congruential generator

Hi, at Linear congruential generator it says:

For example, the Java implementation operates with 48-bit values at each iteration but returns only their 32 most significant bits. This is because the higher-order bits have longer periods than the lower-order bits (see below).

I don't see how the second sentence necessarily justifies the first. Sure, I get that if you want 32 bits then you are better off with the high 32 than the low 32, but are you necessarily better off with the high 32 than with all 48? Specifically, in my case, I want to return a real number between 0 and 1, with the most precision and best randomness possible. Am I better off taking the high 32 bits and dividing by 2^32 rather than using all the bits and dividing by 2^48? If so, why? 31.51.7.25 (talk) 20:41, 5 September 2014 (UTC)[reply]

A short period means the bits in question follow an obvious pattern, which disqualifies them for use as pseudorandom digits. So the bits are discarded rather than being used as output from the pseudorandom number generator. --RDBury (talk) 22:26, 5 September 2014 (UTC)[reply]
As described, there might be a specific problem with the high-order bits, namely that the returned values are not uniformly distributed over the full 32-bit range. If the modulus is m, then the output distribution will be uniform over the range 0 to ⌊(m − 1)/2^16⌋ − 1, with the value ⌊(m − 1)/2^16⌋ having lower probability, and higher values never occurring. Your suggestion of dividing by 2^48 would have the same problem, though adding 0.5 and dividing by m would give a more uniform distribution. Applications that have any sensitivity to correlation between values and other statistical properties should steer clear of linear congruential generators.
If you are looking for high quality pseudo-random numbers as you seem to be, I'd suggest using a secure random number generator (which I'd expect is available in Java), and concatenating enough data to fill a real number storage, then convert to real. You may find you need over 50 bits for this. —Quondum 22:22, 5 September 2014 (UTC)[reply]
Thanks for the replies. Firstly, I understand what a short period means. However, if I use a 32-bit random number in an application as a double-precision value, then the low bits are going to be always zero. That is a period of one, and I don't see why any period, however short, should not be better. Second, Quondam, you lost me at "As described, there might be a specific problem with the high-order bits". The problem under discussion here is with the LOW-order bits. Are you referring to a different problem? 31.51.7.25 (talk) 22:53, 5 September 2014 (UTC)[reply]
Yes, but on closer review, I see that m = 2^48, which nullifies my point. You seem to misunderstand the period, though. The period is how soon the same number repeats, and is unlikely to be less than 2^32, regardless of which bits you use. Unless you have a random data requirement exceeding this, this should not be an issue to you. But since you indicated that you wanted data "with the most precision and best randomness possible", you might want more than 48 bits per value (easily achieved by concatenating two 32-bit values), as well as avoiding certain undesirable statistical properties that linear congruential generators exhibit. If, for example, you are using the data for a Monte Carlo simulation of some sort, you can be unfortunate and get highly improbable behaviour. LCGs sometimes bite one that way. —Quondum 23:51, 5 September 2014 (UTC)[reply]
Thanks for your reply. I understand exactly what the period is. With respect, I think the problem is not with my understanding but with yours. You seem not to grasp the point that I am making. 31.51.7.25 (talk) 00:30, 6 September 2014 (UTC)[reply]
Apologies, you are probably correct. I was confused by your phrase "That is a period of one". —Quondum 01:02, 6 September 2014 (UTC)[reply]
If the LCG returned all 48 bits it would not really be a pseudorandom number generator, since the low bits are not random-looking. Client software would have to be written to work around the non-randomness of the output bits, which would tie it to that particular generator. It's normally better to make the PRNG emit only high-quality random bits; then the client doesn't need to know how the PRNG works.
If your application was so time-sensitive that you couldn't afford to clock the LCG twice per generated double, then it would probably be better to use all 48 bits than just the top 32, but hopefully you'll never find yourself in that situation. Otherwise, if your application is sensitive to N bits of the mantissa, then you should put at least N high-quality random bits in the mantissa. If N ≤ 32 then you might as well use the 32-bit output, and if N > 32 then you'd be much better off using two concatenated 32-bit outputs than one 48-bit state. -- BenRG (talk) 05:05, 7 September 2014 (UTC)[reply]
I'm really not sure why anyone would use a linear-congruential generator anymore. It was a nice solution compared to the ones that existed at the time, and there's some fun math that goes into picking good values for the coefficients and modulus. But better ways have been found — algoes that have both better statistical properties (well, depending on which ones you look at, but better for most properties anyway) and are, on most systems, faster, because they don't require division, just bit manipulation. See Mersenne twister. There's a simpler algo based on primitive trinomials mod 2, called Tausworthe, which we don't seem to have an article on. --Trovatore (talk) 05:12, 7 September 2014 (UTC)[reply]
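(A little demonstration of the low-bit problem in Python; the multiplier and increment are the ones documented for java.util.Random, used here only as example constants. For a power-of-two-modulus LCG, bit i has period at most 2^(i+1), so bit 0 simply alternates.)

 A, C, M = 0x5DEECE66D, 0xB, 1 << 48      # java.util.Random's documented LCG parameters

 def lcg_states(seed, n):
     s = seed
     for _ in range(n):
         s = (A * s + C) % M
         yield s

 states = list(lcg_states(seed=42, n=16))
 print([s & 1 for s in states])           # bit 0: strictly alternating (period 2)
 print([(s >> 1) & 1 for s in states])    # bit 1: period 4
 print([(s >> 47) & 1 for s in states])   # top bit: no short obvious pattern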

Meaning of a symbol

What is the meaning of the symbol in "Let S = C ∪ {∞} ?" I mean, the infinity sign between curled brackets. It is taken from here, sub-chapter: "Examples," example #3 (bullet 3). I could not find it among the List of Mathematical Symbols. Thanks --AboutFace 22 (talk) 21:43, 5 September 2014 (UTC)[reply]

The infinity symbol is simply a constant symbol; an arbitrary name for a new point outside of the complex plane. The brackets are standard set brackets. That line is saying "Take the complex plane and add a new point. Call that new point infinity. Call the resulting space S."--88.217.142.67 (talk) 22:01, 5 September 2014 (UTC)[reply]
See Riemann sphere for more details on this construction. --RDBury (talk) 22:12, 5 September 2014 (UTC)[reply]

It is now clear. Thank you. --AboutFace 22 (talk) 00:41, 6 September 2014 (UTC)[reply]

what's happening?

I'm trying to convert a boolean expression (in sum of products form) to NOR logic using WolframAlpha, but the results seem off. For example, this expression, I scroll down to where it says "minimal forms", select "text notation", copy everything to Notepad, then paste the line where it says NOR, back to WolframAlpha. The result is this, but it's a different function! is it a bug or do I not "get" something (precedence maybe)? shouldn't they be identical? Asmrulz (talk) 09:55, 6 September 2014 (UTC)[reply]

You copied only the last half of the solution. In WolframAlpha's copyable plaintext, the solution will be formatted as
 ...
 NOR | <solution using NORs>
 NAND | <solution using NANDs>
 ...
It's important that you locate the line where it says NOR followed by a vertical bar and copy possibly multiple lines until the NAND followed by a vertical bar. Egnau (talk) 14:53, 6 September 2014 (UTC)[reply]
but I did... Here's what I'm copying: http://s15.postimg.org/w77xofwhn/snapshot8.png ... Asmrulz (talk) 18:31, 6 September 2014 (UTC)[reply]
Let me show you the differences by aligning the different answers (use the scrollbar).
What I get:          ((NOT v) NOR  (NOT w)) NOR  ((NOT v) NOR  (NOT z)) NOR  ((NOT w) NOR  x) NOR  ((NOT w) NOR  y) NOR  (x NOR  (NOT z)) NOR  (y NOR  (NOT z))
Your link "this":                                                                                  ((NOT w) NOR  y) NOR  (x NOR  (NOT z)) NOR  (y NOR  (NOT z))
Your blue highlight: ((NOT v) NOR  (NOT w)) NOR  ((NOT v) NOR  (NOT z)) NOR                                       (w NOR  x NOR  (NOT z)) NOR  (y NOR  (NOT z))
Egnau (talk) 00:58, 7 September 2014 (UTC)[reply]
and? paste your first line into WA and tell me it's the same function as that in my first link, what with different truth densities (17/32, as compared to 11/32, meaning, as I understand it, they have entirely different truth tables which aren't "rearrangements" of one another) and different DNFs. It's not about copying/pasting. I now realize that the screenshot I posted belongs to another expression, sorry. But it's the same thing: the sum of products and what WA says is its NOR form are different functions Asmrulz (talk) 09:19, 7 September 2014 (UTC)[reply]
I think I know. The problem is WA's parser thinks NOR is right-associative but in the "copyable plaintext" it is left-associative. I'll post a shorter example in a moment Asmrulz (talk) 10:05, 7 September 2014 (UTC)[reply]
1) expression: (a and b) or (c and d) (screenshot)
2) plaintext result: (a NOR c) NOR (a NOR d) NOR (b NOR c) NOR (b NOR d)
3) pasting result back, screenshot
4) observe how interpretation of NOR is that it is right-associative
5) manually parenthesizing previous output assuming left-associative NOR, and... NOPE, still not the equivalent of "(a and b) or (c and d)". I give up. Asmrulz (talk) 10:33, 7 September 2014 (UTC)[reply]
The parser does treat it as right-associative (which must be a bug). In the output it isn't left- or right-associative but has the natural interpretation x ⊽ y ⊽ z ≡ ¬(x ∨ y ∨ z). The only way to turn the output into something acceptable to the parser would be to use a completely different syntax, like Mathematica's native syntax Nor[x, y, z]. -- BenRG (talk) 16:28, 7 September 2014 (UTC)[reply]
Thank you! I started doubting my sanity for a moment Asmrulz (talk) 18:09, 7 September 2014 (UTC)[reply]
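(For anyone following along, a brute-force truth-table check in Python, purely illustrative, that the three readings really are different functions, which is why the pasted text parses differently.)

 from itertools import product

 def nor(*args):                 # n-ary NOR: NOT (a OR b OR ...)
     return not any(args)

 def table(fn, n=3):
     return tuple(fn(*vals) for vals in product((False, True), repeat=n))

 def nary(a, b, c):  return nor(a, b, c)         # x NOR y NOR z as the output means it
 def left(a, b, c):  return nor(nor(a, b), c)    # left-associative reading
 def right(a, b, c): return nor(a, nor(b, c))    # right-associative reading (the parser's)

 print(table(nary))
 print(table(left))
 print(table(right))
 print(table(nary) == table(left), table(nary) == table(right))   # False False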

names for particular slices of the 5 cube (Vertex first)

Consider the 4-D slices of the 5-cube {0,1}^5. Starting with the (0,0,0,0,0) point, the next slice that includes vertices is the 5 permutations of (0,0,0,0,1), which form a standard pentatope.

  • What is the polytope of the next slice, which includes vertices at the 10 permutations of (0,0,0,1,1)?
  • What is the Polytope of the middle of the 5-cube at the 30 permutations of (0,0,.5,1,1)? (This is the equivalent of the hexagonal slice of the 3-cube or the octahedral slice of the 4-cube.)12:28, 6 September 2014 (UTC)
The permutations of (0,0,0,1,1) form a polytope with 5 tetrahedral faces, 5 octahedral faces, and whose vertex figures are triangular pyramids. I believe it's the Rectified 5-cell. The other solid is more complex and may not have a name, but I need to compute some statistics on it before searching. WP has gone a bit overboard (imo) as far as its listing of polytopes, including not just the regular ones but their truncated, cantilevered and reticulated versions. So if it has a name then we probably have an article on it. --RDBury (talk) 17:43, 6 September 2014 (UTC)[reply]
Yes, the first is the Rectified 5-cell, there is a sentence under co-ordinates that says

More simply, the vertices of the rectified 5-cell can be positioned on a hyperplane in 5-space as permutations of (0,0,0,1,1) or (0,0,1,1,1).

no clue on the other for now.19:23, 6 September 2014 (UTC)
By analogy, I suspect that the halfway slice is the bitruncated 5-cell – and that entry agrees. —Tamfang (talk) 19:34, 6 September 2014 (UTC)[reply]
Yes, according to the linked article, "the vertices of the bitruncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,1,2,2). These represent positive orthant facets of the bitruncated pentacross." This is the above scaled by 2. --RDBury (talk) 01:18, 7 September 2014 (UTC)[reply]
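(A two-line check in Python of the vertex counts mentioned above, just counting distinct permutations of each coordinate pattern.)

 from itertools import permutations

 for coords in [(0, 0, 0, 0, 1), (0, 0, 0, 1, 1), (0, 0, 0.5, 1, 1), (0, 0, 1, 2, 2)]:
     print(coords, len(set(permutations(coords))))   # 5, 10, 30, 30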

Extensions

Any idea how to extend this idea, i.e. find all of the polytopes consisting of vertices of an n-cube equidistant from a single vertex, and how to get all of the polytopes which are the halfway cuts of n-cubes (where n is odd; n even gives a result from the first group)? 20:21, 6 September 2014 (UTC) (Naraht (talk) 01:55, 7 September 2014 (UTC))[reply]

Look at a more general slice. You get sum(xi)=a with xi>=0, xi<=1. Scale by 1/a = b to get sum(xi)=1, xi>=0, xi<=b. This is the (n-1)-simplex sum(xi)=1, xi>=0 truncated by the planes xi<=b, in other words it's in the continuum of truncations of the (n-1)-simplex starting from the full simplex (b=1) and ending at a single point (b=1/n). At b = 2/3 you get the (standard) truncated simplex and at b=1/2 you get the rectified simplex. Apparently (I'm having trouble understanding the definition) the bitruncated simplex is at b=2/5. For b=1/k the polytope has n choose k vertices, namely the permutations of (1/k, ... 1/k, 0, ... , 0) and is called a rectified, birectified, trirectified, etc simplex. For b=2/k, k odd, there are n × (n-1 choose (k-1)/2) vertices, the permutations of (2/k, ..., 2/k, 1/k, 0, ... 0). These are called truncated, bitruncated, tritruncated, etc simplices (again, assuming I've understood the meanings of these terms). Applying this to n=9 for example gives the middle slice as the quadritruncated 8-simplex.
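(And a quick numerical check of the two vertex-count formulas above, in Python; it just counts distinct permutations of the stated coordinate patterns for a small n, here n = 7 for speed.)

 from math import comb
 from itertools import permutations

 def count_perms(pattern, n):
     v = tuple(pattern) + (0,) * (n - len(pattern))
     return len(set(permutations(v)))

 n = 7
 for k in range(1, n + 1):                        # b = 1/k: rectified-type slices
     assert count_perms((1,) * k, n) == comb(n, k)
 for k in range(3, 2 * n, 2):                     # b = 2/k, k odd: truncated-type slices
     pattern = (2,) * ((k - 1) // 2) + (1,)
     if len(pattern) <= n:
         assert count_perms(pattern, n) == n * comb(n - 1, (k - 1) // 2)
 print("formulas agree for n =", n)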

September 7

Denumerable sets of trigonometric polynomials

Hi! I've been puzzled by a problem for some time now:

1. A real number is algebraic if it is a root of a polynomial with integer coefficients, etc. 2. How is it provable that the set of roots of trigonometric polynomials is countable? It seems any proof I can imagine depends on the definition of hyperbolic trig. functions. But I feel like there must be a possibility of proving this that does not rely on that.


```` — Preceding unsigned comment added by 76.102.205.17 (talk) 01:59, 7 September 2014 (UTC)[reply]

I'm not sure why you brought up the definition of algebraic real. Perhaps you meant to ask a second question?
As far as roots of trigonometric polynomials, trigonometric polynomials are analytic, and in fact any analytic non-zero function must have only countably many zeros. The reasoning is as follows: since the real line (and indeed the complex plane) is second-countable, any uncountable set contains an accumulation point. By unique analytic continuation, only the identically zero function can have a zero-set that contains an accumulation point.--80.109.106.3 (talk) 07:53, 7 September 2014 (UTC)[reply]

What's the full pi number

With all the numbers jacobroozie@gmail.com 65.175.250.157 (talk) 20:18, 7 September 2014 (UTC)[reply]

See Pi. The numbers never come to an end. Never. They've worked out the first 12 trillion digits, literally, and they haven't even scratched the surface. -- Jack of Oz [pleasantries] 20:28, 7 September 2014 (UTC)[reply]
You're thinking too decimal. I can satisfy the request very easily. The full pi number is pi. That's all the numbers in pi. Hope this helps. --Trovatore (talk) 20:43, 7 September 2014 (UTC)[reply]
10, in base pi. Double sharp (talk) 06:51, 9 September 2014 (UTC)[reply]
For what it's worth, you can download the 12-trillion-digit approximation here (split into 120000 zip files), though you need some serious storage space for the whole thing (12 terabytes for the ASCII form, natch).--Link (tcm) 21:20, 7 September 2014 (UTC)[reply]
Even better than that are the clever spigot algorithms for pi that allow you to compute any digit of pi in a reasonable amount of time. If you want every digit of pi, you will need to run a calculation that will continue for an infinite amount of time. But if you want any specific digit, no matter how far "down the line" it is, it can be calculated with such an algorithm. Nimur (talk) 16:25, 9 September 2014 (UTC)[reply]
But keep in mind that such spigot algorithms aren't available for every base. The ones given for pi are typically in binary or hexadecimal or some similar power-of-two base. I'm unaware of a spigot algorithm for pi which can be done in decimal, and attempting to convert from binary (or hexadecimal) to decimal requires you know all of the preceding digits, effectively "unspigotting" your algorithm. -- 160.129.138.186 (talk) 17:21, 10 September 2014 (UTC)[reply]
Not an issue with pi.
The pi algorithm I know is pi = 2 + (1/3) (2 + (2/5) (2 + (3/7) (2 + (4/9) ( ... )), and that can be evaluated using a quite short array of integers, of which all except the first are small:
 1 |1/3 2/5 3/7 4/9
 2 | 2 | 2 | 2 | 2
Multiply by 10 (you can multiply by 2 or 8 for a binary approximation here):
20 |20 |20 |20 |20
Resolve carries
20 |20 |20 |28 | 2 (9 in the last place equals 4 in the second-to-last; we do this twice)
20 |20 |32 | 0 | 2 (7 in the 4th column become 3 in the 3rd, etc)
20 |32 | 2 | 0 | 2
30 | 2 | 2 | 0 | 2
Multiply by 10 again:
300|20 |20 | 0 |20
Resolve carries
300|20 |20 | 8 | 2
300|20 |23 | 1 | 2
300|28 | 3 | 1 | 2
309| 1 | 3 | 1 | 2
So, 309 is our approximation to 100 pi. This sucks, but only because the array is so short. Another entry will about halve the truncation error. 10 more entries mean about 3 more decimals.
There can be unresolved carries in the leftmost column (for example, the initial return was pi > 2 and the next step returned 10pi > 30) but they are quite low (usually 0 or 1); it's an approximation to a true spigot algorithm. - ¡Ouch! (hurt me / more pain) 08:47, 11 September 2014 (UTC)[reply]
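(The same nested expansion pi = 2 + (1/3)(2 + (2/5)(2 + ...)) is what drives Gibbons' streaming spigot algorithm; here is a compact Python rendering of it as a sketch, distinct from the array walkthrough above but derived from the same series, which emits correct decimal digits one at a time.)

 def pi_digits():
     """Gibbons' unbounded spigot for pi, based on pi = 2 + 1/3*(2 + 2/5*(2 + ...))."""
     q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
     while True:
         if 4 * q + r - t < n * t:
             yield n                    # the next digit is now safe to emit
             q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
         else:                          # consume one more term of the series
             q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                 (q * (7 * k + 2) + r * l) // (t * l), l + 2)

 gen = pi_digits()
 print([next(gen) for _ in range(15)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]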

64 digits of Pi is all you ever need for home renovation. 202.177.218.59 (talk) 01:59, 8 September 2014 (UTC)[reply]

Count Iblis (talk) 17:26, 10 September 2014 (UTC)[reply]

Which is infamous for extremely slow convergence. - ¡Ouch! (hurt me / more pain) 08:47, 11 September 2014 (UTC)[reply]


The full pi number that has all of the digits is 3.14159265358979323846264338327950 which is where you get the first zero.

Does the Witch of Agnesi really have a well-defined centroid?

Although the first moment with respect to y is well defined, the one with respect to x is not (where R is the entire region between the x-axis and the curve y = 8a³/(x² + 4a²)):

∬_R x dA = ∫_{−∞}^{∞} x · 8a³/(x² + 4a²) dx, and since the integrand decays only like 1/x, this improper integral diverges.
Reversing the order of integration does not help either, since the resulting iterated integral is also ill-defined.

Why then does the article say that the centroid's x coordinate is located at x=0? This is the Cauchy principal value of the improper integral but I feel like it should be better-defined than this.--Jasper Deng (talk) 00:24, 8 September 2014 (UTC)[reply]

Your reasoning looks correct. Is there any objection to deleting the statement? --RDBury (talk) 16:16, 8 September 2014 (UTC)[reply]
Why can't you just apply symmetry? As the curve is symmetrical about x=0, the centroid must lie on x=0. --Salix alba (talk): 17:04, 8 September 2014 (UTC)[reply]
Well, if there is a centroid, then the symmetry argument is fine. The question is whether there is one, given that the integral defining the moment apparently does not converge. --Trovatore (talk) 17:15, 8 September 2014 (UTC)[reply]
Symmetry basically is the Cauchy principal value of the moment. It's basically the same reason why the Cauchy distribution has ill-defined moments (the integrals are almost exactly the same).
The centroid article says that the centroid is the arithmetic mean of all the coordinates in R, but that mean is ill-defined unless we take it to be the Cauchy principal value.--Jasper Deng (talk) 17:49, 8 September 2014 (UTC)[reply]
What's wrong with using the Cauchy principal value? This is how integrals like this are evaluated and the symmetry argument Salix alba mentioned makes the zero x component an elegant derivation. --Mark viking (talk) 18:09, 8 September 2014 (UTC)[reply]
(ec) Well, what's wrong with the Cauchy p.v. in general is that it's generally not as well-behaved as convergent integrals are. If the value of an integral is fully well-defined, then you should be able to chop it up however you like and still get the same answer — for example, evaluate the integral on the positive x-values first, and then the negative, or from positive n² to positive (n+1)² followed by the integral from −n−1 to −n, and then continue through all n, or any other such scheme. If all you have is a Cauchy p.v., you can't do that.
Cauchy p.v.'s are closely analogous to conditionally convergent series, which is a very second-class sort of convergence.
As to whether these considerations should bar us from using the Cauchy p.v. in the specific context of the centroid, that's another question. Conceptually, the idea of the centroid does seem to jibe fairly well with the Cauchy-p.v. technique of expanding the domain of integration out in all directions at the same "speed". --Trovatore (talk) 19:27, 8 September 2014 (UTC)[reply]
The definition of a centroid in the article is rather weak, but I take it to mean a well-defined arithmetic mean, which I take as requiring a well-behaved integral rather than just the Cauchy principal value. Also, one would expect to be able to compute the centroid using the centroids of subregions of R of whatever partitioning scheme. Here it obviously does depend on how we divide R.--Jasper Deng (talk) 19:25, 8 September 2014 (UTC)[reply]
Related is the fact that the Cauchy distribution lacks a well-defined mean. Sławomir Biały (talk) 18:15, 8 September 2014 (UTC)[reply]

(One note: Those with sharp eyes might notice that the second order of integration appears to actually come out to 0. But I still consider the double integral itself to be ill-defined, since rather obviously Fubini's theorem does not give consistent results, and the second order of integration is not equal to the sum of the corresponding integrals over (−∞, c] and [c, ∞) for any real number c, since both of those integrals diverge).--Jasper Deng (talk) 19:25, 8 September 2014 (UTC)[reply]

I agree removing the bit about the centroid is best. It is simply not defined for the shape. There's nothing wrong with that - it is infinite in length and infinity is where things happen in maths that don't in the real world. Dmcq (talk) 21:23, 8 September 2014 (UTC)[reply]
I agree that it should be removed. However, it is somewhat of a puzzle that there is an obvious "correct" answer, despite a lack of compelling formalism leading to it. What makes the Cauchy principal value the "right" thing to compute? (Symmetry is a red herring I think, since we can perturb the distribution just a little to break the symmetry.) Sławomir Biały (talk) 22:49, 8 September 2014 (UTC)[reply]
The Cauchy p.v. is what you get when you expand the domain of integration out "isotropically", without favoring one direction over another. Centroids intuitively seem to be that sort of thing. --Trovatore (talk) 22:57, 8 September 2014 (UTC)[reply]
Hmm... interesting. I was thinking that it wasn't translationally invariant, but I was wrong in that thought. That does make the p.v. look rather canonical here. Sławomir Biały (talk) 17:00, 9 September 2014 (UTC)[reply]
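(A tiny numerical illustration of that point in Python; the witch is taken here in the normalized a = 1/2 form y = 1/(1 + x²) purely for convenience, and the function name is made up. The x-moment over a symmetric window [−a, a] stays at 0, the principal value, while over [−a, 2a] it tends to ln 2 ≈ 0.693 instead, so the answer depends entirely on how the truncations are chosen.)

 import math

 def moment(lo, hi, steps=200_000):
     """Midpoint-rule approximation of the x-moment of y = 1/(1+x^2) over [lo, hi]."""
     h = (hi - lo) / steps
     return sum((lo + (i + 0.5) * h) / (1 + (lo + (i + 0.5) * h) ** 2)
                for i in range(steps)) * h

 for a in (10, 100, 1000):
     print(a, moment(-a, a), moment(-a, 2 * a))
 # Symmetric windows give ~0; the [-a, 2a] windows approach ln(2) ~ 0.693.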

@Trovatore, Dmcq, Slawekb, Salix alba, and Mark viking: (sorry for the mass ping) Do any of you think it would be a good idea to replace the statement about the centroid with a sentence pointing to the Cauchy distribution article's section on undefined moments? I think that would be best for our readers.--Jasper Deng (talk) 16:52, 9 September 2014 (UTC)[reply]

Sounds fine to me. Dmcq (talk) 16:57, 9 September 2014 (UTC)[reply]
No objection here. Sławomir Biały (talk) 17:00, 9 September 2014 (UTC)[reply]
Ideally I'd like to see a citation. I had a very brief look for suitable citation and I didn't find anything discussing the centroid of the Witch of Agnesi. The Cauchy distribution moments section is also lacking citation so everything counts as OR at the moment. --Salix alba (talk): 17:27, 9 September 2014 (UTC)[reply]
Sounds good to me. I agree a citation would be good, too. --Mark viking (talk) 17:54, 9 September 2014 (UTC)[reply]

Greatest common divisor

Greatest common divisor discusses the gcd of two numbers. Some articles (e.g. Achilles number) refer to the gcd of a list of numbers (gcd(a,b,c,d) etc.). What is the definition of gcd for several numbers, and what is the best way to determine it? -- SGBailey (talk) 08:53, 8 September 2014 (UTC)[reply]

The natural numbers (including 0) form a lattice under the order of divisibility. (Note: every natural number divides 0, including 0. You should be aware that there are different conventions, but any other convention is, frankly, stupid.)
So the gcd of any finite set of integers is simply its infimum in this lattice. (I think you could do infinite sets too but I don't want to bother to check ATM.)
The best algorithm is probably Euclid's algorithm, iterated, but if someone comes up with a better one, I won't be too shocked. --Trovatore (talk) 09:25, 8 September 2014 (UTC)[reply]
Greatest common divisor starts: "In mathematics, the greatest common divisor (gcd), also known as the greatest common factor (gcf), highest common factor (hcf), or greatest common measure (gcm), of two or more integers (at least one of which is not zero), is the largest positive integer that divides the numbers without a remainder."
Note it said "or more" so the gcd has to divide each number. For example, gcd(12, 20, 30) = 2. gcd(12, 20) = 4, gcd(12, 30) = 6, gcd(20, 30) = 10, but none of those divide the third number. Greatest common divisor#Properties says: "The gcd of three numbers can be computed as gcd(a, b, c) = gcd(gcd(a, b), c), or in some different way by applying commutativity and associativity. This can be extended to any number of numbers." PrimeHunter (talk) 10:20, 8 September 2014 (UTC)[reply]
The "not zero" bit is unnecessary — gcd(0,0)=0. In fact gcd(0,n)=n for every n, including 0. As I said, there are other conventions, but none that aren't stupid.
The only thing is, "greatest" needs to be understood in the order of divisibility, not the usual order. Zero is the greatest element as measured by divisibility. --Trovatore (talk) 10:36, 8 September 2014 (UTC)[reply]
@PH - Thanks, I'd missed that line. -- SGBailey (talk) 10:37, 8 September 2014 (UTC)[reply]
It's incorrect and needs to be changed. --Trovatore (talk) 10:40, 8 September 2014 (UTC)[reply]
What's written is not incorrect. It is true that the greatest common divisor of two or more integers, at least one of which is not zero, is the largest positive integer that divides them. This may not be complete as a definition, but it is not a false statement. Although in principle I agree that the arithmetic partial order on the integers is the relevant ordering rather than the standard one, I think that bringing this up in the lead of that article is likely to confuse most of the readers of the article (which may include school children, for instance). A proper "Definition" section in which to discuss such nuances seems to be lacking. Sławomir Biały (talk) 12:47, 8 September 2014 (UTC)[reply]
You're right about the statement being literally true, of course. --Trovatore (talk) 17:14, 8 September 2014 (UTC)[reply]

The Lenstra–Lenstra–Lovász lattice basis reduction algorithm can be interpreted as a generalization of Euclid's algorithm, although it then won't output a GCD, rather it is the analogue of using Euclid's algorithm to do rational reconstruction. Count Iblis (talk) 16:26, 10 September 2014 (UTC)[reply]
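For anyone who wants to see the iterated-Euclid approach above in code, here is a minimal Python sketch (the helper names are just illustrative); it folds the two-argument gcd over a list, exactly as in gcd(a, b, c) = gcd(gcd(a, b), c), and reproduces gcd(12, 20, 30) = 2:
  from functools import reduce

  def gcd2(a, b):
      # Euclid's algorithm for two non-negative integers;
      # gcd2(n, 0) = n, so gcd2(0, 0) = 0.
      while b:
          a, b = b, a % b
      return a

  def gcd_list(numbers):
      # gcd(a, b, c, ...) = gcd(gcd(gcd(a, b), c), ...)
      return reduce(gcd2, numbers)

  print(gcd_list([12, 20]), gcd_list([12, 30]), gcd_list([20, 30]))  # 4 6 10
  print(gcd_list([12, 20, 30]))                                      # 2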

Friends meeting at the park

Three people come and meet at the park each day. Each comes 7 out of 10 days. What is the chance of all three coming? Only two? Only one? None? Thanks. I'm stumped. Please try to answer in a way that a complete idiot (me) will understand. Actually, the reason isn't so imporant. It is the actual % chance that I'm after. Many many thanks. :) Anna Frodesiak (talk) 03:32, 10 September 2014 (UTC)[reply]

We'd have to start with the assumption that each person showing up is an independent event. This probably isn't correct, as they may all avoid rainy days, or all show up when they planned to meet. But, if we assume each has an independent chance of showing up 70% of the time, then the chances of all or none showing up are:
Zero: 0.3^3 = 0.027
Three: 0.7^3 = 0.343
Now the chances of 1 or 2 showing up are complicated by the fact that a different one or two might show up, so we have to account for all the ways that can happen. In this case there are 3 ways 1 person can show up (A, B or C) and 3 ways 2 people can show up (AB, AC, or BC):
One: 3(0.3^2 × 0.7^1) = 0.189
Two: 3(0.3^1 × 0.7^2) = 0.441
To check our work, add them all up and you should get 1.0, or, if we multiply all the numbers by 100, we get percentages: 2.7% + 34.3% + 18.9% + 44.1% = 100%. StuRat (talk) 04:42, 10 September 2014 (UTC)[reply]
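(Just as a sanity check on the numbers above, here is a small Python sketch, assuming each person independently shows up with probability 0.7, that brute-forces all 2^3 present/absent combinations and tallies the totals; it reproduces 0.027, 0.189, 0.441 and 0.343.)
  from itertools import product

  P_COME = 0.7  # assumed chance each friend shows up, as above
  totals = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0}
  for outcome in product([True, False], repeat=3):   # all 8 present/absent combinations
      prob = 1.0
      for present in outcome:
          prob *= P_COME if present else (1 - P_COME)
      totals[sum(outcome)] += prob
  for k in range(4):
      print(k, "present:", round(totals[k], 3))
  # 0 present: 0.027, 1 present: 0.189, 2 present: 0.441, 3 present: 0.343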


Wow! You are a super-genius. I am very impressed. I sort of figured out the zero and three part, but got stuck on the one and two. A thousand thanks for your help. :) :) :) Yay StuRat! And yay refdesk. The best kept secret on the Internet. :) Anna Frodesiak (talk) 07:37, 10 September 2014 (UTC)[reply]
You're quite welcome. Here it is, presented in the tree diagram format mentioned below (or as close as I can get using ASCII text):
                            I N D E P E N D E N T   E V E N T S
              +-----------------------------------------------------------------------+
    Person A: |     P R E S E N T   ( 0 . 7 )     |      A B S E N T   ( 0 . 3 )      |
              +-----------------+-----------------+-----------------+-----------------+   
    Person B: |  Present (0.7)  |   Absent (0.3)  |  Present (0.7)  |   Absent (0.3)  |
              +--------+--------+--------+--------+--------+--------+--------+--------+
    Person C: | P (0.7)| A (0.3)| P (0.7)| A (0.3)| P (0.7)| A (0.3)| P (0.7)| A (0.3)|
              +--------+--------+--------+--------+--------+--------+--------+--------+
            / |.7×.7×.7|.7×.7×.3|.7×.3×.7|.7×.3×.3|.3×.7×.7|.3×.7×.3|.3×.3×.7|.3×.3×.3|
Probability   +--------+--------+--------+--------+--------+--------+--------+--------+
            \ | 0.343  | 0.147  | 0.147  | 0.063  | 0.147  | 0.063  | 0.063  | 0.027  |
              +--------+--------+--------+--------+--------+--------+--------+--------+
   # Present: |    3   |    2   |    2   |    1   |    2   |    1   |    1   |    0   |
              +--------+--------+--------+--------+--------+--------+--------+--------+
 
 3 present = 0.343                         = 34.3%
 2 present = 0.147 + 0.147 + 0.147 = 0.441 = 44.1%
 1 present = 0.063 + 0.063 + 0.063 = 0.189 = 18.9% 
 0 present = 0.027                         =  2.7%
                                            ------
                                            100.0%
Note that tree diagrams are only practical for a small number of events, with a small number of possible outcomes for each event. Here we have 3 events, with two outcomes each (3 people who can be present or absent), making for 2^3 or 8 possible outcomes. If we had 10 events with 2 outcomes each, that would give us 2^10 = 1024 possible outcomes, or if we had 3 events with 10 possible outcomes each, that would give us 10^3 = 1000 possible outcomes. Either would be way too big to draw as a tree. But, if you can draw a tree, it can help to visualize dependencies between events, as well as the case where all events are independent. For example, let's say persons A and B are a couple, and are always present (0.7) or absent (0.3) at the same time. The presence of person C (0.7) remains an independent event:
                                D E P E N D E N T   E V E N T S   ( A = B )
              +-----------------------------------------------------------------------+
    Person A: |     P R E S E N T   ( 0 . 7 )     |      A B S E N T   ( 0 . 3 )      |
              +-----------------+-----------------+-----------------+-----------------+   
    Person B: |  Present (1.0)  |   Absent (0.0)  |  Present (0.0)  |   Absent (1.0)  |
              +--------+--------+--------+--------+--------+--------+--------+--------+
    Person C: | P (0.7)| A (0.3)| P (0.7)| A (0.3)| P (0.7)| A (0.3)| P (0.7)| A (0.3)|
              +--------+--------+--------+--------+--------+--------+--------+--------+
            / |.7×1×.7 |.7×1×.3 |.7×0×.7 |.7×0×.3 |.3×0×.7 |.3×0×.3 |.3×1×.7 |.3×1×.3 |
Probability   +--------+--------+--------+--------+--------+--------+--------+--------+
            \ |  0.49  |  0.21  |    0   |    0   |    0   |    0   |  0.21  |  0.09  |
              +--------+--------+--------+--------+--------+--------+--------+--------+
   # Present: |    3   |    2   |    2   |    1   |    2   |    1   |    1   |    0   |
              +--------+--------+--------+--------+--------+--------+--------+--------+
 
 3 present = 0.49             = 49%
 2 present = 0.21 + 0 + 0     = 21%
 1 present = 0    + 0 + 0.21  = 21% 
 0 present = 0.09             =  9%
                               ----
                               100%
StuRat (talk) 13:58, 10 September 2014 (UTC)[reply]
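(The dependent table above can be checked the same way; a minimal sketch, assuming persons A and B always arrive or stay home together while C is independent:)
  P = 0.7  # chance the couple (A and B together) is present; same for C
  totals = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0}
  for couple_here in (True, False):        # A and B always match
      for c_here in (True, False):
          prob = (P if couple_here else 1 - P) * (P if c_here else 1 - P)
          count = (2 if couple_here else 0) + (1 if c_here else 0)
          totals[count] += prob
  for k in range(4):
      print(k, "present:", round(totals[k], 2))
  # 0 present: 0.09, 1 present: 0.21, 2 present: 0.21, 3 present: 0.49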
You should also take a look at Binomial distribution. -- Meni Rosenfeld (talk) 09:05, 10 September 2014 (UTC)[reply]
You mean me? That page could be upside down and scrambled and would make as much sense to me. Thank you, though. :) Anna Frodesiak (talk) 09:16, 10 September 2014 (UTC)[reply]
Yeah, I've noticed that page isn't very newbie-friendly. But it describes the general way to solve problems like the one you've presented. If there are n different things which can either happen or not, each with probability p, and they are independent, then the probability that exactly k of them will happen is n!/(k!(n-k)!) × p^k × (1-p)^(n-k), where n! is the factorial. In your case, the things that happen are each person showing up, and n = 3, p = 0.7. -- Meni Rosenfeld (talk) 12:49, 10 September 2014 (UTC)[reply]
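(That formula is also easy to evaluate directly; a tiny Python sketch, using n = 3 and p = 0.7 as in this thread:)
  from math import comb  # Python 3.8+

  n, p = 3, 0.7   # three friends, each independently present with probability 0.7
  for k in range(n + 1):
      print(k, "show up:", round(comb(n, k) * p**k * (1 - p)**(n - k), 3))
  # 0.027, 0.189, 0.441, 0.343 for k = 0, 1, 2, 3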
A good way to understand this sort of thing "visually" is with a tree diagram. That article is just a stub, but the linked BBC page has some good examples. If you take their 3-coin-tosses example, and replace the 3 tosses with the arrival or non-arrival of each of the 3 people (and change the 0.5 probabilities of heads and tails to 0.7 and 0.3) then you should end up with the same answers as above. AndrewWTaylor (talk) 13:22, 10 September 2014 (UTC)[reply]
Holy moly. I'm actually understanding this. I sort of lost it half way through, but was getting it. I will read it again tomorrow after a big coffee. This is very nice. I never understand stuff like this. I have the IQ of lichen. Anna Frodesiak (talk) 14:47, 10 September 2014 (UTC)[reply]
Awesome, glad we could help. StuRat (talk) 16:57, 10 September 2014 (UTC)[reply]

is any of Spinoza's Ethics mathematically rigorous? (to the standard of published proofs.)

Hi,

Spinoza's Ethics takes the form of extreme mathematical rigor. I was wondering if any of it were rigorous enough to be published as a mathematical proof, or, on the contrary, does it just take this form with all the real convincing 'power' couched in the terms and language themselves, which are left undefined?

I hope you see my question. If indeed it contains real (rigorous) proofs, is there any chance that you can quote (or produce) me a small 'lemma'-like (or even smaller!) rigorous argument from that work, to show how we can translate it into mathematics and treat it as such?

What I mean is that clearly it takes the form of proofs with premises, logical steps, and conclusions - but are these vacuous? Could we translate any of this into a proof in a computer language, for example?

I've only just glanced at it, but to me it seems that there is no logic or rigor used at all, and in fact the form is highly misleading, as it makes it seem as though there are definitions that are being applied, whereas there are no such definitions and instead we are left with undefined terms like perfection and God that are useless in a formal context. However, this is just my impression! I've had the same, mistaken, impression of highly rigorous works that could be understood well and translated directly into code.

Therefore I would like your opinion about whether Spinoza's Ethics is of this kind of work, and, if so, I wonder if you could produce for me either a quotation or your own synthesis of a very short (perhaps trivial) but rigorous "proof" from it.

Thank you kindly! --212.96.61.236 (talk) 01:42, 11 September 2014 (UTC)[reply]

One can look at the logical structure and try to formalize it; see for instance [7]. But as a review of that paper states [8], there is more to Spinoza's Ethics than logical inference. --Mark viking (talk) 02:43, 11 September 2014 (UTC)[reply]
Well, of course there is more as it is a religious text, not a proof. But it takes the form of a proof. The question is: is that form rigorous? Is there anything interesting or good about what it proves rigorously? For example, I could write a whole treatise on color coordination in interior decorating, what colors go together and what don't, what needs to match with what, I can define premises and conclusions, such as the maximum number of colors that can be in touch with each other without dissonance, etc. It will all be absolutely meaningless gibberish! Color simply isn't the kind of thing that is amenable to reasoning about rigorously. Period. I could also write the same thing about physics. But in this case it would be highly meaningful. Physics is the kind of thing you can reason about. Now what about Spinoza's Ethics (the work) - is it like an axiomatic treatment of interior decorating (i.e. meaningless gibberish) or like an axiomatic treatment of quantum mechanics (fully meaningful and formalizable)? Given that in a sense you could say Spinoza's Ethics (the work) is a work of physics, if we treat it as such does it (in parts) attain modern standards of rigor? Could you produce kind of an extract of one, or a synthesis of one? Thanks. 212.96.61.236 (talk) 02:50, 11 September 2014 (UTC)[reply]
An axiomatic treatment of interior decorating can be absolutely rigorous, as far as the math, in which case, those who accept that the axioms truthfully correspond to decorating will also be compelled, rationally, to accept that any theorems apply equally as much. Axiomatic systems of physics work pretty much the same (though, physics isn't axiomatized, or, at least, it is not done from axioms). I don't know much about the specific case of Spinoza, but you are essentially asking if he makes any logical errors, if taken at face value. It sounds, more so, like you are asking if his theories are correct, or have correspondence, in the way that physics does - that question has nothing to do with axiomatics and logical structure, however, and, really, is just asking if his premises are sound, which is not a mathematics question at all; and, as for physics, it does not correspond to reality because its structure is mathematical (nor is that a defining aspect of what physics is, to be honest). Phoenixia1177 (talk) 03:40, 11 September 2014 (UTC)[reply]
Right, but my point is there is not a single axiom in interior decorating that anyone would accept, or even so much as entertain, for even a second. Any axiom can be false under certain conditions - i.e. leads "logically" to a conclusion that you would, however, not accept. So, obviously a system of axioms which is the null set does not make for a very interesting axiomatic system. But is theology the same way? Or, in the case of Spinoza, does he use axioms that are in some sense interesting and perhaps have correspondence, and then does he reason from them in a logically sound way? Or, is it just pseudomathematical/logical? (Following the form, but without any chance of correspondence.) 212.96.61.236 (talk) 04:01, 11 September 2014 (UTC)[reply]
You're not asking about mathematics, though, you're either asking if Spinoza had true axioms or if those axioms were interesting, neither has anything to do with mathematics and logic, it has everything to do with philosophy and ethics and the world. Even if nobody believed some set of axioms for "interior design" those axioms would be every bit as legit as the axioms of ZFC, or any other system. A set of axioms is a system, that is it, the content of those axioms and what they mean is not the purview of mathematics, as a subject. This is especially so when you are talking about axioms for something philosophical. I don't know, personally, if his reasoning is logically sound, but you might have better luck trying at the humanities desk as this is really a philosophy question - or just try googling criticisms of Spinoza's work, you're sure to find something far more salient.Phoenixia1177 (talk) 06:47, 11 September 2014 (UTC)[reply]
You've hit at the crux of my question by stating, "A set of axioms is a system...the content of those axioms and what they mean is not the purview of mathematics". That might be true for a real set of axioms, but not everything that is labelled that way is an axiomatic system. Here is an example. I've prepared a farcical "axiomatic system" and some proofs, below. They're just the first few sentences of our Color article. What do you think?

What do you think of the above?

What you SHOULD think is that it's total nonsense; it doesn't even try to look like an axiomatic system. It's just borrowing the form, like gibberish or gobbledygook or Lorem ipsum. It only might look like an axiomatic system at a brief glance.

It's obviously NOT actually an axiomatic system!

So, my problem/question is that to me, Spinoza's Ethics seems the same way (at a first impression). I was wondering if it actually was that way - or if, on the contrary, it really is an axiomatic system and some proofs within it.

So, which is it? Is it like my farcical example? Or is it more? Does it meet logical/mathematical rigor, or is it nonsense (from a mathematical point of view), much as my sentences above are? Note that my sentences aren't actual total nonsense - they're quoted from the Wikipedia color article after all. 213.246.165.17 (talk) 09:18, 11 September 2014 (UTC)[reply]

If you rearranged the "axioms" above so that they were all sentences, they would be axioms, just pointless ones as far as that goes. As for the propositions, if the terms all traced back (that's not even that necessary, really), that would work too. The only issue is the logic of your proofs wouldn't be standard logic (I'm sure you could concoct some goofy deduction rules too, if you really wanted, why not?). That's kind of the problem with your whole question: any set of sentences can be "axioms", as long as they are essentially some form of declarative statement. So, when you ask if Spinoza's are nonsense, do you, literally, mean to ask if they satisfy being declarative sentences relating terms and if he was capable of following basic logic? Most philosophical work is going to be logically valid (as a general rule); the debate is over soundness. I'm not trying to be a jerk, but as far as mathematical requirements go, the answer should be immediately obvious upon reading it: the terms can be complete nonsense, the relationships all bullshit; as long as the axioms aren't of the form "Is it good to steal, ever?" or "Stop!", I'm sure someone can whip something up in symbols. For example:
  1. All lines love at least 3 distinct circles.
  2. Some circle loves a line.
  3. Every circle is loved by some line.
  4. If puppy A loves puppy B loves puppy C, puppy A loves puppy C.
  5. Every circle is a puppy.
  6. Every line is a puppy.
Is a system of axioms. And we can deduce that there is some line that loves some other line, and that there are at least 3 circles if there is a line. That's all perfectly valid and fine, mathematically - of course, it's all meaningless gibberish as far as humans go (I imagine, but who knows? Maybe it has a neat model - I doubt it).Phoenixia1177 (talk) 09:47, 11 September 2014 (UTC)[reply]
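To make the "whip something up in symbols" point concrete, here is a minimal Python sketch (my own toy encoding, nothing to do with Spinoza's actual text) that builds a small finite model and mechanically checks the six axioms above against it; since a model exists, the axioms are consistent, however silly they sound:
  # Toy model: one line, three circles; everything is a puppy.
  lines   = {"l"}
  circles = {"c1", "c2", "c3"}
  puppies = lines | circles
  loves   = {("l", "c1"), ("l", "c2"), ("l", "c3"), ("l", "l"),
             ("c1", "l"), ("c1", "c1"), ("c1", "c2"), ("c1", "c3")}

  axioms = [
      # 1. Every line loves at least 3 distinct circles.
      all(sum((x, c) in loves for c in circles) >= 3 for x in lines),
      # 2. Some circle loves a line.
      any((c, x) in loves for c in circles for x in lines),
      # 3. Every circle is loved by some line.
      all(any((x, c) in loves for x in lines) for c in circles),
      # 4. "Loves" is transitive on puppies.
      all((a, c) in loves
          for a in puppies for b in puppies for c in puppies
          if (a, b) in loves and (b, c) in loves),
      # 5 and 6. Every circle and every line is a puppy.
      circles <= puppies and lines <= puppies,
  ]
  print(all(axioms))   # True: the toy axioms have a model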
One thing to keep in mind here is that the standards of rigor have changed significantly since Spinoza's time. There was a time when you wouldn't be considered an educated person unless you could recite the 47th proposition of Euclid upon demand, and The Elements were hugely influential as a result. Spinoza modeled Ethics on it in an attempt to bring the same sense of certainty to his conclusions as was perceived to be in the propositions of Euclid. But much has happened since then and The Elements would not stand up to modern standards of rigor. For one thing, Euclid's geometry was trying to describe the universe as it actually exists, at least in some idealized Platonic sense. The general theory of relativity says the universe doesn't behave as described. Also, modern analysis has found many hidden assumptions in the Elements and a truly rigorous axiomatization (see Hilbert's axioms e.g.) requires many more axioms than Euclid gave. Finally, the whole viewpoint of mathematics changed from a description of the world to a purely logical construct. People of Spinoza's time would not be familiar with the concept of symbolic logic, but today's standards of rigor require that mathematical reasoning can, at least in theory, be stated in symbolic form. Euclid and Spinoza basically start by saying "here is a bunch of things we can all agree about what they are, and here is a list of things we can all agree are true about them, now let's see what conclusions we can draw." The definitions used are not definitions in a rigorous sense but descriptions that enable the reader and author to agree on what things are being talked about. An example of this type of definition might be "A cat is a small fuzzy creature that sometimes lives in people's houses." But mathematically this definition is nonsense, just as the definitions of point and line given in Euclid are more or less nonsense by modern standards. A mathematician would say "Small relative to what? What does it mean for a thing to be fuzzy? What is a house? etc." At some point people realized that in order to avoid circularity a mathematical theory would have to include undefined concepts, from which other terms could be defined. But Spinoza doesn't take that approach, instead starting out with definitions involving things like "essence", "nature", "conceivable" which the reader is supposed to already understand. So, the short answer to the original question is no, at least by modern standards, it's not mathematically rigorous, but then it's hard to see how someone from the 17th century could produce something of the kind that would be. Whether it was rigorous by 17th century standards is something you'd have to get from contemporaries. Leibniz was a philosopher in his spare time, so perhaps he had something relevant to say about it. --RDBury (talk) 11:49, 11 September 2014 (UTC)[reply]


Thank you for the responses! There is a lot to read there. Let me ask some basic background questions. 1) As a mathematician, do you find Spinoza's Ethics any more convincing than the "some circles love a line" axiomatic system and proofs? Or is it equally gobbledegook? 2) Although it's not formally rigorous by today's standards, is Euclid convincing to modern mathematicians, i.e. can they follow and rely on those proofs, within the system that Euclid set up? (Despite its being insufficiently formal). 3) A clarification on your analogy. Is this still an equally valid axiomatic system:

  1. All lines love at least 3 distinct circles.
  2. [+] There is at least 1 line.
  3. [+] No lines love at least 3 distinct circles.
  4. Some circle loves a line.
  5. [+] No circle loves a line.
  6. Every circle is loved by some line.
  7. [+] No circle is loved by some line.
  8. [+] There is at least 1 circle
  9. If puppy A loves puppy B loves puppy C, puppy A loves puppy C.
  10. Every circle is a puppy.
  11. [+] Every circle is NOT a puppy.
  12. [+] There is at least 1 circle
  13. Every line is a puppy.

And for good measure:

  1. [+] No line exists
  2. [+] There is no circle

Are we still good? Even though I've now added literal contradictions, just the same sentences with a "not" in them, as axioms?

Does at least this make the axiomatic system nonsensical? 213.246.165.17 (talk) 10:14, 12 September 2014 (UTC)[reply]

Statistical statement

Message from a "stop smoking" campaign: "Stop smoking for 28 days and you're 5 times more likely to stop for good."

It seems to me that this statement is nonsensical. Am I missing anything? 86.129.18.104 (talk) 03:15, 11 September 2014 (UTC)[reply]

I think it means that people who stop for 28 days are 5 times more likely to quit for good, compared to the ones that don't make it 28 days. Bubba73 You talkin' to me? 03:35, 11 September 2014 (UTC)[reply]
There's probably a better, more formal way to say it. Suppose that, as seems likely, the probability of a relapse strictly decreases with length of abstinence. Say p(n) is the probability of never smoking again after not smoking for n days. Clearly p(0) is near zero, and p(28) is greater than that, but five times what? What is q such that p(28) = 5q? —Tamfang (talk) 03:59, 11 September 2014 (UTC)[reply]
I take the meaning to be q = p(0). That is, it's comparing to the probability before you even try. Another issue is whether "5 times more" implies a factor of 5, as Tamfang assumed, or a factor of 6. --65.94.51.64 (talk) 04:19, 11 September 2014 (UTC)[reply]
I agree it's nonsensical. It's like a "Live long!" campaign that says, "Live to the age of 80 and you're 30 times more likely to live to the age of 85." Well, great. That helps someone live to 85 how? The hard part of quitting smoking is probably stopping for 28 days. In fact, it sounds like it's 80% of the hard part. 213.246.165.17 (talk) 08:11, 11 September 2014 (UTC)[reply]
  • Hmmm, it seems that no one else saw quite the same fundamental illogicality as me, so let me explain the way I see it. Let's say, for the sake of argument, that if you stop smoking for 28 days you have a 50% chance of stopping for good. The statement then implies that if you DON'T stop smoking for 28 days you have a 10% chance of stopping for good. To me, this obviously cannot be correct. If you don't stop smoking for 28 days then you have NO chance of stopping for good. In order to give up for good, you HAVE to stop for 28 days. Any further thoughts? 86.160.86.83 (talk) 11:00, 11 September 2014 (UTC)[reply]
I think it's trying to convey a statement about conditional probability -- something like P(quit for good | stopped for 28 days as of date X) = 5 × P(quit for good | stopped for fewer than 28 days as of date X). While the English phrasing might be awkward, I sincerely doubt that the stat isn't drawn from some fairly legit study. They're just struggling to get it into a snappy ad campaign. SemanticMantis (talk) 14:03, 11 September 2014 (UTC)[reply]
Wouldn't that calculation depend on X, though? Let's take a concrete example. Say ten people try to stop smoking, and the number of days they last is {1, 2, 5, 10, 25, 40, 60, permanently, permanently, permanently}, then how would you do that calculation? Where the original statement has "5 times", what factor would this data yield? 86.160.86.83 (talk) 19:20, 11 September 2014 (UTC)[reply]
No, I was just specifying it for clarity, thinking of X as a calendar date when they did the survey. Mark viking seems to (mostly) share my interpretation below. The difference is, I'm lumping everything less than 28 days together, rather than comparing to zero-days-stopped. This would be an easy way to "cherry pick" the data to find a nice statistic, just keep dividing the pool into two groups based on Y days stopped, until you get the multiplier that you want. SemanticMantis (talk) 23:09, 11 September 2014 (UTC)[reply]
I interpret this as the conditional probability of quitting forever given that you stopped for 28 days is five times the probability of quitting given no such 28-day stoppage, i.e., the probability of quitting after 0 days of stoppage, right at the start. It makes sense to me, as after 28 days, most of the physical withdrawal effects are probably gone and the psychological habit may be broken, too. It looks to be part of the Stoptober campaign, but I could not find a source for the statistic. --Mark viking (talk) 19:54, 11 September 2014 (UTC)[reply]
Perhaps the missing or unclear information, then, is "5 times more likely than what?"? I read it as "5 times more likely than if you don't stop for 28 days". In your interpretation, I suppose it would be "5 times more likely than when you start out". Is that right? 86.160.86.83 (talk) 20:38, 11 September 2014 (UTC)[reply]
Presumably it means that if you picked two persons at random from a group of recently quit smokers, one of whom had just quit that day and the other had quit 28 days ago, the person who had quit 28 days ago would be five times more likely to quit for good than the person who had just quit. Doctors are not known for any Bayesian subtlety when they make statements like this. Sławomir Biały (talk) 20:46, 11 September 2014 (UTC)[reply]
Why compare to the 0-day quitter? Pooling all days-quit < 28 as I did above makes more sense to me... SemanticMantis (talk) 23:09, 11 September 2014 (UTC)[reply]
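To make the two readings above concrete, here is a small Python sketch using the purely illustrative ten-person data suggested earlier (these are not real study numbers); it compares the quit-for-good rate among those who reached 28 days with (a) the unconditional rate and (b) the rate among those who never reached 28 days:
  # Days each of ten hypothetical people lasted (None = quit for good).
  durations = [1, 2, 5, 10, 25, 40, 60, None, None, None]
  quit_for_good = [d is None for d in durations]
  reached_28    = [d is None or d >= 28 for d in durations]

  p_overall  = sum(quit_for_good) / len(durations)                  # 3/10
  p_given_28 = (sum(q for q, r in zip(quit_for_good, reached_28) if r)
                / sum(reached_28))                                  # 3/5
  print(p_given_28 / p_overall)   # 2.0 -- the "compared with day 0" reading

  # Under the "compared with those who never made 28 days" reading, nobody in
  # that group quit for good here, so the ratio is undefined (division by
  # zero) -- which is the objection raised above.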

On the article linked in the subject line, it asks "Is there a logic satisfying the interpolation theorem which is compact?", I'm assuming the reference is to Craig interpolation, but FO is compact and satisfies it, so I'm not sure what it is asking. I don't have access to the source, unfortunately.Phoenixia1177 (talk) 10:27, 11 September 2014 (UTC)[reply]

The statement in the WP article is not correct. From the chapter referenced, the open problem is Is there a logic L which satisfies both the Beth property and Δ-interpolation, is compact but does not satisfy the interpolation property? The interpolation here looks like Craig interpolation, but I have little knowledge of this field, so don't trust me on that. --Mark viking (talk) 21:57, 11 September 2014 (UTC)[reply]
Is this a case for WP:SOFIXIT then? SemanticMantis (talk) 23:10, 11 September 2014 (UTC)[reply]
Phoenixia1177 just did. --Mark viking (talk) 23:34, 11 September 2014 (UTC)[reply]
Thank you for the response:-)Phoenixia1177 (talk) 03:15, 12 September 2014 (UTC)[reply]

puzzle from game question...

In this game there is a puzzle as follows. There are 16 items, four each of four colors and four each of four types of shells. Call the colors A-D and the shell types 1-4. They are placed in a 4x4 grid at random and the game is won if the top row is A1-A4 in order, the second row B1-B4 in order, and so on. Legal moves are as follows: two items may be switched if and only if the cells border each other vertically, horizontally or diagonally *and* the two items share a characteristic (color or type of shell). Can it be won from any starting position? (If moves are only allowed horizontally and vertically, then the order-4 Graeco-Latin square would be a losing starting position.) If the puzzle can be won from any position, is there anything like a strategy to win it in as few moves as possible? Naraht (talk) 12:59, 11 September 2014 (UTC)[reply]

I had a go at solving it starting from a random position and thought it made a nice puzzle. It's similar to but slightly harder than the 15 puzzle, much easier than Rubik's cube. I think this starting position
A1 B2 C1 D2
C3 D4 A3 B4
A2 B1 C2 D1
C4 D3 A4 B3
leaves you with no moves allowed, which would mean it's not always possible to win from any starting position, but it looks like such configurations are very rare. Anyway, I'm sensing commercial possibilities, maybe a cell-phone app? --RDBury (talk) 11:23, 12 September 2014 (UTC)[reply]
I agree that that starting position has no moves. (And rotating the columns/rows and/or rotating the numbers/letters also would give a no-move position, so a few more than just that one.) It is a (small) part of a game that my wife downloaded a few days ago; I'll try to find the name. Any ideas for a solution strategy? 11:55, 12 September 2014 (UTC)
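For anyone who wants to experiment, here is a minimal Python sketch (my own encoding of the rules as described above, not anything from the game itself) that lists the legal swaps in a position; it confirms the position given above has none:
  # Each cell is a colour+shell pair like "A1".  Two cells may be swapped if they
  # are adjacent (including diagonally) and share the colour or the shell type.
  position = ["A1 B2 C1 D2",
              "C3 D4 A3 B4",
              "A2 B1 C2 D1",
              "C4 D3 A4 B3"]
  grid = [row.split() for row in position]

  def legal_swaps(grid):
      swaps = []
      for r in range(4):
          for c in range(4):
              for dr, dc in ((0, 1), (1, -1), (1, 0), (1, 1)):  # count each pair once
                  r2, c2 = r + dr, c + dc
                  if 0 <= r2 < 4 and 0 <= c2 < 4:
                      a, b = grid[r][c], grid[r2][c2]
                      if a[0] == b[0] or a[1] == b[1]:   # same colour or same shell
                          swaps.append(((r, c), (r2, c2)))
      return swaps

  print(legal_swaps(grid))   # [] -- no legal moves from this position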