User:Jon Awbrey/Pebble Beach
Pebble Beach
It's like a sandbox, only less litter.
Recursive Poems
- "The Deil's awa wi th' Exciseman", Robert Burns (1792)
CHORUS
The Deil's awa, the Deil's awa,
The Deil's awa wi th' Exciseman!
He's danc'd awa, he's danc'd awa,
He's danc'd awa wi th' Exciseman!

The Deil cam fiddlin thro the town,
And danc'd awa wi th' Exciseman,
And ilka wife cries: — "Auld Mahoun,
I wish you luck o the prize man!"

"We'll make our maut, and we'll brew our drink,
We'll laugh, sing, and rejoice, man,
And monie braw thanks to the meikle black Deil,
That danc'd awa wi th' Exciseman."

There's threesome reels, there's foursome reels,
There's hornpipes and strathspeys, man,
But the ae best dance e'er cam to the land
Was The Deil's awa wi th' Exciseman.
Bibliographies
Theory of relations
[edit]References
- Peirce, C.S., "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic", Memoirs of the American Academy of Arts and Sciences, 9, 317-378, 1870. Reprinted, Collected Papers CP 3.45-149, Chronological Edition CE 2, 359-429.
- Ulam, S.M. and Bednarek, A.R., "On the Theory of Relational Structures and Schemata for Parallel Computation", pp. 477-508 in A.R. Bednarek and Françoise Ulam (eds.), Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators, University of California Press, Berkeley, CA, 1990.
Bibliography
- Bourbaki, N., Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany, 1994.
- Chang, C.C., and Keisler, H.J., Model Theory, North-Holland, Amsterdam, Netherlands, 1973.
- Halmos, P.R., Naive Set Theory, D. Van Nostrand Company, Princeton, NJ, 1960.
- Kelley, J.L., General Topology, Van Nostrand Reinhold, New York, NY, 1955.
- Kneale, W. and Kneale, M., The Development of Logic, Oxford University Press, Oxford, UK, 1962. Reprinted with corrections, 1975.
- Lawvere, F.W., and Rosebrugh, R., Sets for Mathematics, Cambridge University Press, Cambridge, UK, 2003.
- Lawvere, F.W., and Schanuel, S.H., Conceptual Mathematics, A First Introduction to Categories, Cambridge University Press, Cambridge, UK, 1997. Reprinted with corrections, 2000.
- Mathematical Society of Japan, Encyclopedic Dictionary of Mathematics, 2nd edition, 2 vols., Kiyosi Itô (ed.), MIT Press, Cambridge, MA, 1993.
- Mili, A., Desharnais, J., Mili, F., with Frappier, M., Computer Program Construction, Oxford University Press, New York, NY, 1994. — Introduction to Tarskian relation theory and its applications within the relational programming paradigm.
- Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1-6, Charles Hartshorne and Paul Weiss (eds.), vols. 7-8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931-1935, 1958. (= CP)
- Peirce, C.S., Writings of Charles S. Peirce: A Chronological Edition, Volume 2, 1867-1871, Peirce Edition Project (eds.), Indiana University Press, Bloomington, IN, 1984. (= CE 2)
- Royce, J., The Principles of Logic, Philosophical Library, New York, NY, 1961.
- Runes, D.D. (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ, 1962.
- Styazhkin, N.I., History of Mathematical Logic from Leibniz to Peano, MIT Press, Cambridge, MA, 1969.
- Tarski, A., Logic, Semantics, Metamathematics, Papers from 1923 to 1938, J.H. Woodger (trans.), first edition, Oxford University Press, 1956, second edition, J. Corcoran (ed.), Hackett Publishing, Indianapolis, IN, 1983.
- Ulam, S.M., Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators, A.R. Bednarek and Françoise Ulam (eds.), University of California Press, Berkeley, CA, 1990.
- Venetus, P., Logica Parva, Translation of the 1472 Edition with Introduction and Notes, Alan R. Perreiah (trans.), Philosophia Verlag, Munich, Germany, 1984.
Relations in general
In a realistic computational framework, where incomplete and inconsistent information is the rule, it is necessary to work with genera of relations that are gradually more relaxed in their constraining characters but that still preserve appropriate measures of analogy with the original species of relations that are found to prevail in perfect information contexts.
In the present application, the kinds of relations of primary interest are functions, equivalence relations, and other species of relations that are defined by their axiomatic properties. Thus, the information-theoretic generalizations of these structures lead to partially defined functions and partially constrained versions of these specially defined families of relations.
The purpose of this Section is to describe the kinds of generalized functions and other generic orders of relations that are needed to extend the discussion of sign relations to a more realistic level of computational and data-theoretic analysis. In this connection, to frame the problem in concrete syntactic terms, I need to adapt the "equivalence class" notation for two different generalizations of equivalence relations, to be defined below. But first, a number of preliminary topics need to be treated.
Object-theoretic and sign-theoretic options
Generally speaking, one is free to interpret references to "generalized objects" in either one of two fashions:
- As distinct indications of partially formed versions of objects.
- As partially informed descriptions of distinct types of objects.
I will describe these choices as the "object-theoretic" and the "sign-theoretic" interpretations, respectively.
The object-theoretic way of reading partial signs assumes that general references and vague references nevertheless have their objective denotations, taking them purely, simply, and quite literally to denote "general objects" and "vague objects", respectively.
The sign-theoretic way of reading partial signs ascribes the partialities of information to the characters of the signs, the expressions, and the texts that are doing the denoting.
In most of the cases that arise in casual discussion the choice between these conventions is purely stylistic. However, in many of the more intricate situations that arise in formal discussion, the object-theoretic choice frequently fails utterly. Whenever the utmost care is required, it is usually a due attention to the partialities of signs that saves the day, and so this is the direction of generalization that I ultimately tend to adopt in all of the most critical applications.
Local incidence properties of relations
A local incidence property of a relation L is one that depends on the properties of specified subsets of L that are known as its local flags.
Suppose that L is a k-place relation L ⊆ X1 × … × Xk.
Choose a relational domain Xj and one of its elements x. Then Lx.j is a subset of L that is called "the flag of L with x at j", or "the x.j-flag of L". With these conventions, the flag Lx.j ⊆ L is defined as follows:
- Lx.j = {(x1, …, xj, …, xk) in L : xj = x}.
Any property C of the local flag Lx.j ⊆ L may then be classified as a local incidence property of L with respect to the locus "x at j".
A k-adic relation L ⊆ X1 × … × Xk is "C-regular at j" if and only if every flag of L with x at j has the property C, where x is taken to vary over the "theme" of the fixed domain Xj.
Coded more symbolically, L is C-regular at j if and only if C(Lx.j) is true for all x in Xj.
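By way of a concrete check, the definitions of local flags and C-regularity can be sketched in a few lines of Python. The relation L, the domains, and the property C used here are hypothetical examples of my own, not anything fixed by the text.

```python
def flag(L, x, j):
    """The flag of L with x at j: the tuples of L whose j-th entry is x.
    Coordinates are indexed 1..k, matching the text's conventions."""
    return {t for t in L if t[j - 1] == x}

def is_C_regular(L, X_j, j, C):
    """L is C-regular at j iff the property C holds of the flag L_x.j
    for every x in the domain X_j."""
    return all(C(flag(L, x, j)) for x in X_j)

# A small hypothetical 2-adic relation L over X1 × X2.
X1, X2 = {1, 2, 3}, {'a', 'b'}
L = {(1, 'a'), (2, 'a'), (3, 'b')}

print(flag(L, 1, 1))    # the 1.1-flag of L: {(1, 'a')}

# With C = "is nonempty", C-regularity at 1 says that every x in X1
# occurs as a first coordinate somewhere in L.
print(is_C_regular(L, X1, 1, lambda F: len(F) > 0))    # True
```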
Numerical incidence properties of relations
Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally enough called numerical incidence properties.
For example, L is said to be "c-regular at j" if and only if the cardinality of the local flag Lx.j is c for all x in Xj, or, to write it in symbols, if and only if |Lx.j| = c for all x in Xj.
In a similar fashion, one can define the NIPs "(< c)-regular at j", "(> c)-regular at j", and so on. For ease of reference, I record a few of these definitions here:
- L is c-regular at j iff |Lx.j| = c for all x in Xj.
- L is (< c)-regular at j iff |Lx.j| < c for all x in Xj.
- L is (> c)-regular at j iff |Lx.j| > c for all x in Xj.
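These cardinality conditions are directly checkable by machine. A minimal sketch, using a hypothetical relation that happens to be the full product of its two domains, so that every flag at 1 has cardinality 2:

```python
def flag(L, x, j):
    """The tuples of L whose j-th entry is x (coordinates indexed 1..k)."""
    return {t for t in L if t[j - 1] == x}

def regular_at(L, X_j, j, pred):
    """L satisfies a numerical incidence property at j when the cardinality
    |L_x.j| satisfies pred for every x in X_j."""
    return all(pred(len(flag(L, x, j))) for x in X_j)

X1, X2 = {1, 2}, {'a', 'b'}
L = {(x, y) for x in X1 for y in X2}   # the full product X1 × X2

print(regular_at(L, X1, 1, lambda n: n == 2))   # 2-regular at 1: True
print(regular_at(L, X1, 1, lambda n: n < 2))    # (<2)-regular at 1: False
```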
The definition of a local flag can be broadened from a point x in Xj to a subset M ⊆ Xj, arriving at the definition of a "regional flag".
Suppose that L ⊆ X1 × … × Xk, and choose a subset M ⊆ Xj. Then "LM.j" denotes a subset of L called "the flag of L with M at j", or "the M.j-flag of L". The regional flag LM.j is defined as follows:
- LM.j = {(x1, …, xj, …, xk) in L : xj in M}.
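The regional flag is the same comprehension with membership in place of equality. A brief sketch, again with a made-up relation:

```python
def regional_flag(L, M, j):
    """The flag of L with M at j: the tuples of L whose j-th entry lies in M.
    When M is a singleton {x}, this reduces to the local flag L_x.j."""
    return {t for t in L if t[j - 1] in M}

L = {(1, 'a'), (2, 'a'), (3, 'b')}   # a hypothetical 2-adic relation
print(regional_flag(L, {1, 2}, 1))   # {(1, 'a'), (2, 'a')}
print(regional_flag(L, {3}, 1))      # {(3, 'b')}
```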
Returning to 2-adic relations, it is useful to describe some familiar classes of objects in terms of their local and their numerical incidence properties. Let L ⊆ S × T be an arbitrary 2-adic relation. The following properties of L can be defined:
- L is "total" at S iff L is (>=1)-regular at S.
- L is "total" at T iff L is (>=1)-regular at T.
- L is "tubular" at S iff L is (=<1)-regular at S.
- L is "tubular" at T iff L is (=<1)-regular at T.
If L ⊆ S × T is tubular at S, then L is called a "partial function" or a "prefunction" from S to T, sometimes indicated by giving L an alternate name, say, "p", and writing L = p : S ~> T.
Just by way of formalizing the definition:
- L = p : S ~> T iff L is tubular at S.
If L is a prefunction p : S ~> T that happens to be total at S, then L is called a "function" from S to T, indicated by writing L = f : S -> T. To say that a relation L ⊆ S × T is totally tubular at S is to say that it is 1-regular at S. Thus, we may formalize the following definition:
- L = f : S -> T iff L is 1-regular at S.
In the case of a function f : S -> T, one has the following additional definitions:
- f is "surjective" iff f is total at T.
- f is "injective" iff f is tubular at T.
- f is "bijective" iff f is 1-regular at T.
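The whole ladder of definitions, from tubularity up to bijectivity, reduces to counting flags, so it can be sketched directly in Python. The relation f below is a hypothetical example; position 1 plays the role of S and position 2 the role of T.

```python
def flag(L, x, j):
    """The tuples of L whose j-th entry is x (coordinates indexed 1..k)."""
    return {t for t in L if t[j - 1] == x}

def total_at(L, X, j):
    """(≥1)-regular at j: every x in X occurs at position j."""
    return all(len(flag(L, x, j)) >= 1 for x in X)

def tubular_at(L, X, j):
    """(≤1)-regular at j: no x in X occurs at position j more than once."""
    return all(len(flag(L, x, j)) <= 1 for x in X)

def is_prefunction(L, S, T):
    return tubular_at(L, S, 1)

def is_function(L, S, T):
    # Tubular and total at S, i.e. 1-regular at S.
    return tubular_at(L, S, 1) and total_at(L, S, 1)

def is_injective(L, S, T):
    return is_function(L, S, T) and tubular_at(L, T, 2)

def is_surjective(L, S, T):
    return is_function(L, S, T) and total_at(L, T, 2)

def is_bijective(L, S, T):
    return is_injective(L, S, T) and is_surjective(L, S, T)

S, T = {1, 2, 3}, {'a', 'b', 'c'}
f = {(1, 'a'), (2, 'b'), (3, 'c')}
print(is_function(f, S, T), is_bijective(f, S, T))   # True True
```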
What is information that a sign may bear it?
Three more questions arise at this juncture:
- How is a sign empowered to contain information?
- What is the practical context of communication?
- Why do we care about these bits of information?
A very rough answer to these questions might begin as follows:
Human beings are initially concerned solely with their own lives, but then a world obtrudes on their subjective existence, and so they find themselves forced to take an interest in the objective realities of its nature.
In pragmatic terms our initial aim, concern, interest, object, or 'pragma' is expressed by the verbal infinitive 'to live', but the infinitive is soon reified into the derivative substantial forms of 'nature', 'reality', 'the world', and so on. Against this backdrop we find ourselves cast as the protagonists on a 'scene of uncertainty'. The situation may be pictured as a juncture from which a manifold of options fan out before us. It may be an issue of truth, duty, or hope, the last codifying a special type of uncertainty as to what regulative principle has any chance of success, but the chief uncertainty is that we are called on to make a choice and find that we all too often have almost no clue as to which of the options is most fit to pick.
Just to make up a discrete example, let us suppose that the cardinality of this choice is a finite n, and just to make it fully concrete let us say that n = 5. Figure 1 affords a rough picture of the situation.
o-------------------------------------------------o
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` `?` ` `?` ` `?` ` `?` ` `?` ` ` ` ` ` |
| ` ` ` ` ` `o` ` `o` ` `o` ` `o` ` `o` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` `o` ` o ` `o` ` o ` `o` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `o` `o` `o` `o` `o` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` `o` o `o` o `o` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` `o o o o o` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` `ooooo` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` `O` ` ` ` ` ` ` ` `n = 5` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
o-------------------------------------------------o
Figure 1. Juncture of Degree 5
This pictures a juncture, represented by "O", where there are n options for the outcome of a conduct, and we have no clue as to which it must be. In a sense, the degree of this node, in this case n = 5, measures the uncertainty that we have at this point.
This is the minimal sort of setting in which a sign can make any sense at all. A sign has significance for an agent, interpreter, or observer because its actualization, its being given or its being present, serves to reduce the uncertainty of a decision that the agent has to make, whether it concerns the actions that the agent ought to take in order to achieve some objective of interest, or whether it concerns the predicates that the agent ought to treat as being true of some object in the world.
The way that signs enter the scene is shown in Figure 2.
o-------------------------------------------------o
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` k_1 = 3 ` ` ` `k_2 = 2` ` ` ` ` ` |
| ` ` ` ` ` `o-----o-----o` ` `o-----o` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` "A" ` ` ` ` ` "B" ` ` ` ` ` ` ` |
| ` ` ` ` ` ` `o----o----o` ` o----o` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `o---o---o` `o---o` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` `o--o--o` o--o` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` `o-o-o o-o` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` `ooooo` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` `O` ` ` ` ` ` ` ` `n = 5` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
o-------------------------------------------------o
Figure 2. Partition of Degrees 3 and 2
This illustrates a situation of uncertainty that has been augmented by a classification.
In the particular pattern of classification that is shown here, the first three outcomes fall under the sign "A", and the next two outcomes fall under the sign "B". If the outcomes make up a set of things that might be true about an object, then the signs could be read as nomens (terms) or notions (concepts) of a relevant empirical, ontological, taxonomical, or theoretical scheme, that is, as predicates and predictions of the outcomes. If the outcomes make up a set of things that might be good to do in order to achieve an objective, then the signs could be read as bits of advice or other sorts of indicators that tell us what to do in the situation, relative to our active goals.
This is the basic framework for talking about information and signs in regard to communication, decision, and the uncertainties thereof.
Just to unpack some of the many things that may be getting glossed over in this little word 'sign', it encompasses all of the 'data of the senses' (DOTS) that we take as informing us about inner and outer worlds, plus all of the concepts and terms that we use to argue, to communicate, to inquire, or even to speculate, both about our ontologies for beings in the worlds and about our policies for action in the world.
Here is one of the places where it is tempting to try to collapse the 3-adic sign relation into a 2-adic relation. For if these DOTS are so closely identified with objects that we can scarcely imagine how they might be discrepant, then it will appear to us that one role of beings can be eliminated from our picture of the world. In this event, the only things that we are required to inform ourselves about, via the inspection of these DOTS, are yet more DOTS, whether past, or present, or prospective, just more DOTS. This is the special form to which we frequently find the idea of an information channel being reduced, namely, to a 'source' that has nothing more to tell us about than its own conceivable conducts or its own potential issues.
As a matter of fact, at least in this discrete type of case, it would be possible to use the degree of the node as a measure of uncertainty, but it would operate as a multiplicative measure rather than the sort of additive measure that we would normally prefer. To illustrate how this would work out, let us consider an easier example, one where the degree of the choice point is 4.
o-------------------------------------------------o
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` `?` ` `?` ` ` ` ` `?` ` `?` ` ` ` ` ` |
| ` ` ` ` ` `o` ` `o` ` ` ` ` `o` ` `o` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` `o` ` o ` ` ` ` o ` `o` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `o` `o` ` ` `o` `o` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` `o` o ` ` o `o` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` `o o` `o o` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` `oo oo` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` `O` ` ` ` ` ` ` ` `n = 4` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
o-------------------------------------------------o
Figure 3. Juncture of Degree 4
Suppose that we contemplate making another decision after the present issue has been decided, one that has a degree of 2 in every case. The compound situation is depicted in Figure 4.
o-------------------------------------------------o
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` `o` `o o` `o` ` ` `o` `o o` `o` ` ` ` ` |
| ` ` ` ` ` \ / ` \ / ` ` ` ` \ / ` \ / ` ` ` ` ` |
| ` ` ` ` ` `o` ` `o` ` ` ` ` `o` ` `o` `n_2 = 2` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` `o` ` o ` ` ` ` o ` `o` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `o` `o` ` ` `o` `o` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` `o` o ` ` o `o` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` `o o` `o o` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` `oo oo` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` `O` ` ` ` ` ` ` `n_1 = 4` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
o-------------------------------------------------o
Figure 4. Compound Junctures of Degrees 4 and 2
This illustrates the fact that the compound uncertainty, 8, is the product of the two component uncertainties, 4 times 2. To convert this to an additive measure, one simply takes the logarithms to a convenient base, say 2, and thus arrives at the not too astounding fact that the uncertainty of the first choice is 2 bits, the uncertainty of the next choice is 1 bit, and the compound uncertainty is 2 + 1 = 3 bits.
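The arithmetic here is worth checking with a couple of lines of Python, if only to see the product of degrees turn into a sum of bits:

```python
from math import log2

# Two successive junctures of degrees 4 and 2 compound multiplicatively:
n1, n2 = 4, 2
outcomes = n1 * n2        # 8 compound outcomes in all

# Taking logarithms to base 2 converts the product to a sum of bit counts:
bits = log2(n1) + log2(n2)
print(outcomes, bits)     # 8 3.0  (2 bits + 1 bit = 3 bits = log2 8)
```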
In many ways, the provision of information, a process that reduces uncertainty, is the inverse process to the kind of uncertainty augmentation that occurs in compound decisions. By way of illustrating this relationship, let us return to our initial example.
A set of signs enters on a setup like this as a system of middle terms, a collection of signs that one may regard, aptly enough, as constellating a medium.
o-------------------------------------------------o
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` k_1 = 3 ` ` ` `k_2 = 2` ` ` ` ` ` |
| ` ` ` ` ` `o-----o-----o` ` `o-----o` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` "A" ` ` ` ` ` "B" ` ` ` ` ` ` ` |
| ` ` ` ` ` ` `o----o----o` ` o----o` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `o---o---o` `o---o` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` `o--o--o` o--o` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` `o-o-o o-o` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` `ooooo` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` `O` ` ` ` ` ` ` ` `n = 5` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
o-------------------------------------------------o
Figure 5. Partition of Degrees 3 and 2
The language or medium here is the set of signs {"A", "B"}. On the assumption that the initial 5 outcomes are equally likely, one may associate a frequency distribution (k1, k2) = (3, 2) and thus a probability distribution (p1, p2) = (3/5, 2/5) = (0.6, 0.4) with this language, and thus define a communication channel.
The most important thing here is really just to get a handle on the 'conditions for the possibility of signs making sense', but once we have this much of a setup we find that we can begin to construct some rough and ready bits of information-theoretic furniture, like measures of uncertainty, channel capacity, and the amount of information that can be associated with the reception or the recognition of a single sign. Still, before we get into all of this, it needs to be emphasized that, even when these measures are too ad hoc and insufficient to be of much use per se, the significance of the setup that it takes to support them is not at all diminished.
Consider the classification-augmented or sign-enhanced situation of uncertainty that was depicted above. What happens if one or the other of the two signs, "A" or "B", is observed or received, on the constant assumption that its significance is recognized on receipt?
- A. If we receive "A" our uncertainty is reduced from log 5 to log 3.
- B. If we receive "B" our uncertainty is reduced from log 5 to log 2.
It is from these characteristics that the information capacity of a communication channel can be defined, specifically, as the 'average uncertainty reduction on receiving a sign', a formula with the striking mnemonic 'AURORAS'.
In our present case, the channel capacity works out as follows:
Capacity of the channel {"A", "B"}
  =  (3/5)(log2 5 - log2 3) + (2/5)(log2 5 - log2 2)
  ≈  (0.6)(0.737) + (0.4)(1.322)
  ≈  0.971 bits.
In other words, the capacity of this channel is slightly under 1 bit. This makes intuitive sense, since 3 against 2 is a near-even split of 5, and the measure of the channel capacity or the entropy is supposed to attain its maximum of 1 bit whenever a two-way partition is 50-50, that is to say, as uniform a distribution as it can be.
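The computation can be replayed in a few lines of Python; the variable names are my own, and the second calculation simply confirms that the average uncertainty reduction coincides with the Shannon entropy of the sign distribution:

```python
from math import log2

# Receiving "A" cuts the uncertainty from log 5 to log 3;
# receiving "B" cuts it from log 5 to log 2.
reduction_A = log2(5) - log2(3)
reduction_B = log2(5) - log2(2)

# Weight the reductions by the sign probabilities (3/5, 2/5):
capacity = (3/5) * reduction_A + (2/5) * reduction_B
print(round(capacity, 3))   # 0.971, slightly under 1 bit

# The same number is the Shannon entropy of the distribution (0.6, 0.4):
entropy = -(0.6 * log2(0.6) + 0.4 * log2(0.4))
print(round(entropy, 3))    # 0.971
```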