Talk:Plessey System 250

From Wikipedia, the free encyclopedia

What's "Church-Turing" about the System 250?

It's referred to in several places as a "Church-Turing machine"; what is a "Church-Turing machine"?

This page from a lecture says "It should be mentioned at this point that the Church-Turing machine says nothing of efficiency, but only implies that the Turing machine can recognize anything that any other reasonable model of computation can, and vice versa.", but what it seems to mean is "the Church-Turing thesis says nothing of efficiency, but..." The Church-Turing thesis isn't a machine, it's a hypothesis saying that, to quote the article on it, a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine (or by a function in Church's lambda calculus).

A Turing machine is an abstract representation of computing; I know of no computing machines ever used in practice that operated exactly like Turing machines. Those machines might be Turing-equivalent (other than being finite, rather than having the infinite tape of a Turing machine), but that doesn't make them Turing machines in any useful sense of the term.
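The distinction can be made concrete with a minimal sketch (all names and the example machine are invented for illustration): a Turing machine is nothing but a state set and a transition table operating on an unbounded tape, a structure no practical computer shares even though practical computers are Turing-equivalent.

```python
# A minimal single-tape Turing machine simulator: states plus a transition
# table, operating on a sparse, unbounded tape of symbols. Real computers
# share none of this structure; they are merely Turing-equivalent.
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: scan right past the 1s, append one more 1 (unary successor).
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(succ, "111"))  # prints "1111"
```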

And I don't see how the lambda calculus enters into the way the System 250 ran (unless, for example, somebody wrote a Lisp implementation for it). Guy Harris (talk) 21:28, 14 February 2020 (UTC)

If "A Church-Turing Machine is a digital computer that encapsulates the symbols in a thread of computation as a chain of protected abstractions by enforcing the dynamic binding laws of Alonzo Church's Lambda Calculus.", then:
  • Again, where's the "Turing" part? I don't see any mention of tapes.
  • In what way does the System 250 "[encapsulate] the symbols in a thread of computation as a chain of protected abstractions by enforcing the dynamic binding laws of Alonzo Church's Lambda Calculus."? Is there some full manual for the System 250, describing all the instructions, and showing which instructions implement something resembling the lambda calculus? Guy Harris (talk) 07:18, 16 February 2020 (UTC)
PP250 machine code isolates independently assembled digital objects grouped into classes of related functions, protected by capability-limited addressing. These digital objects are the variables, functions, abstractions, and applications of a Lambda Calculus Namespace. The data in a Namespace is private to the Namespace, while the functions can be downloaded on demand if an in-scope capability key exists. The laws of the Lambda Calculus are enforced by a handful of type-limited, functional Church-Instructions.
Under program control, the PP250 is bound to the capability context registers that constrain the machine instructions to the required least authority and minimum least privilege. No default (machine) privileges exist, no superuser administrator exists, and no centralized operating system process is needed for centralized memory management, centralized timeshare scheduling, or input and output device drivers. Specifically, there are no direct access rights to any physical hardware. The binary machinery is hidden as a set of well-protected, object-oriented, functional modules called protected frames of computation.
Hiding the binary machine is important because branded conventions associated with compiled binary data cannot be guaranteed in a binary computer network. Both accidents and sabotage usurp the unscientific binary conventions and cause a wide range of undetected Confused Deputy attacks.
Instead, access rights are limited to the available Capability keys held in type-limited C-Lists and context-limited context registers. Need-to-know and least privilege are enforced in one uniform way to match the symbols in pure mathematical expressions, one frame at a time. Using this frame-based, object-capability protection mechanism, system functions can be accessed by in-line calls. A parallel thread can fork when it is advantageous. No default superuser or software privileges exist, only capability privileges distributed by the needs of mathematical expressions. Malware attacks and hackers have no ability to bypass these checks.
Access rights are unlocked by inserting an in-scope capability key into the correct Church-Instruction. The frames of computation navigate an arbitrarily complex tree of information that defines the DNA of the Namespace. The capability keys are symbols substituted for variables at run-time exactly as required by the Lambda Calculus.
PP250 added a meta-machine to shelter a RISC (single-tape Turing-compliant) machine within the type-limited access rights of the Lambda Calculus context registers. These context registers confine the object-oriented machine code using the Lambda Calculus concepts of Variables, Functions, Abstractions, and Applications as a coherent Lambda Calculus Namespace. The navigation instructions share no common failure modes with the RISC machine. As a result, the frames of the object-oriented machine code can be chained together as fail-safe and future-safe scientific calculations (the same computations taught at school and university) by simply passing capability keys between the frames as call and return parameters. Kjhamerhodges (talk) 14:32, 20 February 2020 (UTC)
So, in an entry in Ian Cottam's CV, he says:

I implemented the Plessey System 250 operating system generation tools (basically, a macro assembler). I also implemented the secure, separate compilation and library system for Plessey’s CORAL programming language compiler.

If "CORAL programming language" refers to Coral 66 or a version thereof, and if that's a programming language used on the System 250, then, at least according to this Coral 66 reference manual, it's a fairly conventional imperative programming language - nothing lambda-calculus-like about it.
He also uses the term "operating system", as others have, so, if it doesn't have a "centralized operating system process", it still appears that it has an operating system. It may not have the conventional structure of a kernel that runs in privileged mode, with all privileges granted to it; instead, particular modules are granted particular privileges by virtue of which capabilities they have - but it still appears to have an operating system, at least according to some who worked on it. As Levy says in the chapter on the System 250 in his book:

The Plessey 250 System, unlike most traditional computers, has no privileged mode of operation. The operating system relies only on the protected procedure mechanism for its protection. This mechanism is available to any process and allows a process to add to the facilities supplied by the standard operating system.

Cottam also speaks of "a macro assembler" - was some of the software written in macro assembler language? That's even further from the lambda calculus.
So where is there a machine-language manual describing precisely how the "handful of typed limited, functional Church-Instructions" operate? I presume they are something more than just the CALL instruction Levy mentions; that instruction performs what appears to be a name space switch, in which the C6 register's value changes from one that points to a table of the capabilities available to the calling code to one that points to a table of the capabilities available to the called code, so only the objects for which there are capabilities in registers C0 through C4 are available to both the called and calling code. In what way does that correspond to the lambda calculus?
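The name-space switch Levy describes can be modeled abstractly. The sketch below is an invented Python model, not the System 250's actual microarchitecture: a CALL repoints the register naming the current capability list (C6 in Levy's description), so only the capabilities in the shared registers remain visible to both caller and callee.

```python
# Abstract model of a protected-procedure CALL as a capability-list switch.
# Only capabilities in the shared registers (C0..C4 in Levy's account)
# remain visible across the call; C6 is repointed at the callee's table.
class Machine:
    def __init__(self, caller_clist, shared):
        self.c6 = caller_clist          # capability table of current domain
        self.shared = shared            # C0..C4: capabilities passed through

    def call(self, callee_clist):
        saved = self.c6
        self.c6 = callee_clist          # callee's capabilities now in scope
        return saved                    # caller's table saved for RETURN

    def can_access(self, obj):
        return obj in self.c6 or obj in self.shared

m = Machine(caller_clist={"caller_data"}, shared={"arg_buffer"})
saved = m.call(callee_clist={"callee_data"})
print(m.can_access("callee_data"))  # True: in the callee's C-list
print(m.can_access("caller_data"))  # False: caller's table is out of scope
print(m.can_access("arg_buffer"))   # True: passed in a shared register
```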
And, yes, anything that a conventional computer (CISC or RISC) can compute can also be computed by a single-tape Turing machine, but that doesn't make the "Turing" part of "Church-Turing machine" particularly useful; you're presumably not stuck in the Turing tarpit when writing code for the System 250, any more than for most other machine/programming language/available software. Guy Harris (talk) 20:26, 20 February 2020 (UTC)
The two sides of the Church-Turing Thesis represent reinforcing foundation theories of computation, one logical in nature (the software), the other physical in nature (the hardware). For example, the Lambda Calculus includes rules for symbolic computation, dynamic binding, and infinite scalability, while a single-tape Turing machine defines a physical form for a statically bound atomic assembly of software and hardware as a machine. Specifically, this atomic machine is the implementation of the theory of the Lambda Calculus as Alonzo Church defined an 'Application': Rule-3, 'Applying a function to an argument', found here, while at the same time Rule-1 on Variables and Rule-2 on Abstractions are also satisfied. However, this equivalence, when statically bound, falls apart as soon as the physical machine is scaled by the von Neumann architecture, which not only stretches the instruction address range beyond the atomic need but also breaks the rules by adding hardware privileges that include superusers. These human distortions override the scientific laws of symbolic binding defined by Alonzo Church with dangerous physical conditions. Kjhamerhodges (talk) 18:13, 21 February 2020 (UTC)
A conceptual Church-Turing Machine combines both ideas to scale without any loss. PP250 made the first attempt to implement such a machine. The resulting naked computer, without any software, still embodies every fundamental need of the two alternative views, one for hardware and the other for software, without any loss. So, for example, programs are just variables and can be passed as parameters between functions; so are Applications (which PP250 called Threads); so are Abstractions (which PP250 called Packages - object-oriented machine code). Indeed, using a Capability Key, an abstraction, however complex and however remote, can be passed in a machine-coded statement without any language support or any compiler. These things help, but they are not necessary. Indeed, a simple macro assembler served the purpose for PP250 and allowed the symbolic names of the capability keys to make the assembly code, if not the machine code, readable. This machine-level characteristic is required by the laws of the Lambda Calculus. However, it is not needed and is not available with the von Neumann architecture or in a General-Purpose Computer. It does exist in PP250 as a distinguishing architectural feature that is not found in other computer hardware. The value goes infinitely further. For example, it allows a network of Namespace machines to interwork atomically and scientifically using the object-capability model without any compromise or loss from monolithic compilations and statically defined, centrally administered privileges. Two machines interwork by dynamically encapsulating an atomic, single-tape Turing-compliant machine within a Lambda Calculus-compliant machine. Kjhamerhodges (talk) 18:13, 21 February 2020 (UTC)
So, for example, PP250 instructions are only bound to locally accessible symbols instead of a stretched address spanning a large linear memory system, perhaps sequential pages arranged by a privileged memory manager. The most important point is that all proprietary (human) choice is removed, leaving only the essence of computer science as a machine for atomic computations. This is the minimal science needed as the hardware foundation of fail-safe, future-safe computer science. Anything added detracts from the pure science and leads to corruption, malware, and hacking. With the onset of AI malware and AI hacking, this is vital for survival in a globally conflicted cyber-society. Kjhamerhodges (talk) 18:13, 21 February 2020 (UTC)
So what does the word "compliant" mean in "Turing-compliant machine" or "Lambda Calculus-compliant machine"? Clearly it does not mean that there is any direct correspondence whatsoever between the machine and either 1) a 7-tuple of sets or 2) strings of lambdas, dots, parentheses, and variable names, combined with rules for how to reduce those strings.
And in what way do no other capability-based machines implement this "you can't access any data to which you don't have a capability" model (which is what you're really talking about here)? Have you not, for example, read the BiiN manuals to which I've provided links? Guy Harris (talk) 19:14, 21 February 2020 (UTC)

What about the BiiN/Intel 960MX systems?

The BiiN systems had processors (versions of the Intel 960) with tagged pointers, as per the BiiN CPU Architecture Reference Manual. As section 2.5.1 says:

In most computer systems, a pointer is simply an arbitrary bit pattern used as an address. Such pointers can be corrupted without detection by the hardware or OS. ADs are specially tagged memory words that can only be created or modified in carefully controlled ways. A memory word in a BiiN computer system is actually 33 bits, the 33rd bit being the tag bit that distinguishes ADs from data. Changing an AD in an unauthorized way invalidates the AD (by turning off the tag bit).
An AD contains both addressing information, used to find the object in memory, and also access rights, that indicate what operations are possible with the AD (Figure 2-1).
Different users or programs may have ADs with different rights to the same object. Mary may have an AD with both read and write rights to an object and John may have an AD with only read rights.
To reference a particular field within an object, a program can use a two-part virtual address: a 32-bit offset to a byte within the object plus an AD to the object.

An object has an entry in an object table; an object can be simple, paged (made up of 4K pages, with a one-level page table), or bipaged (made up of 4K pages, with a two-level page table). An object table entry contains an entry type (valid/invalid plus simple/paged/bipaged), an object length, an object type, some additional information, and:

  • the base physical address of a simple object;
  • the base physical address of the page table of a paged object;
  • the base physical address of the "page directory" (pointing to page tables) of a bipaged object.

An access descriptor contains the index of the object in the object table, along with the access rights.

The versions of the 960 with the full architecture, complete with access descriptors, have memory-reference instructions that involve an access descriptor contained in a register pair. They either have a constant offset relative to the beginning of the object or a displacement that, when added to an index register (the value of which can be scaled by a power of 2), gives the offset relative to the beginning of the object.
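A toy model of the scheme just described may make it clearer (the field layout, table contents, and function names below are invented for illustration, not taken from the BiiN manuals): an AD is a tagged word naming an object-table entry plus rights, and a dereference checks the tag, the rights, and the offset against the object's length.

```python
# Toy model of BiiN-style access descriptors: a tagged word that names an
# object-table entry plus access rights; an unauthorized modification
# clears the tag, invalidating the AD. Layout here is purely illustrative.
class AD:
    def __init__(self, obj_index, rights):
        self.obj_index = obj_index
        self.rights = frozenset(rights)
        self.tag = True                # the 33rd bit: marks a valid AD

object_table = {7: {"base": 0x1000, "length": 4096}}
memory = {}                            # sparse physical memory, default 0

def load(ad, offset):
    if not ad.tag:
        raise ValueError("invalid AD (tag cleared)")
    if "read" not in ad.rights:
        raise PermissionError("AD lacks read rights")
    entry = object_table[ad.obj_index]
    if offset >= entry["length"]:
        raise IndexError("offset outside object bounds")
    return memory.get(entry["base"] + offset, 0)

ad = AD(7, {"read"})
print(load(ad, 16))    # 0: within bounds, read permitted
ad.tag = False         # simulate an unauthorized overwrite of the AD word
# load(ad, 16) would now raise ValueError: the AD has been invalidated
```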

See chapter 7 for more details on cross-domain procedure calls, etc..

That sounds like a capability-based system to me. Are there any forms of protection, domain crossing, etc. provided by the System 250 that the BiiN systems don't support? If so, what are the low-level details of how that's done in System 250? Guy Harris (talk) 07:07, 16 February 2020 (UTC)

What is "the stretched address range of the von Neumann Architecture"?

To what does "stretched" refer?

And is this just speaking of the flat linear address space that most von Neumann architecture (or, alternatively, Turing architecture) machines have? If so, then there are von Neumann/Turing architecture machines with "two-dimensional" address spaces, along the lines of segmented address spaces, in which a memory operand is referred to by an object and an offset within the object (the object and the offset within the object being independent coordinates). The Plessey System 250 has such an address space, with the first coordinate being a capability for an object and the second coordinate being an offset within the object.

The Cambridge CAP computer also has such an address space - as chapter 1 of "The Cambridge CAP Computer and Its Operating System" says, "The address of a word in memory must specify the identity of the capability segment containing the capability for the segment concerned, the offset of the slot in the capability segment containing that capability, and finally the offset in the segment of the word required.", and the first two such items constitute the capability of the object being referred to.

The BiiN machines also have such an address space, with the first coordinate being an access descriptor (AD) for an object and the second coordinate being an offset within the object.

CHERI could be thought of as having such an address space, with a capability being the first coordinate - all traditional MIPS memory-reference instructions use the capability in register C0 for data references and the capability in the PCC register for instruction references - but those "capabilities" probably give full access (other than page table entry-based restrictions) to a large linear address space, thus not protecting anything within that space from access by any code with a capability for that address space in C0 or PCC. However, as The CHERI capability model: Revisiting RISC in an age of risk notes, "Capability-aware code can use sandboxed legacy code by restricting the default instruction and data capabilities (PCC and C0).", so even legacy code can have its abilities somewhat restricted, although it can't be given more than two capabilities. Guy Harris (talk) 08:56, 17 February 2020 (UTC)

Instead of looking at Turing's a-machine through von Neumann's eyes, his machine should be understood as the realization of Alonzo Church's 'Application'. As such, a single-tape Turing Machine as defined in 1936 is the atomic implementation of a Lambda Calculus function-abstraction with a set of substituted variables (Turing included the Variables on the same tape with the algorithm). This atomic form and atomic function is the dead center of the Church-Turing Thesis, where the two independently reasoned doctrines match both in form and in function. The fact that the tape might be considered to be infinite is a physical phenomenon. It has nothing to do with the functional needs of the computation as an atomic Lambda Calculus Application. Indeed, the PP250 rejected any need for large physical memory segments beyond 2K in size; above that size, a function abstraction as a collection of segments would be used, preserving the atomic boundary checks, but such an object was never required at that time.
Thus, any extended instruction address range larger than that needed for an atomic computation is by definition oversized or 'stretched' beyond the atomic need of the computation. This overstretched condition is the default situation of the von Neumann architecture, and it grants undeserved and dangerous privileges to machine code that threaten (deliberately or accidentally) breakout to attack outside the task at hand. It is this default power that allows malware breakout to take place and remain undetected. Even a single powerful instruction, or a small loop in some machines, can wipe the physical memory clean. Thus, for security reasons, the PP250 confined each and every machine instruction to the atomic scope of a chosen (in-scope) mathematical symbol represented by a Capability Key. PP250 removed all default hardware privileges to prevent break-out attack vectors that remain unchecked at the machine level in a General-Purpose Computer. Kjhamerhodges (talk) 19:49, 20 February 2020 (UTC)
Mathematical equivalence doesn't imply equivalence in form and function. A set of ordered pairs (X, Y) of real numbers, with a "line" defined as any set of ordered pairs such that Y = aX + b, for some values of a and b, and a "circle" defined as any set of ordered pairs (X, Y) such that (X - a)^2 + (Y - b)^2 = r^2, for some values of a, b, and r, etc., is mathematically equivalent to Euclidean geometry, but that's just because you can express any concept in one of them in the other one. All the Church–Turing thesis says is that any function on the natural numbers that is a general recursive function can also be expressed as a lambda function on the Church numerals, and vice versa, and can also be computed by a Turing machine using some encoding of the natural numbers, and vice versa. It doesn't mean those mechanisms look the same - in fact, not looking the same may be a feature, in that some ways of asking questions about a mechanism might be more easily represented, and more easily answered, when expressed in terms of an equivalent mechanism.
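The point about equivalence of form can be made concrete: the number 3 as a Church numeral looks nothing like a machine integer, yet both support the same successor function. A small sketch using plain Python lambdas (an illustration only, not anything from the System 250):

```python
# Church numerals: the number n is encoded as the higher-order function
# that applies its argument f exactly n times. Structurally this is
# nothing like a binary integer, though the two are computationally
# equivalent encodings of the naturals.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting the applications of f.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))        # 3
print(to_int(succ(three)))  # 4: same successor, utterly different form
```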
What is "Turing's a-machine"? If it's the ACE, then it's a machine with a single chunk of memory, with instructions being able to read and write any location in that memory, directly giving the physical address of that location, just as was the case in von Neumann's design, so they both have what you refer to as a "stretched" address space, although nothing has been "stretched" or "extended" if you view it from the point of view of the machine - it's just "not limited".
Early computers followed that pattern, but, over time, more and more machines didn't have instructions that directly referred to physical memory during normal operation - they might boot up with that ability, but, once some initial setup was done, most code didn't operate in that fashion. Some examples of this are:
  • the modified IBM 7090s and IBM 7094s used to run the Compatible Time-Sharing System. These had two memory units of 32K 36-bit words each, with a program-controlled flag to indicate from which unit instruction fetches should be performed and another program-controlled flag to indicate from which unit data loads and to which unit data stores should be performed,[1] and a "protection mode", in which all memory addresses had the value of a "relocation register" added to them before being used to refer to physical memory, with the resulting value checked against a "lower register" and an "upper register" to make sure it was within the bounds set by those registers before the reference was allowed to take place, and in which certain instructions (including the ones to set the values of those registers, and to switch the aforementioned program-controlled flags) would cause an error trap if an attempt was made to execute them.[2]
  • the PDP-6, which had a "relocation register" similar to the modified 7090/7094's and a "protection register" corresponding to the "upper register" of those machines (the lower bound for addresses was wired to 0), and an "executive mode" flag (like the inverse of "protection mode" - if the flag is off, the relocation and bounds checking are done, and some instructions, including the ones that can modify the flag and the relocation and protection registers, cause an error trap if an attempt is made to execute them).[3] Earlier models of the PDP-10 were similar, except that they had two separate relocation and protection registers (to allow multiple processes to share code from a program but to have separate data spaces). There may have been other machines that used base/bounds protection, along with a supervisor/user mode.
  • IBM System/360 models with the storage protection feature, in which, in addition to a supervisor/user mode flag in the Program Status Word controlling which instructions could be executed, each 2K byte block of memory had a 4-bit "storage key" associated with it, and the machine had a storage key field in the Program Status Word, such that, in user mode, an attempt to store into a memory location would succeed only if the two storage keys are equal or if either of the keys is zero. An additional "fetch protection" feature adds a bit to the storage keys associated with 2K byte memory blocks to indicate whether the same rules should apply to attempts to fetch data from a memory location.[4]
  • the Burroughs B5000/B5500/B5700 systems, which had "normal state" and "control state" (i.e., user and supervisor mode) governing whether certain instructions can be executed, and data accesses take place through "descriptors", where the descriptors for code and data contain physical memory addresses, and can be constructed only by "control state" code.
  • various systems with paged MMUs.
Those systems, with the exception of the Burroughs machines, had a very coarse protection granularity, in which the supervisory code had access to all of memory and all machine instructions, and user-mode code had full access to all the code and data to which the supervisory code gave it access.
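The base/bounds scheme of the modified 7090/7094 and the PDP-6 can be sketched in a few lines (register names and the check order follow the description above; everything else is illustrative):

```python
# Base/bounds translation as on the modified 7090/7094 and PDP-6: in user
# mode every address has the relocation register added, and the result is
# checked against the lower/upper registers; supervisor mode bypasses both.
def translate(addr, relocation, lower, upper, supervisor=False):
    if supervisor:
        return addr                     # supervisor addresses memory directly
    phys = addr + relocation            # relocate the user-mode address
    if not (lower <= phys <= upper):
        raise MemoryError("address outside bounds: trap to supervisor")
    return phys

# A user process relocated to 0x8000 with a 1K-word window:
print(hex(translate(0x100, relocation=0x8000, lower=0x8000, upper=0x83FF)))
# prints 0x8100
# translate(0x500, relocation=0x8000, lower=0x8000, upper=0x83FF) traps:
# the relocated address 0x8500 is above the upper register.
```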
So, even without capabilities, there are plenty of von Neumann-architecture machines that limit the "undeserved and dangerous privileges" handed out to machine code, at least to the extent that, without accidentally buggy or malicious code in the supervisor itself, user-mode machine code doesn't have sufficient privileges to affect the supervisor.
This does, however, allow code within a user-mode task to damage or attack code elsewhere in the task. Capability-based systems, including but hardly limited to the System 250, are an attempt to provide finer granularity, to at least address some of that issue. Not all of them directly implemented capabilities in hardware.
By the way, how is the System Capability Table constructed and updated? Is it updated by software that has a capability giving it the ability to write to that table? If so, what prevents it from constructing an SCT entry with a base address of 0, and a limit equal to the size of main memory, and handing a capability referring to that entry, with all access rights, to code? If the answer is "the limit field isn't large enough to cover all of main memory", what prevents it from constructing a set of SCT entries, such that the base address of SCT entry N+1 is the word right after the last word covered by SCT entry N, and the limit is either the maximum value that fits in an SCT entry or the value that runs up to the end of memory, and handing out multiple all-access-rights capabilities referring to those entries? In either case, you now have code running on a System 250 with "address range larger than that needed for an atomic computation". Guy Harris (talk) 22:33, 20 February 2020 (UTC)
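The stitching question can be shown concretely. The sketch below is hypothetical (the memory size, the maximum limit field, and all names are invented): software with write access to the capability table can fabricate adjacent entries whose capabilities jointly cover all of memory, even if no single entry's limit field can.

```python
# Sketch of the question posed above: if software can write System
# Capability Table entries, adjacent all-rights entries can be stitched
# together to cover every word of physical memory, even when one entry's
# limit field cannot span it all. Sizes here are arbitrary assumptions.
MEM_SIZE = 1 << 20           # assumed size of physical memory, in words
MAX_LIMIT = 1 << 16          # assumed maximum limit of a single SCT entry

def stitch_sct(mem_size, max_limit):
    sct, base = [], 0
    while base < mem_size:
        limit = min(max_limit, mem_size - base)
        # Each entry starts right after the last word of the previous one.
        sct.append({"base": base, "limit": limit, "rights": "RWX"})
        base += limit
    return sct

sct = stitch_sct(MEM_SIZE, MAX_LIMIT)
covered = sum(entry["limit"] for entry in sct)
print(len(sct), covered == MEM_SIZE)   # 16 True: entries jointly span memory
```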

References

  1. ^ "IBM 7090 and 7094 Data Processing Systems Additional Core Storage - RPO E02120 (7090) or RPO E15724 (7094)" (PDF). IBM.
  2. ^ "IBM 7090-7094 Multiprogramming Package RPO E07291 (7090) or RPO 880287 (7094)" (PDF). IBM.
  3. ^ Programmed Data Processor-6 Handbook (PDF). Digital Equipment Corporation. 1964.
  4. ^ IBM System/360 Principles of Operation (PDF) (Seventh ed.). IBM. A22-6821-6.

Is the "default address range" a physical address range?

If "They retain a default address range that can access every word of memory in the von Neumann Architecture" is referring to the physical address space, the Plessey System 250's physical memory subsystem presumably had addresses of some sort presented to it for load and store operations; according to Levy's book, capabilities in the SCT contain a base address, which is presumably a physical address.

And is "and every word in every page managed by the virtual memory using a Memory management unit" referring to the virtual address range of addresses presented to the memory management unit, or to the physical address range of addresses produced as output by the memory management unit? Guy Harris (talk) 19:12, 18 February 2020 (UTC)

The default address range is the accessible memory range. It might be a page-based virtual memory space of some considerable size. Kjhamerhodges (talk) 19:54, 20 February 2020 (UTC)
So it's "every word in the address space of the process", which is less limited than "only those words to which this particular module needs access", but more limited than "every word of every process's address space, including words used by the supervisor".
Note here that CHERI doesn't have such a default address range in hardware. Instructions that don't explicitly use a capability register implicitly use capability register C0 for data fetches and stores and the PCC capability register for instruction fetches; in CHERI, all MIPS effective addresses are offsets within a (virtual) address range defined by some capability register, so even though those effective addresses are in the very large range 0 to 2^64 - 1, most of that range is probably inaccessible due to the limit value in the capability register being used.
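A simplified model of that implicit check (the field layout and exact address arithmetic are assumptions for illustration, not the CHERI ISA's precise semantics):

```python
# Simplified model of CHERI's implicit default data capability: legacy
# code that names no capability register still has every access checked
# against C0's bounds and permissions, so most of the 2^64 effective
# address range is unreachable. Details here are illustrative assumptions.
class Capability:
    def __init__(self, base, length, perms):
        self.base, self.length, self.perms = base, length, perms

def legacy_load(c0, offset):
    # A legacy load: the effective address is interpreted through the
    # implicit capability C0 rather than addressing memory directly.
    if "load" not in c0.perms:
        raise PermissionError("C0 lacks load permission")
    if not (0 <= offset < c0.length):
        raise MemoryError("capability bounds violation: trap")
    return c0.base + offset            # permitted; the fetch would proceed

sandbox = Capability(base=0x10000, length=0x1000, perms={"load"})
print(hex(legacy_load(sandbox, 0x40)))   # 0x10040: within C0's bounds
# legacy_load(sandbox, 0x2000) raises MemoryError: outside C0's bounds
```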
By the way, if the index registers in the System 250 are the same as the data registers, they're 24 bits long, which means that the offsets are in the range 0 to 2^24 - 1, which is smaller than 0 to 2^64 - 1, but is still big - but, just as in CHERI, an OS can ensure that the regions specified by capabilities don't go beyond the data to which the capability refers, so the size is irrelevant. Guy Harris (talk) 22:44, 20 February 2020 (UTC)

At least one paper about the System 250 speaks of an operating system for it

If "The PP250 removed not only virtual memory and the operating system", why is there a paper by D. M. England called "Operating System of System 250" in the Proceedings of the June 1972 International Switching Symposium? And why does Henry Levy's paper say

The Plessey 250 operating system provides a virtual segment interface to programs; that is, a program can address its segments independent of whether they are located in primary or secondary memory. Secondary storage is totally transparent to the program. The operating system determines which segments are held in primary memory and which are held on disk storage. When a program attempts to access a segment that is not in primary memory, a trap occurs and the operating system then loads the segment from disk.

given that this is virtual memory (it's not paged virtual memory, but virtual memory doesn't have to be paged - it can be segmented, with segments having a present/absent bit, with segments being read into memory if a reference to them traps because the present/absent bit is set to "absent", and with segments being written back (if modified) and removed from memory, and their present/absent bit set to "absent", if main memory is needed for another segment and those segments were chosen for removal). Guy Harris (talk) 10:08, 17 February 2020 (UTC)
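That segment-fault behavior can be sketched in a few lines (segment names, the table layout, and the backing store are all invented for illustration):

```python
# Sketch of non-paged, segmented virtual memory as described above: each
# segment has a present bit; touching an absent segment traps, the OS
# loads the whole segment from backing store, and the access proceeds.
disk = {"seg_a": b"hello", "seg_b": b"world"}       # backing store
segment_table = {"seg_a": {"present": False, "data": None},
                 "seg_b": {"present": False, "data": None}}

def access(seg, offset):
    entry = segment_table[seg]
    if not entry["present"]:                  # segment fault: trap to the OS
        entry["data"] = bytearray(disk[seg])  # load the segment from disk
        entry["present"] = True
    return entry["data"][offset]              # retry the access

print(chr(access("seg_a", 1)))   # 'e': faults seg_a in, then reads it
print(chr(access("seg_a", 0)))   # 'h': already present, no fault this time
```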

We did indeed have an operating system development team; however, their work was the creation of services implemented as object-oriented machine code, as always activated by a Church-Instruction with chosen Capability Key(s) - for example, class-based services for segment management, thread management, I/O services and the like. However, no rings of privilege or page-based memory-management registers exist as with centralized timesharing machines. These common service function abstractions were assembled one by one. They ran 'inline' within an application thread. No dangerous hardware privileges existed; all power resolved to the possession of a Capability Key to the appropriate service. Thus operating system services existed (or not) as function abstractions defined by individual Capability Keys. These keys used the Enter-Only access right that hides the implementation details of the object-oriented service while allowing an in-line 'call-return' protocol to access the service. Finally, local and remote access to memory as implemented by PP250 corresponds to the internet web cache mechanisms rather than to how privileged virtual memory works in time-shared computers. Kjhamerhodges (talk) 20:23, 20 February 2020 (UTC)
So it's an operating system that uses capabilities, rather than protection rings, providing finer-grained protection. That is, as far as I know, not unique to the System 250; at least some other capability-based operating systems did the same (such as the BiiN/OS operating system for the BiiN systems[1][2] and, I infer, the iMAX 432 operating system for the Intel iAPX 432 systems[3]).
Note, by the way, that the BiiN systems supported demand paging; large objects had either 1-level or 2-level page tables. Another perfectly acceptable alternative is to have the base addresses of objects be addresses within a demand-paged linear address space; that does not inherently make the entire linear address space accessible to code.
And please give details of how "local and remote access to memory as implemented by PP250 corresponds to the internet web cache mechanisms rather than to how privileged virtual memory works in time-shared computers." How is it at all like a Web cache? Guy Harris (talk) 23:07, 20 February 2020 (UTC)
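To make the capabilities-versus-rings contrast above concrete, here is a minimal sketch (all names hypothetical) of capability-list protection: there is no privilege mode at all; an operation succeeds only if the caller holds a key naming the object with the required access right, so authority is exactly the set of keys held.

```python
# Toy capability-list (C-list) protection: rights travel with the key,
# not with a privilege level.
READ, WRITE, ENTER = "read", "write", "enter"

class Capability:
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)

class Thread:
    def __init__(self, clist):
        self.clist = list(clist)         # the thread's C-list

    def read(self, cap, offset):
        if cap not in self.clist or READ not in cap.rights:
            raise PermissionError("no read capability")
        return cap.obj[offset]

    def write(self, cap, offset, value):
        if cap not in self.clist or WRITE not in cap.rights:
            raise PermissionError("no write capability")
        cap.obj[offset] = value

data = [10, 20, 30]
ro_key = Capability(data, {READ})         # least authority: read only
t = Thread([ro_key])
assert t.read(ro_key, 1) == 20
try:
    t.write(ro_key, 1, 99)                # denied: key lacks WRITE
except PermissionError:
    pass
```

As the thread notes, this model is common to the System 250, BiiN, and the iAPX 432; what is debated above is how completely each machine eliminated privileged modes alongside it.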

What makes a Church-Turing Machine[edit]

The PP250 was a first attempt to apply the laws on both sides of the Church-Turing Thesis. One side, the Turing side, leads to the realization of the architecture as a network of atomic (single-tape/single-function) software machines, in which all the hard and soft mechanisms for a fail-safe, future-safe computational machine exist. The other side is the realization of typed variables as a Lambda Calculus Application for reliable software abstractions. This is a very different architecture from the von Neumann Architecture. It is founded on a small set of new machine instructions that Hamer-Hodges calls Church-Instructions in his book on this subject.[4]

The unique feature of a Church-Turing Machine (as exemplified by the PP250) is program control over cybersecurity. This is achieved by object-oriented, capability-controlled machine code. A handful of Church-Instructions provide direct (incorruptible) programmed control over the immutable Capability Keys. The Capability Keys limit access rights, both as least authority and as least privilege, to the digital objects that are the working components of a Lambda Calculus Namespace. This includes any Threads (as Lambda-Calculus Applications), any programmed Abstractions (a Class), any (executable) Function, and named Variables (including Read or Write digital objects). The immutable Capability Keys implement the symbols as Lambda Calculus Variables.[5] The set of Capability Keys that form a Namespace hides every hardware and software implementation detail from external and unauthorized manipulation. Using this simple immutable notation of Capability Keys, functions and applications are combined in a Universal Model of Computation. The main advantage is that the structure of symbols relates functions to arguments as object-oriented function abstractions. Because the syntax of the λ-calculus is sparse, it is both elegant and easily implemented by a computer using the Church-Turing Architecture. The result is an extension of hardened machine code with sheltered function abstractions that obey the theory as rules of computation. The expressiveness and flexibility of the λ-calculus make it the perfect computational, mechanical form of intrinsic logic and abstract mathematics.

The implementation details of the functions are hidden from misuse and abuse behind named function abstractions. This starts all the way down at the binary data framed for any purpose: for example, the binary machine types for numbers, strings, and integers all the way to floating-point numbers, as well as connected device types such as device drivers and communication channels. These implementation details are sheltered by functions within a programmed class, implemented as a named object-oriented digital object within a Namespace.

The object-oriented machine code is addressed symbolically by a Capability Key. Access rights are guarded by a capability key that corresponds to an authorized Lambda-Calculus component. Keys are distributed scientifically, as authorized by the relationship between symbols in functional expressions. Software modules, small or large, only interact if an approved need exists in the expression, as authorized by a symbol. The in-scope capability keys are listed in a C-List object for each node of computation. Importantly, the hardware details hidden by a Namespace abstraction include the physical location and limited controls reserved for private use. Software interactions are always scientific expressions among function abstractions, including the functional Threads implementing the Lambda Calculus Applications.

The capability machine acts as a Lambda-Calculus Meta-Machine. It hides what happens when a capability key is trapped. Resolution can take place either on load or on use (two different cases of lazy evaluation). In each case, a function abstraction responsible for trap handling is called. For example, the object may no longer exist because it was released by the owner, or perhaps it is at rest in a distant (near or far) home location as defined by what Bob Fabry called an 'out-form address'. The Capability Key acts like a protected URI, resolved by the appropriate function abstraction that fetches the object from the Web and caches the download for direct access. To clarify, a Church-Turing Machine automates the Lambda Calculus as fail-safe computational objects. This hides every hardware detail and prevents the default powers that are abused by malware and hackers in a monolithically compiled von Neumann architecture.
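The lazy-resolution behaviour described in the paragraph above can be sketched as follows (a rough illustration with hypothetical names, not PP250 or web-cache code): a key holds only the object's home location; the first use traps, a handler fetches the object and caches it, and later uses hit the cache, much as a web cache resolves a URI once and then serves the download directly.

```python
# Toy model of trap-on-use key resolution with a local cache.
HOME_STORE = {"obj://billing/ledger": [100, 200]}   # "remote" home locations
cache = {}                                           # local downloads

class Key:
    def __init__(self, home):
        self.home = home                 # acts like a protected URI

    def resolve(self):
        if self.home not in cache:       # the "trap" on first use
            cache[self.home] = fetch_from_home(self.home)
        return cache[self.home]          # later uses are direct cache hits

def fetch_from_home(home):
    """Trap handler: fetch the object, or fail if the owner released it."""
    if home not in HOME_STORE:
        raise LookupError("object no longer exists")
    return HOME_STORE[home]

k = Key("obj://billing/ledger")
assert k.resolve()[0] == 100             # first use: fetched and cached
assert "obj://billing/ledger" in cache   # subsequent uses hit the cache
```

The point of the analogy is that the key's holder never learns where the object lives or how it arrived, only that resolution succeeded or trapped.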

When function abstraction exists as object-oriented machine code, as with the PP250, machine-level Church-Instructions control the computational objects as scientific expressions of the Lambda-Calculus. The Variables, Functions, Abstractions, Applications, and Namespace objects are individually protected components with a guaranteed mean time between failures (MTBF). PP250 achieved a software MTBF in excess of five decades, since the Capability Keys guard access rights even at the machine level. Security cannot be bypassed, and can be program-controlled for an improved dynamic response to any attack.

This clockwork machine-level approach is the same technique Charles Babbage used to hide his machine code (mechanical gears and levers) from Ada Lovelace. Ada programmed using pure mathematics as written by a teacher on a chalkboard. More than this, Ada's Bernoulli Function abstractions used 'functional' variables (e.g., 2n+1) that cannot exist with the von Neumann Architecture using static linear memory.[6] This is a unique characteristic of the dynamically bound machine code in a Church-Turing Machine that conforms to the laws of the Lambda-Calculus as a hardware machine.

Compilation creates statically bound machine code and statically bound machine-level parameters. To a lesser degree, the frameworks of an Abacus and a Slide Rule are the same. In these earlier computers, the software is in a human head, but simplified to remove (hide) all proprietary best practices and other technical details.[7] This hiding prevents the wide range of malware and hacking attacks that build on Confused Deputy attacks. Even the exposure of binary data is a threat. When the hardware is exposed, it degenerates into a pile of stone, with proprietary conventions replacing pure science. Kjhamerhodges (talk) 18:56, 23 February 2020 (UTC)

"This clockwork machine level approach is the same technique Charles Babbage used to hide his machine code (mechanical gears and leavers) from Ada Lovelace. Ada programmed using pure mathematics as written by a teacher on a chalkboard." It looks, from this diagram, as if she programmed using assembler language with comments. :-) The second column is the opcode, the third and fourth columns are the operands, and the rest of the columns are comments describing what that particular operation is doing. (Agreed. Kjhamerhodges (talk) 20:21, 29 February 2020 (UTC))
So that's "pure" in the sense that she doesn't indicate what particular gears and levers do what, but assembly-language code is "pure" in the sense that it doesn't indicate what particular vacuum tubes or transistors turn on or off. That's not unique to the System 250, it's the case with most if not all machines. (What is unique is the ability to express mathematical expressions directly from a chalkboard, as the individual symbols in a single (Church) machine instruction, using Capability Keys to represent the symbols as a DNA tree of relationships. This DNA tree protects the software as a reliable software machine of many moving parts.Kjhamerhodges (talk) 20:21, 29 February 2020 (UTC))
"To clarify, BiiN and all other Capability-Based computers (except PP250) do not automate the Lambda Calculus computational objects to hide all hardware details that can be abused by malware and hackers." For example, hardware detail cannot be hidden when the software image is statically bound as a monolithic compilation. The System 250 does not support the statically bound compilation needed by any computer following the von Neumann architecture, which uses the default rings of privilege needed for page-based virtual memory, an OS, or a supervisor.
Yes, the System 250 has "enter" capabilities that allow a CALL instruction to invoke code for a particular object, with that code being given capabilities to access raw binary data not available to the caller so that only code for that object can access or modify the object's raw binary data.
While the BiiN systems, at least, have such a call mechanism, encapsulation is incomplete because a privileged supervisor mode still exists. See chapter 7 of the BiiN CPU Architecture Reference.
Thus, in a Church-Turing Machine, there is no supervisor hardware mode and no special default machine mechanisms or complex instruction settings to turn such a mode on and off. Power and privilege are allocated incrementally by immutable, tokenized access rights allocated by the owner of a capability key. There are no default privileges such as exist when a computer offers a supervisor mode. A Church-Turing Machine operates at the dead center of the Church-Turing Thesis, on the computational boundary between abstraction and reality. This is where the distinction lies between capability systems as a comprehensive, fail-safe machine and capabilities used in an incomplete (hybrid) memory protection model. The capability keys that can be handed around are of every valid kind defined by the Lambda Calculus. The "Enter-Only" access right defines an abstraction of any kind, not just "read raw data", "write raw data", or "execute raw data as code". A capability key can be passed to run a method of an object, passing Variables as parameters. That's all that is needed; malware cannot look inside any object. In other words, capabilities enforce object-oriented programming-style information hiding, both as a protection mechanism and as an application/object-oriented program-structuring mechanism. Kjhamerhodges (talk) 20:21, 29 February 2020 (UTC) to address the questions raised by (talk) 21:16, 23 February 2020 (UTC)
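The Enter-Only encapsulation discussed in this exchange can be sketched minimally (hypothetical names, illustrating the idea rather than any real machine): the caller holds only an "enter" key, so it can invoke the object's methods but is never handed the object's raw representation.

```python
# Toy model of an Enter-Only capability: call the methods, never see the data.
class EnterCapability:
    def __init__(self, methods):
        # Only the bound methods are exposed; the object's state is not.
        self._methods = dict(methods)

    def call(self, name, *args):
        return self._methods[name](*args)

def make_counter():
    state = {"count": 0}                 # raw data, never handed out

    def increment():
        state["count"] += 1
        return state["count"]

    # The caller receives only the enter key, not `state`.
    return EnterCapability({"increment": increment})

key = make_counter()
assert key.call("increment") == 1
assert key.call("increment") == 2
assert not hasattr(key, "state")         # the raw data is unreachable via the key
```

In Python this hiding is only a convention (reflection could still dig into the closure); the point of a capability machine is that the hardware makes the equivalent of that reflection impossible.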
Sections 2.3.7 and 2.3.8 of Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture (Version 7) clearly state: "CHERI is a hybrid capability-system architecture that adds new capability-system primitives to commodity 64-bit RISC ISAs, enabling software to efficiently implement fine-grained memory protection and scalable software compartmentalization". As such, CHERI supports capabilities to isolate data in protection domains and to make cross-domain calls, but this is still limited by the hybrid architecture, which can still undermine the security of the system and allow malware and hacking to take place.
I.e., both the current version of CHERI and the BiiN systems only offer capabilities within monolithic software systems, where a compilation acts as a protection domain but is still subject to the insecure von Neumann architecture. For example, the second paragraph of the "Instruction Sets" section of Fabry's 1974 paper only expresses a Call mechanism. To correctly represent the Lambda-Calculus, the abstractions for parallel threads and other, perhaps remote, Namespace objects, that is, the Lambda Calculus computational objects, must be supported at machine level. The PP250 used reserved capability context registers as function abstractions, where the Church-Instructions are the methods for these reserved types. I am sure all hybrid systems, including the iAPX 432, did not do this. Kjhamerhodges (talk) 20:21, 29 February 2020 (UTC) in response to Guy Harris (talk) 22:12, 23 February 2020 (UTC)

References

  1. ^ "BiiN Object Computing" (PDF). BiiN.
  2. ^ "BiiN Systems Overview" (PDF). BiiN.
  3. ^ Elliott I. Organick. A Programmer's View Of The Intel 432 System (PDF). ISBN 0-07-047719-1.
  4. ^ Hamer-Hodges, Kenneth (1 January 2020). Civilizing Cyberspace: The Fight for Digital Democracy (1 ed.). Self-published. p. 410. ISBN 978-1-95-163044-7.
  5. ^ "The Lambda Calculus". Stanford Encyclopedia of Philosophy. Retrieved 29 February 2020.
  6. ^ Menabrea, L.F. (with notes upon the Memoir by the translator, Ada Augusta, Countess of Lovelace). "Sketch of the Analytical Engine". Fourmilab, Switzerland. John Walker. Retrieved 23 February 2020.
  7. ^ Hamer-Hodges, Kenneth (1 January 2020). Civilizing Cyberspace: The Fight for Digital Democracy. USA: Self-published. p. 209. ISBN 978-1-95-163044-7. Retrieved 23 February 2020.