DNET

From Wikipedia, the free encyclopedia

DNET is a proprietary suite of network protocols created by the Swedish company Dataindustrier AB (DIAB), originally deployed on their Databoard products. It was based upon X.25, which was particularly popular in European telecommunications circles at the time. In that incarnation it was rated at 1 Mbit/s over RS-422.

In the 1980s, ISC Systems Corporation (ISC) acquired DNET as part of its purchase of DNIX, and ported it to run over Ethernet. ISC's choice of DNET over TCP/IP was due in part to the relatively light weight of the DNET protocol stack, which allowed it to run more efficiently on the target machinery. DNET was also auto-configuring: no manual configuration of the local network was required, only that each machine in a network be given a unique name. This simplicity was advantageous in ISC's market.

Being based on X.25, DNET was connection-oriented, datagram-based (as opposed to a byte stream), supported out-of-band (interrupt) messages, and provided link-down notifications to its clients and servers so that applications did not have to provide their own heartbeats. In the financial community these were all considered advantages over, say, TCP/IP. DNET also supported Wide Area Networks (WAN) using X.25 point-to-point communication links, either leased line or dialup (see also Data link). (WAN support did require manual configuration of the gateway machines.)

DNET provided named network services, and supported a multicast protocol for finding them. Clients would ask for a named service, and the first respondent (of potentially many) would get the connection. Servers could either be resident, in which case they registered their service name(s) with the protocol stack when they were started, or transient, in which case a fresh server was forked/execed for each client connection.

DNET at ISC consisted of the following services:

  • netman (the main networking client/server support handler)
  • raccess (remote file access via /net/machine/path/from/raccess/root...)
  • rx (remote execution)
  • ncu (network login)
  • bootserver (diskless workstation boot service)
  • dmap (ruptime analog)

There were many more services than these at a typical DNET installation; the list above is representative.

netman

netman was the main component of DNET. It was a DNIX handler, usually mounted on /netphys, and was responsible for providing all Layer 2 and Layer 3 X.25 protocol handling. It talked to the Ethernet and HDLC device drivers. It also provided the service name registry, and the WAN gateway functionality.

Resident servers could also utilize, at their instigation, a Layer 3 protocol stack (called 'serverprot') between themselves and netman, allowing them to support up to 4095 client connections through one file descriptor (to netman). Such servers were called complex resident servers, so named in honor of the relatively complicated (though not large) bit of protocol code that had to be included to handle the multiplexing and flow control. Simple resident and transient servers consumed a file descriptor per client connection.

It was possible to run more than one netman process, for testing or other special purposes. (Such a process would be configured to use a different Ethertype and handler mount point, at a minimum.) The /usr/lib/net/servtab file was the usual location for the configuration file controlling WAN configuration and transient servers.
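
The multiplexing idea behind 'serverprot' can be sketched as follows. The real wire format is not documented here, so the 2-byte header and the use of connection ID 0 as reserved are assumptions for illustration only:

```python
# Illustrative sketch of per-frame connection tagging, the core idea that
# lets a complex resident server carry up to 4095 client connections over
# one file descriptor to netman. The header layout is an assumption.
import struct

MAX_CONNS = 4095  # the documented per-descriptor connection limit

def mux_frame(conn_id, payload):
    """Tag a payload with its connection ID before writing it to the
    shared descriptor. IDs 1..4095 assumed valid, 0 assumed reserved."""
    if not 1 <= conn_id <= MAX_CONNS:
        raise ValueError("connection ID out of range")
    return struct.pack(">H", conn_id) + payload

def demux_frame(frame):
    """Recover (connection ID, payload) from a tagged frame on read."""
    (conn_id,) = struct.unpack(">H", frame[:2])
    return conn_id, frame[2:]
```

With some such tagging in place, the server's protocol code only has to dispatch each incoming frame to the per-connection state named by its ID.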

Client applications would open /netphys/servicename; this would normally result in an open connection to a server somewhere, possibly even on the same machine. Resident servers would open /netphys/listen/servicename, which registered their service name with netman. Transient servers were pre-registered via their entry in servtab, and would be forked/execed by netman with their connection already established. Machine-specific services (such as ncu, the network login) would contain the machine name as part of the service name; installation-specific services (such as dmap, a site's machine status servers) would not.
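
The two open paths described above can be sketched as simple path builders; this is only an illustration of the naming convention, not a DNET API:

```python
# Path conventions as described in the text: clients open the service name
# directly under /netphys, resident servers open under /netphys/listen.

def client_path(service):
    """The path a client would open to reach a named service."""
    return "/netphys/" + service

def listen_path(service):
    """The path a resident server would open to register its service."""
    return "/netphys/listen/" + service
```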

Service name resolution was handled entirely between netman processes. A client's representative would multicast the desired service name to the network using a MUI [Multicast Unnumbered Information] extension to X.25. Responses indicating server availability would be directed (not multicast) back by potential server representatives. When there was more than one respondent to the multicast (as was normal for, say, dmap) the first one would be selected for opening a connection. Only one server was ever contacted per client service request. As with all UI-class messages in X.25, packet loss was possible, so the MUI process was conducted up to three times if there was no response.
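
The client-side resolution loop can be sketched as follows; the callback signatures are assumptions for illustration, but the first-responder-wins and up-to-three-attempts behavior is as described above:

```python
# Sketch of MUI-based service name resolution: multicast the name, take
# the first directed response, and retry on silence because UI-class
# messages can be lost.

def resolve_service(name, send_mui, wait_response, attempts=3):
    """Return the first responder's address, or None after all attempts."""
    for _ in range(attempts):
        send_mui(name)          # MUI multicast of the desired service name
        addr = wait_response()  # first directed response, or None on timeout
        if addr is not None:
            return addr         # only this one server is contacted
    return None
```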

The X.25 nature of connections, namely datagram control, was exposed to applications (both client and server) as an extra control byte at the beginning of each read and write through a connection. As was customary in network header processing, this byte was usually accessed at a -1 offset within an application's networking code; only the buffer allocation and the read(2)/write(2) calls were usually aware of it. This byte contained the X.25 M, D, and Q bits (for More, Delivery, and Qualifier). DNET never implemented the D (delivery confirmation) bit, but the other two were useful, particularly the M bit.

The M bits were how datagrams were delimited. A byte-stream application could safely ignore them. Any read with a clear M bit indicated that the read result contained an entire datagram and could be safely processed. Reads that were too small to contain an entire datagram would get the part that would fit into the buffer, with the M bit set. M bits would continue to be set on reads until a read contained the end of the original datagram. Datagrams were never packed together; you could get at most one per read. Any write with the M bit set would propagate to the other end with the M bit set, indicating to the other end that it should not process the data yet as it was incomplete. (The network was free to coalesce M'd data at its discretion.) The usual application merely wrote an entire datagram at once with a clear M bit, and was coupled with a small read loop to accumulate entire datagrams before delivery to the rest of the application. (Though not often required, due to automatic fragmentation and reassembly within the protocol stack, this protective loop ensured that allowable exposed fragmentation was never harmful.)

The Q bit was a simple marker, and could be used to mark 'special' datagrams. In effect it was a single header bit that could be used to flag metadata.
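
The protective read loop described above can be sketched as follows. The control byte precedes the data in every read, as stated; the exact bit positions of M and Q within it are assumptions for illustration:

```python
# Sketch of the small read loop that accumulates fragments until a read
# arrives with the M (More) bit clear, yielding one whole datagram.

M_BIT = 0x01  # More: this read does not end the datagram (assumed position)
Q_BIT = 0x02  # Qualifier: marks a 'special' datagram (assumed position)

def read_datagram(read_chunk):
    """Call read_chunk() (one read(2): control byte followed by data)
    until the M bit is clear, then return the reassembled datagram."""
    parts = []
    while True:
        chunk = read_chunk()
        ctrl, data = chunk[0], chunk[1:]
        parts.append(data)
        if not ctrl & M_BIT:   # M clear: the datagram is complete
            return b"".join(parts)
```

A byte-stream application could skip this loop entirely and just strip the control byte from each read.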

Out-of-band (OOB) data, which bypassed all buffering, flow control, and delivery confirmation, was sent via DNIX's ioctl mechanism. It was limited (per X.25) to 32 bytes of data. (Asynchronous I/O reads were usually used so that out-of-band data could be caught at any time.) As with UDP, it was possible to lose OOB data, but this normally happened only if it was overused: the lack of a reader waiting for it resulted in OOB data being discarded.
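
The two OOB delivery rules stated above, the 32-byte ceiling and loss when no reader is waiting, can be condensed into a small illustrative function (not a DNIX API):

```python
# Sketch of the stated OOB semantics: at most 32 bytes per message, and
# the message is simply discarded if no asynchronous reader is waiting.

OOB_LIMIT = 32  # per X.25

def deliver_oob(data, reader_waiting):
    """Return the data if a reader catches it, else None (it is lost)."""
    if len(data) > OOB_LIMIT:
        raise ValueError("OOB data limited to 32 bytes")
    return data if reader_waiting else None
```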

Flow control was accomplished within the network (between netman processes, and possibly involving external X.25 WAN links) using the usual X.25 mechanisms. It was exposed to applications only insofar as network data reads and writes blocked or not. If a request could be satisfied via the buffering abilities of the netman handler and/or the current state of the connection, it was satisfied immediately without blocking. If the buffering was exceeded, the request would block until the buffers could satisfy what remained of it. Naturally, asynchronous I/O could be used to insulate a process from this blocking if it would be a problem. Also, complex resident servers used the 'serverprot' X.25 flow control mechanisms internally to avoid ever blocking on their single network file descriptor; this was vital considering that the file descriptor was shared by up to 4095 client connections.

raccess

raccess provided a distributed filesystem, usually mounted on /net. Shell-level applications could access files on remote machines via /net/machine.domain/path file names. Raccess was a DNIX handler (for its clients), a netman client (for packaging up the filesystem requests), and a netman server (for executing the requests on the remote machine). The usual reference point for remote files was '/', the root of the remote machine's filesystem, though it could be anything required. (Changing this reference point was one way of providing a facility analogous to chroot jails for network file accesses.) Raccess supported user ID translation and security facilities in a manner analogous to TCP/IP's .rhosts file. It was possible to run more than one raccess process, for testing or other special purposes. Examples:

cat /net/grumpy/usr/adm/errmessages
vi /net/sneezy/etc/passwd
rm /net/dopey.on.weekends.com/etc/nologin
mv /net/doc/tmp/log /net/doc/tmp/log-
cd /net/bashful/tmp && ls

rx

rx provided remote command execution in a manner analogous to TCP/IP's rsh (or remsh) facility. It was a netman client (for passing standard I/O to the remote machine), a netman server (for receiving standard I/O on the remote machine), a parent process for hosting the remote process, and a DNIX handler (so that remote programs believed themselves to be connected to tty devices). Rx supported user ID translation and security facilities in a manner analogous to TCP/IP's .rhosts file. Some examples:

rx machine!who
rx machine!vi /etc/passwd
tar cf - . | rx -luser:pass machine.far.far.away.com!tar xf -

ncu

ncu (networked call unix) was the usual network remote login procedure, analogous to TCP/IP's telnet or rlogin protocols. Like rx, it was a netman client (for passing standard I/O to the remote machine), a netman server (for receiving standard I/O on the remote machine), a parent process for hosting the remote login procedure, and a DNIX handler (so that remote programs believed themselves to be connected to tty devices).

bootserver

The bootserver handled boot and dump requests from the diskless workstations. It was a simple process that talked directly to the Ethernet driver. Technically it was not really part of DNET; rather, it was a satellite protocol merely associated with DNET installations, as was the X.25 'safelink' file server protocol used to communicate between these same diskless workstations and their file servers.

dmap

dmap provided a facility analogous to TCP/IP's ruptime facility. Dmap servers, one per disk-based machine, connected directly to Ethernet and periodically broadcast (multicast, actually, so that non-participants never even saw the messages) their presence. The same process also collected these broadcasts and (as a server) advertised the availability of the list of senders through netman. To control the load on the servers, the broadcast frequency was scaled by the current size of the list in order to limit the network messages to an average of one per second. Dmap clients would contact their nearest dmap server (as determined by who responded first to the service name enquiry) to get the current list of machines, then would contact each machine in turn (usually maintaining four [configurable] connections in parallel for speed) to get the specific machine status they were interested in.

Unlike most other transient servers, the dmap client program was not also the transient server. The convention for DNET transient servers was that the same program was used for both sides of the link; netman automatically passed a -B command-line argument to any transient server it spawned, indicating to the process that it was the B-side process and that its standard input file descriptor was a network service connection. Dmap was split into separate A- and B-side programs out of a desire to push as much of the processing (such as display formatting) onto the client as possible, a 'thick' client in other words. Because the client was run infrequently, and usually manually, the division minimized the load on the servers; this extended even to minimizing the memory footprint of the transient server, which is what necessitated the separate programs.
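
The load-limiting arithmetic can be illustrated with a short sketch. The exact scaling rule used by dmap is not documented here; this is one rule consistent with the stated target of an average of one network message per second:

```python
# Sketch of rate limiting by list size: if each of N machines broadcasts
# once every N seconds, the network sees about one dmap message per
# second regardless of how large the installation grows.

def broadcast_interval(machine_count):
    """Seconds between one machine's broadcasts, given the list size
    (assumed rule: interval equals the number of known machines)."""
    return max(1, machine_count)

def aggregate_rate(machine_count):
    """Network-wide dmap messages per second (approximately 1)."""
    return machine_count / broadcast_interval(machine_count)
```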