However, unlike some of the "specifications" given in other April 1 RFCs, they are actually technically possible to implement, and have in fact been implemented in PDP-10 assembly language. They are, however, not endorsed by the Unicode Consortium.
Like the 8-bit code commonly called a variable-length quantity, UTF-9 places an octet in the low 8 bits of each nonet and uses the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the rest of the BMP characters take two nonets each, and non-BMP code points take three. Code points that require multiple nonets are stored most significant octet first, with leading all-zero octets omitted.
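The scheme is simple enough to sketch directly from this description (the sketch below is not from the RFC; it models nonets as small Python integers, since octet-oriented languages have no native nonet type):

    # Sketch of UTF-9 encoding as described above; each nonet is a
    # 9-bit value held in an ordinary Python int (0x000-0x1FF).
    CONTINUATION = 0x100  # high (9th) bit, set on every nonet except the last

    def utf9_encode(code_point: int) -> list[int]:
        """Encode one Unicode code point as a list of nonets."""
        if not 0 <= code_point <= 0x10FFFF:
            raise ValueError("not a Unicode code point")
        # Split the code point into octets, most significant first,
        # suppressing leading all-zero octets.
        octets = []
        while True:
            octets.insert(0, code_point & 0xFF)
            code_point >>= 8
            if code_point == 0:
                break
        # Set the continuation bit on all but the final nonet.
        return [o | CONTINUATION for o in octets[:-1]] + [octets[-1]]

    def utf9_decode(nonets: list[int]) -> list[int]:
        """Decode a nonet sequence back into code points."""
        code_points, value = [], 0
        for nonet in nonets:
            value = (value << 8) | (nonet & 0xFF)
            if not nonet & CONTINUATION:  # final nonet of this code point
                code_points.append(value)
                value = 0
        return code_points

    # U+0041 -> [0x041]; U+0800 -> [0x108, 0x000]; U+10400 -> [0x101, 0x104, 0x000]
    assert utf9_encode(0x10400) == [0x101, 0x104, 0x000]
    assert utf9_decode(utf9_encode(0x41) + utf9_encode(0x10400)) == [0x41, 0x10400]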
UTF-18 is a fixed-length encoding using an 18-bit integer per code point. This allows representation of 4 planes, which are mapped to the 4 planes currently used by Unicode (planes 0–2 and 14). This means that the two private-use planes (15 and 16) and the currently unused planes (3–13) are not supported. The UTF-18 specification does not say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier in the RFC it says "This transformation format requires complex surrogates to represent code points outside the BMP". Having complained about their complexity, the authors would have looked a bit hypocritical had they used surrogates in their new standard. It is unlikely that planes 3–13 will be assigned by Unicode any time in the foreseeable future. Thus UTF-18, like UCS-2 and UCS-4, guarantees a fixed width for all code points (although not for all glyphs).
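A matching sketch of the UTF-18 mapping (again not taken from the RFC; the exact offset used for plane 14 is an assumption here, namely that planes 0–2 are stored as their code point values and plane 14 occupies the remaining 18-bit block 0x30000–0x3FFFF):

    # Sketch of UTF-18: one 18-bit value per code point, covering only
    # planes 0-2 and (via the assumed offset below) plane 14.
    def utf18_encode(code_point: int) -> int:
        plane = code_point >> 16
        if plane in (0, 1, 2):
            return code_point                        # stored directly
        if plane == 14:
            return (code_point & 0xFFFF) | 0x30000   # assumed fourth 18-bit block
        raise ValueError(f"U+{code_point:04X} is not representable in UTF-18")

    def utf18_decode(value: int) -> int:
        if value >> 16 == 3:                         # assumed plane-14 block
            return (value & 0xFFFF) | 0xE0000
        return value

    assert utf18_encode(0x1F600) == 0x1F600          # plane 1, stored directly
    assert utf18_decode(utf18_encode(0xE0001)) == 0xE0001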
UTF-9 and UTF-18 are not likely to be put to practical use on modern computer systems, whose memory structure and communication protocols are based on octets rather than nonets. As such, these systems will generally use UTF-8, UTF-16 or UTF-32 instead to store and transmit Unicode text. However, UTF-9 and UTF-18 may be of interest to retrocomputing enthusiasts, who may use these schemes to represent Unicode text on PDP-10 and similar systems.
Furthermore, both UTF-9 and UTF-18 have specific problems of their own:
UTF-9 requires special care when searching, because the encoding of a shorter sequence can appear at the end of a longer one: the high bit of a nonet, when set, only indicates that more nonets follow, and nothing distinguishes the final nonet of a multi-nonet sequence from a stand-alone character. A match found at an arbitrary position must therefore be verified by scanning backwards to find where the character actually begins (this problem does not occur with UTF-8, where the start of a sequence can be determined from any position without scanning before it); a short sketch of such a check follows below.
UTF-18 cannot represent all Unicode code points. Unlike UCS-2 it can represent every plane that currently has non-private-use code point assignments, i.e. characters in the 4 planes 0, 1, 2, and 14, but not planes 3 through 13, which are currently unused, nor planes 15 and 16, which are for private use. This makes it a bad choice for a system that may need to support new languages (or rare CJK ideographs that are added after the SIP fills up) in the future: plane 3 will very likely be used for newer CJK extensions, and other planes may be used for other ideographic scripts or pictographic sets still not encoded, so UTF-18 would not support these characters. UTF-18 also provides no surrogate mechanism like UTF-16's: it prohibits the use of the range U+D800–U+DBFF not only for encoding the supported supplementary planes 1, 2, and 14, but also for the other standard planes 3 through 13 and the supplementary private-use planes 15 and 16.
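The search hazard described above for UTF-9 can be illustrated with a small sketch (hypothetical code, reusing the nonet model and continuation bit from the earlier UTF-9 sketch): the single nonet for U+0041 also appears as the trailing nonet of U+0141, which encodes as [0x101, 0x041], so a naive nonet-by-nonet comparison would report a false match unless the preceding nonet is checked.

    # Hypothetical character-aligned search over a list of nonets.
    def utf9_find(haystack: list[int], needle: list[int]) -> int:
        """Return the index of a genuine character-aligned match, or -1."""
        n = len(needle)
        for i in range(len(haystack) - n + 1):
            if haystack[i:i + n] != needle:
                continue
            # Reject matches that begin in the middle of a longer sequence:
            # the preceding nonet must not have its continuation bit set.
            if i > 0 and haystack[i - 1] & 0x100:
                continue
            return i
        return -1

    stream = [0x101, 0x041]                  # just U+0141
    assert utf9_find(stream, [0x041]) == -1  # raw slicing alone would match at index 1
    assert utf9_find([0x041] + stream, [0x041]) == 0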