Unicode in Microsoft Windows
Microsoft was one of the first companies to implement Unicode in its products, initially as UCS-2, which later evolved into UTF-16; as of 2019 it was still improving its operating-system support for UTF-8. Windows NT was the first operating system to use "wide characters" in system calls. It used the UCS-2 encoding scheme at first and was upgraded to UTF-16 starting with Windows 2000, allowing characters in the supplementary planes to be represented with surrogate pairs.
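Surrogate pairs encode characters outside the Basic Multilingual Plane as two 16-bit code units. The mechanics can be sketched in Python (used here only as a portable illustration; Windows itself exposes UTF-16 through its wide APIs):

```python
# U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the Basic Multilingual Plane,
# so UTF-16 must encode it as a surrogate pair of two 16-bit code units.
ch = "\U0001D11E"
utf16 = ch.encode("utf-16-le")               # 4 bytes = 2 code units
high = int.from_bytes(utf16[0:2], "little")  # high (lead) surrogate
low = int.from_bytes(utf16[2:4], "little")   # low (trail) surrogate
assert (high, low) == (0xD834, 0xDD1E)

# Recombining the pair yields the original code point:
code_point = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
assert code_point == 0x1D11E
```

Under plain UCS-2, only the 16-bit code units of the Basic Multilingual Plane were representable; the surrogate mechanism is what the upgrade to UTF-16 added.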
In various Windows families
Windows NT-based systems
Current Windows versions, and all versions back to Windows XP and the earlier Windows NT line (3.x, 4.0), ship with system libraries that support two types of string encoding: 16-bit "Unicode" (UTF-16 since Windows 2000) and a (sometimes multibyte) encoding called the "code page" (or, incorrectly, the ANSI code page). The 16-bit functions have names suffixed with -W (from "wide"), such as SetWindowTextW. Code-page-oriented functions use the suffix -A for "ANSI", such as SetWindowTextA (some other conventions were used for APIs copied from other systems, such as wcslen/strlen). This split was necessary because many languages, including C, did not provide a clean way to pass both 8-bit and 16-bit strings to the same function.
Most 'A' functions are implemented as wrappers that translate the text from the current code page to UTF-16 and then call the corresponding 'W' function. 'A' functions that return strings perform the opposite conversion, substituting '?' for characters that cannot be represented in the current code page.
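The two conversions can be imitated with Python's codecs (a portable illustration only; cp1252 stands in for a typical Western system code page):

```python
text = "Grüße, Ω"                      # a UTF-16 string in the 'W' world

# 'A' -> 'W': the wrapper decodes code-page bytes to UTF-16 before calling on.
ansi_in = "Grüße".encode("cp1252")
assert ansi_in.decode("cp1252") == "Grüße"

# 'W' -> 'A': returned strings are re-encoded to the code page; characters
# with no mapping (here the Greek Ω) come back as '?'.
ansi_out = text.encode("cp1252", errors="replace")
assert ansi_out == b"Gr\xfc\xdfe, ?"
```

This is why round-tripping text through an 'A' function is lossy whenever the string contains characters outside the current code page.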
Microsoft attempted to support Unicode "portably" by providing a "UNICODE" switch to the compiler, which switches unsuffixed "generic" calls from the 'A' to the 'W' interface and converts all string constants to "wide" UTF-16 versions. This does not actually work, because the switch does not translate 8-bit strings outside of string constants; for example, code that passes char-based filenames to the generic calls in order to open files simply fails to compile.
Earlier, and independently of the "UNICODE" switch, Windows also provided the "MBCS" API switch. This changes some functions that do not work in MBCS, such as strrev, to MBCS-aware ones, such as _mbsrev.
Note that much Microsoft documentation uses the term "Unicode" to mean "not 8-bit text".
In Windows CE, UTF-16 was used almost exclusively, with the 'A' API mostly missing. A limited set of ANSI APIs is available in Windows CE 5.0, for use on a reduced set of locales that may be selectively built into the runtime image.
Windows 9x systems
In 2001, Microsoft released a special supplement, the Microsoft Layer for Unicode, for its old Windows 9x systems. It includes a dynamic-link library, unicows.dll (only 240 KB), containing the 16-bit flavor (the versions with the letter W on the end) of all the basic functions of the Windows API.
Microsoft Windows has a code page designated for UTF-8, code page 65001. Prior to Windows 10 insider build 17035 (November 2017), it was impossible to set the locale code page to 65001, leaving this code page available only for:
- Explicit conversion functions such as MultiByteToWideChar
- The Win32 console command chcp 65001, which translates stdin/stdout between UTF-8 and UTF-16.
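Code page 65001 is simply UTF-8 under another number. Python's codec registry exposes it as cp65001 (an alias of utf-8 in recent versions), which makes the equivalence easy to check:

```python
s = "naïve – £5 – 日本"
# Windows code page 65001 is UTF-8: both codecs produce identical bytes.
assert s.encode("cp65001") == s.encode("utf-8")
assert b"caf\xc3\xa9".decode("cp65001") == "café"
```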
Microsoft claimed that a UTF-8 locale might break some functions (a possible example is _mbsrev), as they were written to assume that multibyte encodings use no more than 2 bytes per character; until then, code pages using more bytes, such as GB 18030 (code page 54936) and UTF-8, could not be set as the locale.
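The 2-bytes-per-character assumption is easy to see failing (shown in Python for portability; both GB 18030 and UTF-8 need up to 4 bytes for supplementary-plane characters):

```python
clef = "\U0001D11E"                          # U+1D11E, outside the BMP
assert len("é".encode("utf-8")) == 2         # fits the old assumption
assert len("日".encode("utf-8")) == 3        # already breaks it
assert len(clef.encode("utf-8")) == 4
assert len(clef.encode("gb18030")) == 4      # GB 18030 (cp54936) likewise
```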
This means that "narrow" functions, in particular fopen (which opens files), cannot be called with UTF-8 strings; in fact, there is no way to open all possible files using fopen, no matter what the locale is set to or what bytes are put in the string, as none of the available locales can produce all possible UTF-16 characters. This problem also applies to every other API that takes or returns 8-bit strings, including Windows-specific ones.
On all modern non-Windows platforms, the string passed to fopen is effectively UTF-8, which produces an incompatibility between Windows and other platforms. The normal work-around is to add Windows-specific code that converts UTF-8 to UTF-16 using MultiByteToWideChar and calls the "wide" function. Another popular work-around is to convert the name to its 8.3 filename equivalent; this is necessary if the fopen call is inside a library function that takes a string filename and cannot be altered.
There were proposals to add new APIs to portable libraries such as Boost to do the necessary conversion, by adding new functions for opening and renaming files. These functions would pass filenames through unchanged on Unix, but translate them to UTF-16 on Windows. This would allow code to be "portable", but required just as many code changes as calling the wide functions.
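A hypothetical sketch of such a wrapper, in Python for portability (the helper name open_utf8 is invented here; the actual proposals targeted C++ libraries such as Boost, but the logic is the same):

```python
import sys

def open_utf8(name_utf8: bytes, mode: str = "r"):
    """Hypothetical portable open() that takes a UTF-8 encoded filename.

    On Windows the bytes are decoded so the runtime can call the wide
    (UTF-16) API; elsewhere they are passed through unchanged, since Unix
    filenames are byte strings that are conventionally UTF-8.
    """
    if sys.platform == "win32":
        return open(name_utf8.decode("utf-8"), mode)
    return open(name_utf8, mode)
```

On Unix the bytes reach the kernel untouched; on Windows the decode step plays the role of MultiByteToWideChar, which is why adopting such an API still requires touching every call site.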
With insider build 17035 and the April 2018 update (nominal build 17134) for Windows 10, a "Beta: Use Unicode UTF-8 for worldwide language support" checkbox appeared for setting the locale code page to UTF-8.[a] This allows calling "narrow" functions, including SetWindowTextA, with UTF-8 strings.
Microsoft's compilers often fail to produce UTF-8 string constants from UTF-8 source files. The most reliable method is to turn off UNICODE, not mark the input file as UTF-8 (i.e. not use a BOM), and write the string constants with the UTF-8 bytes. If a BOM is present, a Microsoft compiler interprets the strings as UTF-8, converts them to UTF-16, then converts them back into the current locale, thus destroying the UTF-8. Without a BOM, and using a single-byte locale, Microsoft compilers leave the bytes in a quoted string unchanged.
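The destructive with-BOM path can be modeled in Python (cp1252 stands in for the compiler's single-byte locale; the three-byte sequence EF BB BF is the UTF-8 byte order mark):

```python
src = "日本語"                   # literal text in a UTF-8 source file
wanted = src.encode("utf-8")     # the 9 bytes you want in the binary
assert len(wanted) == 9

bom = "\ufeff".encode("utf-8")   # marking the file as UTF-8
assert bom == b"\xef\xbb\xbf"

# With the BOM, the pipeline is UTF-8 -> UTF-16 -> current locale, which
# destroys every character the locale cannot represent:
destroyed = src.encode("cp1252", errors="replace")
assert destroyed == b"???"
```

Without the BOM, the compiler never reinterprets the bytes, so the 9 UTF-8 bytes of the literal survive into the binary unchanged.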
- ^a Found under Control Panel, "Region" entry, "Administrative" tab, "Change system locale" button.
- "Unicode in the Windows API". Retrieved 7 May 2018.
- "Conventions for Function Prototypes (Windows)". MSDN. Retrieved 7 May 2018.
- "Support for Multibyte Character Sets (MBCSs)".
- "Double-byte Character Sets". MSDN. Retrieved 7 May 2018. "our applications use DBCS Windows code pages with the 'A' versions of Windows functions."
- "Differences Between the Windows CE and Windows NT Implementations of TAPI". MSDN. Retrieved 7 May 2018. "Windows CE is Unicode-based. You might have to recompile source code that was written for a Windows NT-based application."
- "Code Pages (Windows CE 5.0)". Microsoft Docs. Retrieved 7 May 2018.
- "Code Page Identifiers (Windows)". msdn.microsoft.com.
- "Windows10 Insider Preview Build 17035 Supports UTF-8 as ANSI". Hacker News. Retrieved 7 May 2018.
- "_strrev, _wcsrev, _mbsrev, _mbsrev_l". Microsoft Docs.
- MSDN forums.
- "UTF-8 in Windows". Stack Overflow. Retrieved July 1, 2011.
- UTF-8 Everywhere FAQ: How do I write UTF-8 string literal in my C++ code?