Microsoft was one of the first companies to implement Unicode in their products. Windows NT was the first operating system that used "wide characters" in system calls. Using the (now obsolete) UCS-2 encoding scheme at first, it was upgraded to the variable-width encoding UTF-16 starting with Windows 2000, allowing representation of additional planes with surrogate pairs. However, Microsoft did not support UTF-8 in its API until May 2019.
Before 2019, Microsoft emphasized UTF-16 (i.e. the -W API), but it has since recommended using UTF-8 (at least in some cases),[1] on Windows and Xbox (and in other of its products), even stating "UTF-8 is the universal code page for internationalization [and] UTF-16 [... is] a unique burden that Windows places on code that targets multiple platforms. [..] Windows [is] moving forward to support UTF-8 to remove this unique burden [resulting] in fewer internationalization issues in apps and games".[2]
A large amount of Microsoft documentation uses the word "Unicode" to refer explicitly to the UTF-16 encoding. Anything else, including UTF-8, is not "Unicode" in Microsoft's outdated language (while UTF-8 and UTF-16 are both Unicode according to the Unicode Standard, or encodings/"transformation formats" thereof).
Current Windows versions, and all versions back to Windows XP and the prior Windows NT line (3.x, 4.0), ship with system libraries that support string encoding of two types: 16-bit "Unicode" (UTF-16 since Windows 2000) and a (sometimes multibyte) encoding called the "code page" (sometimes incorrectly referred to as the ANSI code page). 16-bit functions have names suffixed with 'W' (from "wide"), such as SetWindowTextW. Code-page-oriented functions use the suffix 'A' for "ANSI", such as SetWindowTextA (other conventions were used for APIs copied from other systems, such as _wfopen/fopen or wcslen/strlen). This split was necessary because many languages, including C, did not provide a clean way to pass both 8-bit and 16-bit strings to the same function.
Microsoft attempted to support Unicode "portably" by providing a "UNICODE" switch to the compiler, which switches unsuffixed "generic" calls from the 'A' to the 'W' interface and converts all string constants to "wide" UTF-16 versions.[3][4] This does not actually work because it does not translate UTF-8 outside of string constants, so code that attempts to open files with UTF-8 names simply fails to compile.[citation needed]
Earlier, and independently of the "UNICODE" switch, Windows also provided the Multibyte Character Sets (MBCS) API switch.[5] This changes some functions that do not work in MBCS, such as strrev, to MBCS-aware ones, such as _mbsrev.[6][7]
In (the now discontinued) Windows CE, UTF-16 was used almost exclusively, with the 'A' API mostly missing.[8] A limited set of ANSI API is available in Windows CE 5.0, for use on a reduced set of locales that may be selectively built onto the runtime image.[9]
In 2001, Microsoft released the Microsoft Layer for Unicode (MSLU), a special supplement for Microsoft's old Windows 9x systems, described as providing "a layer over the Win32 API on Windows 95/98/Me so that a software developer can write a single Unicode version of their application and have it run properly on all platforms."[10] It includes a dynamic link library, 'unicows.dll' (only 240 KB), containing the 16-bit flavor (the ones with the letter W on the end) of all the basic functions of the Windows API. It is merely a translation layer: on Windows 9x, SetWindowTextW will simply convert its input using the current code page and call SetWindowTextA, while on Windows NT systems the call passes through to the OS version of SetWindowTextW.
Alternatives exist, among them OPENCOW.DLL, "The Open Layer for Unicode for Windows", a free (MPL 1.1/GPL 2.0/LGPL 2.1 licensed) re-implementation of the MSLU by Mozilla. Open-source versions of the UNICOWS.LIB link library are also available, which can be used with the original UNICOWS.DLL or OPENCOW.DLL.[11]
Microsoft Windows (Windows XP and later) has a code page designated for UTF-8: code page 65001,[12] or CP_UTF8. For a long time, it was impossible to set the locale code page to 65001, leaving this code page available only for (a) explicit conversion functions such as MultiByteToWideChar and (b) the Win32 console command chcp 65001, which translates stdin/stdout between UTF-8 and UTF-16. This meant that "narrow" functions, in particular fopen (which opens files), could not be called with UTF-8 strings; in fact, there was no way to open all possible files using fopen, no matter what the locale was set to or what bytes were put in the string, as none of the available locales could produce all possible UTF-16 characters. This problem also applied to every other API that takes or returns 8-bit strings, including Windows ones such as SetWindowText.
Programs that wanted to use UTF-8, in particular code intended to be portable to other operating systems, needed a workaround for this deficiency. The usual workaround was to add new file-opening functions that convert UTF-8 to UTF-16 using MultiByteToWideChar and call the "wide" function instead of fopen.[13] Dozens of multi-platform libraries added wrapper functions to do this conversion on Windows (and pass UTF-8 through unchanged elsewhere); an example is a proposed addition to Boost, Boost.Nowide.[14] Another popular workaround was to convert the name to its 8.3 filename equivalent, which is necessary if the fopen call is inside a library. None of these workarounds is considered good, as they require changes to code that works correctly on non-Windows systems.
In April 2018 (or possibly November 2017[15]), with insider build 17035 (nominal build 17134) for Windows 10, a "Beta: Use Unicode UTF-8 for worldwide language support" checkbox appeared for setting the locale code page to UTF-8.[a] This allows calling "narrow" functions, including fopen and SetWindowTextA, with UTF-8 strings. However, this is a system-wide setting, and a program cannot assume it is set.
In May 2019, Microsoft added the ability for a program to set the code page to UTF-8 itself,[1][16] allowing programs written to use UTF-8 to be run by non-expert users.
As of 2019[update], Microsoft recommends that programmers use UTF-8 (e.g. instead of any other 8-bit encoding)[1] on Windows and Xbox, and may be recommending its use instead of UTF-16, even stating "UTF-8 is the universal code page for internationalization [and] UTF-16 [..] is a unique burden that Windows places on code that targets multiple platforms."[2] Microsoft does appear to be transitioning to UTF-8, stating that it previously emphasized its alternative, and in Windows 11 some system files are required to use UTF-8 and do not require a byte order mark.[17] Notepad can now recognize UTF-8 without the byte order mark, and can be told to write UTF-8 without one.[citation needed] Some other Microsoft products use UTF-8 internally, including Visual Studio[18][19] and SQL Server 2019, with Microsoft claiming a 35% speed increase from the use of UTF-8, and a "nearly 50% reduction in storage requirements."[20]
Before 2019, Microsoft's compilers could not produce UTF-8 string constants from UTF-8 source files, because they converted all strings to the locale code page (which could not be UTF-8). At one time, the only method to work around this was to turn off UNICODE and not mark the input file as being UTF-8 (i.e. not use a BOM).[21] This would make the compiler think both the input and the output were in the same single-byte locale, and leave the strings unmolested.
As of Windows version 1903 (May 2019 update), you can use the ActiveCodePage property in the appxmanifest for packaged apps, or the fusion manifest for unpackaged apps, to force a process to use UTF-8 as the process code page. [...] CP_ACP equates to CP_UTF8 only if running on Windows version 1903 (May 2019 update) or above and the ActiveCodePage property described above is set to UTF-8. Otherwise, it honors the legacy system code page. We recommend using CP_UTF8 explicitly.
By operating in UTF-8, you can ensure maximum compatibility [..] Windows operates natively in UTF-16 (or WCHAR), which requires code page conversions by using MultiByteToWideChar and WideCharToMultiByte. This is a unique burden that Windows places on code that targets multiple platforms. [..] The Microsoft Game Development Kit (GDK) and Windows in general are moving forward to support UTF-8 to remove this unique burden of Windows on code targeting or interchanging with multiple platforms and the web. Also, this results in fewer internationalization issues in apps and games and reduces the test matrix that's required to get it right.
our applications use DBCS Windows code pages with the "A" versions of Windows functions.
Windows CE is Unicode-based. You might have to recompile source code that was written for a Windows NT-based application.
Make sure your LayoutModification.json uses UTF-8 encoding.
At some point in the past, the Microsoft compiler was changed to use UTF-8 internally. So, as files are read from disk, they are converted into UTF-8 on the fly.
Visual Studio uses UTF-8 as the internal character encoding during conversion between the source character set and the execution character set.
For example, changing an existing column data type from NCHAR(10) to CHAR(10) using an UTF-8 enabled collation, translates into nearly 50% reduction in storage requirements. [..] In the ASCII range, when doing intensive read/write I/O on UTF-8, we measured an average 35% performance improvement over UTF-16 using clustered tables with a non-clustered index on the string column, and an average 11% performance improvement over UTF-16 using a heap.