Example of Unicode character encoding through UTF-16
Language(s) | International |
---|---|
Standard | Unicode Standard |
Classification | Unicode Transformation Format, variable-width encoding |
Extends | UCS-2 |
Transforms / Encodes | ISO/IEC 10646 (Unicode) |
UTF-16 (16-bit Unicode Transformation Format) is a character encoding that supports all 1,112,064 valid code points of Unicode.[1][a] The encoding is variable-length, as code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier obsolete fixed-width 16-bit encoding now known as UCS-2 (for 2-byte Universal Character Set),[2][3] once it became clear that more than 2^16 (65,536) code points were needed,[4] including most emoji and important CJK characters such as those used for personal and place names.[5]
UTF-16 is used by the Windows API, and by many programming environments such as Java and Qt. The variable-length nature of UTF-16, combined with the fact that most characters are not variable-length (so variable length is rarely tested), has led to many bugs in software, including in Windows itself.[6]
UTF-16 is the only encoding (still) allowed on the web that is incompatible with 8-bit ASCII.[7][b] However, it has never gained popularity on the web, where it is declared by under 0.004% of public web pages (and even then, the web pages are most likely also using UTF-8).[9] UTF-8, by comparison, gained dominance years ago and accounted for 99% of all web pages by 2025.[10] The Web Hypertext Application Technology Working Group (WHATWG) considers UTF-8 "the mandatory encoding for all [text]" and holds that, for security reasons, browser applications should not use UTF-16.[11]
In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to replace the typical 256-character encodings, which required 1 byte per character, with an encoding using 65,536 (2^16) values, which would require 2 bytes (16 bits) per character.
Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was originally called "Unicode", but is now called "UCS-2".[2][3][12]
When it became increasingly clear that 2^16 characters would not suffice,[13] IEEE introduced a larger 31-bit space and an encoding (UCS-4) that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of memory and disk space, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise and introduced with version 2.0 of the Unicode standard in July 1996.[14] It is fully specified in RFC 2781, published in 2000 by the IETF.[15][16]
UTF-16 is specified in the latest versions of both the international standard ISO/IEC 10646 and the Unicode Standard. "UCS-2 should now be considered obsolete. It no longer refers to an encoding form in either 10646 or the Unicode Standard."[2][3] UTF-16 will never be extended to support a larger number of code points or to support the code points that were replaced by surrogates, as this would violate the Unicode Stability Policy with respect to general category or surrogate code points.[17] (Any scheme that remains a self-synchronizing code would require allocating at least one Basic Multilingual Plane (BMP) code point to start a sequence. Changing the purpose of a code point is disallowed.)
Each Unicode code point is encoded either as one or two 16-bit code units. Code points less than 2^16 ("in the BMP") are encoded with a single 16-bit code unit equal to the numerical value of the code point, as in the older UCS-2. Code points greater than or equal to 2^16 ("above the BMP") are encoded using two 16-bit code units. These two 16-bit code units are chosen from the UTF-16 surrogate range 0xD800–0xDFFF, which had not previously been assigned to characters. Values in this range are not used as characters, and UTF-16 provides no legal way to code them as individual code points. A UTF-16 stream, therefore, consists of single 16-bit codes outside the surrogate range and pairs of 16-bit values that are within the surrogate range.
Both UTF-16 and UCS-2 encode code points in the range U+0000–U+D7FF and U+E000–U+FFFF as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the Basic Multilingual Plane (BMP) are the only code points that can be represented in UCS-2.[citation needed] As of Unicode 9.0, some modern non-Latin Asian, Middle-Eastern, and African scripts fall outside this range, as do most emoji characters.
Code points from the other planes are encoded as two 16-bit code units called a surrogate pair. The first code unit is a high surrogate and the second is a low surrogate (these are also known as "leading" and "trailing" surrogates, respectively, analogous to the leading and trailing bytes of UTF-8[18]):
- 0x10000 is subtracted from the code point (U), leaving a 20-bit number (U') in the range 0x00000–0xFFFFF.
- The high ten bits (in the range 0x000–0x3FF) are added to 0xD800 to give the first 16-bit code unit or high surrogate (W1), which will be in the range 0xD800–0xDBFF.
- The low ten bits (also in the range 0x000–0x3FF) are added to 0xDC00 to give the second 16-bit code unit or low surrogate (W2), which will be in the range 0xDC00–0xDFFF.

The table below shows the resulting code point for each combination of high surrogate (rows) and low surrogate (columns):
High \ Low | DC00 | DC01 | ... | DFFF |
---|---|---|---|---|
D800 | 010000 | 010001 | ... | 0103FF |
D801 | 010400 | 010401 | ... | 0107FF |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ |
DBFF | 10FC00 | 10FC01 | ... | 10FFFF |
Illustrated visually, the distribution of U' between W1 and W2 looks like:[19]
U' = yyyyyyyyyyxxxxxxxxxx  // U - 0x10000
W1 = 110110yyyyyyyyyy      // 0xD800 + yyyyyyyyyy
W2 = 110111xxxxxxxxxx      // 0xDC00 + xxxxxxxxxx
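The same arithmetic can be written out in code. The following Python sketch implements the U'/W1/W2 mapping above; the function names are illustrative and not taken from any library.

```python
# A minimal sketch of the surrogate-pair arithmetic shown above.
# encode_supplementary/decode_surrogate_pair are illustrative names,
# not standard-library functions.

def encode_supplementary(code_point):
    """Split a code point above the BMP (U+10000..U+10FFFF) into a surrogate pair."""
    u = code_point - 0x10000            # 20-bit value U'
    w1 = 0xD800 + (u >> 10)             # high surrogate: top ten bits of U'
    w2 = 0xDC00 + (u & 0x3FF)           # low surrogate: bottom ten bits of U'
    return w1, w2

def decode_surrogate_pair(w1, w2):
    """Recombine a high surrogate and a low surrogate into the original code point."""
    return 0x10000 + ((w1 - 0xD800) << 10) + (w2 - 0xDC00)

print([hex(u) for u in encode_supplementary(0x10437)])  # ['0xd801', '0xdc37']
print(hex(decode_surrogate_pair(0xD801, 0xDC37)))       # 0x10437
```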
Since the ranges for the high surrogates (0xD800–0xDBFF), low surrogates (0xDC00–0xDFFF), and valid BMP characters (0x0000–0xD7FF, 0xE000–0xFFFF) are disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent code units to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units (i.e. the type of code unit can be determined by the ranges of values in which it falls). UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte.
Because the most commonly used characters are all in the BMP, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).
The official Unicode standard says that no UTF forms, including UTF-16, can encode the surrogate code points. Since these will never be assigned a character, there should be no reason to encode them. However, Windows allows unpaired surrogates in filenames[20] and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard.
UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so, even though the standard states that such arrangements should be treated as encoding errors.
It is possible to unambiguously encode an unpaired surrogate (a high surrogate code point not followed by a low one, or a low one not preceded by a high one) in the format of UTF-16 by using a code unit equal to the code point. The result is not valid UTF-16, but the majority of UTF-16 encoder and decoder implementations do this when translating between encodings.[citation needed]
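Python's optional "surrogatepass" error handler can be used to illustrate this behaviour: the strict encoder rejects a lone surrogate, while the permissive mode writes a code unit equal to the code point, producing output that is not strictly valid UTF-16.

```python
# Lone (unpaired) surrogate handling, using Python's "surrogatepass"
# error handler as one example of the permissive behaviour described above.
lone = "\ud800"                                   # a lone high surrogate

try:
    lone.encode("utf-16-le")                      # strict mode rejects it
except UnicodeEncodeError as exc:
    print("strict:", exc.reason)                  # "surrogates not allowed"

data = lone.encode("utf-16-le", "surrogatepass")  # emit the code unit anyway
print(data.hex())                                 # 00d8 (0xD800 in little-endian order)
print(data.decode("utf-16-le", "surrogatepass") == lone)  # True
```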
To encode U+10437 (𐐷) to UTF-16:
- Subtract 0x10000 from the code point, leaving 0x0437.
- For the high surrogate, shift right by 10 (divide by 0x400), then add 0xD800, resulting in 0xD801.
- For the low surrogate, take the low 10 bits (the remainder of dividing by 0x400), then add 0xDC00, resulting in 0xDC37.

To decode U+10437 (𐐷) from UTF-16:
- Take the high surrogate (0xD801), subtract 0xD800, and multiply by 0x400, resulting in 0x0400.
- Take the low surrogate (0xDC37) and subtract 0xDC00, resulting in 0x0037.
- Add these two results together (0x0437), and finally add 0x10000 to recover the original code point, U+10437.
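As a cross-check, the worked example can be reproduced with Python's built-in UTF-16 codecs (any conforming UTF-16 implementation gives the same code units).

```python
# Cross-check of the worked example using Python's built-in UTF-16 codecs.
s = "\U00010437"                                      # the character 𐐷 (U+10437)
print(s.encode("utf-16-be").hex())                    # d801dc37
print(s.encode("utf-16-le").hex())                    # 01d837dc
print(b"\xd8\x01\xdc\x37".decode("utf-16-be") == s)   # True
```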
The following table summarizes this conversion, as well as others. In the "Binary UTF-16" column, code points above the BMP gain the fixed surrogate prefixes 1101 10 and 1101 11 added by the UTF-16 encoding process, with the remaining bits copied from the code point minus 0x10000; BMP code points are encoded unchanged.
Character | Code point | Binary code point | Binary UTF-16 | UTF-16 hex code units | UTF-16BE hex bytes | UTF-16LE hex bytes |
---|---|---|---|---|---|---|
$ | U+0024 | 0000 0000 0010 0100 | 0000 0000 0010 0100 | 0024 | 00 24 | 24 00 |
€ | U+20AC | 0010 0000 1010 1100 | 0010 0000 1010 1100 | 20AC | 20 AC | AC 20 |
𐐷 | U+10437 | 0001 0000 0100 0011 0111 | 1101 1000 0000 0001 1101 1100 0011 0111 | D801 DC37 | D8 01 DC 37 | 01 D8 37 DC |
𤭢 | U+24B62 | 0010 0100 1011 0110 0010 | 1101 1000 0101 0010 1101 1111 0110 0010 | D852 DF62 | D8 52 DF 62 | 52 D8 62 DF |
UTF-16 and UCS-2 produce a sequence of 16-bit code units. Since most communication and storage protocols are defined for bytes, and each unit thus takes two 8-bit bytes, the order of the bytes may depend on theendianness (byte order) of the computer architecture.
To assist in recognizing the byte order of code units, UTF-16 allows a byte order mark (BOM), a code point with the value U+FEFF, to precede the first actual coded value.[c] (U+FEFF is the invisible zero-width non-breaking space/ZWNBSP character.)[d] If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the noncharacter value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values.
If the BOM is missing, RFC 2781 recommends[e] that big-endian (BE) encoding be assumed. In practice, because Windows uses little-endian (LE) order by default, many applications assume little-endian encoding. It is also reliable to detect endianness by looking for null bytes, on the assumption that characters less than U+0100 are very common: if more of the even-indexed bytes (starting at offset 0) are null, then the text is big-endian.
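A rough sketch of these heuristics, assuming raw UTF-16 bytes of unknown byte order; detect_utf16_endianness is an illustrative helper, not a standard-library function.

```python
# A sketch of the BOM check and null-byte heuristic described above.
def detect_utf16_endianness(data):
    if data[:2] == b"\xfe\xff":               # BOM stored in big-endian order
        return "utf-16-be"
    if data[:2] == b"\xff\xfe":               # BOM stored in little-endian order
        return "utf-16-le"
    # No BOM: mostly-ASCII text has a null high byte in each code unit, so
    # count nulls at even and odd offsets. Ties fall back to big-endian,
    # following the RFC 2781 recommendation.
    even_nulls = sum(1 for i in range(0, len(data), 2) if data[i] == 0)
    odd_nulls = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    return "utf-16-be" if even_nulls >= odd_nulls else "utf-16-le"

print(detect_utf16_endianness("hello".encode("utf-16-le")))        # utf-16-le
print(detect_utf16_endianness("\ufeffhello".encode("utf-16-be")))  # utf-16-be
```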
The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. Most applications ignore a BOM in all cases despite this rule.
For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings (the names are case insensitive). The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
Similar designations, UCS-2BE and UCS-2LE, are used to show versions of UCS-2.
A "character" may use any number of Unicode code points.[21] For instance anemoji flag character takes 8 bytes, since it is "constructed from a pair of Unicode scalar values"[22] (and those values are outside the BMP and require 4 bytes each). UTF-16 in no way assists in "counting characters" or in "measuring the width of a string".
UTF-16 is often claimed to be more space-efficient than UTF-8 for East Asian languages, since it uses two bytes for characters that take 3 bytes in UTF-8. Since real text contains many spaces, numbers, punctuation, markup (e.g. for web pages), and control characters, which take only one byte in UTF-8, this is only true for artificially constructed dense blocks of text.[citation needed] A more serious claim can be made for Devanagari and Bengali, which use multi-letter words whose letters all take 3 bytes in UTF-8 and only 2 in UTF-16.
In addition, the Chinese Unicode encoding standard GB 18030 always produces files the same size or smaller than UTF-16 for all languages, not just for Chinese (it does this by sacrificing self-synchronization).
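Such size comparisons are easy to reproduce, for example with Python's codecs; the sample strings below are arbitrary illustrations, not benchmark data.

```python
# Comparing encoded sizes of two arbitrary sample strings in UTF-8, UTF-16,
# and GB 18030 (Python's name for the latter codec is "gb18030").
samples = {
    "HTML-ish ASCII": "<p>Hello, world! 123</p>",
    "Japanese": "日本語のテキスト",
}
for name, text in samples.items():
    sizes = {enc: len(text.encode(enc)) for enc in ("utf-8", "utf-16-le", "gb18030")}
    print(name, sizes)
```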
UTF-16 is used for text in the OS API of all currently supported versions of Microsoft Windows[23] (including at least Windows CE since Windows CE 5.0[24] and Windows NT since Windows 2000[25]). Windows NT prior to Windows 2000 only supported UCS-2.[26][27] Microsoft has stated that "UTF-16 [..] is a unique burden that Windows places on code that targets multiple platforms",[28] and it has been possible to use UTF-8 as the API code page since Windows 10 insider build 17035 and the May 2019 update.[29] Files and network data tend to be a mix of UTF-16, UTF-8, and legacy byte encodings.
SMS text messaging effectively uses UTF-16. The documentation specifies UCS-2, but UTF-16 is necessary for emoji to work.[30]
The IBM i operating system designates CCSID (code page) 13488 for UCS-2 encoding and CCSID 1200 for UTF-16 encoding, though the system treats them both as UTF-16.[31]
UTF-16 is used by the Qualcomm BREW operating systems; the .NET environments; and the Qt cross-platform graphical widget toolkit.
Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2. iPhone handsets use UTF-16 for Short Message Service instead of the UCS-2 described in the 3GPP TS 23.038 (GSM) and IS-637 (CDMA) standards.[32]
The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to sixty-four Unicode characters per file name).
Python version 2.0 officially only used UCS-2 internally, but the UTF-8 decoder to "Unicode" produced correct UTF-16. There was also the ability to compile Python so that it used UTF-32 internally; this was sometimes done on Unix. Python 3.3 switched internal storage to use one of ISO-8859-1, UCS-2, or UTF-32, depending on the largest code point in the string.[33] Python 3.12 drops some functionality (for CPython extensions) to make it easier to migrate to UTF-8 for all strings.[34]
Java originally used UCS-2, and added UTF-16 supplementary character support in J2SE 5.0. Despite awareness of UTF-8,[35] all strings are still UTF-16 (as of Java 9, strings containing only codes less than 256 can be "compressed" to bytes, in ISO-8859-1 encoding[36]).
JavaScript may use UCS-2 or UTF-16.[37] As of ES2015, string methods and regular expression flags have been added to the language that permit handling strings from an encoding-agnostic perspective.
UEFI uses UTF-16 to encode strings by default.
Swift, Apple's preferred application language, used UTF-16 to store strings until version 5, which switched to UTF-8.[38]
Quite a few languages make the encoding part of the string object, and thus store and support a large set of encodings, including UTF-16. Most consider UTF-16 and UCS-2 to be different encodings. Examples are the PHP language[39] and MySQL.[40]
A method to determine what encoding a system is using internally is to ask for the "length" of a string containing a single non-BMP character. If the length is 2, then UTF-16 is being used. 4 indicates UTF-8. 3 or 6 may indicate CESU-8. 1 may indicate UTF-32, but more likely indicates that the language decodes the string to code points before measuring the "length".
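Run from Python, which reports string length in code points, the probe looks like this; the encoded lengths show what a UTF-16-based or UTF-8-based language would report instead.

```python
# The "length of a single non-BMP character" probe, run from Python.
clef = "\U0001D11E"                           # U+1D11E MUSICAL SYMBOL G CLEF
print(len(clef))                              # 1 -> length in code points
print(len(clef.encode("utf-16-le")) // 2)     # 2 -> length in UTF-16 code units
print(len(clef.encode("utf-8")))              # 4 -> length in UTF-8 bytes
```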
In many languages, quoted strings need a new syntax for quoting non-BMP characters, as the C-style "\uXXXX" syntax explicitly limits itself to 4 hex digits. The following examples illustrate the syntax for the non-BMP character U+1D11E 𝄞 MUSICAL SYMBOL G CLEF (a short Python comparison follows the list):
- "\U0001D11E" (an eight-hex-digit escape, as in C, C++, and Python).[41]
- "\x{1D11E}" (a braced escape, as in Perl).
- "\u{1D11E}" (a braced escape, as in ECMAScript 2015, Ruby, Rust, and Swift).
- "\uD834\uDD1E" (an explicit surrogate pair, as in Java and JSON).