FAQ

Why create a new standard when there is already asm.js?

… especially since pthreads (Mozilla pthreads, Chromium pthreads) and SIMD (simd.js, Chromium SIMD, simd.js in asm.js) are coming to JavaScript.

There are two main benefits WebAssembly provides:

  1. The kind of binary format being considered for WebAssembly can be natively decoded much faster than JavaScript can be parsed (experiments show more than 20× faster). On mobile, large compiled codes can easily take 20–40 seconds just to parse, so native decoding (especially when combined with other techniques like streaming for better-than-gzip compression) is critical to providing a good cold-load user experience.

  2. By avoiding the simultaneous asm.js constraints of AOT-compilability and good performance even on engines without specific asm.js optimizations, a new standard makes it much easier to add the features 🦄 required to reach native levels of performance.

Of course, every new standard introduces new costs (maintenance, attack surface, code size) that must be offset by the benefits. WebAssembly minimizes costs by having a design that allows (but does not require) a browser to implement WebAssembly inside its existing JavaScript engine (thereby reusing the JavaScript engine’s existing compiler backend, ES6 module loading frontend, security sandboxing mechanisms, and other supporting VM components). Thus, in cost, WebAssembly should be comparable to a big new JavaScript feature, not a fundamental extension to the browser model.

Comparing the two, even for engines which already optimize asm.js, the benefits outweigh the costs.

What are WebAssembly’s use cases?

WebAssembly was designed with a variety of use cases in mind.

Can WebAssembly be polyfilled?

We think so. There was an early prototype with demos [1, 2], which showed that decoding a binary WebAssembly-like format into asm.js can be efficient. And as the WebAssembly design has changed, there have been more experiments with polyfilling.

Overall, optimism has been increasing for quick adoption of WebAssembly in browsers, which is great, but it has decreased the motivation to work on a polyfill.

Polyfilling WebAssembly to asm.js is also less urgent because alternatives exist. For example, a reverse polyfill, compiling asm.js to WebAssembly, allows shipping a single build that can run as either asm.js or WebAssembly. It is also possible to build a project into two parallel asm.js and WebAssembly builds by just flipping a switch in Emscripten, which avoids polyfill time on the client entirely. A third option, for non-performance-critical code, is to use a compiled WebAssembly interpreter such as binaryen.js.
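The dual-build pattern above can be sketched as a small feature-detection loader. This is an illustrative sketch only; the `chooseBuild` function and the filenames `app.wasm` and `app.asm.js` are hypothetical placeholders:

```javascript
// Sketch of a dual-build loader: prefer the WebAssembly build when the
// engine supports it, otherwise fall back to the parallel asm.js build.
// The function name and filenames are hypothetical.
function chooseBuild() {
  const hasWasm = typeof WebAssembly === 'object' &&
                  typeof WebAssembly.instantiate === 'function';
  return hasWasm ? 'app.wasm' : 'app.asm.js';
}
```

An application's loader would then fetch whichever build `chooseBuild()` names, so only one of the two artifacts is ever downloaded.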

However, a WebAssembly polyfill is still an interesting idea and should in principle be possible.

Is WebAssembly only for C/C++ programmers?

As explained in the high-level goals, to achieve a Minimum Viable Product, the initial focus is on C/C++.

However, by integrating with JavaScript at the ES6 Module interface, web developers don’t need to write C++ to take advantage of libraries that others have written; reusing a modular C++ library can be as simple as using a module from JavaScript.
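For a taste of what calling into WebAssembly from JavaScript looks like, here is a minimal sketch using the WebAssembly JavaScript API; the byte array is a tiny hand-encoded module exporting a single `add` function:

```javascript
// A tiny, hand-encoded WebAssembly module exporting add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0/1, i32.add
]);
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

In practice the bytes would come from fetching a compiled `.wasm` file rather than an inline array, but the calling convention on the JavaScript side is the same: exports look like ordinary JavaScript functions.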

Beyond the MVP, another high-level goal is to improve support for languages other than C/C++. This includes allowing WebAssembly code to allocate and access garbage-collected (JavaScript, DOM, Web API) objects 🦄. Even before GC support is added to WebAssembly, it is possible to compile a language’s VM to WebAssembly (assuming it’s written in portable C/C++), and this has already been demonstrated (1, 2, 3). However, “compile the VM” strategies increase the size of distributed code, lose browser devtools integration, can have cross-language cycle-collection problems, and miss optimizations that require integration with the browser.

Which compilers can I use to build WebAssembly programs?

WebAssembly initially focuses on C/C++, and a new, clean WebAssembly backend is being developed in upstream clang/LLVM, which can then be used by LLVM-based projects like Emscripten and PNaCl.

As WebAssembly evolves, it will support more languages than C/C++, and we hope that other compilers will support it as well, even for the C/C++ language, for example GCC. The WebAssembly working group found it easier to start with LLVM support because they had more experience with that toolchain from their Emscripten and PNaCl work.

We hope that proprietary compilers also gain WebAssembly support, but we’ll letvendors speak about their own platforms.

The WebAssembly Community Group would be delighted to collaborate with more compiler vendors, take their input into consideration in WebAssembly itself, and work with them on ABI matters.

Will WebAssembly support View Source on the Web?

Yes! WebAssembly defines a text format to be rendered when developers view the source of a WebAssembly module in any developer tool. Also, a specific goal of the text format is to allow developers to write WebAssembly modules by hand for testing, experimenting, optimizing, learning, and teaching purposes. In fact, by dropping all the coercions required by asm.js validation, the WebAssembly text format should be much more natural to read and write than asm.js. Outside the browser, command-line and online tools that convert between text and binary will also be made readily available. Lastly, a scalable form of source maps is also being considered as part of the WebAssembly tooling story.
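As a taste of what View Source might show, an `add` function in the text format could be written along these lines (a sketch; the concrete syntax was still being finalized at the time of writing):

```wat
(module
  ;; add(a: i32, b: i32) -> i32
  (func $add (param i32) (param i32) (result i32)
    local.get 0
    local.get 1
    i32.add)
  (export "add" (func $add)))
```

Note how each operator is explicitly typed (`i32.add`), with none of the `|0` coercions asm.js requires.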

What’s the story for Emscripten users?

Existing Emscripten users will get the option to build their projects to WebAssembly by flipping a flag. Initially, Emscripten’s asm.js output would be converted to WebAssembly, but eventually Emscripten would use WebAssembly throughout the pipeline. This painless transition is enabled by the high-level goal that WebAssembly integrate well with the Web platform (including allowing synchronous calls into and out of JavaScript), which makes WebAssembly compatible with Emscripten’s current asm.js compilation model.

Is WebAssembly trying to replace JavaScript?

No! WebAssembly is designed to be a complement to, not a replacement of, JavaScript. While WebAssembly will, over time, allow many languages to be compiled to the Web, JavaScript has an incredible amount of momentum and will remain the single, privileged (as described above) dynamic language of the Web. Furthermore, it is expected that JavaScript and WebAssembly will be used together in a number of configurations:

  • Whole, compiled C++ apps that leverage JavaScript to glue things together.
  • HTML/CSS/JavaScript UI around a main WebAssembly-controlled center canvas, allowing developers to leverage the power of web frameworks to build accessible, web-native-feeling experiences.
  • Mostly HTML/CSS/JavaScript apps with a few high-performance WebAssembly modules (e.g., graphing, simulation, image/sound/video processing, visualization, animation, compression, etc., examples which we can already see in asm.js today), allowing developers to reuse popular WebAssembly libraries just like JavaScript libraries today.
  • When WebAssembly gains the ability to access garbage-collected objects 🦄, those objects will be shared with JavaScript, and not live in a walled-off world of their own.

Why not just use LLVM bitcode as a binary format?

The LLVM compiler infrastructure has a lot of attractive qualities: it has an existing intermediate representation (LLVM IR) and binary encoding format (bitcode). It has code generation backends targeting many architectures and is actively developed and maintained by a large community. In fact, PNaCl already uses LLVM as a basis for its binary format. However, the goals and requirements that LLVM was designed to meet are subtly mismatched with those of WebAssembly.

WebAssembly has several requirements and goals for its Instruction Set Architecture (ISA) and binary encoding:

  • Portability: The ISA must be the same for every machine architecture.
  • Stability: The ISA and binary encoding must not change over time (or change only in ways that can be kept backward-compatible).
  • Small encoding: The representation of a program should be as small as possible for transmission over the Internet.
  • Fast decoding: The binary format should be fast to decompress and decode for fast startup of programs.
  • Fast compiling: The ISA should be fast to compile (and suitable for either AOT- or JIT-compilation) for fast startup of programs.
  • Minimal nondeterminism: The behavior of programs should be as predictable and deterministic as possible (and should be the same on every architecture, a stronger form of the portability requirement stated above).

LLVM IR is meant to make compiler optimizations easy to implement, and to represent the constructs and semantics required by C, C++, and other languages on a large variety of operating systems and architectures. This means that by default the IR is not portable (the same program has different representations for different architectures) or stable (it changes over time as optimization and language requirements change). It has representations for a huge variety of information that is useful for implementing mid-level compiler optimizations but not useful for code generation (and which represents a large surface area for codegen implementers to deal with). It also has undefined behavior (largely similar to that of C and C++) which makes some classes of optimization feasible or more powerful, but can lead to unpredictable behavior at runtime. LLVM’s binary format (bitcode) was designed for temporary on-disk serialization of the IR for link-time optimization, and not for stability or compressibility (although it does have some features for both of those).

None of these problems are insurmountable. For example, PNaCl defines a small portable subset of the IR with reduced undefined behavior, and a stable version of the bitcode encoding. It also employs several techniques to improve startup performance. However, each customization, workaround, and special solution means less benefit from the common infrastructure. We believe that by taking our experience with LLVM and designing an IR and binary encoding for our goals and requirements, we can do much better than adapting a system designed for other purposes.

Note that this discussion applies to use of LLVM IR as a standardized format. LLVM’s clang frontend and mid-level optimizers can still be used to generate WebAssembly code from C and C++, and will use LLVM IR in their implementation similarly to how PNaCl and Emscripten do today.

Why is there no fast-math mode with relaxed floating point semantics?

Optimizing compilers commonly have fast-math flags which permit the compiler to relax the rules around floating point in order to optimize more aggressively. This can include assuming that NaNs or infinities don’t occur, ignoring the difference between negative zero and positive zero, making algebraic manipulations which change how rounding is performed or when overflow might occur, or replacing operators with approximations that are cheaper to compute.
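A concrete illustration of why such algebraic manipulations change results: floating point addition is not associative, so a compiler that reorders a sum under fast-math can produce a different rounded answer.

```javascript
// Floating point addition is not associative: reassociating a sum, as a
// fast-math compiler may do, changes the rounded result.
const left  = (0.1 + 0.2) + 0.3; // 0.6000000000000001
const right = 0.1 + (0.2 + 0.3); // 0.6
console.log(left === right);     // false
```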

These optimizations effectively introduce nondeterminism; it isn’t possible to determine how the code will behave without knowing the specific choices made by the optimizer. This often isn’t a serious problem in native code scenarios, because all the nondeterminism is resolved by the time native code is produced. Since most hardware doesn’t have floating point nondeterminism, developers have an opportunity to test the generated code, and then count on it behaving consistently for all users thereafter.

WebAssembly implementations run on the user side, so there is no opportunity for developers to test the final behavior of the code. Nondeterminism at this level could cause distributed WebAssembly programs to behave differently in different implementations, or change over time. WebAssembly does have some nondeterminism in cases where the tradeoffs warrant it, but fast-math flags are not believed to be important enough:

  • Many of the important fast-math optimizations happen in the mid-level optimizer of a compiler, before WebAssembly code is emitted. For example, loop vectorization that depends on floating point reassociation can still be done at this level if the user applies the appropriate fast-math flags, so WebAssembly programs can still enjoy these benefits. As another example, compilers can replace floating point division with floating point multiplication by a reciprocal in WebAssembly programs just as they do for other platforms.
  • Mid-level compiler optimizations may also be augmented by implementing them in a JIT library in WebAssembly. This would allow them to perform optimizations that benefit from having information about the target and information about the source program semantics, such as fast-math flags, at the same time. For example, if SIMD types wider than 128-bit are added, it’s expected that there would be feature tests allowing WebAssembly code to determine which SIMD types to use on a given platform.
  • When WebAssembly adds an FMA operator 🦄, folding multiply and add sequences into FMA operators will be possible.
  • WebAssembly doesn’t include its own math functions like sin, cos, exp, pow, and so on. WebAssembly’s strategy for such functions is to allow them to be implemented as library routines in WebAssembly itself (note that x86’s sin and cos instructions are slow and imprecise and are generally avoided these days anyway). Users wishing to use faster and less precise math functions on WebAssembly can simply select a math library implementation which does so.
  • Most of the individual floating point operators that WebAssembly does have already map to individual fast instructions in hardware. Telling add, sub, or mul they don’t have to worry about NaN, for example, doesn’t make them any faster, because NaN is handled quickly and transparently in hardware on all modern platforms.
  • WebAssembly has no floating point traps, status register, dynamic rounding modes, or signalling NaNs, so optimizations that depend on the absence of these features are all safe.
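The reciprocal-multiplication point above is easy to see concretely: multiplying by a reciprocal is exact for powers of two, but for other divisors the reciprocal is itself rounded, so the rewrite can change the last bit of the result; that is why compilers normally gate it behind a fast-math flag.

```javascript
// Division by a power of two equals multiplication by its (exactly
// representable) reciprocal, so that rewrite is always safe:
console.log(6 / 4 === 6 * 0.25);    // true
// For other divisors the reciprocal is already rounded, so the rewrite
// can perturb the last bit of the result:
console.log(5 / 3 === 5 * (1 / 3)); // false
```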

What about mmap?

The mmap syscall has many useful features. While these are all packed into one overloaded syscall in POSIX, WebAssembly unpacks this functionality into multiple operators:

  • the MVP starts with the ability to grow linear memory via a memory.grow operator;
  • proposed future features 🦄 would allow the application to change the protection and mappings for pages in the contiguous range 0 to memory.size.
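The grow-only half of this is already observable through the JavaScript API, which gives a feel for the semantics (growth happens in whole 64 KiB pages):

```javascript
// Linear memory starts at 1 page (64 KiB) and grows by whole pages.
const memory = new WebAssembly.Memory({ initial: 1 });
console.log(memory.buffer.byteLength); // 65536
const prevPages = memory.grow(2);      // returns the previous size in pages
console.log(prevPages);                // 1
console.log(memory.buffer.byteLength); // 196608 (3 pages)
```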

A significant feature of mmap that is missing from the above list is the ability to allocate disjoint virtual address ranges. The reasoning for this omission is:

  • The above functionality is sufficient to allow a user-level libc to implement full, compatible mmap with what appears to be noncontiguous memory allocation (but under the hood is just coordinated use of memory.grow and mprotect/map_file/map_shmem/madvise).
  • The benefit of allowing noncontiguous virtual address allocation would be if it allowed the engine to interleave a WebAssembly module’s linear memory with other memory allocations in the same process (in order to mitigate virtual address space fragmentation). There are two problems with this:

    • This interleaving with unrelated allocations does not currently admit efficient security checks to prevent one module from corrupting data outside its heap (see discussion in #285).
    • This interleaving would require making allocation nondeterministic, and nondeterminism is something that WebAssembly generally tries to avoid.
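The first bullet above, a user-level mmap layered over contiguous linear memory, can be sketched in JavaScript terms. This is a hedged sketch: the helper name `mmapAnonymous` is hypothetical, and it handles only anonymous mappings with no protection flags or file backing.

```javascript
// Hypothetical sketch: a libc-style anonymous "mmap" implemented on top of
// a single contiguous WebAssembly.Memory, by bump-allocating whole pages.
const PAGE_SIZE = 65536;
const memory = new WebAssembly.Memory({ initial: 1 });

function mmapAnonymous(byteLength) {
  const pages = Math.ceil(byteLength / PAGE_SIZE);
  const addr = memory.buffer.byteLength; // "mapping" starts at the old end
  memory.grow(pages);                    // extend the contiguous linear memory
  return addr;  // callers see seemingly disjoint ranges; it's one big region
}

const a = mmapAnonymous(100);   // one page carved out at offset 65536
const b = mmapAnonymous(70000); // two pages carved out at offset 131072
```

A real libc would additionally track freed ranges for reuse and emulate mprotect over them, but the core trick is the same: every "mapping" is an offset into the one contiguous linear memory.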

Why have wasm32 and wasm64, instead of just an abstract size_t?

The amount of linear memory needed to hold an abstract size_t would then also need to be determined by an abstraction, and then partitioning the linear memory address space into segments for different purposes would be more complex. The size of each segment would depend on how many size_t-sized objects are stored in it. This is theoretically doable, but it would add complexity and there would be more work to do at application startup time.

Also, allowing applications to statically know the pointer size can allow them to be optimized more aggressively. Optimizers can better fold and simplify integer expressions when they have full knowledge of the bitwidth. And knowing memory sizes and layouts for various types allows one to know how many trailing zeros there are in various pointer types.

Also, C and C++ deeply conflict with the concept of an abstract size_t. Constructs like sizeof are required to be fully evaluated in the front-end of the compiler because they can participate in type checking. And even before that, it’s common to have predefined macros which indicate pointer sizes, allowing code to be specialized for pointer sizes at the very earliest stages of compilation. Once specializations are made, information is lost, scuttling attempts to introduce abstractions.

And finally, it’s still possible to add an abstract size_t in the future if the need arises and practicalities permit it.

Why have wasm32 and wasm64, instead of just using 8 bytes for storing pointers?

A great number of applications don’t ever need as much as 4 GiB of memory. Forcing all these applications to use 8 bytes for every pointer they store would significantly increase the amount of memory they require, and decrease their effective utilization of important hardware resources such as cache and memory bandwidth.
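The memory cost is easy to quantify: the same number of pointer slots takes exactly twice the space at 8 bytes each. A typed-array analogy makes this concrete:

```javascript
// Storing one million pointers: wasm32-style 4-byte slots vs. wasm64-style
// 8-byte slots. The 64-bit layout doubles the memory (and cache) footprint.
const N = 1_000_000;
const ptrs32 = new Uint32Array(N);
const ptrs64 = new BigUint64Array(N);
console.log(ptrs32.byteLength); // 4000000
console.log(ptrs64.byteLength); // 8000000
```

For pointer-heavy data structures (linked lists, trees, object graphs), this doubling applies to a large fraction of the application's working set.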

The motivations and performance effects here should be essentially the same as those that motivated the development of the x32 ABI for Linux.

Even Knuth found it worthwhile to give us his opinion on this issue at some point: a flame about 64-bit pointers.

Will I be able to access proprietary platform APIs (e.g. Android / iOS)?

Yes, but it will depend on the WebAssembly embedder. Inside a browser, you’ll get access to the same HTML5 and other browser-specific APIs which are also accessible through regular JavaScript. However, if a wasm VM is provided as an “app execution platform” by a specific vendor, it might provide access to proprietary platform-specific APIs of, e.g., Android or iOS.
