Before the widespread adoption of interpreters, the execution of computer programs often relied on compilers, which translate source code into machine code. Early runtime environments for Lisp and BASIC could parse source code directly. Later, runtime environments were developed for languages (such as Perl, Raku, Python, MATLAB, and Ruby) which translated source code into an intermediate format before executing it, to enhance runtime performance.
Code that runs in an interpreter can be run on any platform that has a compatible interpreter. The same code can be distributed to any such platform, instead of an executable having to be built for each platform. Although each programming language is usually associated with a particular runtime environment, a language can be used in different environments. Interpreters have been constructed for languages traditionally associated with compilation, such as ALGOL, Fortran, COBOL, C and C++.
In the early days of computing, compilers were more commonly found and used than interpreters, because hardware at that time could not support both the interpreter and the interpreted code, and because the typical batch environment of the time limited the advantages of interpretation.[1]
Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed.[2] The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.[3] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".
The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real-time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.[citation needed]
While some types of sandboxes rely on operating system protections, an interpreter (virtual machine) can offer additional control such as blocking code that violates security rules.[citation needed]
Interpretive overhead is the runtime cost of executing code via an interpreter instead of as native (compiled) code. Interpreting is slower because the interpreter executes multiple machine-code instructions for the equivalent functionality in the native code. In particular, access to variables is slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run time rather than at compile time.[4] But faster development (due to factors such as a shorter edit-build-run cycle) can outweigh the value of faster execution speed, especially when prototyping and testing, when the edit-build-run cycle is exercised frequently.[4][5]
An interpreter may generate an intermediate representation (IR) of the program from source code in order to achieve goals such as fast runtime performance. A compiler may also generate an IR, but the compiler generates machine code for later execution whereas the interpreter prepares to execute the program. These differing goals lead to differing IR design. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table.[4] A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.[citation needed]
Since the early stages of interpreting and compiling are similar, an interpreter might use the same lexical analyzer and parser as a compiler and then interpret the resulting abstract syntax tree.
#include <memory>
#include <stdexcept>
#include <variant>

using std::runtime_error;
using std::unique_ptr;
using std::variant;

// data types for the abstract syntax tree
enum class Kind : char { VAR, CONST, SUM, DIFF, MULT, DIV, PLUS, MINUS, NOT };

class Node;  // forward declaration

class Variable        { public: int* memory; };
class Constant        { public: int value; };
class UnaryOperation  { public: unique_ptr<Node> right; };
class BinaryOperation { public: unique_ptr<Node> left; unique_ptr<Node> right; };

using Expression = variant<Variable, Constant, BinaryOperation, UnaryOperation>;

class Node { public: Kind kind; Expression e; };

// interpreter procedure: recursively evaluate the tree rooted at n
[[nodiscard]] int executeIntExpression(const Node& n) {
    switch (n.kind) {
        case Kind::VAR:
            return *std::get<Variable>(n.e).memory;  // read the variable's storage
        case Kind::CONST:
            return std::get<Constant>(n.e).value;
        case Kind::SUM:
        case Kind::DIFF:
        case Kind::MULT:
        case Kind::DIV: {
            const BinaryOperation& bin = std::get<BinaryOperation>(n.e);
            const int leftValue = executeIntExpression(*bin.left);
            const int rightValue = executeIntExpression(*bin.right);
            switch (n.kind) {
                case Kind::SUM:  return leftValue + rightValue;
                case Kind::DIFF: return leftValue - rightValue;
                case Kind::MULT: return leftValue * rightValue;
                default:  // Kind::DIV
                    if (rightValue == 0) {
                        throw runtime_error("Division by zero");
                    }
                    return leftValue / rightValue;
            }
        }
        case Kind::PLUS:
        case Kind::MINUS:
        case Kind::NOT: {
            const UnaryOperation& un = std::get<UnaryOperation>(n.e);
            const int rightValue = executeIntExpression(*un.right);
            switch (n.kind) {
                case Kind::PLUS:  return +rightValue;
                case Kind::MINUS: return -rightValue;
                default:          return !rightValue;  // Kind::NOT
            }
        }
    }
    throw runtime_error("Unknown node kind");
}

// example usage: evaluate x * 7 where x = 6
int main() {
    int x = 6;
    BinaryOperation mult;
    mult.left = std::make_unique<Node>(Node{Kind::VAR, Variable{&x}});
    mult.right = std::make_unique<Node>(Node{Kind::CONST, Constant{7}});
    const Node root{Kind::MULT, Expression{std::move(mult)}};
    return executeIntExpression(root) == 42 ? 0 : 1;
}
Just-in-time (JIT) compilation is the process of converting an intermediate format (e.g. bytecode) to native code at runtime. As this results in native code execution, it is a method of avoiding the runtime cost of using an interpreter while retaining some of the benefits that led to the development of interpreters.
Some interpreters process bytecode, which is an intermediate format of logic compiled from a high-level language. For example, Emacs Lisp is compiled to bytecode which is interpreted by an interpreter. One might say that this compiled code is machine code for a virtual machine – implemented by the interpreter. Such an interpreter is sometimes called a compreter.[6][7]
Threaded code interpreter
A threaded code interpreter is similar to a bytecode interpreter but, instead of bytes, uses pointers. Each instruction is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. One example of threaded code is the Forth code used in Open Firmware systems. The source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.[citation needed]
Abstract syntax tree interpreter
An abstract syntax tree interpreter transforms source code into an abstract syntax tree (AST), then interprets it directly, or uses it to generate native code via JIT compilation.[8] In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, an AST keeps the global program structure and the relations between statements (which are lost in a bytecode representation), and when compressed provides a more compact representation.[9] Thus, using an AST has been proposed as a better intermediate format than bytecode. However, for interpreters, an AST causes more overhead than bytecode, because syntax-related nodes perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree itself has a cost.[10]
Template interpreter
Rather than implementing the execution of code by virtue of a large switch statement containing every possible bytecode, while operating on a software stack or a tree walk, a template interpreter maintains a large array of bytecodes (or any efficient intermediate representation) mapped directly to corresponding native machine instructions that can be executed on the host hardware, as key-value pairs (or, in more efficient designs, direct addresses of the native instructions),[11][12] known as a "template". When the particular code segment is executed, the interpreter simply loads or jumps to the opcode's mapping in the template and directly runs it on the hardware.[13][14] Due to this design, the template interpreter resembles a JIT compiler much more strongly than a traditional interpreter; however, it is technically not a JIT, because it merely translates code from the language into native calls one opcode at a time rather than creating optimized sequences of CPU-executable instructions from an entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, since the interpreter has to support translation to multiple different architectures instead of a platform-independent virtual machine or stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine,[11] and the Ignition interpreter in the Google V8 JavaScript execution engine.
Microcode
Microcode provides an abstraction layer as a hardware interpreter that implements machine code in a lower-level machine code.[15] It separates the high-level machine instructions from the underlyingelectronics so that the high-level instructions can be designed and altered more freely. It also facilitates providing complex multi-step instructions, while reducing the complexity of computer circuits.
^Bennett, J. M.; Prinz, D. G.; Woods, M. L. (1952). "Interpretative sub-routines".Proceedings of the ACM National Conference, Toronto.
^According to Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..."
^Kühnel, Claus (1987) [1986]. "4. Kleincomputer - Eigenschaften und Möglichkeiten" [4. Microcomputer - Properties and possibilities]. In Erlekampf, Rainer; Mönk, Hans-Joachim (eds.). Mikroelektronik in der Amateurpraxis [Micro-electronics for the practical amateur] (in German) (3 ed.). Berlin: Militärverlag der Deutschen Demokratischen Republik, Leipzig. p. 222. ISBN 3-327-00357-2.
^Heyne, R. (1984). "Basic-Compreter für U880" [BASIC compreter for U880 (Z80)]. radio-fernsehn-elektronik (in German). 1984 (3): 150–152.