During the 2018 Microsoft Hack Week, members of the Mono team explored the idea of replacing Mono's code generation engine written in C with a code generation engine written in C#.
In this blog post we describe our motivation, the interface between the native Mono runtime and the managed compiler, and how we implemented the new managed compiler in C#.
Motivation
Mono's runtime and JIT compiler are entirely written in C, a highly portable language that has served the project well. Yet, we feel jealous of our own users, who get to write code in a high-level language and enjoy its safety, its luxury and its benefits, while the Mono runtime continues to be written in C.
We decided to explore whether we could make Mono's compilation engine pluggable and then plug in a code generator written entirely in C#. If this worked, we could prototype more easily, write new optimizations and make it simpler for developers to safely try changes in the JIT.
This idea has been explored by research projects like the JikesRVM, Maxine and Graal for Java. In the .NET world, the Unity team wrote an IL to C++ compiler called il2cpp. They also recently experimented with a managed JIT.
In this blog post, we discuss the prototype that we built. The code mentioned in this blog post can be found here: https://github.com/lambdageek/mono/tree/mjit/mcs/class/Mono.Compiler
Interfacing with the Mono Runtime
The Mono runtime provides various services: just-in-time compilation, assembly loading, an IO interface, thread management and debugging capabilities. The code generation engine in Mono is called mini and is used both for static compilation and just-in-time compilation.
Mono’s code generation has a number of dimensions:
- Code can be either interpreted, or compiled to native code
- When compiling to native code, this can be done just-in-time, or it can be batch compiled, also known as ahead-of-time compilation.
- Mono today has two code generators: the light and fast mini JIT engine, and the heavy-duty engine based on the LLVM optimizing compiler. These two are not completely independent of each other; Mono's LLVM support reuses many parts of the mini engine.
This project started with a desire to make this division even clearer, and to swap out the native code generation engine in mini for one that could be completely implemented in a .NET language. In our prototype we used C#, but other languages like F# or IronPython could be used as well.
To move the JIT to the managed world, we introduced the ICompiler interface, which must be implemented by your compilation engine; it is invoked on demand when a specific method needs to be compiled.
This is the interface that you must implement:
```csharp
interface ICompiler {
    CompilationResult CompileMethod (IRuntimeInformation runtimeInfo,
                                     MethodInfo methodInfo,
                                     CompilationFlags flags,
                                     out NativeCodeHandle nativeCode);

    string Name { get; }
}
```
CompileMethod () receives an IRuntimeInformation reference, which provides services for the compiler, as well as a MethodInfo that represents the method to be compiled. It is expected to set the nativeCode parameter to the generated code information.
The NativeCodeHandle merely represents the address of the generated code and its length.
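To make the contract concrete, here is a minimal, hypothetical skeleton of an ICompiler implementation. Everything beyond the interface members shown above (for example the CompilationResult value used and how the NativeCodeHandle is produced) is an assumption for illustration, not the actual Mono.Compiler API:

```csharp
// Hypothetical skeleton only; names not shown in the interface above are assumed.
class SkeletonCompiler : ICompiler {
    public string Name {
        get { return "skeleton"; }
    }

    public CompilationResult CompileMethod (IRuntimeInformation runtimeInfo,
                                            MethodInfo methodInfo,
                                            CompilationFlags flags,
                                            out NativeCodeHandle nativeCode)
    {
        // A real compiler would walk the IL of methodInfo, query runtimeInfo for
        // class, field and array-layout information, emit machine code into an
        // executable buffer and hand it back through nativeCode.
        nativeCode = default (NativeCodeHandle);
        return CompilationResult.InternalError; // assumed failure value
    }
}
```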
This is the IRuntimeInformation definition, which shows the methods available to CompileMethod to perform its work:
```csharp
interface IRuntimeInformation {
    InstalledRuntimeCode InstallCompilationResult (CompilationResult result,
                                                   MethodInfo methodInfo,
                                                   NativeCodeHandle codeHandle);

    object ExecuteInstalledMethod (InstalledRuntimeCode irc, params object[] args);

    ClassInfo GetClassInfoFor (string className);

    MethodInfo GetMethodInfoFor (ClassInfo classInfo, string methodName);

    FieldInfo GetFieldInfoForToken (MethodInfo mi, int token);

    IntPtr ComputeFieldAddress (FieldInfo fi);

    /// For a given array type, get the offset of the vector relative to the base address.
    uint GetArrayBaseOffset (ClrType type);
}
```
We currently have one implementation of ICompiler; we call it the "BigStep" compiler. When wired up, this is what the process looks like when we compile a method with it:

The mini runtime can call into managed code via CompileMethod upon a compilation request. For the code generator to do its work, it needs to obtain some information about the current environment. This information is surfaced by the IRuntimeInformation interface. Once the compilation is done, it will return a blob of native instructions to the runtime. The returned code is then "installed" in your application.
Now there is a trick question: Who is going to compile the compiler?
The compiler written in C# is initially executed with one of the built-in engines (either the interpreter, or the JIT engine).
The BigStep Compiler
Our first ICompiler implementation is called the BigStep compiler.
This compiler was designed and implemented by a developer (Ming Zhou) not affiliated with the Mono Runtime Team. It is a perfect showcase of how the work we presented through this project can quickly enable a third party to build their own compiler without much hassle interacting with the runtime internals.
The BigStep compiler implements an IL to LLVM compiler. This was convenient to build the proof of concept and ensure that the design was sound, while delegating all the hard compilation work to the LLVM compiler engine.
A lot can be said when it comes to the design and architecture of a compiler, but our main point here is to emphasize how easy it can be, with what we have just introduced to the Mono runtime, to bridge IL code with a customized backend.
The IL code is streamed into the compiler interface through an iterator, with information such as op-code, index and parameters immediately available to the user. See below for more details about the prototype.
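As a rough sketch, consuming that iterator in a backend might look like the following; the iterator and op-code type names here are placeholders, not the actual Mono.Compiler types:

```csharp
// Illustrative only: opcode and helper names are placeholders for the real iterator API.
foreach (var op in ilIterator) {
    switch (op.Opcode) {
    case Opcode.LdcI4:
        // push a 32-bit constant as a new temporary
        PushTemp (EmitConstant (op.IntValue));
        break;
    case Opcode.Add: {
        // pop two temporaries, emit the addition, push the result temporary
        var rhs = PopTemp ();
        var lhs = PopTemp ();
        PushTemp (EmitAdd (lhs, rhs));
        break;
    }
    // ... remaining op-codes
    }
}
```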
Hosted Compiler
Another beauty of moving parts of the runtime to the managed side is that we can test the JIT compiler without recompiling the native runtime, so we are essentially developing a normal C# application.
InstallCompilationResult () can be used to register a compiled method with the runtime, and ExecuteInstalledMethod () can be used to invoke a method with the provided arguments.
Here is an example of how this is used:
```csharp
public static int AddMethod (int a, int b) {
    return a + b;
}

[Test]
public void TestAddMethod ()
{
    ClassInfo ci = runtimeInfo.GetClassInfoFor (typeof (ICompilerTests).AssemblyQualifiedName);
    MethodInfo mi = runtimeInfo.GetMethodInfoFor (ci, "AddMethod");
    NativeCodeHandle nativeCode;

    CompilationResult result = compiler.CompileMethod (runtimeInfo, mi, CompilationFlags.None, out nativeCode);
    InstalledRuntimeCode irc = runtimeInfo.InstallCompilationResult (result, mi, nativeCode);

    int addition = (int) runtimeInfo.ExecuteInstalledMethod (irc, 1, 2);
    Assert.AreEqual (addition, 3);
}
```
We can ask the host VM for the actual result, assuming it’s our gold standard:
```csharp
int mjitResult = (int) runtimeInfo.ExecuteInstalledMethod (irc, 666, 1337);
int hostedResult = AddMethod (666, 1337);
Assert.AreEqual (mjitResult, hostedResult);
```
This eases development of a compiler tremendously.
We don't need to eat our own dog food during debugging, but when we feel ready we can flip a switch and use the compiler as our system compiler. This is actually what happens if you run make -C mcs/class/Mono.Compiler run-test in the mjit branch: we use this API to test the managed compiler while running on the regular Mini JIT.
Native to Managed to Native: Wrapping Mini JIT into ICompiler
As part of this effort, we also wrapped Mono's JIT in the ICompiler interface.

MiniCompiler calls back into native code and invokes the regular Mini JIT. It works surprisingly well, but there is a caveat: once back in the native world, the Mini JIT doesn't need to go through IRuntimeInformation and just uses its old ways to retrieve runtime details. However, we can now turn this into an incremental process: we can identify those parts, add them to IRuntimeInformation and change the Mini JIT so that it uses the new API.
Conclusion
We strongly believe in the long-term value of this project. A code base in managed code is more approachable for developers and thus easier to extend and maintain. Even if we never see this work upstream, it helped us to better understand the boundary between the runtime and the JIT compiler, and who knows, it might help us to integrate RyuJIT into Mono one day 😉
We should also note that IRuntimeInformation can be implemented by any other .NET VM: Hello CoreCLR folks 👋
If you are curious about this project, ping us on our Gitter channel.
Appendix: Converting Stack-Based OpCodes into Register Operations
Since the target language was LLVM IR, we had to build a translator that converted the stack-based operations from IL into the register-based operations of LLVM.
Since many potential targets are register based, we decided to design a framework that makes the part where we interpret the IL logic reusable. To this end, we implemented an engine that turns the stack-based operations into register operations.
Consider the ADD operation in IL. This operation pops two operands from the stack, performs the addition and pushes the result back onto the stack. This is documented in ECMA-335 as follows:
Stack Transition: ..., value1, value2 -> ..., result
The actual kind of addition that is performed depends on the types of the values on the stack. If the values are integers, the addition is an integer addition. If the values are floating point values, then the operation is a floating point addition.
To re-interpret this in a register-based semantics, we treat each pushed frame in the stack as a different temporary value. This means that if a frame is popped out and a new one comes in, although it has the same stack depth as the previous one, it's a new temporary value.
Each temporary value is assigned a unique name. An IL instruction can then be unambiguously presented in a form that uses temporary names instead of stack changes. For example, the ADD operation becomes:
```
Temp3 := ADD Temp1 Temp2
```
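To make the renaming concrete, a small IL sequence such as ldarg.0; ldarg.1; add; ret would end up looking roughly like this in the temporary-name form (illustrative notation, following the example above):

```
Temp1 := LDARG 0
Temp2 := LDARG 1
Temp3 := ADD Temp1 Temp2
         RET Temp3
```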
Other than coming from the stack, there are other sources of data during evaluation: local variables, arguments, constants and instruction offsets (used for branching). These sources are typed differently from the stack temporaries, so that the downstream processor (discussed shortly) can properly map them into their context.
A third problem that is likely common among register-based target languages is the jump target of branching operations. An IL branching operation has an implicit target should the branch not be taken: the next instruction. But branching operations in LLVM IR must explicitly declare the targets for both the taken and not-taken paths. To make this possible, the engine performs a pre-pass before the actual execution, during which it gathers all the explicit and implicit targets. In the actual execution, it emits branching instructions with both targets.
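For instance, with the LLVMSharp C-API bindings (mentioned at the end of this post), emitting such a conditional branch could look roughly like this; the block names and surrounding variables are illustrative, not the actual BigStep code:

```csharp
// Sketch only: both the taken and the fall-through targets must be named explicitly.
LLVMBasicBlockRef taken       = LLVM.AppendBasicBlock (function, "IL_0010");
LLVMBasicBlockRef fallThrough = LLVM.AppendBasicBlock (function, "IL_0005");
LLVM.BuildCondBr (builder, condition, taken, fallThrough);
```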
As we mentioned earlier, the execution engine is a common layer that merely translates the instructions into a more generic form. It then sends each instruction to IOperationProcessor, an interface that performs the actual translation. Compared to the instructions received from ICompiler, the presentation here, OperationInfo, is much more consumable: in addition to the op-code, it has an array of the input operands and a result operand:
```csharp
public class OperationInfo
{
    ... ...
    internal IOperand[] Operands { get; set; }
    internal TempOperand Result { get; set; }
    ... ...
}
```
There are several types of operands: ArgumentOperand, LocalOperand, ConstOperand, TempOperand, BranchTargetOperand, etc. Note that the result, if it exists, is always a TempOperand. The most important property on IOperand is its Name, which unambiguously defines the source of data in the IL runtime. If an operand with the same name comes in another operation, it unquestionably tells us that the very same data address is targeted again. It's paramount for the processor to accurately map each name to its own storage.
The processor handles each operand according to its type. For example, if it's an argument operand, we might consider retrieving the value from the corresponding argument. An x86 processor may map this to a register. In the case of LLVM, we simply fetch it from a named value that is pre-allocated at the beginning of method construction. The resolution strategy is similar for other operands:
- LocalOperand: fetch the value from the pre-allocated address
- ConstOperand: use the const value carried by the operand
- BranchTargetOperand: use the index carried by the operand
Since a temp value uniquely represents an expression stack frame from the CLR runtime, it will be mapped to a register. Luckily for us, LLVM allows an infinite number of registers, so we simply name a new one for each different temp operand. If a temp operand is reused, however, the very same register must be reused as well.
We use the LLVMSharp binding to communicate with LLVM.