LLVM 22.x tile for building compilers, language runtimes, and out-of-tree tooling
Reference: ORCv2 design | BuildingAJIT tutorial
ORC JIT v2 is the only supported JIT in LLVM 22. MCJIT is removed — use LLJIT or LLLazyJIT for new projects.
```
ExecutionSession — the root; owns all JITDylibs; manages threading
├── JITDylib — a dynamic library of JIT'd symbols; like a shared object
│   ├── IRCompileLayer — compiles LLVM IR → object code
│   ├── ObjectLayer — links object files; resolves relocations
│   └── DefinitionGenerator — searches external libs for missing symbols
├── MaterializationUnit — lazily or eagerly provides symbols
└── SymbolStringPool — interned symbol names
```

Two ready-made JIT types:
| Type | Use when |
|---|---|
| LLJIT | Eager compilation — compile all IR up front when added |
| LLLazyJIT | Lazy compilation — compile a function only on first call |
```cpp
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/Support/InitLLVM.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;
using namespace llvm::orc;

int main(int argc, char **argv) {
  InitLLVM X(argc, argv);

  // Initialize the native target (must be called before creating the JIT)
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();
  InitializeNativeTargetAsmParser();

  // Create an LLJIT instance
  auto JIT = LLJITBuilder().create();
  if (!JIT) {
    errs() << toString(JIT.takeError()) << "\n";
    return 1;
  }

  // Build the module on the context that the ThreadSafeModule will own
  auto Ctx = std::make_unique<LLVMContext>();
  auto M = buildMyModule(*Ctx); // returns std::unique_ptr<Module>

  // Wrap in ThreadSafeModule (required by ORC — owns context + module)
  ThreadSafeModule TSM(std::move(M), std::move(Ctx));
  if (auto Err = (*JIT)->addIRModule(std::move(TSM))) {
    errs() << toString(std::move(Err)) << "\n";
    return 1;
  }

  // Look up and call a function
  auto Sym = (*JIT)->lookup("my_function");
  if (!Sym) {
    errs() << toString(Sym.takeError()) << "\n";
    return 1;
  }

  // Cast to a function pointer and call
  auto *FnPtr = Sym->toPtr<int(int)>();
  int Result = FnPtr(42);
  return 0;
}
```

```cpp
// LLLazyJIT is declared in LLJIT.h — there is no separate LLLazyJIT.h in LLVM 22
#include "llvm/ExecutionEngine/Orc/LLJIT.h"

auto LazyJIT = LLLazyJITBuilder().create();
if (!LazyJIT) { /* handle error */ }

// Add a module — functions are compiled only on first call
ThreadSafeModule TSM(std::move(M), std::move(Ctx)); // Ctx: the context M was built on
if (auto Err = (*LazyJIT)->addLazyIRModule(std::move(TSM))) { /* handle */ }

// Lookup returns a lazy stub; the function body is compiled on the first call
auto Sym = (*LazyJIT)->lookup("expensive_function");
```

ORC is thread-safe. ThreadSafeModule wraps a Module + LLVMContext with a mutex so multiple threads can compile different modules concurrently.
```cpp
// Option A: dedicated context per module (recommended for concurrency)
auto TSM = ThreadSafeModule(
    std::move(M),
    std::move(Ctx)); // takes ownership of the context M was created on

// Option B: shared context (simpler, but serializes compilation)
ThreadSafeContext SharedCtx(std::make_unique<LLVMContext>());
auto TSM2 = ThreadSafeModule(std::move(M2), SharedCtx); // M2 was built on SharedCtx's context

// Operate on the module with the context lock held:
TSM2.withModuleDo([](Module &M) {
  // safe to access M here
});
```

```cpp
ExecutionSession &ES = (*JIT)->getExecutionSession();

// Get the main JITDylib (created by LLJIT automatically)
JITDylib &MainJD = (*JIT)->getMainJITDylib();

// Create a new JITDylib (like loading a shared library)
JITDylib &LibJD = ES.createBareJITDylib("mylib");

// Add a module to a specific dylib (addIRModule returns llvm::Error)
if (auto Err = (*JIT)->addIRModule(LibJD, std::move(TSM))) { /* handle */ }

// Make lookups in MainJD also search LibJD
MainJD.addToLinkOrder(LibJD);
```

By default, JIT'd code cannot call printf, malloc, etc. Add a generator:
```cpp
#include "llvm/ExecutionEngine/Orc/DynamicLibrarySearchGenerator.h"

// Expose all symbols from the current process
auto &MainJD = (*JIT)->getMainJITDylib();
MainJD.addGenerator(
    cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess(
        (*JIT)->getDataLayout().getGlobalPrefix())));
// Now JIT'd code can call malloc, printf, etc.
```

To expose only a specific shared library:

```cpp
MainJD.addGenerator(
    cantFail(DynamicLibrarySearchGenerator::Load(
        "/path/to/myruntime.so",
        (*JIT)->getDataLayout().getGlobalPrefix())));
```

Inject a host C++ function as a JIT-visible symbol:
```cpp
#include "llvm/ExecutionEngine/Orc/AbsoluteSymbols.h"

// Expose a host function to JIT'd code
int myRuntimeFunc(int x) { return x * 2; }

auto &ES = (*JIT)->getExecutionSession();
auto &MainJD = (*JIT)->getMainJITDylib();
cantFail(MainJD.define(absoluteSymbols({
    {ES.intern("my_runtime_func"),
     {ExecutorAddr::fromPtr(&myRuntimeFunc),
      JITSymbolFlags::Exported | JITSymbolFlags::Callable}}})));
```

JIT'd IR can then declare the symbol (`declare i32 @my_runtime_func(i32)`) and call it like any external function.

```cpp
#include "llvm/Passes/PassBuilder.h"

// Wrap LLJIT's IR transform layer with an optimization pipeline
(*JIT)->getIRTransformLayer().setTransform(
    [](ThreadSafeModule TSM, MaterializationResponsibility &R)
        -> Expected<ThreadSafeModule> {
      TSM.withModuleDo([](Module &M) {
        PassBuilder PB;
        LoopAnalysisManager LAM;
        FunctionAnalysisManager FAM;
        CGSCCAnalysisManager CGAM;
        ModuleAnalysisManager MAM;
        PB.registerModuleAnalyses(MAM);
        PB.registerCGSCCAnalyses(CGAM);
        PB.registerFunctionAnalyses(FAM);
        PB.registerLoopAnalyses(LAM);
        PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);
        ModulePassManager MPM =
            PB.buildPerModuleDefaultPipeline(OptimizationLevel::O2);
        MPM.run(M, MAM);
      });
      return std::move(TSM);
    });
```

ORC uses `llvm::Expected<T>` and `llvm::Error` everywhere:
```cpp
// Helper to exit on error (for simple tools)
template <typename T>
T exitOnErr(Expected<T> E) {
  if (!E) {
    errs() << toString(E.takeError()) << "\n";
    exit(1);
  }
  return std::move(*E);
}

auto JIT = exitOnErr(LLJITBuilder().create());
auto Sym = exitOnErr(JIT->lookup("foo"));
auto *Fn = Sym.toPtr<void()>();

// LLVM ships an equivalent helper in Support (llvm/Support/Error.h):
llvm::ExitOnError ExitOnErr;
auto JIT2 = ExitOnErr(LLJITBuilder().create());
```

Link the required LLVM components in CMake:

```cmake
llvm_map_components_to_libnames(LLVM_LIBS
  Core Support OrcJIT ExecutionEngine
  X86CodeGen X86AsmParser X86Desc X86Info # native target
)
```

Common pitfalls:

- Don't use MCJIT (`llvm/ExecutionEngine/MCJIT.h`) — it is deprecated; use ORC v2.
- Don't share an `LLVMContext` across threads without `ThreadSafeContext` — data races will corrupt IR.
- Don't call `InitializeNativeTarget()` after constructing a JIT — call it first in `main()`.
- Don't forget `InitializeNativeTargetAsmPrinter()` — without it, the JIT can't emit code.
- Don't use JIT'd symbol addresses after their `JITDylib` or `ExecutionSession` is destroyed — they become dangling.
- Add a `DynamicLibrarySearchGenerator` if JIT'd code calls any C stdlib functions.
- Use `ExitOnError` or properly handle `llvm::Error` — ORC never silently fails.
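The `llvm_map_components_to_libnames` fragment above usually sits inside a fuller `CMakeLists.txt`. A minimal out-of-tree sketch, following the pattern from LLVM's own CMake embedding guide; the project and target names (`MyJITTool`, `myjit`) are placeholders:

```cmake
# Sketch: minimal out-of-tree CMakeLists.txt for an ORC-based tool.
cmake_minimum_required(VERSION 3.20)
project(MyJITTool CXX)

# Locate an installed LLVM (set LLVM_DIR if it is not on the default path)
find_package(LLVM REQUIRED CONFIG)
message(STATUS "Found LLVM ${LLVM_PACKAGE_VERSION}")

include_directories(${LLVM_INCLUDE_DIRS})
separate_arguments(LLVM_DEFINITIONS_LIST NATIVE_COMMAND ${LLVM_DEFINITIONS})
add_definitions(${LLVM_DEFINITIONS_LIST})

add_executable(myjit main.cpp)

# Map abstract component names to the concrete library names for this build
llvm_map_components_to_libnames(LLVM_LIBS
  Core Support OrcJIT ExecutionEngine
  ${LLVM_TARGETS_TO_BUILD}  # or list the native-target components explicitly
)
target_link_libraries(myjit PRIVATE ${LLVM_LIBS})
```

Using `${LLVM_TARGETS_TO_BUILD}` pulls in every target the LLVM install was built with; listing only the native-target components (as in the fragment above) keeps link time and binary size down.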