At Dria, we're building a distributed, crowdsourced hyperscaler: a movement led by everyday people that unlocks faster, more affordable inference for everyone.
Dria powers scalable, high-performance inference across diverse CPU and GPU platforms. Our mission is to deliver accessible, cutting-edge performance anytime, anywhere.
We're developing an inference engine optimized for heterogeneous devices, along with an open-source, crowdsourced AI inference SDK tailored for distributed workloads.
Our research focuses on delivering high-quality AI for 8 billion unique lives, with an emphasis on compilers, sharding, peer-to-peer networks, CPU/GPU inference, and data generation.
About the Job:
We are seeking a highly skilled Compiler Engineer to join our team and contribute to the development and optimization of cutting-edge compiler technologies. This role involves working on compiler frameworks, optimizing performance, and enabling efficient execution of machine learning workloads. If you have a deep understanding of compiler internals, low-level programming, and system optimization, we'd love to hear from you!
Key Responsibilities:
Design, develop, and optimize compiler components, including front-end parsing, intermediate representations, and code generation.
Work with compiler frameworks (LLVM, Clang, GCC, MLIR, TVM, ONNX) to improve code efficiency and execution performance.
Implement performance optimizations such as parallelization, vectorization, and memory management techniques.
Collaborate with ML engineers to enhance inference engine performance for deep learning workloads.
Debug and profile compiler-generated code to identify inefficiencies and enhance execution speed.
Stay updated on the latest advancements in compiler technologies, performance engineering, and ML acceleration.
Qualifications:
Programming & Computer Science Fundamentals:
Proficiency in C/C++ is essential. A solid foundation in computer science fundamentals, including data structures, algorithms, systems-level programming, and memory management, is required.
Knowledge of Python is a plus.
Compiler Development Experience:
Understanding of compiler architecture (front-end parsing, analysis, intermediate representations, optimization, and back-end code generation) and practical experience with compiler codebases.
Compiler Frameworks & Low-Level Knowledge:
Familiarity with compiler infrastructures (such as LLVM, Clang, and GCC) and modern ML compiler frameworks (e.g., MLIR, TVM, ONNX) is highly valued.
Performance Optimization:
Strong skills in profiling and optimizing code are crucial, including parallel programming techniques (multithreading and GPU offloading with CUDA, OpenCL, or SYCL), vectorization, cache optimization, and effective memory management.
Inference Engines & Machine Learning:
Familiarity with neural network models and ML frameworks (such as TensorFlow, PyTorch, and ONNX), and an understanding of how to optimize and execute model computation graphs.
Problem-Solving, Debugging & Collaboration:
Experience with debugging tools, performance profilers, version control (Git), and collaborative development practices is important.
What We Offer:
Access to top business contacts.
Direct collaboration with our founders and managing directors.
Diverse learning and training opportunities, plus personal coaching from experienced entrepreneurs.
Remote/hybrid working options.
Flexible working hours.
A dynamic work ecosystem where you can take initiative and responsibility.