Scale our semantic reasoning pipelines to enterprise workloads. Build the infrastructure that powers governed AI at scale.
We're seeking an ML Infrastructure Engineer to design and build scalable systems for deploying and managing semantic reasoning pipelines. You'll work at the intersection of machine learning, distributed systems, and infrastructure engineering.
This role involves optimizing model inference, managing GPU resources, building data pipelines, and ensuring our ML systems scale to enterprise workloads while meeting governance and auditability requirements.
The ideal candidate brings a unique combination of technical depth and systems thinking. You thrive in high-growth environments and are energised by the challenge of building infrastructure that scales.
- Experience with semantic reasoning or knowledge graph systems
- Knowledge of NLP and language models
- Experience with Kubernetes and container orchestration
- Background in systems programming (Rust, Go, or C++)
We offer a competitive compensation package designed to attract and retain exceptional talent, and we invest in our team with benefits that support your best work and life.
If you're excited about building the infrastructure that powers governed AI at scale, we'd love to hear from you.