1. About the Role

We're seeking an ML Infrastructure Engineer to design and build scalable systems for deploying and managing semantic reasoning pipelines. You'll work at the intersection of machine learning, distributed systems, and infrastructure engineering.

This role involves optimizing model inference, managing GPU resources, building data pipelines, and ensuring our ML systems can scale to handle enterprise workloads while maintaining governance and auditability requirements.

2. Key Responsibilities

  • Design and implement scalable ML infrastructure for semantic reasoning workloads
  • Optimize model inference performance and resource utilization
  • Build and maintain data pipelines for training and inference
  • Manage GPU clusters and distributed computing resources
  • Implement model versioning, deployment, and monitoring systems
  • Collaborate with research scientists to productionize ML models
  • Ensure ML systems meet governance and compliance requirements

3. Required Qualifications

  • Experience: 3+ years building ML infrastructure or ML systems
  • Programming: Strong Python skills and experience with ML frameworks (PyTorch, TensorFlow, etc.)
  • Distributed Systems: Experience with distributed computing and GPU acceleration
  • ML Deployment: Knowledge of ML deployment patterns (model serving, batch inference, etc.)
  • Cloud Platforms: Experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, etc.)
  • MLOps: Understanding of ML operations (MLOps) best practices
  • Problem Solving: Strong problem-solving skills and ability to work independently

4. Ideal Candidate Profile

The ideal candidate combines technical depth with systems thinking. You thrive in high-growth environments and are energized by the challenge of building infrastructure that scales.

  • You've successfully built ML infrastructure for production AI/ML systems
  • You understand the nuances of deploying and scaling semantic reasoning workloads
  • You're comfortable operating in ambiguity and can create structure from complex technical requirements
  • You lead by example and aren't afraid to roll up your sleeves
  • You're passionate about the potential of governed AI infrastructure
  • You value reliability, performance, and building systems that teams can trust

Bonus Points

  • Experience with semantic reasoning or knowledge graph systems
  • Knowledge of NLP and language models
  • Experience with Kubernetes and container orchestration
  • Background in systems programming (Rust, Go, C++)

5. Compensation

We offer a competitive compensation package designed to attract and retain exceptional talent:

  • Base Salary: Competitive base commensurate with experience and market rates
  • Variable Compensation: Performance-based bonus structure tied to infrastructure reliability and performance metrics
  • Equity: Meaningful equity stake reflecting the importance of this role

6. Benefits & Perks

We invest in our team with benefits designed to support your best work and life:

  • Meaningful Equity
  • Remote Flexibility
  • Flexible Time Off
  • Learning Budget
  • Home Office Setup
  • Team Retreats

Ready to Build Scalable ML Infrastructure?

If you're excited about building the infrastructure that powers governed AI at scale, we'd love to hear from you.

Apply Now