Software Engineer, Model Serving Infrastructure
at Anyscale
📍 Bengaluru, India · 🏢 On-site
Responsibilities
- Sub-millisecond Model Routing: Design and implement intelligent request routing systems that dynamically balance load across thousands of model replicas while maintaining strict latency SLAs.
- Zero-Downtime Model Updates: Build sophisticated traffic management systems that seamlessly transition between model versions at scale, handling terabytes of inference requests without dropping a single query.
- State Management at Scale: With many models and many replicas deployed into production, the control
- Multi-Model Orchestration: Architect frameworks for complex ML pipelines where dozens of models need to communicate, share resources, and maintain end-to-end latency guarantees.
- Observability & Debugging: Build deep introspection tools that make it trivial to debug distributed ML applications, because "works on my laptop" doesn't cut it at scale.

The Tech You'll Work With:
- Deep Systems Programming: You'll write performance-critical code in Python (with Cython optimization paths) and potentially C++ for
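To give a flavor of the routing problem in the first bullet: one classic technique for balancing load across many replicas with O(1) work per request is "power of two choices". The sketch below is purely illustrative (the `Replica` class and `pick_replica` helper are my own invention, not Anyscale's or Ray Serve's implementation):

```python
import random

class Replica:
    """Toy stand-in for a model replica, tracking in-flight requests."""
    def __init__(self, name):
        self.name = name
        self.in_flight = 0

def pick_replica(replicas, rng=random):
    """Power-of-two-choices routing: sample two replicas uniformly at
    random and send the request to the one with fewer in-flight
    requests. This keeps load nearly as balanced as scanning every
    replica, at constant cost per request."""
    a, b = rng.sample(replicas, 2)
    return a if a.in_flight <= b.in_flight else b

# Simulate dispatching 1000 requests across 8 replicas.
replicas = [Replica(f"replica-{i}") for i in range(8)]
for _ in range(1000):
    chosen = pick_replica(replicas)
    chosen.in_flight += 1  # demo only: requests are never marked complete

loads = [r.in_flight for r in replicas]
print(sorted(loads))
```

A real router would also account for request completion, replica health, and tail-latency SLAs, but the two-choice sampling idea is the core of many production load balancers.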
About Anyscale
With Anyscale, we're building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert.

About the role: Anyscale is actively seeking talented engineers to join our team and contribute to the development of next-generation, high-performance machine learning serving systems. Many existing ML serving tools are inherited from previous infrastructure generations, but emerging ML applications present new requirements, such as high compute demands, specialized hardware needs, and the integration of multiple models and business logic within a single request. At Anyscale, our mission is to provide a powerful yet simple set of tools that enable the seamless deployment of complex ML applications in production.

The Challenge: What if you could build the infrastructure that powers AI applications for millions of users worldwide? Ray Serve is the production-grade serving framework that makes this possible, and we need exceptional engineers to push its boundaries. You'll be working on problems that sit at the intersection of distributed systems, machine learning, and high-performance computing. This isn't about maintaining CRUD apps or tweaking configurations; it's about solving fundamental computer science problems that directly impact how the world deploys AI.