
Perplexity AI


AI Inference Engineer

San Francisco
Full-time

About this role

We are looking for an AI Inference Engineer to join our growing team.

Our current stack includes Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.

Responsibilities

  • Develop APIs for AI inference that will be used by both internal and external customers
  • Benchmark and address bottlenecks throughout our inference stack
  • Improve the reliability and observability of our systems and respond to system outages
  • Explore novel research and implement LLM inference optimizations

Qualifications

  • Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)
  • Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching and quantization)
  • Understanding of GPU architectures or experience with GPU kernel programming using CUDA

The cash compensation range for this role is $190,000 - $250,000. Final offer amounts are determined by multiple factors, including experience and expertise, and may vary from the amounts listed above.

Equity: In addition to the base salary, equity may be part of the total compensation package.

Benefits: Comprehensive health, dental, and vision insurance for you and your dependents, plus a 401(k) plan.
