Develop and maintain pipeline architecture for satellite data products to ensure efficient processing and delivery.
Design and implement data solutions for real-time trading and long-term research at scale.
Design and manage database architecture and data models.
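A minimal sketch of what such a data model might look like, expressed with SQLAlchemy 2.0 declarative mappings; the Customer and Order entities and the in-memory SQLite engine are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a relational data model in Python using SQLAlchemy 2.0
# declarative mappings; the entities and columns are hypothetical.
from sqlalchemy import ForeignKey, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class Customer(Base):
    __tablename__ = "customers"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True)
    orders: Mapped[list["Order"]] = relationship(back_populates="customer")

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    customer_id: Mapped[int] = mapped_column(ForeignKey("customers.id"))
    total_cents: Mapped[int]
    customer: Mapped[Customer] = relationship(back_populates="orders")

# Create the schema locally for experimentation.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```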
Design and build scalable data models and pipelines using dbt to transform raw data into clean, reliable assets that power company-wide analytics and decision-making.
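A minimal sketch of triggering such a dbt transformation programmatically, assuming dbt-core 1.5+ (which ships the dbtRunner interface), an already configured profile, and a hypothetical staging model named stg_orders:

```python
# Minimal sketch: invoking dbt from Python (assumes dbt-core >= 1.5, a
# configured profiles.yml, and a hypothetical model named stg_orders).
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Run the hypothetical staging model and everything downstream of it.
res: dbtRunnerResult = dbt.invoke(["run", "--select", "stg_orders+"])

if not res.success:
    raise RuntimeError(f"dbt run failed: {res.exception}")

# Print the status of each executed node.
for r in res.result:
    print(f"{r.node.name}: {r.status}")
```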
Design and analyze experiments to measure agent improvements.
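A minimal sketch of the analysis side of such an experiment, comparing a baseline agent to a candidate with a two-sample test; the per-task success data and metric are illustrative placeholders:

```python
# Minimal sketch of analyzing a controlled experiment comparing a baseline
# agent against a candidate; the success indicators below are synthetic.
import numpy as np
from scipy import stats

# Hypothetical per-task success indicators (1 = task solved) for each variant.
baseline = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 50)
candidate = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1] * 50)

# Two-sided Welch's t-test on the difference in success rates.
t_stat, p_value = stats.ttest_ind(candidate, baseline, equal_var=False)

print(f"baseline success rate:  {baseline.mean():.3f}")
print(f"candidate success rate: {candidate.mean():.3f}")
print(f"p-value: {p_value:.4f}")
```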
Expand the developer tools that ingest business events, operational data, and third-party data into the Lakehouse.
Conduct interviews and workshops with client stakeholders at various organizational levels, and distribute surveys to them, to thoroughly understand and document their business requirements.
Identify required data and validate its quality, write documentation, and lay the foundations of data governance.
Design and build data pipelines using Databricks Lakehouse.
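A minimal sketch of one bronze-to-silver step in such a pipeline, assuming a Databricks notebook (where the spark session is provided by the runtime) and hypothetical landing-zone and table names:

```python
# Minimal sketch of a bronze-to-silver step on the Databricks Lakehouse; the
# `spark` session is supplied by the Databricks runtime, and the paths and
# table names below are hypothetical.
from pyspark.sql import functions as F

raw = (
    spark.read.format("json")
    .load("/mnt/landing/events/")          # hypothetical landing zone
)

cleaned = (
    raw.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)

(
    cleaned.write.format("delta")
    .mode("append")
    .saveAsTable("silver.events")          # hypothetical silver table
)
```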
Design and manage scalable Kafka-based data pipelines.
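A minimal sketch of the producing side of such a pipeline, assuming the confluent-kafka Python client; the broker address, topic, and event payload are hypothetical:

```python
# Minimal sketch of a Kafka producer using the confluent-kafka client;
# broker, topic, and payload are illustrative only.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

event = {"ride_id": "abc123", "status": "completed"}
producer.produce(
    topic="ride-events",
    key=event["ride_id"],
    value=json.dumps(event),
    on_delivery=delivery_report,
)
producer.flush()  # block until all buffered messages are delivered
```

Keying messages by the entity identifier keeps all events for that entity on one partition, which preserves per-entity ordering downstream.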
Design and implement scalable backend systems for Federal customers, leveraging Scale's modern and cloud-native AI infrastructure.
Build Scale's analytical and business-intelligence infrastructure.
Architect and launch innovative data solutions for large cross-functional teams.
Drive innovations in the development and optimization of distributed training frameworks.
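A minimal sketch of a data-parallel training loop, assuming PyTorch DistributedDataParallel launched with torchrun; the model, data, and hyperparameters are placeholders:

```python
# Minimal sketch of a DistributedDataParallel training step in PyTorch,
# assuming launch via `torchrun --nproc_per_node=N train.py`; the model
# and data below are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(100):                             # placeholder loop
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```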
Architect high-performance service APIs in Python and Go to power streaming data services, ensuring seamless integration across teams and products.
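A minimal sketch of the Python side of such a streaming API, assuming FastAPI and newline-delimited JSON; the endpoint path and payload are illustrative only:

```python
# Minimal sketch of a streaming service endpoint in Python using FastAPI;
# the route and payload are hypothetical.
import asyncio
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def event_stream():
    # Yield newline-delimited JSON records as they become available.
    for i in range(5):                      # placeholder data source
        yield json.dumps({"tick": i}) + "\n"
        await asyncio.sleep(0.1)

@app.get("/v1/ticks")
async def ticks():
    return StreamingResponse(event_stream(), media_type="application/x-ndjson")

# Run locally with: uvicorn service:app --reload   (assuming this file is service.py)
```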
Build and optimize machine learning workflows for AI-driven products.
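A minimal sketch of one such workflow as a reproducible scikit-learn pipeline; the features, dataset, and model choice are placeholders:

```python
# Minimal sketch of an ML workflow using scikit-learn pipelines;
# the training data and feature names are synthetic placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({   # placeholder training data
    "country": ["US", "DE", "US", "FR"] * 25,
    "sessions": [3, 7, 1, 9] * 25,
    "converted": [0, 1, 0, 1] * 25,
})

features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["country"]),
    ("num", StandardScaler(), ["sessions"]),
])

workflow = Pipeline([
    ("features", features),
    ("model", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(
    df[["country", "sessions"]], df["converted"], test_size=0.2, random_state=0
)
workflow.fit(X_train, y_train)
print("holdout accuracy:", workflow.score(X_test, y_test))
```

Bundling preprocessing and the model in one pipeline object keeps training and serving transformations consistent.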
Help architect, build, and launch scalable data pipelines to support Lyft’s growing data processing and analytics needs.
Build a scalable data platform to power Lyft's top-line metrics.
Collaborate with Operations, Science, and Finance teams to drive data-driven strategies.
Assist in building and maintaining Workday/People Tech platforms.