Data Platform Engineer

At Lyft, our purpose is to serve and connect. To do this, we start with our own community by creating an open, inclusive, and diverse organization.

Lyft is seeking a highly skilled Software Engineer to join our Data Platform team. The ideal candidate will have a strong understanding of the modern "Big Data" ecosystem, hands-on experience with Apache Big Data frameworks (Hadoop, Hive, Spark, Airflow, Flink, Iceberg) and Presto/Trino, and familiarity with AWS data infrastructure components (Glue, S3, DynamoDB, EMR, Kubernetes/EKS, Kafka/Kinesis, SQS, Aurora, Athena).

As a member of our Data Platform team, you will work at the intersection of backend, data, and infrastructure engineering. Your focus will be on building, scaling, optimizing, and managing the core storage systems and governance components of Lyft’s Data Platform. This role also includes participation in on-call rotations to ensure the platform’s stability and operational efficiency.

Responsibilities:

  • Design, build, and maintain scalable and reliable data storage solutions to support diverse data processing needs.
  • Optimize and scale the platform to accommodate increasing data volumes and user requests.
  • Improve data storage, retrieval, and query performance, as well as overall system efficiency.
  • Collaborate with data scientists, analysts, and other stakeholders to understand requirements and deliver tailored solutions.
  • Work with engineering teams to ensure data pipelines, analytics tools, ETL processes, and other systems are properly integrated with the Lyft Data Platform.
  • Troubleshoot and resolve data platform issues in a timely manner.
  • Participate in on-call rotations.
  • Develop and maintain monitoring and alerting systems to ensure platform availability and reliability.
  • Participate in code reviews, design discussions, and other collaborative team activities to maintain high-quality standards.
  • Continuously evaluate new technologies and tools to enhance the data platform.
  • Contribute to platform documentation, knowledge sharing, and best practice development.

Experience:

  • 4+ years of experience in software/data engineering, data architecture, or a related field.
  • Strong programming skills in at least one language: Java, Scala, Python, or Go.
  • Experience with SQL and data modeling.
  • Hands-on experience with Apache Big Data frameworks such as Hadoop, Hive, Spark, Airflow, and Iceberg.
  • Proficiency in AWS cloud services.
  • Strong understanding of distributed systems, large-scale data processing, and data storage/retrieval.
  • Experience with data governance, security, and compliance is a plus.
  • Familiarity with CI/CD and DevOps practices is a plus.
  • Excellent communication and problem-solving skills, and the ability to work effectively both independently and as part of a team.

Benefits:

  • Professional and stable working environment.
  • 28 calendar days of vacation and up to 5 paid days off.
  • 18 weeks of paid parental leave. Biological, adoptive and foster parents are all eligible.
  • Mental health benefits.
  • Family building benefits.

This role is fully remote; candidates must be based in Ukraine.