Software Engineer II, Data Lake
Company details
Company: Klaviyo
Job type: Remote
Country: United States
City: Los Angeles
Experience: 4 years or more
Description of the offer
At Klaviyo, we love tackling tough engineering problems and look for engineers who specialize in certain areas but are passionate about building, owning & scaling features end to end from scratch and breaking through any obstacle or technical challenge in their way. We push each other to move out of our comfort zone, learn new technologies and work hard to ensure each day is better than the last.
Team Overview:
Klaviyo operates a real-time and offline data analytics platform that is built for massive scale and hosted on Amazon Web Services (AWS). The Data Lake team ingests billions of data points per day to power core Klaviyo functionality and generate data-driven insights. These interactions are ingested from disparate sources, persisted into the Data Lake, materialized into analytical storage, and used to power AI models.
As an Engineer on the Data Lake team, you will have ownership of the evolving technical vision for our large-scale data processing systems. You will be responsible for designing, implementing, and optimizing mission-critical data pipelines and storage solutions, leveraging technologies like EMR, Spark, and Flink. You’ll work on ensuring scalability, performance, and reliability while mentoring team members and driving technical excellence.
Please note: this role is based in Boston, MA and requires a hybrid, in-office component.
Team Tech Stack:
- Python (Node or Java)
- Apache Spark, Apache Flink
- Airflow
- Kafka, Apache Pulsar
- MySQL
- Kubernetes
- AWS (including EMR, S3, Redshift)
How You’ll Make a Difference
- Implement scalable, fault-tolerant data pipelines using distributed processing frameworks like Apache Spark and Flink on AWS EMR, optimizing for throughput and latency
- Design batch and real-time, event-driven data workflows to process billions of data points daily, leveraging streaming technologies like Kafka and Flink.
- Optimize distributed compute clusters and storage systems (e.g., S3, HDFS) to handle petabyte-scale datasets efficiently, ensuring resource efficiency and cost-effectiveness.
- Develop robust failure recovery mechanisms, including checkpointing, replication, and automated failover, to ensure high availability in distributed environments
- Collaborate with cross-functional teams to deliver actionable datasets that power analytics and AI capabilities.
- Implement data governance policies and security measures to maintain data quality and compliance.
- Own the technical direction of highly visible data systems, improving monitoring, failure recovery, and performance.
- Mentor engineers, review technical documentation, and articulate phased approaches to achieving the team’s technical vision.
- Contribute to the evolution of internal data processing tools and frameworks, enhancing their scalability and usability
Must Have:
- 4+ years of experience in software development, with at least 2 years focused on data engineering and distributed systems.
- Hands-on experience with Python and SQL, along with experience in backend development.
- Experience with distributed data processing frameworks such as Apache Spark and Flink.
- Proven track record of designing and implementing scalable ETL/ELT pipelines, ideally using AWS services like EMR.
- Strong knowledge of cloud platforms, particularly AWS (e.g., EMR, S3, Redshift), and optimizing data workflows in the cloud.
- Experience with data pipeline orchestration tools like Airflow.
- Familiarity with real-time data streaming technologies such as Kafka or Pulsar.
- Understanding of data modeling, database design, and data governance best practices.
- Excellent problem-solving skills and the ability to thrive in a fast-paced, collaborative environment.
- Strong communication skills with experience mentoring or leading engineering teams.
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
- You’ve already experimented with AI in work or personal projects, and you’re excited to dive in and learn fast. You’re hungry to responsibly explore new AI tools and workflows, finding ways to make your work smarter and more efficient.
We use Covey as part of our hiring and/or promotional process. For jobs or candidates in NYC, certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on April 3, 2025.
Please see the independent bias audit report covering our use of Covey here
