Data Engineer
Company details
Company: nOps
Job type: Remote
Country: Germany
City: Berlin
Experience: 4 years or more
Description of the offer
At nOps, we envision a world where Finance, DevOps and Engineering teams can take control of their cloud & SaaS costs, so they only pay for what they use – not what’s provisioned. nOps’s AI & ML-powered cloud optimization platform processes over $2 billion of cloud spend — and we’re just getting started. Our platform helps Finance, Engineering and DevOps teams automatically manage LLM usage, cloud spend, commitments, and third-party SaaS spend. As a result, teams use fewer cloud resources and pay less for the compute resources they do use.
As an nOps team member, you’ll help solve the toughest problems in the exploding AI model landscape and cloud optimization with solutions that are engineering-forward and brilliantly simple. If building and scaling products while working with great people resonates with you, keep reading.
About the Role
We are looking for a talented Senior Data Engineer to join our data team. This role focuses on building and optimizing data pipelines using AWS, Databricks, and Imply Polaris. You’ll be responsible for architecting scalable data solutions, managing real-time and batch data processing, and enabling the analytics capabilities that drive our cloud optimization platform forward.
Key Responsibilities
- Design, build, and maintain robust data pipelines using AWS services, Databricks, and Imply Polaris
- Architect and optimize data ingestion, transformation, and storage solutions for large-scale cloud infrastructure data
- Develop and maintain real-time analytics capabilities using Imply Polaris (Apache Druid)
- Build scalable ETL/ELT processes in Databricks to support analytics and reporting needs
- Implement data quality frameworks and monitoring systems to ensure data reliability
- Optimize query performance and data models for efficient analysis of cloud cost and usage patterns
- Collaborate with cross-functional teams including engineers, product managers, and customer success to deliver data-driven solutions
- Document data architectures, pipelines, and best practices for the team
- Mentor team members and contribute to data engineering standards and practices
Required Qualifications
- Strong hands-on experience with AWS (e.g., S3, Glue, Lambda, Kinesis, Redshift, Athena, EMR)
- Proven expertise with Databricks including Delta Lake, Spark, and workflow orchestration
- Experience with Imply Polaris or Apache Druid for real-time analytics and OLAP workloads
- Proficiency in Python and SQL for data processing and analysis
- Strong understanding of data modeling, data warehousing concepts, and schema design
- Experience building and optimizing both batch and streaming data pipelines
- Knowledge of data orchestration tools (e.g., Airflow, Databricks Workflows)
- Solid understanding of data governance, security, and compliance best practices
- Excellent problem-solving skills and ability to work both independently and collaboratively
- Strong communication skills with ability to explain technical concepts clearly
- Personality fit: Self-motivated, curious, collaborative, and passionate about solving complex data challenges
Preferred Qualifications
- Experience in cloud cost optimization, FinOps, or cloud infrastructure domains
- Familiarity with infrastructure-as-code (Terraform, CloudFormation)
- Knowledge of machine learning pipelines and MLOps practices
- Contributions to open-source data engineering projects
Experience Level
While we value relevant experience (typically 6+ years in data engineering roles), we prioritize personality fit and demonstrated expertise over years of experience. We’re looking for someone passionate about data engineering who can hit the ground running with our tech stack.
Benefits
- Competitive salary
- Fully remote work environment
- Working hours overlapping North America time zones (for optimal team collaboration)
- Professional development opportunities and ongoing learning support
- Work with cutting-edge cloud and data technologies
- A collaborative and innovative work culture where your contributions make a real impact
- Opportunity to shape data architecture at a fast-growing cloud optimization platform
Why Join nOps?
You’ll work on challenging problems at scale, processing billions of dollars in cloud spending data. You’ll collaborate with talented engineers in a supportive environment where your expertise with AWS, Databricks, and Imply Polaris will directly impact how companies optimize their cloud infrastructure. If you’re a motivated data engineer who loves working with modern data technologies and wants to make a meaningful impact in the cloud management space, we’d love to hear from you.
To Apply
Please submit your resume along with any relevant portfolio work, GitHub profile, or projects that demonstrate your experience with AWS, Databricks, and/or Imply Polaris.
