Your opportunity
Do you enjoy driving a vision across multiple teams? Do you get excited about complex distributed systems? Do you love data?
We’re looking for an experienced software engineer to join the Data observability team, grow our global engineering team, and help our customers build better data pipelines. How do you know if this is the right opportunity for you?
You'll help design and implement a groundbreaking data pipeline observability platform that enhances the integrity and reliability of enterprise data flows.
This role requires a deep understanding of distributed systems and monitoring techniques, along with a solid understanding of ELT, EDW, and data pipelines. You'll apply your technical skills to build user-centric products that enable data teams to establish reliability and trust in their data.
What you'll do
- Work with a team of software and data engineers on the design, development, and deployment of ETL pipeline observability software.
- Implement best practices for logging, tracing, and debugging to enhance visibility into data flow and pipeline execution.
- Help define the technical architecture and roadmap for the ETL pipeline observability platform, ensuring scalability, reliability, and performance.
- Collaborate with product management to gather requirements, prioritize features, and define the product vision and strategy.
- Build services from the ground up that communicate with SaaS data products.
- Research various data tools and create integrations between them and the New Relic platform.
This role requires
- 2+ years of experience in software engineering, with a focus on building scalable and reliable software systems.
- Strong knowledge of software engineering best practices, including agile development methodologies, code review processes, and continuous integration/continuous deployment (CI/CD) pipelines.
- Deep understanding of ETL processes and technologies, with experience working with tools such as Apache Spark, Apache Kafka, and Airflow.
- Significant experience building and managing data pipelines (ETL/ELT processes, data streaming, etc.) is highly valued.
- Strong experience with containerization technologies (e.g., Docker, Kubernetes) and building software on one of the major cloud platforms (e.g., AWS, Azure, GCP).
- Strong programming skills, preferably in Python, Java, or similar languages, and experience with relevant data processing libraries or frameworks (e.g., Apache Spark, Kafka, Airflow).
Bonus points if you have
- Experience working with AI/ML models.