Senior Data Engineer
We are looking for a Senior Data Engineer who can help build and maintain frameworks and systems that acquire, validate, cleanse, and load data into and out of our analytics systems. The systems that this role builds and maintains will allow our business partners to more accurately and reliably track and forecast sales, revenue, and usage/consumption metrics. Additionally, this position will lead the development of machine learning pipelines as we move to productionize internal predictive models.
This position will have engagement across many parts of the company, including finance, revenue and CX operations, analytics teams, and analytics engineering. The frameworks and systems that you work on will integrate with and enhance our current stack, which includes GCP, BigQuery, dbt, Prefect, Python, Fivetran, RudderStack, Hightouch, and Atlan.
Examples of projects you’ll work on
Build and maintain production-quality data pipelines between operational systems and BigQuery (ingress and egress).
Implement data quality and freshness checks and monitoring processes to ensure data accuracy and consistency (a minimal sketch of this pattern follows this list).
Build and maintain machine learning pipelines to automate model validation and deployment.
Create and maintain comprehensive documentation for data engineering processes, systems, and workflows.
Maintain observability and monitoring of our internal data pipelines.
Troubleshoot and resolve data pipeline issues to ensure downstream data availability.
Contribute to our dbt systems by making sure the source and staging layers align with our standards and are efficient, cost-effective, and highly available.
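To give a concrete flavor of the orchestration and monitoring work above, here is a minimal sketch of a table freshness check written as a Prefect flow against BigQuery. The project, dataset, table, and `updated_at` column are hypothetical placeholders for illustration, not details of our actual pipelines.

```python
# Minimal sketch: a Prefect flow that fails when a BigQuery table goes stale.
# The table name and `updated_at` column below are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery
from prefect import flow, task


@task(retries=2, retry_delay_seconds=60)
def latest_update(table: str, column: str) -> datetime:
    """Return the most recent timestamp in `column` for `table`."""
    client = bigquery.Client()  # uses application-default credentials
    query = f"SELECT MAX({column}) AS latest FROM `{table}`"
    row = next(iter(client.query(query).result()))
    return row.latest  # assumes a TIMESTAMP column (tz-aware, UTC)


@flow
def freshness_check(
    table: str = "my-project.analytics.orders",
    column: str = "updated_at",
    max_staleness_hours: int = 6,
) -> None:
    latest = latest_update(table, column)
    staleness = datetime.now(timezone.utc) - latest
    if staleness > timedelta(hours=max_staleness_hours):
        # A failed flow run surfaces the incident in Prefect's UI and alerts.
        raise RuntimeError(f"{table} is stale by {staleness}.")


if __name__ == "__main__":
    freshness_check()
```

In practice, checks like this run on a schedule and fan out across many tables, but the shape above is the core pattern.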
What you bring
Software development skills (some combination of Python, Java, Scala, and Go)
High proficiency in SQL
Experience building and maintaining data ingestion pipelines using a workflow orchestration system (e.g., Prefect, Dagster, or Airflow)
Working knowledge of MLOps best practices
Working knowledge of dbt or similar data transformation tools
A highly motivated self-starter who is keen to make an impact and unafraid of tackling large, complicated problems
Excellent communication skills: you can explain technical topics to non-technical audiences and maintain the essential cross-team and cross-functional relationships necessary for the team's success
A plus if you have
Experience working with Prefect, BigQuery, and GCP services
Knowledge of observability practices
Previous experience with Grafana for visualization, or a desire to invest the time to learn it
In the US, the base compensation range for this role is $152,960 - $183,552. Actual compensation may vary based on level, experience, and skill set as assessed in the interview process. Benefits include equity, a bonus (if applicable), and other benefits.
*Compensation ranges are country-specific. If you are applying for this role from a location other than the one listed above, your recruiter will discuss your market's defined pay range and benefits at the beginning of the process.