Specialist Solutions Architect - Data Engineering
FEQ425R190
This role can be remote, and fluent proficiency in English and Spanish is required.
As a Specialist Solutions Architect (SSA) - Data Engineering, you will guide customers in building big data solutions on Databricks that span a wide variety of use cases. This is a customer-facing role in which you will work with and support Solution Architects, and it requires hands-on production experience with Apache Spark™ and expertise in other data technologies. SSAs help customers through the design and successful implementation of essential workloads while aligning their technical roadmap for expanding usage of the Databricks Data Intelligence Platform. As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and you will establish yourself in an area of specialty, whether that be streaming, performance tuning, industry expertise, or another domain.
The impact you will have:
Provide technical leadership to guide strategic customers to successful implementations of big data projects, ranging from architectural design to data engineering to model deployment
Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization
Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures
Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
Contribute to the Databricks Community
What we look for:
Fluent proficiency in English and Portuguese or Spanish
5+ years of experience in a technical role with expertise in at least one of the following:
Software Engineering/Data Engineering: data ingestion, streaming technologies such as Spark Streaming and Kafka, performance tuning, and troubleshooting and debugging Spark or other big data solutions
Data Applications Engineering: building use cases that use data, such as risk modeling, fraud detection, and customer lifetime value
Extensive experience building big data pipelines
Experience maintaining and extending production data systems to evolve with complex needs
Deep Specialty Expertise in at least one of the following areas:
Experience scaling big data workloads (such as ETL) to be performant and cost-effective
Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
Experience with large scale data ingestion pipelines and data migrations - including CDC and streaming ingestion pipelines
Expertise with cloud data lake technologies, such as Delta Lake and Delta Live Tables
Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent practical experience
Production programming experience in SQL and Python, Scala, or Java
2 years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures
2 years of customer-facing experience in a pre-sales or post-sales role
Can meet expectations for technical training and role-specific outcomes within 6 months of hire
Can travel up to 30% when needed