Software Engineer - Data Infrastructure
The Data Infrastructure team at Figma builds and operates the foundational platforms that power analytics, AI, and data-driven decision-making across the company. We serve a diverse set of stakeholders, including AI Researchers, Machine Learning Engineers, Data Scientists, Product Engineers, and business teams that rely on data for insights and strategy. Our team owns and scales critical data platforms such as the Snowflake data warehouse, ML Datalake, and large-scale data movement and processing applications, managing all data flowing into and out of these platforms.
Despite being a small team, we take on high-scale, high-impact challenges. In the coming years, we’re focused on building the foundational infrastructure to support AI-powered products, developing streaming interconnects between our core systems, and revamping our orchestration and financial data architecture with a strong emphasis on data quality, reliability, and efficiency. If you're passionate about building scalable, high-performance data platforms that empower teams across Figma, we'd love to hear from you!
This is a full-time role that can be held from one of our US hubs or remotely in the United States.
What you'll do at Figma:
Design and build large-scale distributed data systems that power analytics, AI/ML, and business intelligence.
Develop batch and streaming solutions to ensure data is reliable, efficient, and scalable across the company.
Manage data ingestion, movement, and processing through core platforms like Snowflake, our ML Datalake, and real-time streaming systems.
Improve data reliability, consistency, and performance, ensuring high-quality data for engineering, research, and business stakeholders.
Collaborate with AI researchers, data scientists, product engineers, and business teams to understand data needs and build scalable solutions.
Drive technical decisions and best practices for data ingestion, orchestration, processing, and storage.
We’d love to hear from you if you have:
6+ years of experience designing and building distributed data infrastructure at scale.
Strong expertise in batch and streaming data processing technologies such as Spark, Flink, Kafka, or Airflow/Dagster.
A proven track record of impact-driven problem-solving in a fast-paced environment.
A strong sense of engineering excellence, with a focus on high-quality, reliable, and performant systems.
Excellent technical communication skills, with experience working with both technical and non-technical counterparts.
The ability to navigate ambiguity, take ownership, and drive projects from inception to execution.
Experience mentoring and supporting engineers, fostering a culture of learning and technical excellence.
While it’s not required, it’s an added plus if you also have:
Experience with data governance, access control, and cost optimization strategies for large-scale data platforms.
Familiarity with our stack, including Golang, Python, SQL, frameworks such as dbt, and technologies like Spark, Kafka, Snowflake, and Dagster.
Experience designing data infrastructure for AI/ML pipelines.
At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you're excited about this role but your past experience doesn't align perfectly with the points outlined in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.