Principal Engineer – Testing
About The Role
We are seeking a highly experienced Principal Engineer – Testing to join our dynamic engineering team. The ideal candidate will bring deep expertise in quality engineering, automation, and test strategy for complex, distributed systems. As part of the core team building our AI platform, you’ll design and lead the testing architecture for autonomous workflows, intelligent agents, and high-reliability infrastructure.
What You’ll Do
Design, implement, and own the testing architecture across the AI platform, spanning backend, APIs, data pipelines, and AI models
Build and scale automation frameworks to support unit, integration, regression, performance, and AI-specific testing
Develop test strategies that balance reliability, speed, and coverage for complex, dynamic, and data-heavy systems
Create simulation and sandbox environments to test AI workflows and orchestration logic
Integrate automated testing into CI/CD pipelines, enabling confident, rapid deployments with safety guarantees
Lead root cause analysis and implement practices that prevent regressions across systems and features
Champion a quality-first culture and mentor engineers in best practices for building robust, testable systems
What We Need
10+ years of experience in software quality engineering, test automation, or systems testing roles
Expertise in test automation frameworks (e.g., Pytest, Playwright) and infrastructure-as-code pipelines
Strong programming skills in Python, JavaScript/TypeScript
Experience designing and validating tests in distributed, event-driven, and microservices architectures
Familiarity with testing in AI/ML systems, including model output validation and behavior-driven testing
Experience with containerized environments (Docker) and cloud platforms (AWS/GCP/Azure)
Strong understanding of CI/CD systems and Git-based workflows, plus modern observability tooling (e.g., Grafana, Prometheus)
Excellent problem-solving, debugging, and documentation skills
A track record of technical leadership and mentoring in cross-functional environments
Nice to have
Experience testing large-scale AI frameworks, LLM integrations, or decision systems
Exposure to tools like dbt, Spark, or DataDog
Contributions to open-source testing tools or quality initiatives
Deep curiosity about LLM behavior, autonomous systems, and human-in-the-loop testing strategies
A desire to develop the most consequential AI software system of the future!
What’s In It For You
Compensation:
Invisible is committed to fair and competitive pay, ensuring that compensation reflects both market conditions and the value each team member brings. Our salary structure accounts for regional differences in cost of living while maintaining internal equity.