Senior Cloud AI Engineer
At G-P, our mission is to break down barriers to global business, enabling opportunities for everyone, everywhere.
This role is part of our AI Team building GIA, our AI-powered product that:
Acts as an expert employment lawyer on one shoulder and an HR leader on the other
Helps companies navigate complex global employment challenges with intelligence and ease
What makes this role unique:
You’ll join a small, startup-structured team within the larger G-P organization.
You’re passionate about building and defining early-stage products.
You’ll be helping to build GIA from the ground up — shaping both the product and the underlying tech
We’re seeking individuals with a strong product mindset and expert-level engineering skills
You must thrive in fast-paced, ambiguous environments where direction isn’t always handed to you
You’ll need to be highly independent and self-directed
You must know how to identify what to build next without always being told.
Expect to wear multiple hats and contribute across the stack
While collaboration is essential, you must also lead as an individual contributor
Important note:
If you’re not prepared to operate at this level or aren’t excited by the challenge — this role may not be the right fit.
About the Role:
If you have a passion for AI and MLOps, are a deep innovator, and want to solve complex problems that lead to a world of positive results, consider G-P. Here, your knowledge and experience will be crucial to helping design and develop high-performing cloud-based software products using traditional Agile methodologies and modern frameworks.
Beyond a competitive compensation and benefits package, what we offer to all employees along the way is the clear and simple promise of Opportunity Made Possible. Come expand your skills in new ways and experience the thrill of your best innovations becoming reality.
Key Responsibilities:
Collaborate with data scientists and engineers to containerize, deploy, and maintain machine learning models and APIs within our cloud infrastructure.
Lead the design and development of cloud-native applications and services, using AWS offerings such as ECS, Lambda, and API Gateway.
Implement practical MLOps workflows to support model packaging, inference pipelines, observability, and versioning — focusing on performance, auditability, and scalability.
Build and manage Terraform modules to provision secure, cost-effective, and maintainable infrastructure.
Create CI/CD pipelines (e.g., GitHub Actions) to automate the deployment and monitoring of models, services, and supporting tools.
Design and support production-grade systems with robust monitoring, alerting, and metrics using CloudWatch or third-party tools like New Relic.
Work across Python, Node.js, and React-based applications, ensuring model services integrate smoothly with internal APIs and UIs.
Requirements:
5+ years in software engineering, with a strong emphasis on cloud-native development and deployment practices.
Expert-level knowledge of Python, particularly for backend service development and data/ML tooling.
Hands-on experience with AWS services including ECS, Lambda, API Gateway, IAM, and CloudWatch.
Proficiency with infrastructure-as-code using Terraform, with a clear understanding of secure and cost-aware AWS architectures.
Strong experience in Docker and container orchestration patterns.
Familiarity with MLOps principles such as model versioning, inference APIs, logging, and data pipeline integration.
Competence in supporting full-stack applications and APIs, including services written in FastAPI (Python) and Node.js, with frontends in React.
Ability to work independently in a fast-moving environment, with strong collaboration and problem-solving skills.
Preferred Qualifications:
Experience deploying and maintaining machine learning models in production, including containerized inference APIs and event-driven serving with ECS or Lambda.
Familiarity with model packaging and serving workflows using tools like MLflow, BentoML, or by building custom inference APIs with FastAPI or Flask.
Hands-on experience working with Large Language Models (LLMs), including prompt design, API integration (e.g., OpenAI, Claude, or Cohere), and optimizing inference latency and token usage.
Strong understanding of logging, monitoring, and observability practices for ML-powered services (e.g., using CloudWatch, New Relic, or Prometheus).
Comfortable working across languages and stacks (Python, Node.js, React), particularly where backend ML services support user-facing applications.
Familiarity with vector stores (e.g., Pinecone, FAISS) and retrieval-augmented generation (RAG) patterns is a plus — bonus if you’ve worked on more advanced AI capabilities like multi-agent orchestration, memory-enabled chains, or MCP-like architectures.
AWS certifications or prior experience in high-compliance, production-scale AWS environments.
We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks.
The annual gross base salary range for this position is $159,200 to $199,000, plus variable compensation.
We will consider for employment all qualified applicants, including those with arrest records, conviction records, or other criminal histories, in a manner consistent with the requirements of any applicable state and local laws, including the City of Los Angeles’ Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, and the New York City Fair Chance Act.