Full-Time
About the Role
We are seeking an Agentic AI / Generative AI Engineer to design, build, and deploy LLM-powered intelligent agents on AWS. The ideal candidate will have strong Python backend experience, hands-on exposure to LLMs and agent frameworks, and experience building production-ready AI applications using AWS cloud services.
This role focuses on developing autonomous and semi-autonomous AI agents, integrating Large Language Models with enterprise systems, and deploying scalable, secure AI solutions on AWS.
🔑 Key Responsibilities
• Design and develop agent-based AI systems using LLMs
• Build and maintain LLM-driven workflows using frameworks like LangChain, LangGraph, CrewAI, or AutoGen
• Implement RAG (Retrieval Augmented Generation) pipelines using vector databases
• Develop backend services and APIs using Python (FastAPI / Flask)
• Integrate LLMs with internal tools, APIs, and data sources
• Deploy, monitor, and scale AI solutions on AWS
• Optimize prompt design, embeddings, and agent decision logic
• Ensure security, reliability, and performance of AI applications
• Collaborate with product, ML, and cloud teams to deliver end-to-end solutions
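The RAG work described above reduces, at its simplest, to retrieve-then-prompt: embed the documents, find the top matches for a query, and ground the LLM call in that context. A minimal, framework-free sketch, using a toy in-memory vector store with bag-of-words "embeddings" standing in for a real embedding model and a real vector database (all names here are illustrative):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words token counts. A real pipeline would call
    # an embedding model (e.g. via Amazon Bedrock) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    """Stand-in for Pinecone / FAISS / Chroma / OpenSearch."""
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def top_k(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(question: str, store: TinyVectorStore) -> str:
    # Retrieval Augmented Generation: ground the model in retrieved context.
    context = "\n".join(store.top_k(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = TinyVectorStore()
store.add("Lambda functions scale automatically with request volume.")
store.add("S3 buckets store objects with eleven nines of durability.")
prompt = build_prompt("How does Lambda scale?", store)
```

In production the `build_prompt` output would be sent to an LLM endpoint; the retrieval step is what keeps answers grounded in enterprise data rather than the model's training set.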
✅ Required Skills & Qualifications
• Strong proficiency in Python
• Hands-on experience with Large Language Models (LLMs) such as GPT, Claude, LLaMA, or similar
• Experience building Generative AI or Agentic AI applications
• Knowledge of agent frameworks (LangChain, LangGraph, CrewAI, AutoGen, etc.)
• Experience with RAG architectures and vector databases (Pinecone, FAISS, Chroma, OpenSearch, etc.)
• Solid understanding of AWS services, including:
  • Amazon Bedrock
  • Lambda
  • S3
  • API Gateway
  • IAM
• Experience developing RESTful APIs and backend systems
• Understanding of cloud security, scalability, and monitoring
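The "agent decision logic" mentioned above typically reduces to a plan-act loop: the model proposes an action as structured output, the runtime dispatches it to a registered tool, and the result is fed back. A minimal, framework-free sketch; the `plan` function stubs out what an LLM would return, and all tool names and signatures are illustrative:

```python
import json

# Registry of callable tools the agent may invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def plan(task: str) -> str:
    # Stub for an LLM call: a real agent would ask the model to emit a
    # JSON action (e.g. via LangChain tool calling or the Bedrock
    # Converse API) rather than branch on keywords.
    if "sum" in task:
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})
    return json.dumps({"tool": "upper", "args": {"text": task}})

def run_agent(task: str):
    # A single plan-act step; production agents loop, feeding each tool
    # result back to the model until it emits a final answer.
    action = json.loads(plan(task))
    tool = TOOLS[action["tool"]]
    return tool(**action["args"])
```

Frameworks such as LangGraph or CrewAI wrap exactly this loop with state management, retries, and multi-agent routing, but the core dispatch pattern is the same.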