DATAECONOMY | Full time

Python Developer – GenAI/MLOps Specialist

Hyderabad, India | Posted on 04/08/2025

Job Information

  • Date Opened 04/08/2025
  • Job Type Full time
  • Industry IT Services
  • City Hyderabad
  • State/Province Telangana
  • Country India
  • Zip/Postal Code 500001

About Us

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, OH, with offices in Dublin, OH, and Providence, RI, and an advanced technology center in Hyderabad, India. We differentiate ourselves in the data & analytics space through our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description

We are seeking a highly skilled Python Developer with 5–10 years of experience, specializing in FastAPI, Generative AI, and MLOps practices. The ideal candidate has hands-on expertise in building scalable, AI-driven applications and deploying them to production. You will collaborate with cross-functional teams to deliver real-world AI solutions.

Responsibilities:

  • Design, build, and maintain high-performance APIs using FastAPI.

  • Develop and integrate solutions using LangChain, LlamaIndex, or AutoGen.

  • Train, fine-tune, and deploy Generative AI models.

  • Implement MLOps practices such as model versioning, monitoring, retraining, and automation pipelines.

  • Containerize applications with Docker and deploy to orchestration platforms like AWS ECS.

  • Collaborate closely with ML engineers, data scientists, and DevOps teams.

  • Maintain clean, scalable, and version-controlled code using Git.


Requirements

Required Skills:

  • 5–10 years of experience in Python development and backend API frameworks, especially FastAPI.

  • Strong understanding of Machine Learning concepts and practical applications.

  • Hands-on experience with LangChain, LlamaIndex, or AutoGen frameworks.

  • Proficiency in GenAI model training, fine-tuning, and deployment strategies.

  • Strong knowledge of MLOps practices including model lifecycle management.

  • Practical experience with Docker containerization and deployments (AWS ECS or similar).

  • Expertise in using Git for code collaboration and version control.

Nice to Have:

  • Knowledge of Natural Language Processing (NLP) and Deep Learning architectures.

  • Familiarity with cloud platforms like AWS, GCP, or Azure.

  • Experience with Vector Databases (e.g., FAISS, Pinecone).

  • Exposure to real-time data processing or event-driven architectures.

  • Added Advantage: Experience with PySpark for big data processing.

Educational Qualification:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.



Benefits

What We Offer:

  • Opportunity to work with cutting-edge technologies in a collaborative environment.

  • Continuous learning and professional development opportunities.