At Efficio, we're not just a consulting firm; we're a team of dedicated experts committed to transforming procurement and supply chain functions. Our mission is to drive sustainable, measurable value for our clients through innovative solutions and deep industry insights. Join us and be part of a dynamic environment where your skills and ideas make a real impact.

Efficio is the world’s largest specialist procurement and supply chain consultancy, with offices across Europe, North America, and the Middle East. We’re a diverse team of over 1,000 individuals, representing more than 60 different nationalities and speaking 40+ languages – and we’re continuing to grow rapidly!

We are now transforming into a technology-first company.

We are moving from "data-based" to "data-driven," building the intelligent backbone that will power our new GenAI and LLM products.  

You won’t just be moving data from A to B; you will be architecting the engine that allows us to integrate AI into the core of our business. This is a chance to use your engineering skills to make global supply chains more sustainable, ethical, and efficient. 


Our Engineering culture
We trust our engineers to drive architectural decisions and own the platform's strategic direction, from selecting orchestration patterns to final deployment. We are dedicated to engineering excellence and invest significantly in your professional growth through sponsorship of AWS Certifications and advanced training. In this role, you will establish the foundational infrastructure for our AI capabilities, directly enabling the deployment of Large Language Models that address complex challenges for our global clients. 

What you'll do

  • Lead the design of scalable cloud infrastructure, utilizing Terraform to provision AWS resources (ECS, Lambda, S3) with a focus on resilience and reproducibility. 
  • Build high-performance, maintainable Python applications characterized by strict dependency management, comprehensive error handling, and high test coverage. 
  • Own the complete data journey from ingestion to consumption, designing the API layers and backbones required to integrate GenAI into production. 
  • Drive the evolution of our stack by migrating legacy patterns to cloud-agnostic distributed systems that perform reliably at scale. 
What you'll bring

  • Proficiency in Python, with a data engineer's understanding of system performance and an ability to optimize logic. 
  • Proven experience with AWS (or an equivalent cloud provider), familiarity with IaC concepts such as Terraform, and a willingness to adopt these patterns. 
  • A track record of deploying reliable code using Git, Docker, and CI/CD pipelines to ensure automated delivery. 
  • A commitment to reliability, consistently implementing unit and functional tests to ensure the integrity of data pipelines. 
Nice to have

  • Experience deploying LLMs or working with RAG architectures. 
  • Experience with FastAPI or building data-serving APIs. 
  • Background in SRE or DevOps. 
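To give a flavor of the engineering style described above (comprehensive error handling and well-tested pipeline steps), here is a minimal, purely illustrative sketch; the record schema and function names are hypothetical, not part of Efficio's actual codebase:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    """One normalized row of ingested spend data (hypothetical schema)."""
    supplier: str
    spend: float


def normalize(raw: dict) -> Record:
    """Validate and normalize a raw row, failing loudly on malformed input."""
    try:
        supplier = str(raw["supplier"]).strip()
        spend = float(raw["spend"])
    except (KeyError, TypeError, ValueError) as exc:
        # Surface the offending row rather than silently dropping it.
        raise ValueError(f"malformed record: {raw!r}") from exc
    if not supplier or spend < 0:
        raise ValueError(f"invalid record: {raw!r}")
    return Record(supplier=supplier, spend=spend)
```

A unit test for a step like this would cover both the happy path and the failure mode, which is the kind of pipeline-integrity discipline the role calls for.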
Apply for this role

    Apply now