Personetics - Applied AI Engineer
Personetics is leading the way in Cognitive Banking, using AI to help banks anticipate customer needs, surface actionable insights, and deliver personalized financial guidance. Our platform analyzes real-time transactional data so financial institutions can support their customers proactively, helping those customers manage their finances and reach their goals.
We are proud to work with top-tier banks across 35 global markets, reaching over 150 million users every month. Our teams operate from major cities including New York, London, Tel Aviv, Singapore, and São Paulo.
About the Role
We are seeking a skilled Applied AI Engineer with a strong engineering mindset to help bring advanced AI models into production. In this position, you’ll work at the intersection of AI research and real-world systems, ensuring that models move smoothly from development to scalable, reliable production environments.
You’ll collaborate with engineers, data scientists, and product managers as part of our R&D team, playing a key role in validating, optimizing, deploying, and maintaining AI systems that provide long-term value.
If you thrive on optimizing performance, building robust solutions, and deploying AI at scale, this role is a great fit.
Responsibilities
AI System Design & Deployment
- Lead the transition of AI models from proof-of-concept to production-ready implementations.
- Ensure AI systems meet product requirements, scalability demands, and architectural standards.
- Collaborate with DevOps and infrastructure teams to deploy models in robust, scalable cloud environments.
AI Model Validation & Optimization
- Work with the Data Science team to review and validate AI models, assess assumptions, and test methodologies.
- Optimize models for efficiency, scalability, and performance in production settings.
- Integrate new features, improve model behavior, and implement versioning and drift detection mechanisms.
Requirements
- 3–5 years of experience in ML Engineering, AI deployment, or MLOps roles.
- Strong hands-on experience optimizing AI models for large-scale or real-time environments.
- Proficiency with ML libraries such as TensorFlow, PyTorch, or Scikit-Learn.
- Production-level coding experience in Python, Java, or other deployment-friendly languages.
- Experience deploying AI systems in cloud environments (AWS, Azure, or GCP).
- Familiarity with containerization tools like Docker and orchestration frameworks like Kubernetes.
- Strong analytical and troubleshooting skills, with a focus on long-term reliability and performance.
Nice to Have
- Experience with vector databases and generative AI APIs such as those from OpenAI or Anthropic.
- Understanding of MLOps practices such as CI/CD for ML, training pipelines, and real-time inference.
- Knowledge of AI compliance, governance, security, and responsible AI frameworks.