AI Engineer
About Us:
Our Research Lab is dedicated to pushing the operational, infrastructural, and intellectual
boundaries of cloud computing through interdisciplinary teams that work across the full
stack of Computer Science (Cybersecurity, IoT, Cloud-Native Software Engineering,
Artificial Intelligence, Networking, and Data Center Operations).
Locations:
Position Overview:
We are seeking passionate and creative AI Engineers with 1 to 2 years of experience. This
role is primarily focused on leveraging Artificial Intelligence to enhance our research projects
and cloud-based products.
You will have the opportunity to work with a talented, interdisciplinary team, contribute to
pioneering research, and advance your expertise in generative AI within the context of
cloud computing.
• Programming Languages:
◦ Proficiency in Python.
◦ Knowledge of core libraries: scikit-learn, pandas, NumPy, and Matplotlib.
• Generative AI:
• Prototyping:
◦ Ability to rapidly prototype solutions using existing LLM endpoints (OpenAI, etc.).
• Collaborative Tools:
Key Responsibilities:
• Design, implement, and fine-tune deep learning and machine learning models to support
our research initiatives and meet specific technological objectives.
• Stay on top of the latest advancements in AI, machine learning, and generative models
to apply cutting-edge techniques to our research and development efforts.
• Manage documentation, reporting, and presentation of research findings and project
progress.
Base Qualifications:
• Proficiency with machine learning and deep learning libraries (PyTorch, TensorFlow).
• Strong problem-solving skills and a keen interest in generative AI and its applications.
• Ability to tackle novel problem domains, explore different research directions, design
experiments around them, and devise and execute development plans quickly.
Additional Qualifications:
• Experience rapidly prototyping with existing LLM endpoints (OpenAI, Vertex AI,
Azure ML, AWS SageMaker) and using cloud platforms (GCP, Azure, AWS) to develop
and deploy AI solutions.
• Knowledge of APIs and microservices for AI model integration into cloud-based research
projects.
• Knowledge of MLOps frameworks (MLflow) and the cloud-native deployment ecosystem
(Kubernetes).