
How Elevatus Uses Neptune to Check Experiment Results in 1 Minute

Customer Company Size
SME
Region
  • Middle East
Country
  • Saudi Arabia
Product
  • Neptune
  • Google Cloud Storage
  • PyTorch Lightning
  • Optuna
  • Kubernetes
Tech Stack
  • Google Cloud
  • Oracle Cloud
  • Terraform
  • Python
  • FastAPI
Implementation Scale
  • Enterprise-wide Deployment
Impact Metrics
  • Digital Expertise
  • Innovation Output
  • Productivity Improvements
Technology Category
  • Analytics & Modeling - Machine Learning
  • Analytics & Modeling - Predictive Analytics
  • Platform as a Service (PaaS) - Data Management Platforms
Applicable Industries
  • Software
  • Professional Service
Applicable Functions
  • Product Research & Development
  • Business Operation
Use Cases
  • Machine Condition Monitoring
  • Remote Asset Management
Services
  • Software Design & Engineering Services
  • System Integration
About The Customer
Elevatus is a talent management platform headquartered in Riyadh, Saudi Arabia, covering the entire pre-HR cycle of recruitment, hiring, and onboarding. The company operates within the Software & Technology industry and employs between 11 and 50 people. Elevatus is known for its innovative approach to simplifying hiring processes through intelligent systems. The platform's Video Assessment product allows applicants to apply via a one-way interview, reflecting its commitment to reducing complexity in recruitment. The company's Innovation department plays a central role in developing systems tailored to client needs, with an emphasis on seamless integration and user-friendly solutions. Elevatus's dedication to innovation and efficiency is evident in how it manages machine learning models and optimizes recruitment processes.
The Challenge
The team at Elevatus faced significant challenges in managing their machine learning models due to the lack of an observability layer. Without one, the team had to reanalyze data and reimplement workflows whenever stakeholders requested evidence about the language models' behavior. They recognized that a mature observability layer was necessary to support large-scale training jobs and to involve stakeholders in auditing their AI systems. They sought a model tracking solution that could provide historical performance tracking and integrate well with their existing tools, such as PyTorch Lightning and Optuna. The goal was to improve model performance tracking and compute utilization, enabling quick iteration across experiments.
The Solution
To address these challenges, Elevatus integrated Neptune into their workflow as a model tracking solution. The integration allowed the team to monitor both the functional and operational behavior of each training-script run. Neptune's configuration was embedded directly in their YAML manifest file, enabling seamless tracking of model performance over time. The solution gave the team a centralized view, supporting better model evaluation, compute utilization optimization, and rapid iteration across experiments. With Neptune, the team could scale experiment monitoring as dataset sizes grew without hitting computational bottlenecks. Neptune's customizable user interface let the team design their own metrics and dashboards, improving their ability to visualize training and conduct in-depth investigations. This flexibility enabled significantly better results, with Neptune serving as a crucial component of their machine learning operations.
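The case study does not reproduce Elevatus's actual manifest, but a YAML file wiring a training job to an experiment tracker along the lines described could look like the following sketch. Every key name, project path, and value here is an illustrative assumption, not the team's real configuration:

```yaml
# Hypothetical experiment manifest (all names are illustrative)
experiment:
  name: candidate-scoring
  framework: pytorch-lightning
tracking:
  provider: neptune
  project: elevatus/innovation      # hypothetical workspace/project path
  api_token_env: NEPTUNE_API_TOKEN  # token resolved from the environment, not stored here
  log:
    - train/loss
    - val/mse
    - val/mae
tuning:
  engine: optuna
  n_trials: 50
```

Keeping tracking settings in the same manifest as the training job means every run is reproducible from a single file, which fits the "thorough training history" the team describes.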
Operational Impact
  • The integration of Neptune allowed the Elevatus team to optimize compute utilization and iterate quickly across experiments, enhancing their ability to evaluate models effectively.
  • Neptune provided a shared, centralized view for the team, enabling them to address issues, debug, and understand model performance with greater clarity.
  • The customizable user interface of Neptune empowered the team to design their own metrics and dashboards, facilitating advanced analytics and in-depth investigations.
  • The observability layer provided by Neptune allowed the team to scale experiment monitoring as dataset sizes grew, ensuring successful training on tera-scale data with minimal effort.
  • Neptune's integration into the workflow enabled the team to establish a thorough training history, generate charts, and perform advanced analytics, leading to significantly better results.
Quantitative Benefit
  • Reduced model metadata retrieval time to under one minute.
  • Enabled 15-minute iteration cycles for rapid model experimentation.
  • Achieved MSE of ~0.043 and MAE of ~0.16 across trials.
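For context, the MSE and MAE figures cited above are the standard regression error metrics (mean squared error and mean absolute error). A minimal, self-contained Python definition of both, with illustrative sample values that are not Elevatus's actual trial data:

```python
def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative example only (not real trial data); every residual is +/-0.1
y_true = [0.0, 1.0, 0.5, 0.2]
y_pred = [0.1, 0.9, 0.6, 0.1]
print(round(mse(y_true, y_pred), 3))  # 0.01
print(round(mae(y_true, y_pred), 3))  # 0.1
```

Because MSE squares the residuals, it penalizes large errors more heavily than MAE, which is why both are typically tracked side by side across tuning trials.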
