Case Studies
Our Case Study database tracks 22,657 case studies in the global enterprise technology ecosystem.
Filters allow you to explore case studies quickly and efficiently.
How HNI Drives Manufacturing Digital Transformation with Data Pipelines
HNI Corporation, a global leader in workplace furnishings and residential building products, was midway through a planned five-year transformation: shifting from seasonal bulk orders placed by large distributors to customized orders from dealers, individuals, and enterprises. This shift required refactoring supply chain management by taking control of the data flowing from ordering, ERP, and fulfillment systems. The COVID-19 pandemic, with its disrupted office and work-from-home environments, forced HNI to accelerate changes to how it does business and demanded a solution with flexibility and speed as cornerstones of the transformation. HNI's data science and analytics team needed a platform that could scale with them, minimize cross-functional dependencies, reduce time to pipeline production, and let them focus on pipeline logic rather than infrastructure.
Lumiata Case Study: Intelligent Pipeline Orchestration & Automation with Ascend
Lumiata, a company focused on making healthcare smarter, was facing challenges with its data pipelines. The team used a mix of Apache Airflow, Apache Spark, Python, and over 100,000 lines of custom code to create a Curated Table, the basis on which the data science team develops the Lumiata Insights. With increasing volumes of client data and tighter SLA requirements, however, the process began to strain. Onboarding each new client required bespoke development, and the overextended data engineering team was responsible not only for this development but also for maintaining and monitoring the pipelines and the health and performance of the underlying Apache Spark jobs. The data science team needed room for experimentation and iteration to develop the Lumiata Insights, yet was completely dependent on data engineering for any adjustments to the Curated Table. The whole process could take six weeks or more and imposed a heavy maintenance burden to keep everything running. As the company looked to take on more clients with its existing team, it needed a new approach.
Supercharging Time to Analysis at Harry’s
Harry’s, a leader in the direct-to-consumer industry, faced an explosion of new, disparate data feeds driven by the diversity of its product lines, retail outlets, and customers. The data science team needed to expedite ingesting, transforming, and delivering these feeds into a robust shared data model connecting all brand information across every retail delivery model. The retail analytics team needed a faster, simpler way to stand up new analytics, and a platform to ingest and transform these disparate feeds in a low-code sandbox environment. With an ecosystem of mostly homegrown and open-source solutions resting on a heavily burdened data engineering team, it could take weeks to connect new, critical retail data sources to Looker, the company’s business intelligence and analytics tool of choice.
Styling Data Pipelines for Analytics Success at Mayvenn
Mayvenn, a company that provides high-quality beauty products and aims to connect customers with the right stylists, relies heavily on data for its operations. The company moves a variety of data, including ad and marketing spend, email, text, and customer service data, from Amazon S3 to Amazon Redshift using Python for analysis, and into Looker for reporting. However, its previous data orchestration tool, Alooma, hindered fast iteration of ETLT. The Mayvenn data team often found itself blocked on projects by its dependency on the engineering team, whose queue was usually full.
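An S3-to-Redshift path like the one Mayvenn describes is typically driven by Redshift's COPY command issued from Python. The sketch below is a minimal, hypothetical illustration of assembling such a load statement; the table name, bucket, and IAM role are invented for the example, and a real pipeline would also handle credentials, manifests, and error handling.

```python
def build_copy_statement(table: str, s3_uri: str, iam_role: str,
                         fmt: str = "CSV", region: str = "us-east-1") -> str:
    """Assemble a Redshift COPY statement that loads a staged S3 file.

    All identifiers passed in are illustrative; the statement would be
    executed against Redshift via a driver such as psycopg2.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_uri}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt} "
        f"REGION '{region}';"
    )

# Hypothetical example: load staged marketing-spend data into Redshift.
sql = build_copy_statement(
    table="analytics.marketing_spend",
    s3_uri="s3://example-bucket/marketing/2021-06-01.csv",
    iam_role="arn:aws:iam::123456789012:role/RedshiftLoadRole",
)
print(sql)
```

Generating the statement separately from executing it keeps the load logic easy to test without a live cluster.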
Reading from a Single Source of Data Truth with the New York Post
The New York Post, a highly data-driven publisher, faced the challenge of accelerating time-to-market for internal reporting, financial, and other data initiatives. Google's upcoming crackdown on third-party cookie data in the Chrome browser heightened the need to drive more data-driven personalization and engagement across the New York Post sites. The team required a faster way to ingest, aggregate, transform, and write out a variety of critical new data feeds to meet various business demands and requirements.