Case Studies.
Our Case Study database tracks 22,657 case studies in the global enterprise technology ecosystem.
Filters allow you to explore case studies quickly and efficiently.
Filters are available across Technologies (13), Industries (42), Functional Areas (13), Use Cases (71), Services (8), and Suppliers (191).
Accelerate: Media Broadcast
WRN Broadcast, an international broadcast managed services company, had grown 30% year-on-year over the last two years alone and more than doubled the number of television channels it serves. However, its underpowered storage was running slower than many of the systems accessing it, which was hurting productivity. To facilitate such rapid growth and plan for future expansion, WRN Broadcast needed to replace its existing storage solution with something more flexible, scalable, and absolutely secure, with the additional criterion that the vendor be a leader in its field to match WRN Broadcast’s own standards of excellence.
Accelerate: HD Broadcast
Fox Network’s Engineering & Operations team was tasked with designing a file-based workflow solution for SPEED. The requirements included moving from SD to HD content throughout the process, ingesting content directly into the Storage Area Network (SAN), creating low-res copies for easy editing, production, and advanced editing, and supporting Dalet transfers to and from video servers, all while enabling 75 concurrent Dalet users to go about their everyday tasks, from logging content to rundown preparation. The team quickly leveraged their experience from other, similar projects and turned to high-performance DDN® storage and Dalet for the key workflow components. The biggest challenge was timing: only a few weeks after signing the PO, the entire system had to be on air in a brand-new, purpose-built, 55,000-square-foot facility.
Accelerate: National Laboratories - North German Supercomputing Alliance (HLRN) Accelerates Scientific Breakthroughs with Peta-Scale Computing and DataDirect Networks High-Performance Storage
The North German Supercomputing Alliance (HLRN) provides scientists across seven North-German states with state-of-the-art storage and compute resources to accelerate scientific breakthroughs in the fields of physics, chemistry, fluid dynamics, engineering and the environment. The scientists, many of whom come from North German universities and other scientific institutions, have combined resources and funding from their respective states and the German federal government to create a powerful, distributed supercomputer system. HLRN’s ability to drive advanced scientific research requires the highest levels of compute power as well as high-bandwidth storage capacity. Given the wide range of data-intensive applications supported by the institute, HLRN sought a Big Data solution that could deliver a significant increase in storage capacity while scaling bandwidth and performance as needed. HLRN also needed to ensure that data could be accessed easily from different geographic locations.
Accelerate: Media Broadcast - Czech Television Speeds and Simplifies Fast-Growing Media Broadcast and Production Demands with Fully Integrated, Future-Proof DDN MEDIAScaler Storage Platform
Czech Television, the public television broadcaster in the Czech Republic, was facing a surge in digital video content and high-end video equipment, which spurred the need for fast, scalable, robust, and reliable storage. The move to 4K uncompressed DPX workflows created a major spike in storage requirements, as the size of native raw files quadrupled along with the performance demanded of the storage. Despite the massive data influx, Czech TV still had to guarantee flawless real-time workflows of concurrent UHD video streams across ingest, editing, transcoding, distribution, and archiving. It also needed to accommodate multiple teams working on the same large files at once to perform necessary color correction and to restore many old films in poor initial condition.
Building a VDI Environment Using VMware Horizon View
The Toyota Technical Development Corporation (TTDC) needed to implement IT Business Continuity Planning (BCP) in order to properly protect important data and ensure business continuity in the case of any disruptive event. As a result, TTDC decided to adopt a virtual desktop infrastructure (VDI) and faced the issues of migrating from the existing system and securing stress-free operation at a cost-effective price. Yasuhito Kurebayashi remarked that, to resolve the issues surrounding the introduction of the VDI, it was first necessary to migrate the storage system currently in operation and then expand its memory capacity and scalability as necessitated by the increased I/O speed.
Sleep Innovations’ Dream Comes True, Connecting Growing OEM and Retail Business Using Single EDI System
Sleep Innovations, a multimillion-dollar manufacturer of foam bedding and sleep products, was facing a challenge with its legacy enterprise resource planning (ERP) software. The software was not compatible with two of Sleep Innovations’ key customers, resulting in inefficient, and at times impossible, exchange of information. Furthermore, the company realized its existing ERP was inadequate for meeting its growing demand. To meet its customers’ growing requests, Sleep Innovations implemented Oracle’s JD Edwards EnterpriseOne ERP suite. However, its legacy EDI system could no longer manage all the transactions required in the company’s supply chain process, because it could not handle certain EDI transactions demanded by customers.
Accelerating Transformation with a Different Approach to Multi-Cloud Management
Expedient's clients were looking to move numerous applications into a cloud operating model that included a mix of applications and assets owned on premises, in a hosting data center, or in a hyperscale cloud. However, figuring out the optimal placement of workloads from their current environment into the right mix of cloud operating models was a complex challenge. Many clients were only 30 percent of the way to that destination, for reasons such as not knowing how many of their applications would fit a hyperscale cloud model and not envisioning other ways to reach their objectives. Expedient needed a common control plane that could unlock and provide access to client compute resources while also giving Expedient tools to improve service delivery. The main problems to address were making it easier for clients to provision into multiple clouds without adding complexity, providing insight into costs to ensure clients were getting the best value for their dollar, and providing governance and insight into security across clouds.
Monitoring Scales Alongside Blue State Digital’s Rapidly Growing Infrastructure
Blue State Digital (BSD) had a complex stack with multiple tiers of web services, databases, and load balancers that relied on varied systems including Linux, PHP, MySQL, RabbitMQ, and more. They were also in the process of migrating sections of their infrastructure to Amazon Web Services (AWS) to further support rapid infrastructure growth. As BSD moved to a more dynamic cloud environment that included automated server provisioning, manually updating server counts, instrumentation, and alerts was beginning to consume significant time and overhead. They needed a monitoring tool that would integrate easily with their existing technical setup and scale effortlessly alongside their infrastructure.
Monitoring a Complex and Elastically Scaling Cloud Infrastructure to Avoid Performance Issues
GameChanger runs a complex and elastically scaling cloud infrastructure hosted on Amazon Web Services (AWS) to support its mobile and web-based applications. This environment includes multiple databases and services, each of which requires monitoring. Taking data from tens of thousands of sources, transforming it into reader-friendly snippets, and then pushing it to fans in real time means GameChanger has to be ready to handle high traffic and heavy I/O, and to troubleshoot issues at a moment’s notice. GameChanger first built its own infrastructure monitoring tools in-house from the open-source components Graphite and StatsD. These homegrown monitoring tools got the job done but at a steep price: they required an extra $1,000 of AWS resources and more than half an FTE’s hours each month just to keep them running.
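As a rough illustration of the homegrown Graphite/StatsD approach described above, the sketch below shows how application code typically pushes custom metrics to a StatsD daemon, which aggregates them and flushes them to Graphite. This is a minimal sketch under general assumptions, not GameChanger's actual code; the metric names, host, port, and the process() helper are hypothetical.

```python
# Minimal sketch of client-side StatsD instrumentation (requires the "statsd"
# package). A StatsD daemon listening on UDP 8125 would aggregate these values
# and periodically flush them to Graphite. All names below are hypothetical.
import time

import statsd

# Fire-and-forget UDP client; cheap enough to call from hot code paths.
metrics = statsd.StatsClient(host="localhost", port=8125, prefix="app")


def process(event):
    """Placeholder for the real event-processing logic."""
    pass


def handle_score_update(event):
    metrics.incr("score_updates.received")  # counter: one more event seen
    start = time.time()
    process(event)
    # timer: processing latency in milliseconds
    metrics.timing("score_updates.process_ms", (time.time() - start) * 1000)


def report_queue_depth(depth):
    metrics.gauge("ingest.queue_depth", depth)  # gauge: point-in-time value
```

The recurring cost noted in the case study typically comes less from instrumentation like this than from operating the aggregation and storage backend (the StatsD daemon, Graphite's Carbon and Whisper components, and dashboards) as traffic grows.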
Taking Monitoring to the Next Level
Devsisters, a leading mobile gaming company, needed visibility into the health of their applications to meet the demands of a rapidly expanding user base. As the engineering team set out to monitor and ensure the reliability of their cloud-native systems, they initially adopted a handful of open source tools for their perceived low cost. However, these tools added complexity that made it increasingly difficult to pinpoint user-facing issues, and implementing and integrating them with the tech stack required a significant time investment from the engineering team, both upfront and continually. More importantly, Devsisters realized that these open source tools could not handle the scale and complexity of their modern environments.
How Infor Uses LogicMonitor to Monitor and Optimize its Massive AWS Deployment
Infor, a global enterprise software provider, has a massive deployment in Amazon Web Services (AWS) to support their purpose-built applications. They leverage a wide range of AWS services, including more than 14,000 EC2 instances, ECS, ELB, EBS, RDS, Elasticsearch, Auto Scaling, Lambda, ElastiCache, and more. However, maintaining visibility into the more than 50,000 AWS resources supporting their application solutions is a significant challenge. Before LogicMonitor, Infor relied on several monitoring tools, but primarily used an agent-based system that required a significant amount of ongoing maintenance and configuration. With more than 14,000 EC2 instances, upgrading and configuring agents required substantial staffing investments. The frequent need to upgrade agents only during appropriate maintenance windows for such a huge fleet was a constant challenge. On top of that, updating an existing agent or adding a new agent to a host carried the risk of affecting other production processes and services on that host.
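The maintenance burden described above is characteristic of agent-based monitoring at fleet scale. For contrast, the sketch below illustrates the general agentless alternative: enumerating EC2 instances and pulling CloudWatch metrics directly through the AWS APIs. It is a minimal illustration under stated assumptions, not LogicMonitor's implementation; the region, filter, and metric choice are placeholders.

```python
# Illustrative sketch of agentless, API-driven visibility into an EC2 fleet:
# enumerate running instances and read a CloudWatch metric instead of
# maintaining per-host agents. Assumes boto3 is installed and AWS credentials
# are configured; region and metric are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

REGION = "us-east-1"  # placeholder region

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)


def running_instance_ids():
    """Page through all running EC2 instances in the region."""
    ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids


def average_cpu(instance_id, minutes=10):
    """Fetch average CPUUtilization for one instance over the last N minutes."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(minutes=minutes),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None


if __name__ == "__main__":
    for iid in running_instance_ids()[:10]:  # sample a few instances
        print(iid, average_cpu(iid))
```

At the scale described (50,000+ resources), a production implementation would add discovery across regions and services, batching, and rate-limit handling rather than the simple per-instance polling shown here.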
Global Manufacturer Leverages nGenius Solution to Avert Performance Problems With New Service Rollout
The company, a leading European-based manufacturer of home and industrial appliances, was looking for ways to automate and optimize the management of their service delivery environment by predicting and preventing service-affecting problems, rather than reacting to user calls to the help desk. The company supports a large, highly dispersed organization with an extremely complex network that utilizes dozens of different WAN service providers. Any workflow interruption can back up or even shut down the manufacturing floor, impacting a well-tuned just-in-time inventory management process.
Visibility Across Hybrid Cloud Reduces Risk of Performance Issues
The company had embarked on a strategic initiative to move hundreds of application services to the public cloud. Because the availability and responsiveness of their business applications were so important, they developed a dual-cloud strategy with Amazon Web Services (AWS) and another cloud provider to host key business applications. The network infrastructure team had long advocated network performance management solutions to troubleshoot issues as they were reported, and it was their mandate to ensure the same or better visibility and performance management resources as services moved to the cloud. As their current troubleshooting tool and their existing packet flow switches were out of date and could not be extended to the cloud, it became imperative for the IT organization to find a new solution that would fulfill their need for visibility throughout the new hybrid cloud environment.
High-tech Company Reduces MTTR with Visibility into Virtualized & Cloud Environments
The high-tech company was experiencing issues with their unified communications deployment that were difficult to pinpoint and were impacting their user community. Several legacy packet brokers were outdated, and upgrading their features and capabilities would require complete replacement. The company works with several market-leading public cloud partners to provide solutions to customers, and in trying to deliver the highest-quality customer experience, they found themselves lacking visibility into the virtualized servers and the east-west traffic in these environments. Any one of these projects would require a commitment of time for vendor and solution analysis, as well as for budget justification and authorization. Any way they could combine these projects would accelerate the return on investment in time and cost.
Manufacturer Achieves Quality IT Performance at Plants with NETSCOUT
The company’s distributed, worldwide factories are critical to achieving overall production quotas, and they must perform as well as the main manufacturing facilities. Not surprisingly, the distributed plants rely heavily on IT technology, such as automated assembly lines that communicate instructions and status between equipment on the lines. In addition, not all the company’s remote facilities use centralized IT services; rather, they use locally hosted applications. These remote facilities are run on a “network in a box,” where VMware ESX servers are used to host manufacturing/production applications that are responsible for bar code scanning, printing, and communications as well as underlying network protocols like DNS, file services, and LDAP. These applications are critical to operating the production line and business processes at the plants. When communication between machines slows or stops, the symptoms may not be quickly apparent locally and can persist for a few hours, leading to delays on the lines, halting the lines completely, or creating issues requiring rework.
Driving Visibility for a Vehicle Manufacturer’s Multi-tiered Applications
The network operations (NetOps) group at this manufacturer had historically relied on a NetFlow-based network performance management tool to monitor their corporate data center. However, as the company pursued digital transformation and modernization, this tool had become ineffective at providing the level of detail necessary to truly find the source of issues. Slowdowns or degradations in the performance of their production, inventory, or customer resource management applications can have a drastic impact on targeted business goals. The NetOps group had several strategic requirements for their new service assurance solution: the ability to analyze activity across multi-tier services, including web, application, and database servers; visibility into Internet traffic; and the ability to evaluate the performance of their top 50 applications for manufacturing, billing, and customer-facing websites and portals. With the added complexity of voice, video, and business data services requiring performance assurance across a high-speed, global network, the NetOps team also had scalability and consolidated analysis and views at the top of their list of requirements.
Manufacturer Prioritizes Application Uptime for On-time Product Delivery
The division of a multi-billion-dollar manufacturing company was facing multiple concerns that needed to be addressed to achieve data center transformation. The immediate concern was ongoing issues with a mission-critical manufacturing application used for planning and scheduling materials, resources, and the manufacture of their final products. Any delay or downtime with the application could result in delays or missed deliveries, impacting revenue and reputation. However, they lacked the visibility necessary to proactively identify degradations in performance and quickly pinpoint the source of an issue. The company was also experiencing continually degraded performance of communications applications hosted in the Microsoft Azure cloud, impacting customers and employees based in 200 regional offices. The data center transformation this manufacturer had undergone over recent years had created a gap in their visibility into performance across their hybrid cloud network.
Visibility in Virtual Environments Accelerates Problem Resolution for Manufacturer
The company's distributed worldwide facilities are critical to the success of product design and delivery, so employees at each location need comparable access and application performance to do their jobs and achieve corporate goals and objectives. However, users in Europe consistently complained of slowdowns with heavily used applications, specifically Citrix, Oracle, and Microsoft Office 365, which negatively impacted their productivity and satisfaction with IT. The applications experiencing slowdowns ran in a recently virtualized server farm, and IT lacked visibility into the east-west traffic in those virtualized application servers, impeding their ability to identify the cause of the slowdowns. In addition to this specific issue, IT knew that their new CIO’s initiative of moving services to the cloud would mean they needed to maintain visibility into applications that were moved from their private data centers to the public cloud.
Large U.S. Based Appliance Manufacturer Relies on NETSCOUT to Ensure Reliability of Customer Ordering Process
The manufacturer faced challenges with outages in their customized order management application, which interfered with order processing, resulting in lost revenue and harm to customer experience. These service disruptions also triggered large financial penalties that had to be paid to the retailers; a single outage could easily trigger hundreds of thousands of dollars in penalties. Lacking remote-site monitoring of its custom applications, IT faced significant delays in detecting issues and in triaging them across many remote locations. The IT team had to rely on users to report issues, putting them under tremendous pressure to assure availability and performance.
Multi-national Oil and Gas Company Relies on NETSCOUT for Application Service Performance
The multinational oil and gas company was under significant pressure to do more with a reduced budget. The company recognized the vital role the IT organization must play in helping to improve the business and deliver business projects on time. The chief information officer (CIO) and IT identified several dozen key corporate applications that were deemed instrumental for delivering value to the business. Some of these applications were manufacturing, tax, and international finance services used globally across the company. Many were custom applications or web-based services, and/or relied on SAP and Citrix services for operation. In order to ensure these key application services were available around the clock and around the world, IT needed greater visibility into the virtualized private data centers that hosted them. Any delays or outages in these critical services would be harmful to various business units.
Extending Data Center Performance Monitoring with Software-Based Smart Visibility
The agency's mission success depends largely on quickly and successfully analyzing large volumes of data to safeguard national interests, government assets, and global citizens. To ensure the success of their mission, the agency developed and deployed an application suite that can quickly extract meaningful analysis from ever-growing data volumes. Wanting to more nimbly spin up compute and storage resources to support these mission-critical applications and newly developed apps, the agency made a strategic decision to embrace recent digital transformation innovations by consolidating existing data center operations and transitioning to a commercial cloud service provider (CSP) service. As part of this process, the agency employed a government-standard procurement process, which enabled deliberate selection of best-in-industry technology to install in the new CSP. As a result, they made an early decision to invest in Cisco Application Centric Infrastructure (ACI) Software-Defined Networking (SDN) architecture, which offered the benefits of application agility and data center automation. The move to an annual CSP service also involved the use of Amazon Web Services (AWS) and VMware hypervisor technologies that were new to the agency.
Manufacturer Improves Quality of Experience at Remote Offices
The manufacturer was facing frequent issues with their Multiprotocol Label Switching (MPLS) network, which connects their data center to over 50 remote manufacturing and warehouse locations. Their existing tools did not provide the necessary visibility into the network to identify what was consuming the bandwidth and causing intermittent disruptions. This lack of visibility was causing increased reports of application access issues from their remote locations, and the IT team was losing valuable time trying to pinpoint the cause. The IT team needed a solution that would provide visibility into their MPLS network and help them quickly and effectively troubleshoot problems.
International Stock Exchange Improves Performance of Trading Floor Applications
The international stock exchange was experiencing slowdowns and disruptions in several of their business applications, such as web services, Citrix, and backend databases. Simultaneously, issues were impacting the application services used on their trading floor, such as FIX and ITCH. The importance of a properly operating trading floor can’t be overstated, where a difference in response time of only milliseconds can mean millions in global currencies. The IT team had long been strong proponents of performance monitoring tools to help troubleshoot the source of such problems. Their existing tool offered limited application analysis, which, while helpful when they first started using it years ago, now lacked the depth of analysis, real-time monitoring, and key details needed to pinpoint the true root cause of these disruptions.
Assuring Successful Data Center, Co‑Lo, and Application Migrations With NETSCOUT
The company was undergoing a transformation of its data center operations, migrating some of its business applications to a cloud-based Nutanix TierPoint co-located (Co-lo) facility. However, they were experiencing visibility limitations that made it difficult to determine whether the migrated applications were performing reliably. They also had visibility limitations into their Cisco Unified Computing System (UCS) and Oracle services, which made it challenging to assure the performance of applications used by guards and personnel to manage on-site operations at correctional and residential facilities. Additionally, the company had been given several months to comply with National Institute of Standards and Technology (NIST) measurements, which required evidence of organizational conformity with safeguards relating to network monitoring for performance and availability, preventing cybersecurity breaches, and protecting client and employee healthcare and financial data.
Global Manufacturer Benefits from Quality Performance of DX Technologies
The multi-billion-dollar high-tech global manufacturing company was facing a challenge in delivering excellent performance management and monitoring with limited IT resources. The company was investing in the latest technologies and needed to fill gaps in visibility into their complex, evolving network. The company had increased their corporate network from 1 Gbps to 10 Gbps in many locations, implemented several Software-as-a-Service applications, including Microsoft® Office 365 and Skype for Business, and upgraded their voice and video services to Polycom. However, issues arose that impacted users and required IT intervention to pinpoint and correct. The resources available to support monitoring and managing their current tools had declined, and changes made to those tools broke data collection, causing gaps in visibility.
Global Manufacturing Company Accelerates Digital Transformation
The company was in the process of implementing several digital transformation initiatives to optimize business and manufacturing processes as well as internal and external communication. These changes drove the need for data center modernization, including upgrading to 40G, migrating to hybrid cloud deployments, and co-locating a portion of their network and applications at a third-party site. To support these changes, the IT team needed to deploy a new architecture using a new provisioning methodology, yet they were still confronted with ongoing issues affecting users. A few major issues were consuming resources and had the potential to impact the business. Of primary concern at the executive level were ongoing contact center voice issues with their VoIP service. Although problems were reported regularly, they were not consistent from site to site and not easily resolved.
Leading Manufacturer’s SD-WAN Conversion Assured By NETSCOUT Virtual Visibility
The company had experienced exponential growth in its remote office network, expanding from 50 locations to hundreds. To efficiently manage business service delivery to this large-scale remote office network using centralized IT resources, company leadership decided to convert from their hardware-based wide area network (WAN) to a software-defined WAN (SD-WAN). The identified SD-WAN solution involved numerous technology vendors: VMware, providing VeloCloud SD-WAN, the VMware ESXi hypervisor, vRealize Network Insight, and VeloCloud Orchestrator; Avaya Voice over IP (VoIP) services running at the remote offices; and Universal CPE (uCPE) and Virtual Network Function (VNF) vendor solutions. During standard pre-deployment activities, the IT team began testing the solution, including how the transformed SD-WAN services would support Avaya VoIP business services. The test results were problematic, with the VoIP technology not performing reliably over SD-WAN.
Global Manufacturer Improves Visibility in Co-Lo, Cloud, and Virtual Environments With NETSCOUT
The company’s multi-continent business operations footprint includes multiple data center facilities in Europe, Asia, and America. When the pandemic arrived, strategic projects involving data center transformation and multi-cloud migration were well underway, and IT had to advance those strategic project activities while facing unprecedented remote business service delivery demands. On the strategic side, the IT team was beginning to transition business applications to the cloud, selecting Microsoft Azure as one of their solution providers, and select data center services were also being migrated to Equinix co-located (Co-lo) facilities. Tactically, business operations success suddenly became reliant on the performance of the company’s virtual private network (VPN). As IT prepared to advance on these strategic and tactical challenges, they quickly found their progress impacted by a lack of visibility into: an SAP Customer Relationship Management (CRM) application hosted in the company data center environment; Microsoft Azure cloud-based services and applications; Microsoft Skype for Business unified communications (UC) voice and video services; Microsoft Office 365 business applications; remote applications depending on reliable access to VPN services; and web traffic crucial to web-based application performance. Existing monitoring toolsets were coming up short, which made it difficult to isolate issues and accurately assign IT ownership. In such scenarios, network teams often bear the brunt of finger-pointing, and that was the case here.
Global Gaming Company Leverages NETSCOUT’s PFS Technology to Secure and Monitor Gamers’ On‑Line Experience
The gaming company has experienced an increase in business during the pandemic, on top of its natural growth. Some of its data centers generate tens of millions of packets per second that need to be monitored by generating NetFlow data and forwarding it to multiple security tools, as well as to the custom network and application performance tools developed in house. Current monitoring solutions did not provide complete visibility into all traffic, and appliances with much higher traffic capacity than the existing solution were required for NetFlow generation. High-volume participation in multi-player games can draw millions of unique customers, and the IT team provides intensive live support on the customer-facing network, so the packet broker solution needed to scale for many simultaneous staff users as well as across data center locations experiencing different growth rates on the customer side.
Bio-Tech Company Accelerates Roll-Out of New Data Center Architecture: NETSCOUT Solutions Provide Visibility to Ensure Seamless End-User Experience
The biotech company was facing high costs and a lack of control due to dependence on 3rd-party-hosted applications. The decision was made to implement a performance hub project for their hundreds of remote locations, using Cisco® Secure Agile Exchange (SAE), which enables enterprises to interconnect users to applications quickly and securely by virtualizing the network edge (DMZ) and extending it to colocation centers. All the 3rd-party applications, as well as some applications hosted at internal data centers, were brought “in-house” to 10-15 colocation facilities, and all the company’s remote locations would point to these colocations. A key goal of the project was to ensure the SaaS and internal applications remained available to end users after the move to the colocation centers. IT knew they needed visibility from the user perspective to ensure the new approach was working as designed, with high-quality application availability and performance.