DataDirect Networks

Overview
HQ Location: United States
Year Founded: 1998
Company Type: Private
Revenue: $100M-$1B
Employees: 201-1,000
Company Description
DDN is the world’s largest private data storage company and the leading provider of intelligent technology and infrastructure solutions for data-centric organizations.
Over the last two decades, DDN has established itself as the data management provider of choice for over 11,000 enterprises, government, and public-sector customers, including many of the world’s leading financial services firms, life science organizations, manufacturing and energy companies, research facilities, and web and cloud service providers.
Case Studies
Case Study
ACCELERATE: ACADEMIC RESEARCH - Purdue University Delivers 900% Faster Access to Data, Delivering Accelerated Time to Results for 1,000 Researchers and 300 Projects with Powerful Storage and Advanced Caching Technology from DDN
Purdue University, a global leader in research, discovery, and innovation, faced the challenge of meeting the needs of up to 1,000 researchers working on several hundred projects. This drove the decision to deploy one large, centralized data repository powered by high-performance storage. The variety, velocity, and volume of big data demanded highly versatile, scalable storage, and the diversity of workloads required the flexibility to accommodate both very large parallel I/O jobs and millions of small, random read requests without imposing performance penalties on any group of users. The university initially looked at the needs of several top research areas, including computational nanotechnologies, aeronautical and astronautical engineering, mechanical engineering, genomics, and structural biology, as well as several large projects in the life sciences disciplines.
Case Study
Kollins Communications Speeds High-End Video Production to Help Customers Like Samsung Blaze a New Trail in Retail Marketing
Kollins Communications, a company that provides all-in-one retail marketing solutions, faced an extremely tight deadline to produce more than 20 4K videos in four weeks for Samsung Electronics USA. The company needed to reduce the time required to transfer files between editing stations and to stream three uncompressed 4K video files simultaneously for a video display. To meet ongoing demands for fast project turnarounds, Kollins employs leading-edge technologies to expedite content-creation workflows while keeping pace with an explosion of digital content. The company needed a high-performance storage hub that could expedite the flow of content among a mix of OS X, Windows, and Linux workstations. Extremely fast performance was needed to support Autodesk Flame, high-end 3D visual effects software running on Linux. Additionally, fast and scalable storage was necessary to support Autodesk 3ds Max as well as Adobe After Effects and Premiere Pro software running on OS X and Windows.
Case Study
ACCELERATE: MEDIA BROADCAST - Starz Accelerates Digital Media Workflows with DDN Storage Solutions
Starz, a leading provider of premium subscription video programming, was facing challenges with the legacy HP Enterprise Virtual Array (EVA) 8000s that powered the company’s complete digital media workflow system. The system lacked scalability, performance, and reliability, while the expansion of Starz’s programming was driving a dramatic increase in both capacity and performance requirements. The legacy storage infrastructure was causing administrative headaches and jeopardizing the whole file system, and its raw performance limitations caused both transcode and encode processes to fail, leading to extensive manual rework and wasted time.
Case Study
Accelerating Life Sciences
Public Health England (PHE) was established to consolidate health specialists from over 70 organisations into a single public health service. PHE's mission is to protect and improve the nation's health while reducing health inequalities. PHE's MS bioinformatics unit has been involved in the establishment of a Next-Generation Sequencing (NGS) Service that provides the means to sequence the whole genomes of pathogens. This sequence can be used to characterise and type pathogens, which in turn can be used, for example, to identify and monitor outbreaks locally and nationally. The same sequence may also help scientists better understand the evolution of bacteria and viruses or predict trends in the patterns of antibiotic resistance. To better support its NGS analysis service, PHE MS sought a High-Performance Computing (HPC) system that would enable simultaneous processing of hundreds of bacteria samples received from hospitals and other stakeholders.
Case Study
ACCELERATE: NATIONAL LABORATORIES - DDN and SGI Deliver Advanced Image Acquisition, Storage, Retrieval, and Processing Enabling Real-Time Intelligence on the Battlefield in the Naval Research Laboratory’s Large Data Joint Capabilities Technology Demonstration
The military has long used satellite, manned airborne, and Unmanned Aerial Vehicle (UAV) photography to gain insight into the battlefield. However, the challenge lies in obtaining rapid access to the information being collected, sharing it among analysts, planners, and decision makers, and using it to provide a decisive advantage. As sensors increase in number and analysis is performed in multiple spectrums, the amount of data being generated has grown tremendously, requiring new technologies to retrieve, store, move, and make sense of it. The Large Data Joint Capability Technology Demonstration (Large Data JCTD) project at NRL is designed to meet this challenge. The project requires handling massive individual data files and total data sets; even in trials, the data would reach nearly a petabyte per site and require ingest and output rates exceeding 3 GB/s. As data is acquired, it may need to be automatically replicated among the Large Data JCTD sites, and at the data rates required by the project, this presented challenges in both WAN transport and encryption technologies.
Case Study
ACCELERATE: LIFE SCIENCES - Racing to Find a Cure, TGen Uses DDN® Storage to Unravel the Genetic Components of Disease, Faster
TGen, a leading genomics research institute, was facing challenges with its legacy NAS system, which was underpowered and unable to handle concurrent jobs without dragging performance below acceptable levels. Scaling NAS performance was expensive and time-consuming, and accelerating data growth was making the existing infrastructure untenable. Genomics, the science of extracting understanding from an organism’s genome, is a complex and data-intensive task. The year-on-year improvements in the volume and accuracy of data generated by gene-sequencing instruments are staggering: as these machines become more productive, the price of gene sequencing, assembly, and analysis drops, enabling new diagnostic methods and disease treatments. All this genetic data, however, has forced a sea change in how raw reads are assembled into meaningful datasets so that analysis can take place.
Case Study
Accelerate: Media Broadcast
Nice Shoes, a full-service, artist-driven design, animation, visual effects and color grading studio, was facing challenges due to the explosive growth of resolutions, file sizes and performance requirements. The company was dealing with tight deadlines, complex workflows and multiple stakeholders. Their previous storage system was inefficient, leading to wasted creative time, unpredictable performance, and high cost and complexity of managing multiple systems. The company was also dealing with the increasing demand to deliver content in multiple formats to multiple devices, which traditional storage systems were unable to handle effectively.
Case Study
ACCELERATE: LIFE SCIENCES - Van Andel Research Institute Optimizes HPC Pipeline, Driving Research Discoveries and New Drug Therapies with End-to-End DDN Storage Solution
Van Andel Research Institute (VARI) was facing several challenges related to its storage infrastructure. The institute had fragmented storage pools for research and instrumentation data, which were costly, cumbersome, and lacked sufficient safeguards. The addition of high-powered cryo-electron microscopy was anticipated to quadruple existing storage requirements, and there was an ever-increasing need to ingest, process, store, archive, and share research data. A parallel file system was needed to address storage needs while providing enterprise data management capabilities. The organization began a thorough evaluation of next-generation HPC and storage solutions, including cluster and cloud computing as well as parallel file and object storage.
Case Study
Accelerate: Genomics Research - The Wellcome Trust Sanger Institute Relies on Scalable, High-Performance Storage from DDN® to Reduce Global Health Burden
The Wellcome Trust Sanger Institute, a genomic research center, was facing challenges in managing the surge in data volume and computational analysis driven by major advances in sequencing technology. The institute's diverse research community, encompassing over 2,000 scientists worldwide, required a robust IT infrastructure with large-scale, high-throughput performance. Unpredictable data growth made it difficult to scale storage sufficiently without overburdening the Institute’s existing 10-GigE network infrastructure or encroaching beyond its one-petabyte-per-floor-tile rule in the space-constrained data center. The institute faced a classic “Big Data” problem that was further exacerbated whenever new advances in sequencer technology produced more sequencing data faster than ever before.
Case Study
Accelerate: HD Broadcast
Fox Network’s Engineering & Operations team was tasked with designing a file-based workflow solution for SPEED. The requirements included moving from SD to HD content throughout the process, ingesting content directly into the Storage Area Network (SAN), creating low-res copies for easy editing, production, and advanced editing, and supporting Dalet transfers to and from video servers, all while enabling 75 concurrent Dalet users to go about their everyday tasks, from logging content to rundown preparation. The team leveraged its experience from other, similar projects and turned to high-performance DDN® storage and Dalet for the key workflow components. The biggest challenge was timing: only a few weeks after the PO was signed, the entire system had to be on air in a brand-new, purpose-built, 55,000-square-foot facility.
Case Study
ACCELERATE: ACADEMIC RESEARCH - Researching the Genetic Basis of Behavior, Cognition and Affect, USC Needed a High-Performance, Scalable Infrastructure to Support Next-Gen Genomics Sequencing
The laboratory of Dr. James Knowles at the Zilkha Neurogenetic Institute, Keck School of Medicine at the University of Southern California (USC), which focuses on understanding the genetic basis of behavior, cognition, and affect, was struggling with a legacy SAN storage server that was nearing capacity and could not keep up with data access requirements. Storage throughput was hobbled by the network and by the performance limitations of NFS. The lab needed to sequence 1,400 full human genomes to support its ongoing studies, work that would generate several terabytes of raw data per day that needed to be transferred, inspected, and aligned to the human genome. The legacy storage system could only output enough data to the CPU cluster to run a single instance of the Burrows-Wheeler Aligner (BWA) under the Pegasus MPI workflow. Furthermore, the lab could only upload data to that system at 30-50 MB/second, nowhere near the 100 MB/second peak theoretical capacity of the GbE network. This bottleneck was not just an inconvenience; it was slowing time to discovery. The lab needed a new storage solution that could serve in excess of a gigabyte per second of throughput and scale to petabytes in a single namespace.
Case Study
ACCELERATE: LIFE SCIENCES - Institute for Computational Biomedicine at Weill Cornell Medical College Implements Scalable Solution for Genomics and Epigenomics Research
The Institute for Computational Biomedicine (ICB) at Weill Cornell Medical College was facing a challenge as it expanded its neuroscience, epigenomics, and proteomics imaging facilities and brought online more genetic sequencers. The legacy methodology of organically adding autonomous storage pools was no longer capable of meeting the computational needs of the researchers. The challenge was transitioning from the legacy method of adding one dedicated RAID array at a time to something scalable that could meet the institute’s storage needs for years to come. As data ingest rates continued to rise, the facility needed a more robust, scalable, and sustainable storage approach.
Case Study
ACCELERATE: LIFE SCIENCES - University of Miami’s Center for Computational Science Correlates Viruses with Gastrointestinal Cancers for The Cancer Genome Atlas 400% Faster Using DDN Storage
The Center for Computational Science (CCS) at the University of Miami is one of the largest centralized academic cyberinfrastructures in the country, supporting over 2,000 researchers, faculty, staff, and students across multiple disciplines on diverse and interdisciplinary projects requiring high-performance computing (HPC) resources. The center's guiding principle is to manage the entire data lifecycle as seamlessly as possible to streamline research workflows. However, the center faced several challenges. The diverse, interdisciplinary research projects required massive compute and storage power as well as integrated data lifecycle movement and management. The explosion of next-generation sequencing had a major impact on compute and storage demands, as it is now possible to produce more and larger datasets, which often create processing bottlenecks. The heavy I/O required to create four billion reads from one genome in a couple of days only intensifies when the data from those reads must be managed and analyzed. The center needed a powerful file system flexible enough to handle very large parallel jobs as well as smaller, shorter serial jobs.
Case Study
British Antarctic Survey Navigates Surge of Big Data Scientific Research Requirements with High-Density, Scalable DDN Hybrid Flash Storage
The British Antarctic Survey (BAS) was facing a surge in data storage requirements due to its participation in a major global initiative and increased use of scientific modeling. The organization was collecting 10 times the amount of data it gathered just 10 years ago, with the rate of change increasing dramatically. This put pressure on their data collection and storage systems. In addition, BAS became part of a major global initiative, called Super Dual Auroral Radar Network (SuperDARN), which required a major storage expansion. The challenge was finding a solution that could meet the organization’s requirements for high-capacity, high-performance storage within its budget parameters.
Case Study
Ringling College of Art and Design Accelerates Student Creativity with High-Performance Computing and Powerful, Scalable DDN® Storage
Ringling College of Art and Design faced explosive data growth caused by high-resolution, digital file-based workflows, which created a demand for future-proof storage that could scale on demand. The college wanted technology to serve art as a tool, so that students could be creative without having to manage technology or deal with interruptions to their work. A robust, reliable, and transparent storage infrastructure was required to give students seamless access to all their digital assets, regardless of platform or location, and the college wanted to avoid the management burden, access difficulty, cost, and complexity of siloed storage.
Case Study
Accelerate: Media & Entertainment
MLB Network, a 24/7 TV network for baseball fans, was facing challenges in managing its vast baseball video archive. The network required a high-performance disk cache to support tape migration for the archive while accommodating 7,000 hours of new content ingested weekly. It also needed to simplify complex workflows to ensure seamless support for up to 40 concurrent post-production jobs, and sought a technology that could suit the needs of two sports TV networks. The immediate challenge was moving LTO-4 content into a disk cache and then rewriting that content onto T10KD tapes, while simultaneously recording and archiving new footage onto the T10KD platform.
Case Study
Changing Research with a Forward-Looking AI and Big Data Computing Infrastructure
Tokyo Institute of Technology (Tokyo Tech) was faced with the challenge of speeding up data access times in parallel with continually improving algorithms that interact with data subsystems. They aimed to achieve this while maintaining optimal power consumption and system efficiency. The institution sought to break away from the conventions of the world's top supercomputers by incorporating elements and design points from containerization, cloud, artificial intelligence (AI), and Big Data.
Case Study
Accelerate: Media Entertainment
Deluxe Entertainment Services Group, a leading provider of state-of-the-art services and technologies for the global digital media and entertainment industry, faced several challenges. The company was experiencing increasing demand to access and share high-resolution post-production workflow content on a global scale, and the massive storage requirements and rapid growth created a surge of Big Data. The company’s legacy tape-based archives proved insufficient for meeting fast-turnaround and high-performance throughput expectations. Deluxe Creative Services sought an improved architectural underpinning that would enable selective global replication of data for easy, fast access by remote users. The company’s traditional model of local SAN storage wouldn’t scale out to a global footprint, and using NAS storage to move content from New York to London would also be too slow, leading to performance bottlenecks.
Case Study
Accelerate: Video Surveillance
Anyang City Hall in South Korea launched a major initiative to tackle urban street crime, slash traffic congestion, and beef up the city’s disaster response capabilities. The first step was to combine the city’s crime prevention and disaster management systems into a single infrastructure, deployed and managed in the city’s brand new Integrated Operations Briefing room. The challenge was to deploy a massive intelligent video surveillance system across Anyang’s entire 23-square-mile metropolitan area, with hundreds of high-definition digital video cameras linked to scores of servers running network video recording software and sophisticated pattern recognition analytics. The system links all cameras to the video management infrastructure in the Integrated Operations Briefing room at Anyang Police Department headquarters, complementing and enhancing the city’s police presence and rapid response capabilities. Anyang City Hall uses the same surveillance infrastructure for traffic management and forest fire prevention.
Case Study
Accelerate: Media Broadcast
Maple Leaf Sports & Entertainment (MLSE) wanted to create a central media repository to store and archive all of its digital broadcasting workflow, from production and ingest through editing, post-production and play-out. The storage system needed to support a wide variety of devices and software packages, including ingest devices, editing stations running Apple Final Cut Pro and Avid, as well as play-out servers from Harris. The solution also needed to integrate with a Digital Asset Management (DAM) solution which would be implemented during the second phase of the project. The solution not only needed to enable full connectivity and content sharing between heterogeneous systems, but allow them to work simultaneously, at full speed, without dropping frames or causing delays.
Case Study
Accelerate: Media Broadcast
WRN Broadcast, an international broadcast managed services company, had grown 30% year-on-year over the previous two years and more than doubled the number of television channels it serves. Its underpowered storage, however, was running slower than many of the systems accessing it, which was hurting productivity. To facilitate such rapid growth and plan for future expansion, WRN Broadcast needed to replace its existing storage solution with something more flexible, scalable, and absolutely secure, with the additional criterion that the supplier be a leader in its field to match WRN Broadcast’s own standards of excellence.
Case Study
ACCELERATE: NATIONAL LABORATORIES - North German Supercomputing Alliance (HLRN) Accelerates Scientific Breakthroughs with Peta-Scale Computing and DataDirect Networks High-Performance Storage
The North German Supercomputing Alliance (HLRN) provides scientists across seven North German states with state-of-the-art storage and compute resources to accelerate scientific breakthroughs in the fields of physics, chemistry, fluid dynamics, engineering, and the environment. The scientists, many of whom come from North German universities and other scientific institutions, have combined resources and funding from their respective states and the German federal government to create a powerful, distributed supercomputer system. HLRN’s ability to drive advanced scientific research requires the highest levels of compute power as well as high-bandwidth storage capacity. Given the wide range of data-intensive applications supported by the alliance, HLRN sought a Big Data solution that could deliver a significant increase in storage capacity while scaling bandwidth and performance as needed. HLRN also needed to ensure that data could be accessed easily from different geographic locations.
Case Study
Accelerate: Media Workflows
Mel Hoppenheim School of Cinema at Concordia University in Montréal, Québec, Canada's largest university-based center for the study of film animation, film production, and film studies, faced several challenges. The school needed to update its aging workflows, moving from 90% film-oriented workflows to a 99% digital landscape. The technical support team was very limited in staffing. Multiple buildings on campus needed connectivity with third-party switches. Different client operating systems were attached to a single pool of storage.
Case Study
ACCELERATE: ACADEMIC RESEARCH - National Center for Supercomputing Applications (NCSA) Builds Storage Environments with DDN SFA10K™ & SFA12K™ to House Vital Research Data for Advanced Scientific Discovery
The National Center for Supercomputing Applications (NCSA) was facing dwindling mid-range research funding, which drove the need for condo-style campus clusters in a single shared environment. This was extremely complex, as it involved accommodating multiple generations of hardware, interconnect technology, and storage in one unified system. Ensuring equal access to all types and any number of nodes was complicated, including determining how to handle queuing and configurations. The center sought a blend of IOPS, bandwidth, performance, and efficient capacity management in an environment spanning multiple generations of hardware resources.
Case Study
When Time is of the Essence: Wroclaw Centre for Networking and Supercomputing Speeds Understanding of Sea Level Change with DDN Storage and Advanced Lustre File Sharing
Wroclaw Centre for Networking and Supercomputing (WCSS) was facing challenges in supporting the processing and collection of increasingly large, complex, data-intensive scientific weather and geographic models. Temporary, or scratch, storage was required to address data growth across diverse scientific projects, and the center needed to deliver access and capacity to support the work of over 3,000 scientists and researchers. There was an escalating need to address mixed workloads across a complicated, heterogeneous ecosystem of hardware and applications.
Case Study
ACCELERATE: LIFE SCIENCES - Children’s Mercy Kansas City Reduces Time to Achieve Major Medical Breakthroughs for Critically Ill Children by Nearly 2x with Rapid Genome Sequencing Powered by High-Performance DDN Storage
Children’s Mercy Kansas City, a leading children’s hospital, operates the world’s first whole genome sequencing center in a pediatric setting. The hospital’s Center for Pediatric Genomic Medicine collaborates with various professionals to sequence and analyze rare inherited diseases. However, genetic sequencing is very compute- and data-intensive, which puts ever-increasing pressure on the Center’s IT team to deliver ample processing power and highly capable data storage to support testing based on both whole genome sequencing and whole exome sequencing. The Center had previously deployed EMC Isilon storage, which initially met its storage requirements for handling both clinical and research workflows. Over time, however, Children’s Mercy’s traditional scale-out NAS storage lacked the scalability in performance and capacity to address demanding data creation and access needs.
Case Study
ACCELERATE: MEDIA BROADCAST - Czech Television Speeds and Simplifies Fast-Growing Media Broadcast and Production Demands with Fully Integrated, Future-Proof DDN MEDIAScaler Storage Platform
Czech Television, the public television broadcaster in the Czech Republic, was facing a surge in digital video content and high-end video equipment, which spurred the need for fast, scalable, robust, and reliable storage. The move to 4K uncompressed DPX workflows created a major spike in storage requirements, as the size of native raw files quadrupled, along with the performance required of the storage. Despite the massive data influx, Czech TV still had to guarantee flawless real-time workflows for concurrent UHD video streams across ingest, editing, transcoding, distribution, and archiving. The broadcaster needed to accommodate multiple teams working on the same large files all at once to perform necessary color correction, as well as to restore many old films in poor initial condition.
Case Study
The Institute of Cancer Research, London: Driving the Future of Dynamic Adaptive Therapies
The Institute of Cancer Research (ICR) in London, a world leader in identifying cancer genes, discovering cancer drugs, and developing precision radiotherapy, needed a single, central storage infrastructure that would enable users to collect and analyze all types of active research data. The research data service (RDS) had to be broad enough to support eight research divisions and all types of biology, chemistry, and physics-related data. It needed to be able to pull in massive amounts of data from different scientific instruments and next-generation sequencers while connecting to various research services, from laptops and desktops of every flavor to high-performance supercomputers with CPUs and GPUs.
Case Study
Accelerate: Academic, Scientific and Industrial Research - DDN Storage Empowers Pawsey Supercomputing Center to Speed Scientific Discoveries that Reveal the Secrets of the Universe
The Pawsey Supercomputing Center in Perth, Western Australia, is one of the most powerful facilities in the Southern Hemisphere. It supports scientific breakthroughs in radio astronomy, energy resources, and engineering. At any given point, more than a thousand scientists rely on Pawsey’s state-of-the-art facilities to conduct data-intensive research, complex simulations, and advanced visualizations. The Center also plays a pivotal role in the trailblazing Square Kilometre Array (SKA) project, which focuses on building a next-generation radio telescope that will be more sensitive and powerful than today’s most advanced telescopes to survey the universe with incredible depth and speed. The SKA project will generate massive amounts of data from thousands of connected antennae, giving astronomers unprecedented insights into the formation of the universe. To support this ground-breaking research, Pawsey must provide scientists around the world with easy access to high-end computing platforms and resilient, scalable storage.
Case Study
Building a VDI Environment Using VMware Horizon View
Toyota Technical Development Corporation (TTDC) needed an IT Business Continuity Plan (BCP) to properly protect important data and ensure business continuity in the event of a disruptive incident. As a result, TTDC decided to adopt a virtual desktop infrastructure (VDI) and faced the issues of migrating from the existing system and securing stress-free operation at a cost-effective price. Yasuhito Kurebayashi remarked that, to resolve the issues surrounding the introduction of VDI, it was first of all necessary to migrate the storage system then in operation, and then to expand its capacity and scalability as necessitated by the increased I/O speed.
Case Study
The University of Queensland Builds Ultra-Fast Data Storage Fabric with Powerful DDN Storage
The University of Queensland (UQ) is a leading academic institution with nine internationally recognized research institutes. Researchers at UQ are making landmark discoveries across many fields and depend on the IT infrastructure to deliver ultra-fast, multi-site data access. The University needed to ensure universal data access regardless of where researchers are based or where data is created, manipulated, and archived. The University uses QRIScloud, a high-capacity cloud compute and storage node of the NCRIS national research infrastructure operated by the Queensland Cyber Infrastructure Foundation (QCIF). To date, exchanging data between campus computing clusters and QRIScloud had been done manually, which was not the best use of valuable research time. Equally important was the proactive need to address constant, organic data growth resulting from increased research collaborations.