
Kakao Brain: Accelerating large-scale natural language processing and AI development with Cloud TPU

Customer Company Size
Large Corporate
Region
  • Asia
Country
  • Korea
Product
  • KoGPT
  • Google Cloud TPU
Tech Stack
  • Tensor Processing Unit (TPU)
  • Generative Pre-trained Transformer 3 (GPT-3)
Implementation Scale
  • Enterprise-wide Deployment
Impact Metrics
  • Productivity Improvements
  • Innovation Output
Technology Category
  • Analytics & Modeling - Machine Learning
  • Analytics & Modeling - Natural Language Processing (NLP)
Applicable Functions
  • Product Research & Development
Use Cases
  • Generative AI
  • Machine Translation
Services
  • Cloud Planning, Design & Implementation Services
About The Customer
Established in 2017, Kakao Brain is the AI research and development subsidiary of Kakao Corp., a major South Korean tech giant. The company develops AI-based technologies in fields such as natural language processing, and recently released KoGPT, a Korean natural language processing model, to expand the use and value of AI. Kakao Brain focuses on models that can process and understand complex data, particularly in the Korean language, performing tasks such as reading user intentions, writing letters, and even writing software code, thereby broadening the scope of AI applications in Korean.
The Challenge
In November 2021, Kakao Brain, the artificial intelligence R&D subsidiary of South Korean tech giant Kakao Corp., unveiled KoGPT, a large-scale deep learning-based natural language processing model. KoGPT was developed by adapting Generative Pre-trained Transformer 3 (GPT-3), the most widely used natural language processing model, to the Korean language. For English, GPT-3 has already expanded beyond simply generating text: it can accurately read a user's intentions, write letters, and even produce software code. No equivalent existed for Korean, because building such an NLP model is labor intensive and requires rapid learning over large-scale data. KoGPT closed that gap: with six billion model parameters trained on 200 billion tokens, it is an artificial intelligence model that can understand Korean.
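To put the quoted scale (six billion parameters, 200 billion training tokens) in rough compute terms, the sketch below applies the commonly used "total training FLOPs ≈ 6 × parameters × tokens" heuristic. This is an illustrative back-of-the-envelope estimate under that assumption; the FLOP totals and the assumed sustained throughput are not figures from the case study:

```python
# Back-of-the-envelope training-compute estimate for a KoGPT-scale model,
# using the common heuristic: total FLOPs ~= 6 * N_parameters * N_tokens.
N_PARAMS = 6e9     # six billion model parameters (from the case study)
N_TOKENS = 200e9   # 200 billion training tokens (from the case study)

total_flops = 6 * N_PARAMS * N_TOKENS
print(f"Estimated training compute: {total_flops:.1e} FLOPs")  # 7.2e+21

# Purely illustrative: at an *assumed* sustained 100 PFLOP/s across a large
# accelerator pod, that compute budget corresponds to roughly:
sustained_flops_per_sec = 100e15
days = total_flops / sustained_flops_per_sec / 86400
print(f"~{days:.2f} days at 100 PFLOP/s sustained")
```

The heuristic only bounds raw arithmetic; real wall-clock time also depends on hardware utilization, interconnect bandwidth, and input pipelines, which is where the pod-level networking discussed below matters.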
The Solution
Kakao Brain deployed Google Cloud TPU, a dedicated machine learning processor optimized for training on large-scale data. According to Woonhyuk Baek, Large-Scale AI Research Scientist at Kakao Brain, Cloud TPU plays an important part in accelerating the training of KoGPT and its massive workloads.

Baek explains that understanding the respective characteristics of the GPU (Graphics Processing Unit) and the TPU (Tensor Processing Unit) is the most important starting point for using them properly. Although the TPU has strong AI data processing capabilities, simply replacing every AI system with TPUs will not immediately yield the desired results; the two processors have clear areas in which they complement each other. The GPU has the advantage of letting a project start quickly and adapting easily to a general-purpose environment, but it is not easy to scale. The TPU, by contrast, is easy to manage because resources are allocated in units of pods, and the communication speed between nodes is fast, which is integral for large-scale data processing.

Unlike GPUs, which run both on-premises and in the cloud, Cloud TPU was built to accelerate machine learning workloads within the Google Cloud ecosystem. Baek says that on-demand TPU devices and pod slices eased workload management, and that fast networking between TPU nodes made data processing seamless.
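As an illustration of the "resources in units of pods" and on-demand points above, a Cloud TPU VM pod slice can be created and torn down with the gcloud CLI. This is a hypothetical provisioning sketch: the resource name, zone, accelerator type, and runtime version below are placeholders, not details from Kakao Brain's deployment.

```shell
# Hypothetical sketch: provision an on-demand TPU VM pod slice
# (v4-32 = a 32-chip slice of a v4 pod), run training, then delete it.
gcloud compute tpus tpu-vm create kogpt-train \
    --zone=us-central2-b \
    --accelerator-type=v4-32 \
    --version=tpu-vm-tf-2.16.1

# Tear the slice down afterwards so the pod resources are only held
# for the duration of the training window.
gcloud compute tpus tpu-vm delete kogpt-train --zone=us-central2-b
```

Because the slice is requested and released as a single unit, scaling up is a matter of asking for a larger `--accelerator-type`, rather than assembling and interconnecting individual devices by hand.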
Operational Impact
  • Massively reduces workload of processing large-scale data.
  • Shortens task completion time from seven days to one day.
  • Enables seamless large-scale system scalability.
  • Provides flexibility and reliability for AI research and development.
  • Allows Kakao Brain to draw a clearer product development roadmap for future goals.
Quantitative Benefit
  • Shortens task completion time from seven days to one day.
  • Processes six billion model parameters and 200 billion tokens.
