High-bandwidth memory

HBM3E

The industry's fastest, highest-capacity high-bandwidth memory (HBM) to advance generative AI innovation

Production-capable Micron HBM3E 12-high 36GB cube now available

Today’s generative AI models require an ever-growing amount of data as they scale to deliver better results and address new opportunities. Micron’s 1β (1-beta) memory technology leadership and packaging advancements ensure the most efficient data flow in and out of the GPU. Micron’s 8-high and 12-high HBM3E memory cubes further fuel AI innovation at up to 30% lower power consumption than the competition’s. Micron’s 8-high 24GB HBM3E is shipping with NVIDIA H200 Tensor Core GPUs, and production-capable 12-high 36GB HBM3E is also available.

Micron HBM3E | High-bandwidth memory for AI

Dive into the rapidly evolving landscape of AI with Micron HBM3E, the industry’s most efficient high-bandwidth memory offering 30% lower power than the competition. Built to advance generative AI innovation, HBM3E delivers over 1.2 TB/s bandwidth with superior power efficiency. Discover how Micron is shaping the future of AI, powering advancements in data centers, engineering, medicine, and more. All of that, all in here, with Micron HBM3E.

Advancing the rate of AI innovation

Generative AI

Generative AI opens a world of new forms of creativity and expression by using large language models (LLMs) for training and inference. Efficient utilization of compute and memory resources makes the difference in time to deploy and in response time. Micron HBM3E provides higher memory capacity that improves performance and reduces CPU offload for faster training and more responsive queries when inferencing LLMs such as ChatGPT™.

Deep learning

AI unlocks new possibilities for businesses, IT, engineering, science, medicine and more. As larger AI models are deployed to accelerate deep learning, maintaining compute and memory efficiency is important to address performance, cost and power and to ensure benefits for all. Micron HBM3E improves memory performance with a focus on energy efficiency, increasing performance per watt and lowering the time to train LLMs such as GPT-4 and beyond.

High-performance computing

Scientists, researchers and engineers are challenged to discover solutions for climate modeling, curing cancer, and renewable and sustainable energy. High-performance computing (HPC) accelerates time to discovery by executing highly complex algorithms and advanced simulations that use large datasets. Micron HBM3E provides higher memory capacity and improves performance by reducing the need to distribute data across multiple nodes, accelerating the pace of innovation.

HBM3E built for AI and supercomputing with industry-leading process technology

Micron extends industry-leading performance across our data center product portfolio with HBM3E, delivering faster data rates, improved thermal response and 50% higher monolithic die density within the same package footprint as the previous generation.

HBM3E provides the memory bandwidth to fuel AI compute cores

With advanced CMOS innovations and industry-leading 1β process technology, Micron HBM3E provides higher memory bandwidth that exceeds 1.2 TB/s.1

HBM3E unlocks the world of generative AI

With 50% more memory capacity2 per 8-high, 24GB cube, HBM3E enables training at higher precision and accuracy.

HBM3E delivers increased performance per watt for AI and HPC workloads

Micron designed an energy-efficient data path that reduces thermal impedance and enables greater than 2.5X improvement in performance/watt3 compared to the previous generation.

HBM3E pioneers training of multimodal, multitrillion-parameter AI models

With increased memory bandwidth that improves system-level performance, HBM3E reduces training time by more than 30%4 and allows >50% more queries per day.5,6

Micron HBM3E: The foundation for unlocking unprecedented compute possibilities

Micron HBM3E is the fastest, highest-capacity high-bandwidth memory to advance AI innovation — an 8-high, 24GB cube that delivers over 1.2 TB/s bandwidth and superior power efficiency. Micron is your trusted partner for AI memory and storage innovation. 

Engineering reports: The impact of HBM on LLM performance

Frequently asked questions

What performance do Micron HBM3E products deliver?

Micron’s HBM3E 8-high 24GB and HBM3E 12-high 36GB deliver industry-leading performance with bandwidth greater than 1.2 TB/s while consuming 30% less power than any competitor in the market.

When will Micron HBM3E be available?

Micron HBM3E 8-high 24GB will ship in NVIDIA H200 Tensor Core GPUs starting in the second calendar quarter of 2024. Micron HBM3E 12-high 36GB samples are available now.

What is the pin speed of Micron HBM3E?

Micron’s HBM3E 8-high and 12-high modules deliver an industry-leading pin speed of greater than 9.2Gbps and can support backward-compatible data rates down to those of first-generation HBM2 devices.

How much bandwidth does Micron HBM3E deliver?

Micron’s HBM3E 8-high and 12-high solutions deliver an industry-leading bandwidth of more than 1.2 TB/s per placement. HBM3E has 1024 IO pins, so a pin speed greater than 9.2Gbps yields an aggregate rate higher than 1.2 TB/s.
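
As a rough sanity check, that figure follows from multiplying the interface width by the per-pin data rate. A minimal sketch, assuming only the 1024 IO pins and the 9.2Gbps pin-speed floor quoted above:

```python
# Peak HBM3E bandwidth per placement: interface width x per-pin data rate.
IO_PINS = 1024        # HBM3E interface width in bits (from the FAQ above)
PIN_SPEED_GBPS = 9.2  # per-pin data rate floor quoted above, in Gb/s

bandwidth_gb_s = IO_PINS * PIN_SPEED_GBPS / 8  # convert bits/s to bytes/s
print(f"{bandwidth_gb_s:.1f} GB/s (~{bandwidth_gb_s / 1000:.2f} TB/s)")
# -> 1177.6 GB/s (~1.18 TB/s); pin speeds above 9.2Gbps push this past 1.2 TB/s
```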

How much capacity does Micron HBM3E offer?

Micron’s industry-leading HBM3E 8-high provides 24GB of capacity per placement. The recently announced Micron HBM3E 12-high cube will deliver a jaw-dropping 36GB of capacity per placement.

How does HBM3E compare to HBM2?

HBM2 offers 8 independent channels running at 3.6Gbps per pin, delivers up to 410GB/s of bandwidth and comes in 4GB, 8GB and 16GB capacities. HBM3E offers 16 independent channels and 32 pseudo channels. Micron’s HBM3E delivers a pin speed greater than 9.2Gbps and an industry-leading bandwidth of more than 1.2 TB/s per placement, offers 24GB of capacity using an 8-high stack and 36GB using a 12-high stack, and consumes 30% less power than competitors.
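
The capacity points above follow from stack height times per-die density. A minimal sketch, assuming a 3GB DRAM die, a figure inferred here from the 24GB 8-high and 36GB 12-high cubes rather than taken from a published spec:

```python
# Cube capacity = stack height x per-die density.
DIE_DENSITY_GB = 3  # GB per DRAM die; assumption inferred from the cubes above

for stack_height in (8, 12):
    print(f"{stack_height}-high stack: {stack_height * DIE_DENSITY_GB} GB per cube")
# -> 8-high stack: 24 GB per cube
# -> 12-high stack: 36 GB per cube
```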

Where can I find detailed HBM3E specifications?

Please see our Product Brief.

Featured resources

1.  Data rate testing estimates based on shmoo plot of pin speed performed in manufacturing test environment.
2.  50% more capacity for same stack height.
3.  Power and performance estimates based on simulation results of workload use cases.
4.  Based on internal Micron model referencing an ACM Publication, as compared to the current shipping platform (H100).
5.  Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, as compared to the current shipping platform (H100).
6.  Based on system measurements using commercially available H100 platform and linear extrapolation.

Customer support

Need to get hold of us? Contact our support teams or find contact information for our individual locations.

Search for, filter and download data sheets

Get in-depth information about product features, specifications, functionality, and more.

Order a Micron sample

Your online source for placing and tracking orders for Micron memory samples.

Downloads & technical documentation

Resources to help you design, test, build and optimize your innovative designs.