One of the biggest problems in deep neural networks is memory. A DRAM device holds only so much, and DNNs routinely push it to the limit. But dig deeper into the memory problem and it becomes clear that a neural network's memory requirements vary across the stages of the pipeline, with training typically demanding far more than inference.
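To get a feel for why the stages differ, here is a minimal back-of-the-envelope sketch. It is not from Denneman's article, and all the numbers and parameter names are illustrative assumptions: training must keep weights, gradients, optimizer state, and activations resident at once, while inference only needs the weights plus activations.

```python
def estimate_memory_gb(num_params, batch_size, activations_per_sample,
                       bytes_per_value=4, optimizer_states_per_param=2):
    """Rough FP32 footprint; assumes an Adam-style optimizer with
    two state tensors per parameter. Illustrative only."""
    weights = num_params * bytes_per_value
    gradients = num_params * bytes_per_value          # training only
    optimizer = gradients * optimizer_states_per_param  # training only
    activations = batch_size * activations_per_sample * bytes_per_value

    training = weights + gradients + optimizer + activations
    inference = weights + activations  # no gradients, no optimizer state
    to_gb = 1024 ** 3
    return training / to_gb, inference / to_gb

# Hypothetical 1-billion-parameter model, batch of 32:
train_gb, infer_gb = estimate_memory_gb(
    num_params=1_000_000_000, batch_size=32,
    activations_per_sample=50_000_000)
print(f"training ~{train_gb:.1f} GB, inference ~{infer_gb:.1f} GB")
```

Even this crude arithmetic shows training needing several times the memory of inference for the same model, which is why out-of-memory errors cluster on the training side.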

Image: Sulagna Saha (c) Gestalt IT

Frank Denneman, Chief Technologist at VMware, has an illuminating article on this in which he explains the memory consumption of neural networks at the training and inference stages. In his article, "TRAINING VS INFERENCE – MEMORY CONSUMPTION BY NEURAL NETWORKS," he writes,

What exactly happens when an input is presented to a neural network, and why do data scientists mainly struggle with out-of-memory errors?

Read the rest of his article, "TRAINING VS INFERENCE – MEMORY CONSUMPTION BY NEURAL NETWORKS," to find the answer.
