Why it matters: Managing graphics memory has become one of the most pressing challenges in real-time 3D rendering. As visuals grow more detailed, the VRAM required by modern high-end games is pushing against what average customers can afford. AMD and Nvidia are both developing remedies that shift the burden from memory capacity onto GPU compute, generating certain assets on the fly instead of storing them.
A new research paper from AMD explains how procedurally generating certain 3D objects in real-time-rendered scenes, like trees and other vegetation, can reduce VRAM usage by orders of magnitude. The technique could benefit hardware with small memory pools or enable future games to increase perceived detail dramatically.
Game developers already create assets like trees and bushes using procedural generation, which employs algorithms to dynamically build variations of a limited number of hand-crafted models. However, those models are then stored in the game data, and rendering them can significantly increase VRAM usage and storage requirements. AMD’s proposed technique utilizes work graphs to procedurally generate vegetation on the fly, eliminating the need to keep it in video memory or system storage.
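The core idea is that a tree can be fully described by a handful of parameters plus a random seed, with the geometry expanded deterministically on demand. The sketch below is purely illustrative, not AMD's actual work-graph implementation; the function names and parameters are assumptions made for the example.

```python
import random

def generate_tree(seed, max_depth=4):
    """Expand a branching structure deterministically from a seed."""
    rng = random.Random(seed)
    branches = []

    def grow(depth, length):
        if depth > max_depth:
            return
        # Each branch spawns a seeded, slightly varied set of children.
        for _ in range(rng.randint(2, 3)):
            child_len = length * rng.uniform(0.6, 0.8)
            branches.append((depth, child_len))
            grow(depth + 1, child_len)

    grow(1, 1.0)
    return branches

# The same seed always reproduces the same tree, so only the seed
# needs to live in memory; different seeds give distinct variants.
tree_a = generate_tree(seed=42)
tree_b = generate_tree(seed=42)
assert tree_a == tree_b            # deterministic: regenerate, don't store
assert tree_a != generate_tree(7)  # a new seed yields a new variant
```

On a GPU, the equivalent expansion would be dispatched as work-graph nodes feeding mesh shaders rather than a recursive CPU function, but the storage argument is the same: seeds and parameters are kilobytes, full geometry is gigabytes.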
AMD researcher’s real-time GPU tree generation system uses work graphs (w/ mesh nodes) for procedural tree generation. Without work graphs, the trees in the scene would have required 34.8 GiB of VRAM. With work graphs, only 51 KiB https://t.co/2YcWdOj5Le https://t.co/aDkZB08tks
– Compusemble (@compusemble) June 23, 2025
In a video demonstration, the researchers show a dense forest running smoothly on a Radeon RX 7900 XTX at 1080p. Achieving the quality shown with traditional methods might require almost 35GB of VRAM, far above the GPU’s 24GB capacity. Real-time generation cuts memory usage to just 51 KiB. The trees nonetheless maintain impressive visual detail and variety: they can shift with the seasons, sway in the wind, and manage levels of detail efficiently without visible pop-in.
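The arithmetic behind those figures makes the scale of the saving concrete: the reduction factor from 34.8 GiB down to 51 KiB works out to roughly six orders of magnitude.

```python
# Reduction factor implied by the figures in the demo:
# 34.8 GiB of conventional geometry versus 51 KiB with work graphs.
traditional_bytes = 34.8 * 2**30   # 34.8 GiB
work_graph_bytes = 51 * 2**10      # 51 KiB

factor = traditional_bytes / work_graph_bytes
print(f"{factor:,.0f}x smaller")   # roughly 715,000x, i.e. ~6 orders of magnitude
```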
The technique fundamentally resembles the neural texture compression system Nvidia has been developing for a few years. While Nvidia’s approach targets textures rather than vegetation, both methods aim to compute assets on the GPU at render time instead of repeatedly pulling them to and from memory and storage.
Neural Texture Compression utilizes machine learning to decompress textures as needed during rendering, reducing VRAM usage by up to 95% while potentially increasing detail. A minor performance hit would be the only downside. A recent study from Nvidia describes ongoing improvements in the technique’s filtering solution.
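The principle behind learned texture compression can be sketched in a few lines: store a small latent code per texel plus a shared learned decoder, and reconstruct colors only when sampled. This toy NumPy version is an assumption-laden illustration of the idea, not Nvidia’s actual NTC architecture, and its savings are far more modest than the up-to-95% figure real systems report.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 256, 256
LATENT_DIM = 2  # stand-in for a compact learned code per texel

# Stored data: per-texel latents plus one small shared decoder matrix.
latents = rng.random((H, W, LATENT_DIM)).astype(np.float32)
decoder = rng.random((LATENT_DIM, 3)).astype(np.float32)

def sample_texture(u, v):
    """Decode a single texel on demand instead of storing full RGB."""
    x = int(u * (W - 1))
    y = int(v * (H - 1))
    return latents[y, x] @ decoder  # tiny matrix product per sample

texel = sample_texture(0.5, 0.5)
print(texel.shape)  # (3,) -- an RGB value produced at sample time

# Storage comparison for this toy setup (float32 everywhere):
full_rgb = H * W * 3 * 4
compressed = H * W * LATENT_DIM * 4 + decoder.size * 4
print(f"{1 - compressed / full_rgb:.0%} smaller")  # about 33% here
```

Real learned compressors replace the linear decoder with a small neural network and quantize the latents aggressively, which is where the much larger savings come from.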
Technologies such as work graphs and neural compression could enable next-generation hardware to provide significant visual improvements without requiring dramatic increases in memory size and storage speed if they gain wide adoption.