The recent hack of NVIDIA's private servers leaked a mountain of data about the chipmaker's various projects, including the source code for DLSS and the LHR limiter used on RTX 30 series graphics cards. The specifications and codenames of future GeForce GPUs have also surfaced. Most notably, the configurations of all the Ada Lovelace graphics cores have leaked out.
GPU | TU102 (RTX 2080 TI/TITAN) | GA102 (RTX 3080/3090) | AD102 (RTX 4080/4090) |
---|---|---|---|
Arch | Turing | Ampere | Ada Lovelace |
Process | TSMC 12nm | Samsung 8nm LPP | TSMC 5nm |
GPC | 6 | 7 | 12 |
TPC | 36 | 42 | 72 |
L2 Cache | 6MB | 6MB | 96MB |
SMs | 72 | 84 | 144 |
Shaders | 4,608 | 10,752 | 18,432 |
TFLOPs | 16.1 | 37.6 | 90? |
Memory | 11GB GDDR6 | 24GB GDDR6X | 24GB GDDR6X? |
Bus Width | 384-bit | 384-bit | 384-bit |
TGP | 250W | 350W | 600W? |
Launch | Sep 2018 | Sep 2020 | Aug-Sep 2022 |
For starters, the flagship AD102 die set to power the top-tier Nvidia RTX 4080, RTX 4080 Ti, and RTX 4090 GPUs will pack a total of 12 GPCs (Graphics Processing Clusters). These will be further divided into 6 TPCs (Texture Processing Clusters) per GPC, for an overall count of 72 TPCs. That works out to 144 SMs (Streaming Multiprocessors), or 18,432 FP32 shaders (with half as many INT32 units). Like Ampere, each SM will pack 128 FP32 and 64 INT32 cores, in addition to a handful of Tensor and RT cores. The GeForce RTX 4090 will be paired with 24GB of GDDR6X memory over a 384-bit bus, while the RTX 4080 will trim that to 16GB on a 320-bit bus. Finally, the RTX 4080 Ti will likely combine 18-20GB of GDDR6X memory with a 384-bit bus. These GPUs will have a TBP of 500-600W and should launch in August or September.
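To see where the headline numbers come from, here is a minimal sketch of the arithmetic behind the leaked AD102 hierarchy. The 2 SMs per TPC and 128 FP32 shaders per SM follow the Ampere layout described above; the boost clock used for the TFLOPs estimate is a placeholder assumption, not a leaked figure.

```python
# Back-of-the-envelope derivation of AD102's shader count and FP32 throughput.
# Clock speed below is an assumption for illustration, not part of the leak.

GPCS = 12          # Graphics Processing Clusters on the full AD102 die
TPCS_PER_GPC = 6   # Texture Processing Clusters per GPC
SMS_PER_TPC = 2    # Streaming Multiprocessors per TPC (same split as Ampere)
FP32_PER_SM = 128  # FP32 shaders per SM

sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC        # 144 SMs
shaders = sms * FP32_PER_SM                    # 18,432 FP32 shaders

boost_clock_ghz = 2.52                         # assumed boost clock
tflops = shaders * 2 * boost_clock_ghz / 1000  # 2 FLOPs per shader per clock (FMA)

print(f"SMs: {sms}, shaders: {shaders}, ~{tflops:.0f} TFLOPs FP32")
# -> SMs: 144, shaders: 18432, ~93 TFLOPs FP32
```

A clock in that ballpark is what would put the full die around the ~90 TFLOPs figure in the table above.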
The most interesting part of this leak is the enlargement of the L2 cache. The RTX 4090, 4080, and 4080 Ti will pack up to 96MB of L2 cache, a massive upgrade over the 6MB featured on Ampere and Turing. Much like AMD's Infinity Cache (an L3), this should improve hit rates and boost effective internal bandwidth.
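As a rough illustration of why a bigger on-die cache matters, the simplified model below blends cache and DRAM bandwidth by hit rate. The hit rates and the L2 bandwidth figure are illustrative assumptions, not NVIDIA numbers; only the ~1 TB/s DRAM figure corresponds to 21 Gbps GDDR6X on a 384-bit bus.

```python
# Simplified effective-bandwidth model: every request that hits in L2
# avoids a trip to GDDR6X. All cache figures here are assumptions.

def effective_bandwidth(hit_rate, l2_bw_gbs, dram_bw_gbs):
    """Blend on-die L2 bandwidth with off-chip DRAM bandwidth by hit rate."""
    return hit_rate * l2_bw_gbs + (1 - hit_rate) * dram_bw_gbs

dram_bw = 1008  # ~21 Gbps GDDR6X on a 384-bit bus, in GB/s
l2_bw = 5000    # assumed on-die L2 bandwidth, in GB/s

print(effective_bandwidth(0.30, l2_bw, dram_bw))  # small cache: ~2206 GB/s
print(effective_bandwidth(0.60, l2_bw, dram_bw))  # large cache: ~3403 GB/s
```

The exact numbers are made up, but the trend is the point: a 16x larger L2 should capture far more traffic on-die, which is the same rationale behind AMD's Infinity Cache.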
NVIDIA RTX 4090 and 4080
GPU | AD103 (RTX 4070 TI) | AD104 (RTX 4070) | AD106 (RTX 4060/4050 TI) | AD107 (RTX 4050) |
---|---|---|---|---|
GPC | 7 | 5 | 3 | 3 |
TPC | 42 | 30 | 18 | 12 |
L2 Cache | 64MB | 48MB | 32MB | 32MB |
SMs | 84 | 60 | 36 | 24 |
Shaders | 10,752 | 7,680 | 4,608 | 3,072 |
TFLOPs | ~50-60 | ~35-40 | ~20-25 | ~10-15 |
Memory | 16GB GDDR6X? | 16GB GDDR6X? | 12GB GDDR6? | 8GB GDDR6? |
Bus Width | 256-bit | 192-bit | 128-bit | 128-bit |
TGP | 350W | 300W | 250W | 200W |
Launch | 2023 | Q4 2022 | Q4 2022/2023 | 2023 |
Down the stack, we have the AD103 and AD104 dies. These will power the RTX 4070 Ti and RTX 4070 with 10,752 and 7,680 FP32 shaders, respectively. Both SKUs should come with 16GB of GDDR6X memory, across a 256-bit and 192-bit bus, respectively. The L2 cache will be trimmed to 64MB on the RTX 4070 Ti (AD103) and 48MB on the RTX 4070 (AD104). These GPUs will have a TBP of 250-300W, with a release in late 2022.
Finally, we have the RTX 4060 and RTX 4050, powered by the AD106 and AD107, respectively. The former will pack a total of 4,608 cores across 36 SMs, while the latter cuts that down to 3,072 cores across 24 SMs. The RTX 4060 will likely be paired with 12GB of GDDR6 memory on a 128-bit bus, while the RTX 4050 will make do with 8GB on the same bus width. These SKUs should have a TBP of 250W and 200W, respectively, with a launch slated for early 2023.
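For context on how these bus widths translate into memory bandwidth, here is a quick sketch. The per-pin data rates (21 Gbps GDDR6X, 18 Gbps GDDR6) are assumptions chosen to match current memory parts, not figures from the leak.

```python
# Peak memory bandwidth from bus width and per-pin data rate.
# Data rates below are assumptions, not leaked specifications.

def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(384, 21))  # RTX 4090-class, 384-bit GDDR6X -> 1008.0 GB/s
print(mem_bandwidth_gbs(192, 21))  # RTX 4070-class, 192-bit GDDR6X -> 504.0 GB/s
print(mem_bandwidth_gbs(128, 18))  # RTX 4060/4050-class, 128-bit GDDR6 -> 288.0 GB/s
```

This is where the larger L2 cache comes back into play: the narrower 128-bit and 192-bit buses on the mid-range and entry-level parts would lean on it to make up for their lower raw DRAM bandwidth.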