Dx12 fp64
1/23/2024

> Well I don't understand all that, but I have heard that GDDR5X is supposed to be offering twice the bandwidth of current GDDR5 (in addition to that same 'fact' being mentioned in this article) - through greater efficiencies mainly, but also through higher clock speeds.

What you see on the image is GDDR5 at 8 Gbps per pin for quite some time, and GDDR5X in the middle of 2016 at 10 Gbps, slowly going up to 16 Gbps around 2020. For now it promises 25% more data per pin per second. Yes, it delivers power efficiency, and its internal complexity looks a bit lower, so in the long run its price may go down faster than HBM1's.

GDDR5 has 170 pins per package, GDDR5X has 190, so the pin count apparently goes up by ~12%. The signaling pin count only went up from 67 to 68, which does not increase PCB complexity. Their prognosis for early chips is 25% higher throughput per pin. That means one chip should deliver 1.25 * 190/170 = 1.397, i.e. ~40% higher transfer rate than a standard GDDR5 package. Or does it mean 1.25 * 68/67 = 1.269, i.e. ~27% higher?

The chip is accessible in x32 and x16 mode (faster and slower). x16 mode allows communication through a smaller set of traces (at lower speed) so that another set of traces can connect a second chip ("doubling" the capacity per traces used). In reality, one GDDR5X chip is connected the standard way and another is connected behind it (easily on the other side of the PCB). So while the memory controller may be 384-bit, two chips can share traces for higher capacity, yet the signaling pin count stays practically the same, so PCB complexity per memory chip stays the same. I'll keep an eye on the memory layout from now on when I check GDDR5X graphics cards.

> GDDR5X uses 4 data and 4 error correction channels, has command and address bus
> in x16 mode 1st memory chip gets 2 out of 4 data and 2 out of 4 error correction channels

The 1st chip gets the command and address bus and relays the required information to the 2nd chip if appropriate (extra latency?). Edit: actually, a little correction to the underlined text: it seems the command and address bus is directly shared, so there is additional logic which tells the chips whether a command is for them (maybe both read the requested address and check whether it is in their scope).

GDDR5X is definitely easier to adopt than HBM, but unless we start to believe GPU manufacturers are going to bring 512-bit GDDR5X cards, all we can expect in the middle of 2016 is a 384-bit card with GDDR5X matching the transfer speeds of 512-bit GDDR5 cards. Apparently that's not a problem for nVidia, as they did just fine with the 384-bit bus on the Titan X, and having ~30% more data available to the GPU (without beefing the memory controller up to 512-bit) may allow for ~50% more TMUs/ROPs/shaders.

In the end, there are operations which require a lot of memory bandwidth (high-res textures/AA at 4K resolution), and there are methods which require raw GPU power to calculate something (geometry, shadows). Today we are already close to the bandwidth required for 4K, and both GDDR5X and HBM are capable of delivering it; the GPU just has to be capable of making good use of it.
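The two readings of the "25% more per pin" figure above differ only in which pin count you scale by; a minimal sketch to check the arithmetic from the post:

```python
# Figures from the post: GDDR5 = 170 pins/package (67 signaling),
# GDDR5X = 190 pins/package (68 signaling), ~25% more throughput per pin.
GDDR5_PINS, GDDR5X_PINS = 170, 190
GDDR5_SIG, GDDR5X_SIG = 67, 68
PER_PIN_GAIN = 1.25  # 10 Gbps vs 8 Gbps per pin

# Scaling by total package pins vs by signaling pins only.
per_package = PER_PIN_GAIN * GDDR5X_PINS / GDDR5_PINS
per_signal = PER_PIN_GAIN * GDDR5X_SIG / GDDR5_SIG

print(f"total-pin scaling:     {per_package:.3f}")  # 1.397, ~40% higher
print(f"signaling-pin scaling: {per_signal:.3f}")   # 1.269, ~27% higher
```

Both numbers match the ones quoted in the post; the ambiguity is only about whether power/ground pins should count toward the transfer-rate comparison.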
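The claim that a 384-bit GDDR5X card matches a 512-bit GDDR5 card follows directly from peak-bandwidth arithmetic. A small sketch, assuming illustrative per-pin data rates of 7 Gbps for GDDR5 and 10 Gbps for GDDR5X (the GDDR5 rate is my assumption, not stated in the post):

```python
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits * gbps_per_pin / 8  # divide by 8: bits -> bytes

# Assumed data rates for illustration: 7 Gbps GDDR5, 10 Gbps GDDR5X.
print(bandwidth_gb_s(512, 7))   # 512-bit GDDR5  -> 448.0 GB/s
print(bandwidth_gb_s(384, 10))  # 384-bit GDDR5X -> 480.0 GB/s
```

At those rates the narrower GDDR5X bus already edges out the 512-bit GDDR5 configuration, without the memory controller cost of a 512-bit design.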
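The x16 arrangement described above can be pictured as two packages sharing one group of data traces. This is a toy model of that idea (my sketch of the post's description, not the JEDEC specification; the names and the 1 GB per-package capacity are assumptions):

```python
from dataclasses import dataclass

@dataclass
class TraceGroup:
    """One 32-bit group of data traces on the PCB and the chips attached to it."""
    data_traces: int = 32          # traces on the PCB stay fixed
    chips: int = 1                 # packages attached to this trace group
    capacity_per_chip_gb: int = 1  # assumed per-package capacity

    def clamshell(self) -> "TraceGroup":
        # x16 mode: each chip drives half the traces, so a second chip
        # fits on the same traces (e.g. on the back of the PCB).
        return TraceGroup(self.data_traces, chips=2,
                          capacity_per_chip_gb=self.capacity_per_chip_gb)

    @property
    def lanes_per_chip(self) -> int:
        return self.data_traces // self.chips

    @property
    def capacity_gb(self) -> int:
        return self.chips * self.capacity_per_chip_gb

x32 = TraceGroup()
x16 = x32.clamshell()
print(x32.lanes_per_chip, x32.capacity_gb)  # 32 lanes per chip, 1 GB total
print(x16.lanes_per_chip, x16.capacity_gb)  # 16 lanes per chip, 2 GB total
```

The point the model makes is the trade the post describes: trace count (and so PCB complexity) is unchanged, capacity doubles, and each chip gets half the data lanes.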