RDNA inherited binning rasterizers from Vega, didn't it?

I think at 4K they would almost certainly want to pin the framebuffer, because otherwise it and the texture data would flush everything out of the cache every frame, leaving them with only locality in address and no benefit at all from locality in time.
I'm not convinced: reading a 4K framebuffer at 100 FPS requires only about 3.3 GB/s (quick math below), and it could be even less with compression.
GPUs render the framebuffer in a tiled fashion, frequently reusing tiles, which keeps them from being swapped out of the cache by other data.
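Quick sanity check on that number (a back-of-envelope Python sketch; the 4 bytes per pixel, i.e. plain 32-bit color, is my assumption):

# Framebuffer read traffic at 4K, assuming uncompressed 32-bit color.
width, height = 3840, 2160
bytes_per_pixel = 4          # assumption: 32bpp, no compression
fps = 100
frame_bytes = width * height * bytes_per_pixel   # ~33.2 MB per frame
traffic_gb_s = frame_bytes * fps / 1e9           # ~3.3 GB/s
print(f"{frame_bytes / 1e6:.1f} MB/frame -> {traffic_gb_s:.2f} GB/s at {fps} FPS")

With delta color compression the real figure would be lower still.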
Yeah, but they say 2 slot here, so I guess the 2 slot one will be a dual-fan design for the 6800?

I looked again at the presentation and there is no statement of which boards use 2.5-slot cooling and which use 2-slot cooling.
wow, Nappe1. Haven't seen you for years! Long time, no see...
Do you really think AMD, who designed the SoCs for the XSX and PS5, doesn't have a full-fledged implementation of DirectStorage? No one likes proprietary standards, not even Nvidia; the scant adoption of RTX before it became part of DirectX shows as much.
Either way, DS is not really relevant for the PC market at the moment and won't be for some time, due to the fragmented nature of PC hardware and software. It will be a while before an SSD is a hard requirement for a PC game.
Maybe they want to upsell 3070 buyers to the 6800. For $80 more you get double the VRAM, and not only is the base VRAM bandwidth faster, the 6800 also has Infinity Cache. No matter how you look at it, the 6800 is a much better buy than the 3070, especially if you mainly game and don't care about tensor and CUDA stuff. Of course, after that you might think "add $70 more and you get the 6800 XT"...
Yeah, but they say 2 slot here, so I guess the 2 slot one will be a dual-fan design for the 6800?
https://www.amd.com/en/products/specifications/compare/graphics/10516,10521,10526
https://wccftech.com/amd-helped-god...-traced-shadows-though-its-barely-noticeable/
Godfall and Dirt 5 have ray-traced shadows on AMD's 6000 series.
wow, Nappe1. Haven't seen you for years!

yep, the last time I posted here was 2013. Heck, in one of my last threads I talked about "framing memories" in my and my girlfriend's new rental flat... and that was 2011! Even though we got married and moved to our own apartment, the chips are still framed and on the wall as a display. Too bad that few people nowadays know they are looking at a unicorn and its doo doo when viewing them.
I did notice the launch of Iris Pro, but it was not enough to get me to come back. However, when there's mention of a remarkable amount of on-chip RAM in a graphics chip, you bet I am reading up on it. I pretty quickly did the same maths as people here have done: 128 MiB on a 4096-bit bus, most likely divided into two 2048-bit wide parts, each serving four 32-bit wide external memory channels.
Nvidia's potentially very imminent 3070Ti.

Just buy it. You'll be waiting a year+ before DirectStorage actually makes a difference to any games.
I agree with the overall sentiment, but if this forces NV and Intel to address the same weakness in the PC's dGPU-based architecture, then I'm all for it.

How do you know NVidia isn't already doing this? Why would Intel be involved?
That's not the math. Most likely it's split into 16 entirely separate cache slices, each of which serves 512 bits per cycle. It's very instructive that it's fundamentally quite similar to the L3 in modern Zen CPUs, just (probably) with cache lines twice as long and twice the bus width per slice.
On GDDR6, the external memory channels are 16 bits wide, and there are two of them per chip. So: one 8 MB, 512-bit cache slice per memory channel.
And yes, it's definitely SRAM. eDRAM is gone; it's not compatible with modern logic lithography.
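If that layout is right, the arithmetic works out as follows (a quick Python sketch; the chip count is my assumption, derived from a 256-bit bus and the 32-bit interface of each GDDR6 package):

# Cache-slice arithmetic, assuming the 16-slice layout described above.
total_cache_mb = 128
channel_width_bits = 16                          # one GDDR6 channel
channels_per_chip = 2
gddr6_chips = 8                                  # assumed: 256-bit bus / 32 bits per chip
channels = gddr6_chips * channels_per_chip       # 16 channels
bus_width_bits = channels * channel_width_bits   # 256-bit external bus
mb_per_slice = total_cache_mb // channels        # 8 MB per slice
internal_bytes_per_cycle = channels * 512 // 8   # 1024 B/cycle aggregate
print(f"{channels} channels, {bus_width_bits}-bit bus, {mb_per_slice} MB/slice, "
      f"{internal_bytes_per_cycle} B/cycle internal")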
On GDDR6, the external memory channels are 16 bits wide, and there are two of them per chip. So: one 8 MB, 512-bit cache slice per memory channel.

You sure they've split it into (more than two) slices? I don't think many of the theorized uses for it would really like small slices like that. I'd put my money on either 64 or 32 MB slices.
RTX-IO

NVidia marketing designed to make stupid people think it's something more than DirectStorage.
The Xbox Velocity Architecture comprises four major components: our custom NVME SSD, hardware accelerated decompression blocks, a brand new DirectStorage API layer and Sampler Feedback Streaming (SFS).
I'm actually not sure that's the case here. As someone commented earlier, DirectStorage does not seem to specify anything with respect to decompression.
NVidia marketing designed to make stupid people think it's something more than DirectStorage.
Nvidia said:NVIDIA RTX IO plugs into Microsoft’s upcoming DirectStorage API, which is a next-generation storage architecture designed specifically for gaming PCs equipped with state-of-the-art NVMe SSDs, and the complex workloads that modern games require. Together, the streamlined and parallelized APIs, specifically tailored for games, allow dramatically reduced IO overhead and maximize performance/bandwidth from NVMe SSD to your RTX IO-enabled GPU.
Specifically, NVIDIA RTX IO brings GPU-based lossless decompression, allowing reads through DirectStorage to remain compressed while being delivered to the GPU for decompression. This removes the load from the CPU, moving the data from storage to the GPU in its more efficient, compressed form, and improving I/O performance by a factor of 2.
With the rise in storage bandwidth, the IO load on the CPU rises proportionally, to a point where it can begin to impact performance. Microsoft sought to address this emerging challenge with the DirectStorage API, but NVIDIA wants to build on this.
....
NVIDIA RTX IO is a concentric outer layer of DirectStorage, optimized further for gaming and for NVIDIA's GPU architecture. RTX IO brings to the table GPU-accelerated lossless data decompression,
....
There is, however, a tiny wrinkle. Games need to be optimized for DirectStorage. Because the API has already been deployed on Xbox starting with the Xbox Series X, most AAA Xbox games that have PC versions already have some awareness of the tech; however, the PC versions will need to be patched to use it. Games will further need NVIDIA RTX IO awareness, and NVIDIA needs to add support on a per-game basis via GeForce driver updates.
Digital Foundry said:Working alongside the DirectStorage API built into the upcoming Xbox Series X, RTX IO "enables rapid GPU-based loading and game asset decompression, accelerating input/out performance by up to 100x compared with hard drives and traditional storage APIs." That should allow for higher frame-rates as well as "near-instantaneous game loading" - not bad if it lives up to that description!
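To put that "factor of 2" in concrete terms (a rough sketch; the 7 GB/s drive speed and the 2:1 lossless ratio are my illustrative assumptions, not figures from the quotes above):

# Why shipping compressed data to the GPU helps: illustrative numbers only.
nvme_gb_s = 7.0              # assumed PCIe 4.0 NVMe sequential read speed
compression_ratio = 2.0      # assumed lossless ratio, per the claimed factor of 2
effective_gb_s = nvme_gb_s * compression_ratio   # uncompressed asset throughput
print(f"{nvme_gb_s} GB/s of compressed reads -> {effective_gb_s} GB/s of assets, "
      f"with decompression moved off the CPU and onto the GPU")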