Love_In_Rio
> Unless it's 8 TFlops plus additional hardware on the side like a ray-tracing engine.

Or two 8 TFlops GPU chiplets...
> We've sort of discussed this in the other thread, but it comes down to what's needed for a client/end-user, so RT is seemingly in, but not the AI part, i.e. training/tensor. I thought the consensus we reached was that the latter was more or less ideally done in the cloud, i.e. not necessary for gaming hardware, and maybe coincidentally, BF5 is our writing-on-the-wall test case for implementing RT without using tensor anyway. *ahem*

I was under the impression real-time RT requires deep learning to filter a sparse number of rays. Is this still the case?
It'll be interesting to see, but I imagine it would be an RT unit (whatever that may be) per CU or per group of CUs, like how nV has one RT core per SM - so maybe it ends up as one RT block per group of CUs. In Polaris, a group is up to 3 CUs with a shared L1, as opposed to the max of 4 in the rest of GCN (including Fiji/Vega, I assume).
Wonder if they'd be inclined to go down to 2 CUs per L1 grouping (or @3dilettante can bust my hypotheticals :V )
----
"confirmed" SSD storage ought to be nice.
> I was under the impression real-time RT requires deep learning to filter a sparse number of rays. Is this still the case?

We have yet to see real-time denoising using ML. We won't see any ML-based techniques built into the render pipeline without DirectML, if I understand correctly.
> We have yet to see real time denoising using ML. We won't see any ML based techniques that are built into the renderpipeline without DirectML if I understand correctly.

IIRC, the Star Wars reflections demo, Remedy's demo, the Porsche demo & Project SOL (Nvidia tech demo) all use RTX/OptiX for denoising.
> IIRC Star Wars reflections demo, Remedy's demo, Porsche demo & Project SOL (Nvidia tech demo) all use RTX/OptiX for Denoising.

Link that it's ML-based denoising? They are certainly using denoising, though. OptiX uses ML, but not fast enough for game purposes, IIRC.
> Link that it's ML based denoising? They are certainly using denoising though. Optix uses ML but not fast enough for game purposes IIRC.

OptiX denoising is always ML. It's running on the Tensor Cores through CUDA (no need for DirectML in their case). The number of rays is so low that the Tensor Cores are enough to hit 25/30 fps @ 1080p.
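To illustrate why a denoiser makes such a sparse ray budget viable: a few noisy samples per pixel plus a reconstruction filter can beat the raw estimate at the same ray count. The sketch below uses a plain box filter as a stand-in for a learned (ML) denoiser, which learns a far smarter, edge-aware kernel; the toy 1D "image" and all values are hypothetical.

```python
import random

random.seed(42)

TRUE_VALUE = 0.5  # ground-truth radiance of a flat gray region (hypothetical)

def render_pixel(spp):
    """Monte Carlo estimate: average `spp` random samples of a noisy signal."""
    return sum(random.uniform(0.0, 1.0) for _ in range(spp)) / spp

# A 1D "image" of a flat region, rendered at only 4 samples per pixel.
noisy = [render_pixel(4) for _ in range(64)]

def box_filter(img):
    """3-tap box filter: a crude stand-in for a real (ML) denoiser."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - 1), min(len(img), i + 2)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

denoised = box_filter(noisy)

def rmse(img):
    """Root-mean-square error against the known ground truth."""
    return (sum((p - TRUE_VALUE) ** 2 for p in img) / len(img)) ** 0.5

# Filtering reduces error without spending any extra rays.
print(rmse(noisy), rmse(denoised))
```

The point of an ML denoiser over a fixed filter like this one is that it can smooth noise aggressively without blurring edges and detail, which is what makes the very low ray counts in those demos presentable.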
It is possible they use DirectML, because there are unofficial release versions available for programming. But we won't see it in games until the Windows update that ships DirectML comes out.
> OptiX Denoising is always ML. It's running on the Tensor Cores through CUDA (no need for DirectML in their case). The number of rays is so low that the Tensor Cores are enough to hit 25/30Fps @ 1080P.

Yeah, using CUDA I can see that happening.
> First of all is 1+GB/s referring to read speed? because that seems slow for NVMe. Also do you really think these consoles would have a PCIe slot?

All depends on the northbridge - "PCIe" could just refer to the link technology rather than a physical expansion port, so they'd maybe opt for a couple of lanes instead, to optimize for the SSD performance that is reasonably affordable.
Why would 1GB/s be too slow for reading (compared to a conventional HDD)? It's almost an order of magnitude faster than what consoles are accustomed to.
On PCIe 3.0, that seems to be equivalent to an x1 link -> 18 pins/wires (correct me if I'm wrong, just kinda parsing).
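For reference, per-lane PCIe bandwidth falls out of the line rate and encoding overhead (these are the standard spec figures, nothing console-specific): a single PCIe 3.0 lane tops out just under 1 GB/s, and two lanes at roughly 2 GB/s, which lines up with a "1+GB/s" drive on a two-lane controller.

```python
# Per-lane PCIe bandwidth: raw line rate (GT/s) times encoding efficiency,
# divided by 8 bits per byte -> GB/s.
GENS = {
    # gen: (gigatransfers/s per lane, encoding efficiency)
    2: (5.0, 8 / 10),     # 8b/10b encoding
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
}

def lane_bandwidth_gb_s(gen, lanes=1):
    gt, eff = GENS[gen]
    return gt * eff / 8 * lanes

print(f"PCIe 3.0 x1: {lane_bandwidth_gb_s(3):.2f} GB/s")     # ~0.98 GB/s
print(f"PCIe 3.0 x2: {lane_bandwidth_gb_s(3, 2):.2f} GB/s")  # ~1.97 GB/s
```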
> Don't NVMe's get like 2.5GB/s read speeds...

They also cost more. In the context of consoles, a slower, cheaper drive seems like a reasonable trade-off.
> Like for instance "1TB NVMe SSD @ 1+GB/s". First of all is 1+GB/s referring to read speed? because that seems slow for NVMe.

It would look like they're using one of those low-cost/low-power DRAM-less Phison controllers that use 2 lanes of PCIe 3.0.
> Also do you really think these consoles would have a PCIe slot?

They currently have SATA connectors, so why couldn't they transition to NVMe?
> Wouldn't it be more likely to just have the flash memory soldered to the board?

Considering Microsoft and Sony have had replaceable storage for two and a half console generations? No.
> It would look like they're using one of those low-cost/low-power DRAM-less Phison controllers that use 2 lanes PCIe 3.0.

Well, they have SATA connections because they use mechanical drives, which can't be "embedded" into the board like flash memory can. And I would not consider the Xbox One as having replaceable storage.
1 GB/s consistent throughput with solid state latencies would be a huge improvement over the SATA 3 HDDs we find in current consoles.
And if the consoles use a regular PCIe x4 connector+bus, there's still a substantial upgrade path for enthusiasts and/or future console revisions.
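Back-of-the-envelope on what that buys in load times, assuming a hypothetical 5 GB level bundle and ballpark sequential read speeds (the 1 GB/s NVMe figure is the rumored one from this thread; the HDD/SSD numbers are typical, not measured):

```python
# Rough sequential-read load times for a hypothetical 5 GB level bundle.
level_gb = 5.0
drives = {
    "SATA3 HDD (~100 MB/s)": 0.1,   # typical console 2.5" HDD
    "SATA3 SSD (~550 MB/s)": 0.55,  # capped by the SATA 6 Gb/s interface
    "NVMe @ 1 GB/s":         1.0,   # the rumored figure in this thread
}
for name, gb_s in drives.items():
    print(f"{name}: {level_gb / gb_s:.1f} s")  # HDD ~50 s vs NVMe ~5 s
```

Random-access latency improves far more than this sequential comparison shows, which is where streaming open-world assets really benefits.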
> Well they have SATA connections because they use mechanical drives....

Then it's only logical that they'll use NVMe connections if the new gens use solid-state drives; otherwise they'd be limited by the bus.
> and I would not consider Xbox One as having replaceable storage.

What the..? Wow, you're right!