Next Generation Hardware Speculation with a Technical Spin [2018]

On GDDR6, if each channel could access all banks, and one channel could read while the other writes, I wonder if the penalty of read/write turnaround could be mitigated?
Edit: nope, each channel has its own banks :(
 
Similar to XB1 ESRAM?
 
Yeah but no, it doesn't work. I just looked it up, and each channel is its own half of the chip; it's almost exactly like having two separate 16-bit chips. HBM is similar: channels have their own banks. It still allows more read/write mixing, I guess? Statistically?
 
There's a whole discussion on AI. It's not an easy thing to change. Bethesda had to tone down the Radiant AI in Oblivion because it screwed with the gameplay. We don't want AI that's too smart or it'd be unbeatable.

The important thing is to have versatile hardware that can be used as needed. I'll repeat my desire for a volumetric model of the world that can be used for graphics, audio, and AI. Whether that warrants a ray-tracing accelerator, or just a fast GPU with loads of RAM and bandwidth, I don't know.
I don't want to go too off topic, but I also don't think battle AI is where the future of gaming gets exciting. People like cheesing enemy AI anyway.

It's AI that "remembers" you. This has been done in Mass Effect, Bethesda games, and the recent LOTR games, but those are the tip of the iceberg and, due to limits like the cost of voice-overs, are often programmed bespoke. But if there are advancements in synthesized, emotion-capable speech (like Westworld), and other lookup advancements, I could see truly immersive worlds where your choices have profound effects. That would be a game changer, no pun intended. That would truly feel like magic to the player.
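
As a concrete (and entirely hypothetical) illustration of the kind of bookkeeping "AI that remembers you" implies, here is a minimal Python sketch: an NPC memory as an event log plus a disposition lookup that biases dialogue. All names and numbers are invented for illustration; no shipped game is claimed to work this way.

```python
# Hypothetical sketch: an NPC "memory" as a queryable event log.
# Names and thresholds are invented; no real game's implementation is implied.
from dataclasses import dataclass, field

@dataclass
class MemoryEvent:
    actor: str      # who did it (e.g., "player")
    verb: str       # what happened (e.g., "spared", "robbed")
    weight: float   # how strongly this colors future reactions

@dataclass
class NpcMemory:
    events: list = field(default_factory=list)

    def record(self, actor: str, verb: str, weight: float) -> None:
        self.events.append(MemoryEvent(actor, verb, weight))

    def disposition(self, actor: str) -> float:
        # Positive = friendly, negative = hostile; a fuller system
        # would decay old events over time.
        return sum(e.weight for e in self.events if e.actor == actor)

    def pick_greeting(self, actor: str) -> str:
        score = self.disposition(actor)
        if score > 1.0:
            return "Good to see you again, friend."
        if score < -1.0:
            return "You've got some nerve showing your face here."
        return "Hello, traveler."

guard = NpcMemory()
guard.record("player", "spared", +1.5)
print(guard.pick_greeting("player"))  # -> "Good to see you again, friend."
```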
 
Actually this could just as easily indicate new hardware will be released in 2019. Recent history shows lots of AAA releases even when new hardware is announced/released.

2013 first/second party AAA games:

GT6
Beyond: Two Souls
God of War: Ascension
TLoU

On top of which there was the small matter of GTA5 releasing... PS4 being announced in February 2013 and releasing in November didn't seem to matter at all.
Arguably Sony released too much first-party software for PS3 in 2013 [you forgot Puppeteer and the last PS3 Ratchet & Clank game]. The first two years of the PS4 lifecycle suffered a bit because of that.
 
I don't want to go too off topic, but I also don't think battle AI is where the future of gaming gets exciting. People like cheesing enemy AI anyway.

It's AI that "remembers" you. This has been done in Mass Effect, Bethesda games, and the recent LOTR games, but those are the tip of the iceberg and, due to limits like the cost of voice-overs, are often programmed bespoke. But if there are advancements in synthesized, emotion-capable speech (like Westworld), and other lookup advancements, I could see truly immersive worlds where your choices have profound effects. That would be a game changer, no pun intended. That would truly feel like magic to the player.

Can't remember where the article (or search) is, but there was a write-up/discussion on training AI bots and characters with Nvidia Tensor-related hardware/APIs, then implementing these "trained" algorithms in games. Rather than relying on the game hardware to figure it out at runtime (a waste of computational time), the trained algorithm (not scripted in the traditional sense, but actual trained logic) would have understanding/knowledge of what to do, rather than fumbling around. It could have been on a dev board...
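
If I'm reading that idea right, it's the standard "train offline, run cheap inference at runtime" split. Here is a minimal, hypothetical Python sketch of the runtime half, assuming a tiny fully-connected policy whose weights were produced offline (e.g., on tensor hardware) and shipped with the game; the shapes, action names, and random stand-in weights are all invented.

```python
# Sketch of "train offline, ship the trained policy": at runtime the game
# only runs a cheap forward pass instead of searching or "fumbling around".
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for weights that would be loaded from a file shipped with the game.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

ACTIONS = ["advance", "flank", "take_cover", "retreat"]

def policy(observation: np.ndarray) -> str:
    """Map an 8-float observation of the game state to a discrete action."""
    h = np.maximum(observation @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2
    return ACTIONS[int(np.argmax(logits))]       # greedy action selection

# Per decision: a few hundred multiply-adds, trivial for the runtime hardware.
obs = rng.standard_normal(8)   # e.g., distances, health, ammo, line-of-sight
print(policy(obs))
```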
 
Can tensor cores be useful for things other than AI? I mean, it's matrix multiplication; it doesn't feel exclusive to AI.
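
For a toy, non-AI illustration of why dense matrix multiplication is broadly useful: the (i, j) entry of the k-th power of a graph's adjacency matrix counts the length-k paths from i to j. A quick Python sketch, with numpy standing in for whatever dedicated matrix hardware would accelerate:

```python
# Matrix multiplication isn't AI-specific: A @ A on an adjacency matrix A
# counts two-hop paths between nodes of a graph.
import numpy as np

# Adjacency matrix of a small directed graph: 0->1, 0->2, 1->2, 2->0.
A = np.array([
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
])

paths_len2 = A @ A        # (i, j) = number of length-2 paths from i to j
print(paths_len2[0, 2])   # only 0->1->2 qualifies, so this prints 1
```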
 
Yeah but no, it doesn't work. I just looked it up, and each channel is its own half of the chip; it's almost exactly like having two separate 16-bit chips. HBM is similar: channels have their own banks. It still allows more read/write mixing, I guess? Statistically?

In theory, having more channels means there are twice as many queues that can be used to reorder accesses in ways that best avoid turnaround penalties and other long-latency events, assuming the queues aren't cut in half when the channel count is doubled.
Having twice as many command streams to process in parallel could help overlap some of the long-latency meta-work that results in dead cycles. A single channel processing long-latency events sequentially may not be able to overlap as much of that serial overhead.
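
To make the queue argument concrete, here is a back-of-envelope Python model with entirely made-up timing numbers (nothing GDDR6-specific): every read/write switch on a channel stalls it for a fixed turnaround, and each channel reorders within a fixed-size queue window. With the same total traffic and the same per-channel queue depth, two half-width channels each see half as many switches, even though each access takes twice as long to transfer.

```python
# Toy model of read/write turnaround cost. All numbers are illustrative.
import random

TURNAROUND = 8   # stall cycles per read<->write switch (made-up number)

def service(stream, cycles_per_access, window):
    """Cycles to service a command stream, reordering within a fixed-size
    window so reads are batched before writes (few switches per window)."""
    total, prev = 0, None
    for i in range(0, len(stream), window):
        for op in sorted(stream[i:i + window]):  # 'R's first, then 'W's
            if prev is not None and op != prev:
                total += TURNAROUND
            total += cycles_per_access
            prev = op
    return total

random.seed(1)
traffic = [random.choice("RW") for _ in range(1000)]  # mixed same-size accesses

# One wide channel: 4 cycles per access, one 16-entry reorder queue.
one_wide = service(traffic, cycles_per_access=4, window=16)

# Two half-width channels: addresses interleaved so each sees half the stream,
# 8 cycles per (same-size) access, and each keeps its own 16-entry queue.
two_narrow = max(service(traffic[0::2], 8, 16), service(traffic[1::2], 8, 16))

print(f"one wide channel: {one_wide} cycles; "
      f"two narrow channels in parallel: {two_narrow} cycles")
```

With these numbers the two narrow channels finish sooner despite the slower per-access transfer, because each pays roughly half the turnaround stalls; halving the per-channel queue depth would erode exactly that advantage, as noted above.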


On a different area of functionality, I'm wondering how to interpret the way AMD has gone back and forth on whether primitive shaders can be auto-generated or developer-controlled.

There is speculation that this was implemented as a console-targeted feature.
In some ways that speculation is plausible, in part because the PS4 likely has something rather similar already.

Which console-targeted features have followed this pattern before: publicly revealed, driver changes implemented, and then reversed before the console hardware came out?

In the case of the current gen, items like enhanced asynchronous compute engines and other functions were built into Bonaire and rolled out further with Hawaii. Some low-level API enhancements might have migrated into Mantle, but that wasn't abandoned until DX12 and Vulkan took over as the path forward.
The TrueAudio blocks using customizable Tensilica cores appear to be a case of console-specific features that barely got attention in the PC space, but AMD didn't try that hard to implement those functions and didn't fully give up on them until after the consoles were out.

What would it mean for a console-targeted feature to be marketed to the public, then mostly ditched prior to a planned console launch?
 
I would guess the PowerPC cores would have melted the thin structures ;)

Well, not really, but I don't think it would have changed much. Maybe then we could have had a silent Xbox 360 in a smaller form factor (e.g. Wii U size), but I really don't know what you'd want with that.
I really don't know why MS didn't clock the CPU in the Xbox One S a bit higher. That would have changed a bit more :)
 
They'd have to up the clock in the OG as well.
 
More just academic curiosity than anything. :) At 32nm there'd certainly be a need to integrate the eDRAM, and at 14nm SOI it'd probably end up with a bunch of dead space à la Espresso.

Why?
 
Profile shrinking, and weird game-code hacks breaking, if that's a thing (it's not).
It's not worthwhile (the effort and testing) unless the whole platform and developers can take advantage of it.
 
More just academic curiosity than anything. :) At 32nm there'd certainly be a need to integrate the eDRAM, and at 14nm SOI it'd probably end up with a bunch of dead space à la Espresso.

The 32nm 360 SoC did integrate the eDRAM, didn't it...?
 
Who knows... maybe at 5 or 7nm a portable Xbox 360 would be possible... game sizes are easily downloadable... It should be quite cheap to produce too... Is there a market?!?
 
I believe the PS5 will use older, more mature, and cheaper hardware instead of cutting-edge, more expensive hardware.
I believe the Xbox One X is a good example of this.
PS5 will probably use Zen with a beefed-up GPU, similar to how MS went with the cheaper, more mature Jaguar, tweaked it, and beefed up the GPU and memory.

I expect at least 5x the performance of the PS4, out late 2020.

My stab in the dark.
 
I think Sony will be relatively conservative, as they were with PS4. To a certain extent they have no choice, given console box limitations.

AMD released a 3.8TF GPU in late 2011 and PS4 launched two years later at 1.84TF, so if PS5 comes in at 10-12TF in 2019 versus AMD's 13TF top-end GPU today, it would be much less conservative this time around, relatively speaking.
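
For reference, the arithmetic behind those comparisons (a quick Python check, using the TF figures as quoted above):

```python
# Working the numbers quoted in the thread (TF = FP32 teraflops).
ps4_tf, flagship_2011_tf = 1.84, 3.8   # PS4 (2013) vs. AMD's late-2011 GPU
print(f"PS4 vs. 2011 flagship: {ps4_tf / flagship_2011_tf:.0%}")   # ~48%

flagship_2018_tf = 13.0                 # AMD's top-end GPU at time of writing
for ps5_tf in (10.0, 12.0):
    print(f"{ps5_tf:.0f} TF PS5 vs. {flagship_2018_tf:.0f} TF flagship: "
          f"{ps5_tf / flagship_2018_tf:.0%}")                      # ~77-92%

# The earlier "at least 5x PS4" guess lands at the low end of that range:
print(f"5x PS4 = {5 * ps4_tf:.1f} TF")  # 9.2 TF
```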
 