Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Apparently the way AMD make these custom chips for their customers is that they each get a separate team that kind of works in isolation. So the team working on Microsoft's chip has no communication with the team working on Sony's.

We know from the Cerny interview that they have been working on it for four years, and if we choose to believe that rumour (or whatever it was) where that guy tweeted about Sony taking two thirds of AMD's GPU engineers, then you can see how this is possible, or how someone took that information and made this all up.
 
You appear to be arguing for argument's sake here, because you've completely ignored the rest of my post where I clarify exactly what my position is. Whatever.

Apologies, I thought you were responding to somebody else (check the way you quoted) so I skipped it. I've got Freakers to kill, you know!

Your post suggests you believe I think PS5 will have RT where other AMD products will not, but I've never suggested this, nor do I believe it remotely likely. Raytracing is old tech; I remember dabbling with it with a legally questionable copy of Sculpt 3D on the Commodore Amiga a long, long time ago.

To answer your question about how involved you could expect Sony Animation to be, I'd suggest their day-to-day involvement in any collaborative project would be minimal. The hardware, and the scale at which Sony Animation model environments, are orders of magnitude beyond what will be in PS5. You're talking about tens of millions of times the total computational capability, and it still takes hours to produce a single frame.

But Sony Animation have been credited in many published technical articles on methods for optimising raytracing algorithms, and this, along with unpublished experience, can make a huge difference even, or especially, when scaled down.

I am sure that AMD are more than capable of incorporating any client proprietary IP into their designs without it contaminating what I assume is a genuine cleanroom GPU architecture R&D environment. When you're in a billion dollar industry you have to play fair and by the rules.

Will PS5 have AMD+Sony Special Sauce RT hardware? Sure, maybe. Will this impede AMD producing their own RT hardware? Pretty doubtful. Again, RT is old. The trick is doing it quickly in ways customers (developers) need, which is a problem that AMD engineers face every single day in their competitive field.

Therefore, IMO, the HWRT solution in PS5 is likely to be a modified version of what AMD are already working on, in the same way Sony had an ID buffer added to the GPU in PS4Pro. Those modifications will not appear in an MS console.

Agreed. I've never suggested anything different.

Those modifications may appear in PC GPUs depending on the deal. As regards Sony's wider experience, I don't see it as particularly relevant. I doubt Sony Pictures will be involved at any level beyond everyone reading their white papers, because they don't profit from it at all (unless the designs are going to be included in AMD GPUs that will accelerate Arnold), and it'll be engineered by AMD anyway because Sony Pictures and SIE have no experience of engineering silicon.

It's math. The mathematical model is important. From experience I can tell you I don't need to know anything about engineering silicon to explain a software performance issue to a hardware partner for them to produce a number of possible solutions.
 
Well, I'm not sure AMD have those kinds of resources, and it would also be a lot of redundant work. AMD would have told Sony what they're working on, and I don't think Sony would pay for them to do something they're already doing. I can see them asking AMD to take their architecture in a modified direction given how big a customer they are, and I can see AMD agreeing if the gain is big enough, but such different, divergent paths don't seem sensible.



ArtX made the GC GPU. They designed it and then went looking for a customer, which turned out to be Nintendo. Then ATI bought them, and AMD later bought ATI. It's not really like AMD spun off an entire design team to do a new GPU. The 360 was also based on an abandoned PC architecture (R500?) that MS were forward-thinking enough to see the merit in. It wasn't a fully bespoke architecture though. Things have only got more complex and expensive since then.

I think AMD can do exclusive features, but the more engineering work they need the bigger the cost and the bigger the impact on whatever else AMD are working on. You can't rule out a radically different version of Navi with custom Sony RT hardware, but I think it's a big ask. Sony asking for some level of customisation seems likely though, and it's also possible that three years ago (or whatever) Sony talked with AMD about the direction of GCN and what they wanted to see.

I just think that the more complex and core the feature or functionality, the less likely it'll be exclusive. I mean, MS work with AMD too and have done for a long time now (14 years and counting), and they've been one of the prime movers in pushing ray tracing into the consumer space. But I don't see them having some kind of unique hardware ray tracing variant of Navi.

I think it's most likely that if anyone has it, everyone has it. But, you know, I live to be proven wrong. :LOL:
I think it's best for now to postpone that discussion until we know the specs of PS5, XB2 and desktop Navi GPUs. That's going to be another very interesting thread!

I think it's most likely that if anyone has it, everyone has it. But, you know, I live to be proven wrong.

It seems to be what many of you are thinking. Do Microsoft have any hardware acceleration dedicated (mostly) to checkerboard rendering? No. Well, I think it could be exactly the same with hardware RT only on PS5, just with more advanced stuff.

I hope that if that's the case you'll all be OK, because it seems to be a big problem for you, as if it were breaking some important rule. Personally, after the first PS4 Neo leak in 2016 and the announcement of the mid-gen console, nothing surprises me anymore. :nope:
 
Your post suggests you believe I think PS5 will have RT where other AMD products will not, but I've never suggested this...
I never said you did! It's an argument from me saying it'll never happen ahead of anyone really suggesting it, although ultragpu floated the possibility first. My initial comment was an independent statement, not a reply to anyone in particular, targeted at the inevitable fanboy noise to come across the internet - "PS5 has hardware RT and is the only console to have hardware RT and PC's won't even get AMD RT and it's made by the people who make movies so it'll be able to render Toy Story 4, or at least Hotel Transylvania 3, in realtime, blah blah."

But Sony Animation have been credited in many published technical articles on methods for optimising raytracing algorithms, and this, along with unpublished experience, can make a huge difference even, or especially, when scaled down.
But even then, it's not like graphics progress happens behind closed doors. It's all shared pretty openly to advance the art. Even if Sony weren't looking to add RT to PS5, AMD would be collating ideas from their existing visualisation partners, I'm sure. Case in point: the articles you link to are free for me and everyone else to read, and I'm sure a few people at AMD keep up to date with them all.

It's math. The mathematical model is important. From experience I can tell you I don't need to know anything about engineering silicon to explain a software performance issue to a hardware partner for them to produce a number of possible solutions.
Yeah, but I wouldn't call that engineering. AMD would engineer a solution. And in the case of executing this maths, maths isn't the bottleneck by any stretch. It's memory traversal. That's something software engineers likely have little to contribute to (they can have the best theories for data structures, but not for designing a memory subsystem that accelerates those as part of the whole memory topology), because they don't need to worry about how to execute what they want; they just ask for it. They can present their ideas for how they'd like to access memory, and then hardware engineers have to find some way to pull that off, if possible. There'll be some crossover for the low-level software engineers who know how the hardware works and have ideas, and of course ideas always help, but generally I can't see software engineers being much use in engineering silicon. Their contribution will be requests for how they want to access the data, what structures they think can accelerate RT fastest, and so forth.
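Just to illustrate the point with a purely hypothetical sketch (not any vendor's actual hardware or API): the inner loop of tracing a ray through a bounding-volume hierarchy is a chain of dependent loads of nodes scattered through memory, while the arithmetic per node is a handful of multiplies and compares. That's why the interesting engineering is in the memory subsystem rather than the maths.

```c
// Hypothetical sketch of BVH traversal, the core memory-access pattern that
// RT hardware has to cope with. Node layout and names are illustrative only.
#include <stdbool.h>
#include <stddef.h>

typedef struct { float min[3], max[3]; } AABB;

typedef struct BVHNode {
    AABB bounds;
    struct BVHNode *left, *right;   // NULL children => leaf
    int first_tri, tri_count;       // leaf payload (triangle tests elided below)
} BVHNode;

// Standard "slab" ray/box test: cheap ALU work once the node is in cache.
static bool ray_hits_aabb(const float o[3], const float inv_d[3], const AABB *b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b->min[i] - o[i]) * inv_d[i];
        float t1 = (b->max[i] - o[i]) * inv_d[i];
        if (t0 > t1) { float t = t0; t0 = t1; t1 = t; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return false;
    }
    return true;
}

// Iterative traversal: every iteration is a pointer chase to whichever node
// the previous iteration chose, i.e. scattered, dependent memory loads.
bool trace_any(const BVHNode *root, const float o[3], const float inv_d[3]) {
    const BVHNode *stack[64];       // plenty for sane tree depths
    int top = 0;
    stack[top++] = root;
    while (top > 0) {
        const BVHNode *n = stack[--top];
        if (!ray_hits_aabb(o, inv_d, &n->bounds))  // one dependent load per node
            continue;
        if (n->left == NULL)        // leaf: a real tracer would now test
            return true;            // n->tri_count triangles; elided here
        stack[top++] = n->left;     // interior: descend into both children
        stack[top++] = n->right;
    }
    return false;
}
```

The software side can propose node layouts and traversal orders; turning that access pattern into caches, compressed nodes and dedicated traversal logic is the silicon engineers' problem.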

And my point here isn't to say "Sony brings nothing" (my previous references to Cell and PS2 GPUs were a bit tongue-in-cheek) but to be clear to the internet fanboys, who aren't here reading this, that what Sony brings isn't going to be the difference between realtime Spider Man CGI from PS5 versus last-gen graphics from a future XBox. It'll be the difference between PS4Pro having an ID buffer or not; a 5% acceleration in some workloads and a bit more flexibility for others, kind of thing. It won't be a game changer, and it won't be Secret Sauce. If MS get RTRT hardware from AMD, it'll be comparable.
 
Could things be far simpler? Sony perhaps were in the right place, internally testing RTRT while AMD were working on it. Sony lent software developer time to test and assist with AMD's own development. AMD are not a software firm, and we know Sony have shown GT with RTRT.
 
Apparently the way AMD make these custom chips for their customers is that they each get a separate team that kind of works in isolation. So the team working on Microsoft's chip has no communication with the team working on Sony's.

It makes zero sense to me for AMD to co-develop a significant GPU related technology like RT with Sony in control of the intellectual property. At the least it would limit AMD's future development and at worst it could end in a messy blackmail/lawsuit case.
 
Could things be far simpler? Sony perhaps were in the right place, internally testing RTRT while AMD were working on it. Sony lent software developer time to test and assist with AMD's own development. AMD are not a software firm, and we know Sony have shown GT with RTRT.
Even simpler.
We have no idea what form the RTRT will take in the following products:
Navi
PS5
Scarlett

This ongoing narrative that PS5 is hardware-based (and we don't know what that actually means) while Scarlett is software-based because all we know is which API it supports is just simply wrong.

Next I'm going to hear that Scarlett doesn't have a GPU because it supports DX12, therefore it's software-based.
 
It makes zero sense to me for AMD to co-develop a significant GPU related technology like RT with Sony in control of the intellectual property. At the least it would limit AMD's future development and at worst it could end in a messy blackmail/lawsuit case.
This is wrong.

Also, that didn't mean that even if it was co-developed it couldn't turn up in future PC GPUs if they agreed to it. It comes down to money and agreements.
 
Even simpler.
We have no idea what form the RTRT will take in the following products:
Navi
PS5
Scarlett

This ongoing narrative that PS5 is hardware-based (and we don't know what that actually means) while Scarlett is software-based because all we know is which API it supports is just simply wrong.

Next I'm going to hear that Scarlett doesn't have a GPU because it supports DX12, therefore it's software-based.
AMEN!:yep2:
 
[...]
Also, that didn't mean that even if it was co-developed it couldn't turn up in future PC GPUs if they agreed to it. It comes down to money and agreements.

Yes, and I would even argue that it would be in Sony's interest to make it available for AMD to use in PC GPUs. PC support would mean more people could use it, which in turn would be an incentive for 3rd party developers to use it. That could then potentially lead to 3rd party games running and/or looking better on the PS5 compared to Anaconda.
 
Playing devil's advocate: if Sony had an exclusive feature that went underused by 3rd parties, their first parties would gain a clear advantage and likely sell more as a result. But that's nigh impossible to happen for real, as it would be too damaging for AMD and the graphics industry at large.

On the subject of console customisations that ended up in PC GPUs, do we have any clear examples? I think PS4's eight ACEs was one, though wasn't that inevitable? ID buffer doesn't seem to have crossed over.
 
I never said you did! It's an argument from me saying it'll never happen ahead of anyone really suggesting it, although ultragpu floated the possibility first.

You need to make it clearer when you're segueing from responding directly to the person you're quoting into some form of public service announcement, because I couldn't tell the difference. Metaphorically, we were having a chat about the weather (it's rubbish again), then you railed on me about how terrible terrorism is.

But even then, it's not like graphics progress happens behind closed doors. It's all shared pretty openly to advance the art.

My job encompassed keeping on top of these developments, and this is 100% wrong. Most of the critical techniques for reducing the computational requirement are proprietary and closely guarded. If Pixar come up with a method to halve their rendering cost at almost no loss in quality, then they have a massive advantage in the multi-billion-dollar movie industry.

Yeah, but I wouldn't call that engineering. AMD would engineer a solution. And in the case of executing this maths, maths isn't the bottleneck by any stretch.

I think that's a little insulting to the folks who work on engineering silicon and the software APIs and libraries that form the basis of the technology in consoles. Hardware isn't architected or engineered in isolation. :nope:

It's memory traversal. That's something software engineers likely have little to contribute to (they can have the best theories for data structures, but not for designing a memory subsystem that accelerates those as part of the whole memory topology), because they don't need to worry about how to execute what they want; they just ask for it.

If your engineering starting point is x gigabytes per teraflop but your software method can halve this, your hardware design (and its associated problems) becomes software-driven. Every single one of Intel's additional 80x86 instructions since around 1987 has been software-driven: starting with 3D, then media (audio/video codecs), through virtualisation and, more recently, encryption. None of these hardware capabilities are a black-box solution; software needs inform Intel about what types of instruction are required for the very wide variety of software out there.
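The encryption case is the easiest to make concrete. Until AES-NI appeared in 2010, an AES round was a table-driven software loop; once the workload mattered enough, it became a single instruction. A minimal sketch, assuming an AES-NI-capable x86 CPU and GCC or Clang (build with -maes):

```c
#include <stdio.h>
#include <stdint.h>
#include <wmmintrin.h>   // AES-NI intrinsics

int main(void) {
    // Arbitrary 128-bit state and round key, purely for illustration.
    __m128i state = _mm_set1_epi8(0x3c);
    __m128i key   = _mm_set1_epi8(0x7f);

    // One full AES encryption round (ShiftRows, SubBytes, MixColumns,
    // AddRoundKey) executed by a single hardware instruction.
    __m128i out = _mm_aesenc_si128(state, key);

    uint8_t bytes[16];
    _mm_storeu_si128((__m128i *)bytes, out);
    for (int i = 0; i < 16; ++i) printf("%02x", bytes[i]);
    printf("\n");
    return 0;
}
```

The hardware didn't invent the workload; the software side told the hardware people exactly which operation was worth baking in.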
 
On the subject of console customisations that ended up in PC GPUs, do we have any clear examples? I think PS4's eight ACEs was one, though wasn't that inevitable? ID buffer doesn't seem to have crossed over.
That would be funny, DX supporting something that's in PC and PS but not Xbox :LOL:

I think most crossover, if any, would be optimisation-type changes, not new instructions.
Basically invisible beyond the driver level.

But this is a new age, with this much console and PC hardware development happening around defined IP blocks.
 
You need to make it clearer when you're segueing from responding directly to the person you're quoting..
I didn't quote anyone.
https://forum.beyond3d.com/posts/2066947/

"There's no way Sony have hardware RT and AMD haven't got any for their GPUs. What's Sony's expertise in creating an RT unit that AMD have fumbled with? What IP are Sony going to own and control and limit AMD's use of? It's fanboy nonsense. If there's RT hardware in the GPU, it's from AMD and will appear in their PC GPUs. Worst case, Sony have paid console exclusivity in some weird deal and Ms are out of luck, which is pretty implausible IMO."​

That was my first statement, my 'public service announcement'. Everything after that was in response to responses to me.

Edit: So, I stated my position in my first statement. Then, in the response I replied to, I said I was going to clarify my position and re-expressed my original view to try and avoid ongoing confusions. Apparently that didn't work out so well. ;)
 
It is generous for 1080p, but I can see a couple of reasons why they would do this:
  1. Call it 1080p as that's what people understand and it's easier to talk about, but it's really a range of 1080p-1440p, sitting closer to 1440p and scaled to the appropriate output resolution. That will give good results on both 1080p and 4K displays.
  2. When Anaconda is dropping resolution to gain performance, say to 1800p, Lockhart won't need to drop below 1080p, so it will hold up pretty well in the future.
  3. Disabling more than 50% of the chip doesn't improve yields/binning, so they may as well leave it overpowered.
Just to add a point 4 to this, which might be one of the bigger reasons:
X1X BC, i.e. being able to run the 1X versions of XO games.
Even though a lot less than 6 TF may actually be fine performance-wise for next gen at 1080p, because games would be coded to use the new instruction set and features, it may not have enough raw performance in BC mode.
It would therefore need to be closer to 6 TF than would actually be needed for next gen (rough numbers in the sketch below).
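For what it's worth, the back-of-envelope pixel arithmetic behind points 2 and 4 looks like this (the 6 TF figure is the X1X's; everything else is just ratio-of-pixel-counts speculation, not a spec):

```c
// Rough pixel-count arithmetic only; assumes per-pixel cost scales linearly
// with an X1X-like 6 TF budget at 4K, which is a simplification.
#include <stdio.h>

int main(void) {
    const double px_1080p = 1920.0 * 1080.0;   // ~2.07 Mpix
    const double px_1440p = 2560.0 * 1440.0;   // ~3.69 Mpix
    const double px_4k    = 3840.0 * 2160.0;   // ~8.29 Mpix
    const double x1x_tf   = 6.0;               // X1X compute budget (TF)

    // Naive scaling: same per-pixel cost as X1X at 4K, just fewer pixels.
    printf("1080p needs ~%.1f TF of X1X-like compute\n", x1x_tf * px_1080p / px_4k);
    printf("1440p needs ~%.1f TF of X1X-like compute\n", x1x_tf * px_1440p / px_4k);

    // BC mode gets no benefit from new instructions or features, which is why
    // the post argues the real figure needs to sit closer to the full 6 TF.
    return 0;
}
```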
 