Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

I hope that if it's the case you'll all be OK, because it seems to be a big problem for you, as if it were breaking some important rule. Personally, after the first PS4 Neo leak in 2016 and the announcement of the mid-gen console, nothing surprises me anymore. :nope:
I'd be okay with it, if your statements were grounded in some form of reality.
You're still clinging to your last statement when we already know that it takes much longer than a single year to produce a new console. And MS has stated on several occasions that the 1S and 1X both started development at the same time, which was shortly after Phil took over.

The issue is whether MS can come up with an effective software-based RT solution, not the fact that it's software based. A software-based RT solution would require a lot more adjustment to the API to get good performance out of it, because right now the API is too rigid for software RT to perform well; we see this even with the 2080 Tis. There are just too many moving parts, and if it were purely software based, there would be nothing stopping the 1X from supporting it today.

So the most probable reality is that either MS is shipping with hardware acceleration for RT or they aren't doing RT at all. Per their DXR documentation they are no longer developing the software RT path further, so for the software-based scenario to hold you'd have to ignore the fact that they launched DXR _last_ year, and they would have to relaunch another DXR API to support a software-based one.
 
Quite. Wouldn't they need to do a lot of porting to support Stadia?

There is no reason to be coy about Xbox support either, so I imagine it may simply be PS4.
It's already on PC, so not as painful as trying to port down to Switch.
 
Don’t suppose you looked over the GDC presentation about porting Odyssey?

At least for launching it on PS4, it's more a case of getting Sony to agree to and support such a platform than of porting an entire back catalogue.
 
Statements based on reality? What about your statements? All I hear from Microsoft, and from the usual controlled leaks and Microsoft outlets, is about, and only about, DirectX RT. And since the PS5 reveal, oddly, they are changing their message and now talking about hardware RT using words like 'async compute' and 'microcode'. In other words, software RT.
 
Right, that's how DXR is intended to be used.
You still have to make the calls, and you can call DXR asynchronously on the compute queue or directly through the compute queue. DXR has always leveraged the compute queue for its calls, since after the intersections are calculated, the shader that runs on the hit triangle is a compute shader.

Developers make an async compute DXR call to rebuild their BVH tree for ray tracing to eliminate the latency. They may even use it for intersection calculation as well. Whether it's hardware accelerated is entirely dependent on the hardware, as the driver is responsible for performing what the API asks for.
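
To make that concrete, here's a minimal C++ sketch (not a full program) of queuing a DXR acceleration-structure (BVH) build on an async compute queue. It assumes a DXR-capable device; `device`, `cmdList` (an ID3D12GraphicsCommandList4*), `geomDesc`, `scratchBuffer` and `blasBuffer` are hypothetical objects created elsewhere, and fencing/error handling are omitted.

```cpp
// A COMPUTE-type queue can execute concurrently with the graphics queue.
D3D12_COMMAND_QUEUE_DESC queueDesc = {};
queueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
ID3D12CommandQueue* computeQueue = nullptr;
device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&computeQueue));

// Describe a bottom-level acceleration structure (BVH) build over the geometry.
D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC buildDesc = {};
buildDesc.Inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
buildDesc.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
buildDesc.Inputs.NumDescs = 1;
buildDesc.Inputs.pGeometryDescs = &geomDesc;
buildDesc.ScratchAccelerationStructureData = scratchBuffer->GetGPUVirtualAddress();
buildDesc.DestAccelerationStructureData    = blasBuffer->GetGPUVirtualAddress();

// Record the build; whether it runs on dedicated RT hardware or via a driver
// software path is entirely the driver's decision -- the API call is the same.
cmdList->BuildRaytracingAccelerationStructure(&buildDesc, 0, nullptr);
cmdList->Close();
ID3D12CommandList* lists[] = { cmdList };
computeQueue->ExecuteCommandLists(1, lists);
// Signal a fence here; the graphics queue waits on it before tracing rays.
```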

I'm not sure if this is a misunderstanding, but the hardware-accelerated portion of ray tracing is only BVH traversal and intersection. Possibly ray projection too; I'd guess that would be intertwined with intersection or scheduling. The remaining work is done in compute.

The orange box is the only part of RT that is hardware accelerated, as far as we know; possibly some of the scheduling as well, at least from the standpoint of DXR.
[Image: flow.png — DXR ray tracing pipeline flow diagram]


The scheduling portions of execution are hard-wired, or at least implemented in an opaque way that can be customized for the hardware. This would typically employ strategies like sorting work to maximize coherence across threads. From an API point of view, ray scheduling is built-in functionality.

The other tasks in raytracing are a combination of fixed function and fully or partially programmable work:

The largest fixed function task is traversing acceleration structures that have been built out of geometry provided by the application, with the goal of efficiently finding potential ray intersections. Triangle intersection is also supported in fixed function.

Design goals
  • Implementation agnostic
    • Support for hardware with or without dedicated raytracing acceleration via single programming model

    • Expected variances in hardware capability are captured in a clean feature progression, if necessary at all
  • Embrace relevant D3D12 paradigms
    • Applications have explicit control of shader compilation, memory resources and overall synchronization

    • Applications can tightly integrate raytracing with compute and graphics [queues]

    • Incrementally adoptable
  • Friendly to tools such as PIX
    • Running tools such as API capture / playback don’t incur unnecessary overhead to support raytracing
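
For contrast with the fixed-function side described above, here's a hedged sketch of the part the application does see, with the same caveats: `rayGenTable`, `missTable`, `hitGroupTable`, `recordSize` and `rtStateObject` are hypothetical objects built elsewhere. The app fills shader tables and calls DispatchRays; BVH traversal, triangle intersection and ray scheduling all happen behind that one call, whether in fixed-function hardware or driver software.

```cpp
D3D12_DISPATCH_RAYS_DESC rays = {};
rays.RayGenerationShaderRecord.StartAddress = rayGenTable->GetGPUVirtualAddress();
rays.RayGenerationShaderRecord.SizeInBytes  = recordSize;
rays.MissShaderTable.StartAddress = missTable->GetGPUVirtualAddress();
rays.MissShaderTable.SizeInBytes  = recordSize;
rays.HitGroupTable.StartAddress   = hitGroupTable->GetGPUVirtualAddress();
rays.HitGroupTable.SizeInBytes    = recordSize;
rays.Width  = 1920;
rays.Height = 1080;
rays.Depth  = 1;

cmdList->SetPipelineState1(rtStateObject);  // ID3D12StateObject holding the RT shaders
cmdList->DispatchRays(&rays);               // everything inside is implementation-defined
```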
 
My job encompassed keeping on top of these developments, and that's 100% wrong. Most of the critical techniques for reducing the computational requirement are proprietary and closely guarded. If Pixar come up with a method to halve their rendering cost at almost no loss in quality, then they now have a massive advantage in the multi-billion dollar movie industry.

How does this "massive advantage" work in practice for a studio which only makes its own movies? They aren't really in price competition with other studios. IMHO they would license their technology so they could make money from everybody needing it to optimise their costs.
 
Your creative teams are limited by the time it takes to render. If you can experiment and render in half the time, your digital production budget is literally halved and your hardware can output twice as many movies. There is no shortage of movies waiting to be made; there is a shortage of CPU time.
 
So they can produce more movies, but they still don't really profit from others being unable to produce theirs more cheaply.

They are not a production company offering their services to others, where this competitive edge would profit *them*.
 
Oh this again..

Where is this discussion?
Who wrote this on B3D? Please, please point me to one post that describes this theory. Where is it? One post is enough.
I see this coming up over and over, yet I can't find the origin of it all. Does it even exist?

I mean...

...[snip]...Besides, the reddit post actually only says the RT blocks were co-developed by AMD and Sony (who contrary to local forum belief still develops custom ISPs and SoCs), and they think that's the source for the "Navi is custom made for Sony" rumors. ...

...[snip]...You literally just dismissed my point of Sony having hardware teams that have been consistently working on imaging processors for the last decade. It's right there in my post that you quoted.
¯\_(ツ)_/¯
There are extracts, but the full posts are linked. Here you're talking about a reddit post where the PS5 RT blocks are co-designed by Sony (you're actually describing the theory you're now asking me to provide a link to, as though it doesn't exist...?); you're also criticising Shifty for not taking Sony's image processor teams seriously as something potentially leading to enhanced functionality for the PS5 GPU (presumably)?

This thread is getting a bit trippy.

The Cell does graphics work. In fact, the Cell was originally intended to do all the graphics work.
And if Sony "haven't substantially engineered a console GPU in nearly 20 years", then who made the GPU in the 15-year-old PlayStation Portable?
I'd also say the dedicated Wide I/O implementation on the SGX543MP4 in the 8-year-old Vita, using TSVs, should count as "substantial engineering" (since it was the first TSV implementation in a mass-produced device, AFAIK).

As time is of the essence tonight, I'll try to be brief!

- Cell was never seriously planned to do all the graphics work, the idea was discarded early on. They had initially tapped Toshiba, iirc, to do the graphics chip. But Sony's original requirements were unsuitable and they had to scramble for something with pixel and vertex shaders (there's a certain theme here!).

- PSP is a handheld, not a console, but even if you want to count it (different meanings to different folks and all that), it was functionally limited compared to PowerVR of the day, and Sony would discard it for an outsourced design next time (there's a certain theme here!).

- I'll happily admit TSVs are cool, but I never said Sony couldn't do "substantial engineering" as you imply; I said "Sony haven't substantially engineered a console GPU in nearly 20 years". And the PSV is a handheld, not a console (semantics again), but even if you want to count it, it's actually a PowerVR design in there (there's that theme again!).

I admit MS got more out of GCN, but I object to the statement about them doing a better job when it was clearly a different job.
I could just as well say "AMD with the Vega 64 did a better job of getting more performance out of a FinFET GPU than Nvidia did with the GTX 1050."
It makes just as much sense as your "better job" sentence.

I think that's a bit of a strange counter example, to be honest.

The GTX 1050 gets much higher real-world performance per theoretical GFLOP, much higher real-world performance per GB/s, and clocks higher with better power scaling. And it's a different architecture. All Vega has going for it is absolute performance.

The X1X gets higher real-world performance per theoretical GFLOP, higher real-world performance per GB/s, and clocks higher with better power scaling. And it's the same architectural family. The X1X not only has higher absolute performance, it's better at delivering within its theoretical limits. I think that makes it a better implementation of a GCN GPU.
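
As a toy illustration of that "delivering within its theoretical limits" metric: the TFLOP and bandwidth figures below are public spec numbers, but the frame rates are invented placeholders, not benchmark results.

```cpp
#include <cstdio>

// "Efficiency" here = measured performance divided by a theoretical limit.
// fps values are made-up placeholders purely to show the arithmetic.
struct Gpu { const char* name; double tflops; double gbps; double fps; };

int main() {
    const Gpu gpus[] = {
        { "GTX 1050",  1.8, 112.0,  30.0 },
        { "Vega 64",  12.7, 484.0, 120.0 },
    };
    for (const Gpu& g : gpus)
        std::printf("%-8s  %5.1f fps/TFLOP   %.3f fps per GB/s\n",
                    g.name, g.fps / g.tflops, g.fps / g.gbps);
    // With these placeholders the 1050 extracts ~16.7 fps per theoretical TFLOP
    // versus ~9.4 for Vega 64, despite far lower absolute performance.
    return 0;
}
```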

Like PS4 ended up being compared to X1, really. At least Sony were at a relative disadvantage for time and cost with the Pro. MS just plain lost first time round because they were playing the wrong game.

Anyway, I fully accept that you think differently. Now if next gen one system is slower overall but has faster raytracing ... I'd say that's just plain different. Not sure you could easily say if one was better. I guess software would tell you in the long term.

I'm a genie and I just made your wish come true:
https://en.wikipedia.org/wiki/PlayStation_Vita
Sadly, I was meaning high-power stuff rather than fanless and battery-powered. There are lots of examples of cool PowerVR stuff in mobile. Some of the Intel integrated parts were actually pretty capable, but always held back by drivers and by the fact that Intel was more interested in its own graphics tech and non-Atom-y processor lines.
 

iroboto, do you think Navi could end up like Turing but in reverse? I.e. the mainstream cards come without dedicated raytracing acceleration, but the higher-end models that come later include it?

Maybe the push to get the core (traditional raster / compute) features completed came from one or both console vendors so they could begin work on custom units, and as a result hardware RT got pushed back?

Meh. Just spitballin'.
 
Is their sensor silicon development that specialized?

Stacking dissimilar dies inexpensively seems to be the biggest problem in lowering the cost of HBM; TSVs are too expensive. Sony's continuous R&D in stacking is producing industry firsts almost every time they announce a new product.

I'm looking at the latest 20MP sensor, which puts a 14-bit ADC on every single pixel and requires multiple Cu-Cu contacts per pixel. If I understand correctly, that means they have maybe 50 million Cu-Cu connections between the two dies, and it doesn't use TSVs.
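
A quick back-of-envelope check on that figure; the contacts-per-pixel count is an assumption, since the announcement only says "multiple":

```cpp
#include <cstdio>

int main() {
    const double pixels = 20e6;           // 20 MP sensor
    const double contactsPerPixel = 2.5;  // assumed 2-3 Cu-Cu contacts per pixel
    std::printf("~%.0f million Cu-Cu connections\n",
                pixels * contactsPerPixel / 1e6);  // prints "~50 million"
    return 0;
}
```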

I have no idea if this is useful for a console, or if it's too specialized for sensors only. But this is potentially useful for cross-patent licensing to improve the density of contacts between layers, maybe die alignment, maybe lowering stacking costs. It could have value to powerful players in the chain.
 
So they can produce more movies, but they still don't really profit from others being unable to produce theirs more cheaply.
Other than they are able to get more movies out to make more money? More merchandising? What kind of profit are you looking for? Treasure? Doubloons?
 
I have no idea if this is useful for a console, or if it's too specialized for sensors only. But this is potentially useful for cross-patent licensing to improve the density of contacts between layers, maybe die alignment, maybe lowering stacking costs. It could have value to powerful players in the chain.
If Sony are capable of contributing to fast RT processors, wouldn't they pursue that for professional imaging? It'd help their own studios and they could sell them for Big Bucks. I'm sure AMD would be on board with that too.

Well, there's nothing in the rumour saying that isn't happening. ;) Maybe Navi will be presented as a Sony co-engineered product line and sold to pro imaging?
 
Other than they are able to get more movies out to make more money? More merchandising? What kind of profit are you looking for? Treasure? Doubloons?
Also, Hollywood works on a crap-shoot principle. Ideas are thrown out there without any real idea of whether they'll work or not, with the cost of weak films being absorbed by the takings of the big hitters. Faster film production means faster cycling through ideas, and more films per year means more big hitters per year on average. That should help even out the erratic highs and lows of the Pictures division's financials.

Although OTOH, more films could just result in movie fatigue and less revenue per movie.
 
Other than they are able to get more movies out to make more money? More merchandising? What kind of profit are you looking for? Treasure? Doubloons?

They don't profit from the *exclusivity* of your theorised technology advantage, then. If they shared their technology they would profit, though. Is that argument really that difficult to grasp?
 
If Sony are capable of contributing to fast RT processors, wouldn't they pursue that for professional imaging? It'd help their own studios and they could sell them for Big Bucks. I'm sure AMD would be on board with that too.

Well, there's nothing in the rumour saying that isn't happening. ;) Maybe Navi will be presented as a Sony co-engineered product line and sold to pro imaging?
I am only using baseless rumors as a foundation to create tangential discussions.
 
Sounds derivative.

🤔

2. based on or making use of other sources
1. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point.
 