NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

What you've described elsewhere about the DMEs' operations has me somewhat concerned that devs will have a Cell-like memory juggling act to perform, micromanaging memory accesses and movements around the system.

Yes, potentially, though I think Cell is a poor analogy; if what I was told is true, it's probably closer to loading textures into PS2 VRAM.
As I've said in other posts it potentially gives developers a lot of rope to hang themselves with.

A lot of modern renderers are only different in superficial ways (from a data flow standpoint) now, and there might be simple guidelines/best practices that MS can just give developers that will avoid the worst of it.

I happen to like optimizing data flow; it appeals to the way my mind works. I like building models of how something works, constructing experiments to verify them, and measuring improvements. But I'm not sure most people want to deal with that crap when they're just trying to ship a game.
 
I understand, but from what the rumors are pointing to, Orbis won't have any problems with bandwidth either. The general feeling seems to be "Durango will be able to use bandwidth very efficiently and keep the GPU fed all the time, while the move engines will make the performance gulf much smaller", while completely ignoring the fact that Orbis should have no problems keeping its GPU fed either.

It seems as though some people try to describe Durango in the best possible case (very efficient, dedicated silicon for graphics-related tasks, low latency), and Orbis like an off-the-shelf PC with worse efficiency and more raw power. I don't think that paints the full picture, but I guess the worst we can expect from Orbis is memory bandwidth of 144GB/s - 176GB/s, as I don't see them achieving 192GB/s.

Yes, there's not much to discuss about Orbis as it's fairly straightforward from what we know so far.

There are people pointing out best possible cases for Durango only because there's a flood of people pointing out worst possible cases. Using myself as an example: going purely by the numbers, Durango is obviously at a performance disadvantage, assuming the rumors for Orbis and Durango are correct. There's nothing interesting to talk about or speculate on there.

What is interesting is trying to figure out what the various bits MS has put in to, presumably, ameliorate some of those performance discrepancies, and how close they can get to Orbis. Will it outperform Orbis at certain tasks while obviously falling short at others? Will it then be overall similar with regard to frame time and resolution?

Is it going to be a situation similar to PS3/X360, where the PS3 was simply atrocious and entirely too slow at certain tasks, but also faster than the X360 at others?

And in many ways it's similar to the PS3 versus X360 tech discussions. X360 was relatively boring to talk about as it was rather straightforward. PS3 was the interesting one, since it was obviously not as good at graphics rendering if you went purely by the numbers. What was interesting was to see how other bits and bobs were used to bring overall performance onto a relatively equal footing.

And since we don't know enough yet about Durango, most of this is purely speculation about what various bits "could" potentially do to help things. It may not work out at all like people speculate. It may not work out well in general game rendering workloads. Maybe it works better. Perhaps there's more we don't know about.

No one is currently saying that Durango is going to be better than Orbis. Most of those speculating about whether the extra bits can help significantly are assuming that, even if everything works like MS hopes, it will just get Durango close enough to Orbis that your average consumer won't be able to tell the difference.

There are already enough people bemoaning the "by the numbers" performance deficit. There's no need to add to that, even if we (or I) know that, by the numbers, it is at a disadvantage.

In other words, speculating about what could be done to help performance doesn't mean we've ignored the fact that baseline performance is at a serious deficit.

Regards,
SB
 
Just one element I must get correct... GDDR5: very high bandwidth, at the expense of cost and latency... am I hot or cold?


Correct, for traditional 3D rendering high bandwidth is much more important than low latency. Note - that isn't to say that low latency isn't good. But GPUs, with their highly parallel nature, are good at hiding high-latency accesses as long as there is "something" for them to do. Hence graphics-oriented memory is designed with high bandwidth as the priority.
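
To put some rough numbers on how much parallelism that takes (the bandwidth, latency and request size below are illustrative assumptions, not either console's real figures), Little's law gives the amount of data that has to be in flight to cover a given latency at a given bandwidth:

Code:
# Rough latency-hiding estimate (illustrative numbers, not actual console specs).
# Little's law: data in flight = bandwidth * latency.
bandwidth_gb_s = 176        # assumed GDDR5-class bandwidth, GB/s
latency_ns = 300            # assumed memory latency as seen by the GPU, ns
bytes_per_request = 64      # assume one 64-byte cache line per request

bytes_in_flight = bandwidth_gb_s * latency_ns            # GB/s * ns = bytes
requests_in_flight = bytes_in_flight / bytes_per_request
print(f"~{bytes_in_flight:,.0f} bytes (~{requests_in_flight:.0f} outstanding "
      f"requests) needed to keep the bus saturated")

A GPU with thousands of threads in flight covers that easily; a CPU core with a handful of outstanding misses can't, which is why the two are tuned so differently.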

Regards,
SB
 
It sounds as though Microsoft essentially traded GPU power for 8GB of RAM (and the ability to use it).

Both Sony and Microsoft want to go with an SoC, so I guess some trade-offs had to be made for 8GB, and MS feels they are worth it.

edit: OK, I just read ERP's posts; it seems there is more to Durango than I first thought. It will be interesting to see how the two architectures compare down the road.
 
Can someone clarify something about RAM that most people are confused about? If Sony, for example, has 8 GB of RAM at the rumored 176 GB/s bandwidth, that means they can only use about 3 GB of RAM per frame @ 60 fps... What happens to the rest of the RAM, other than the OS portion? Is it totally useless?
 
Can someone clarify something about RAM that most people are confused about? If Sony, for example, has 8 GB of RAM at the rumored 176 GB/s bandwidth, that means they can only use about 3 GB of RAM per frame @ 60 fps... What happens to the rest of the RAM, other than the OS portion? Is it totally useless?

You don't touch every piece of memory in a frame.
I really don't know where this idea even came from.
 
It isn't useless; it can be used for caching, for loading unique data without having to refill the RAM before the next frame, etc...
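
For reference, the ~3 GB in the question is just peak bandwidth divided by frame rate, and it's a cap on how much data can be moved per frame, not on how much RAM is worth having resident (rumoured numbers, back-of-the-envelope only):

Code:
# Per-frame bandwidth budget from the rumoured figures (back-of-the-envelope).
peak_bandwidth_gb_s = 176     # rumoured Orbis GDDR5 bandwidth
frame_rate = 60               # target fps

traffic_per_frame_gb = peak_bandwidth_gb_s / frame_rate
print(f"Max data moved per frame: ~{traffic_per_frame_gb:.2f} GB")
# ~2.93 GB is the most you can read/write in one frame; everything else in RAM
# just sits there ready for later frames, streaming, caching, etc.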
 
Correct, for traditional 3D rendering high bandwidth is much more important than low latency. Note - that isn't to say that low latency isn't good. But GPUs, with their highly parallel nature, are good at hiding high-latency accesses as long as there is "something" for them to do. Hence graphics-oriented memory is designed with high bandwidth as the priority.

Regards,
SB
I don't think, though, that the higher bandwidth of GDDR5 compared to DDR3 really comes at the expense of latency - IIRC latency should be similar to DDR3 (in absolute terms, not clocks). There may be some extra overhead, but it shouldn't be that much (that is, percentage-wise the latency hit should be much smaller than the bandwidth increase).
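
To illustrate the absolute-time-vs-clocks point with some made-up but plausible timings (real CAS figures vary by part, so treat these purely as placeholders):

Code:
# Latency in nanoseconds vs. latency in clock cycles (placeholder timings only).
def cas_latency_ns(cas_cycles, command_clock_mhz):
    # nanoseconds = cycles / (cycles per nanosecond)
    return cas_cycles / (command_clock_mhz / 1000.0)

# Hypothetical parts: a DDR3 DIMM and a GDDR5 chip with a faster command clock
# but more CAS cycles. More cycles does not automatically mean more nanoseconds.
print("DDR3-ish :", cas_latency_ns(cas_cycles=11, command_clock_mhz=800), "ns")
print("GDDR5-ish:", cas_latency_ns(cas_cycles=15, command_clock_mhz=1250), "ns")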
 
It isn't useless; it can be used for caching, for loading unique data without having to refill the RAM before the next frame, etc...
So if Sony has 4 GB of fast RAM and ~3 GB are used for a frame, supposedly 500 MB is used for the OS and the other 500 MB is used for caching unique data, I guess.

Now, will this suffice for next gen? Would there be a big impact if Sony had more RAM to load special effects or whatever? I suppose that depends on whether Sony wants 1080p 3D at 60 fps!?

Now, since Durango has less bandwidth, does that mean it can potentially load more unique data? I'm guessing these are the pros and cons of each system's memory setup.

Thanks for the answers.
 
I don't think, though, that the higher bandwidth of GDDR5 compared to DDR3 really comes at the expense of latency - IIRC latency should be similar to DDR3 (in absolute terms, not clocks). There may be some extra overhead, but it shouldn't be that much (that is, percentage-wise the latency hit should be much smaller than the bandwidth increase).

Yeah, I've been wondering about that myself; it's hard to find the information on the net. Granted, I haven't spent THAT much time on it. I just know that GPUs with GDDR5 are designed to hide latency in the hundreds of cycles, while CPUs are generally dealing with tens of cycles. And that's even considering that GPUs have a significantly lower clock speed, such that each cycle takes longer to execute.

If GDDR5's latency were low enough, I'd imagine that Intel would have made an attempt to use it as system memory. Intel certainly isn't afraid to push more expensive memory technologies if they feel there's an overall benefit to CPU processing tasks. But I've heard nothing about them trying to do that, especially in the server market, where bandwidth could actually be pushed significantly depending on the workload.

Likewise, it would also seem to be a perfect fit for APUs if latency truly weren't much worse than DDR3, as GPU performance there is pretty important. Yet in that case we still stick with DDR3, likely due to cost and latency.

Regards,
SB
 
I've read on GAF that the 192 GB/s of the Orbis memory might be downgraded for the production version.

This is just getting out of control. These incestuous rumors, heard on one forum or another, that may or may not be credible, from pseudo-insiders who may or may not be insiders and who have no real source beyond hearsay, get repeated again and again all over the place and eventually get taken as gospel because of multiple "confirming" reports that are all really the same report. Just look at how the blitter joke took off.
 
Yeah, I've been wondering about that myself; it's hard to find the information on the net. Granted, I haven't spent THAT much time on it. I just know that GPUs with GDDR5 are designed to hide latency in the hundreds of cycles, while CPUs are generally dealing with tens of cycles. And that's even considering that GPUs have a significantly lower clock speed, such that each cycle takes longer to execute.

If GDDR5's latency were low enough, I'd imagine that Intel would have made an attempt to use it as system memory. Intel certainly isn't afraid to push more expensive memory technologies if they feel there's an overall benefit to CPU processing tasks. But I've heard nothing about them trying to do that, especially in the server market, where bandwidth could actually be pushed significantly depending on the workload.

Likewise, it would also seem to be a perfect fit for APUs if latency truly weren't much worse than DDR3, as GPU performance there is pretty important. Yet in that case we still stick with DDR3, likely due to cost and latency.

Regards,
SB
GDDR5 is based on DDR3, so I guess they had to give up something. If it were just faster, I expect we'd all be using it as system RAM. I know I've heard it has higher latency before, but I don't recall where or how big the difference is.
 
Yeah, I've been wondering about that myself; it's hard to find the information on the net. Granted, I haven't spent THAT much time on it. I just know that GPUs with GDDR5 are designed to hide latency in the hundreds of cycles, while CPUs are generally dealing with tens of cycles. And that's even considering that GPUs have a significantly lower clock speed, such that each cycle takes longer to execute.

If GDDR5's latency were low enough, I'd imagine that Intel would have made an attempt to use it as system memory. Intel certainly isn't afraid to push more expensive memory technologies if they feel there's an overall benefit to CPU processing tasks. But I've heard nothing about them trying to do that, especially in the server market, where bandwidth could actually be pushed significantly depending on the workload.

Likewise, it would also seem to be a perfect fit for APUs if latency truly weren't much worse than DDR3, as GPU performance there is pretty important. Yet in that case we still stick with DDR3, likely due to cost and latency.

Regards,
SB
High latency with graphics is a function of how the memory controller and caching work, and isn't due to the memory itself. GPUs optimize for high throughput and buffer up many requests to minimize page breaks, etc. CPUs optimize for low latency, as that's what's most important for their workloads. I don't know if GDDR5 has higher latency than DDR3, but even if it does, I'd guess the difference is very small.

BobbleHead had some good posts about this recently though I don't remember if latency numbers were given.
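
As a toy sketch of the "buffer up many requests to minimize page breaks" idea (row size and addresses invented for illustration; real memory controllers are far more sophisticated):

Code:
# Toy model of a throughput-oriented controller: batch pending requests by DRAM
# row so each row is opened once instead of ping-ponging between rows.
from collections import defaultdict

ROW_SIZE = 2 * 1024                                          # assume 2 KB DRAM rows
pending = [0x0040, 0x4010, 0x0080, 0x4050, 0x00C0, 0x8000]   # byte addresses

by_row = defaultdict(list)
for addr in pending:
    by_row[addr // ROW_SIZE].append(addr)

in_order = 1 + sum(1 for a, b in zip(pending, pending[1:])
                   if a // ROW_SIZE != b // ROW_SIZE)
print(f"row activations, in-order: {in_order}, batched by row: {len(by_row)}")

Fewer row activations means better sustained throughput, but an individual request can wait longer in the queue, which is exactly the throughput-versus-latency trade-off described above.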
 
So if Sony has 4 GB of fast RAM and ~3 GB are used for a frame, supposedly 500 MB is used for the OS and the other 500 MB is used for caching unique data, I guess.

Now, will this suffice for next gen? Would there be a big impact if Sony had more RAM to load special effects or whatever? I suppose that depends on whether Sony wants 1080p 3D at 60 fps!?

Now, since Durango has less bandwidth, does that mean it can potentially load more unique data? I'm guessing these are the pros and cons of each system's memory setup.

Thanks for the answers.

In the console realm, numbers have to be interpreted; a balanced architecture easily wins over a "pumped numbers" one. Whether that's the case here is too early to know, but one thing is sure: I can't find a reason to allocate 80% of main memory to a single frame when RAM is filled from a slow I/O device like a Blu-ray drive. You can optimize by putting multiple copies of assets on disc, and you can do some smart streaming (a hard disk helps a lot but doesn't solve the situation), but you can hardly feed the GPU 3GB in a single frame.
It seems reasonable that the amount of data per frame will be a lower, dynamic value depending on the context of the moment, or on the lowest common denominator if we have a multiplatform engine/game.
So I think the higher speed of RAM (Orbis) and the higher quantity of RAM (Durango) will, IMHO, both suffer from lowest-common-denominator syndrome.

Anyway, from VGLeaks, here's the info on the latest dev kit (not the Orbis console):


SoC Based Devkit

Available January 2013
CPU: 8-core Jaguar
GPU: Liverpool GPU
RAM: unified 8 GB for devkit (4 GB for the retail console)
Subsystem: HDD, Network Controller, BD Drive, Bluetooth Controller, WLAN and HDMI (up to 1980×1080@3D)
Analog Outputs: Audio, Composite Video
Connection to Host: USB 3.0 (targeting over 200 MB/s),
ORBIS Dualshock
Dual Camera

http://www.vgleaks.com/orbis-devkits-roadmaptypes/

Don't know if we can take this as confirmation, or if the Kotaku leak is just spreading across the web.
 
How much "real-life" implications would it have on the PS4 if the bandwidth gets reduced to 176GB/s (rumoured at NeoGAF) and wouldn't that make PS4 and Durango a lot closer?

I saw some postings there, and so far I couldn't find a valid reason why Sony would deviate from the 192GB/s target. GDDR5 is a well-known quantity, and I doubt that the move from 6Gbps to 5.5Gbps modules would really have much impact on the (to me unknown) TDP.
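
For what it's worth, both figures drop straight out of the rumoured 256-bit bus (assuming that part of the rumour holds), so the "downgrade" would just be the module speed grade:

Code:
# Where 192 vs. 176 GB/s comes from, assuming the rumoured 256-bit GDDR5 bus.
bus_width_bytes = 256 // 8                # 32 bytes transferred per data beat

for data_rate_gbps in (6.0, 5.5):         # per-pin data rate of the modules
    print(f"{data_rate_gbps} Gbps modules -> "
          f"{bus_width_bytes * data_rate_gbps:.0f} GB/s")
# 6.0 Gbps -> 192 GB/s, 5.5 Gbps -> 176 GB/s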

I can't really tell which console is better because I am biased :)
 
Well, will 5.5 Gbps, 4 Gb density GDDR5 modules even exist in production? Hynix has only listed 5, 6 and 7 Gbps modules on its roadmap. Can they take a 7 Gbps module and downclock it to decrease heat?
 
32MB of eSRAM means a 1080p framebuffer with 32-bit color and 2x MSAA; what do you mean exactly when you say "small size"?

That only applies to games that use forward rendering, but many modern games use deferred rendering.
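
A rough size check makes the forward-versus-deferred point concrete (the render-target layouts below are just typical examples, not either console's actual configuration):

Code:
# Back-of-the-envelope render-target sizes at 1080p (example layouts only).
W, H = 1920, 1080
MB = 1024 * 1024

def target_mb(bytes_per_pixel, samples=1):
    return W * H * bytes_per_pixel * samples / MB

# Forward: one 32-bit colour target + 32-bit depth, both at 2x MSAA.
forward = target_mb(4, samples=2) + target_mb(4, samples=2)
# Deferred: e.g. four 32-bit G-buffer targets + depth, no MSAA.
deferred = 4 * target_mb(4) + target_mb(4)

print(f"forward @ 2x MSAA : ~{forward:.1f} MB")    # ~31.6 MB, just fits in 32 MB
print(f"deferred G-buffer : ~{deferred:.1f} MB")   # ~39.6 MB, already over budget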

combined parallel eSRAM-DDR BW is 170 GB/s

You can't just add the bandwidth of the two pools and say "the new Xbox has 170GB/s of bandwidth"; that's not how these things work.

Why can't we combine BW when the DME units are there for this purpose?

You don't know what the rumored DME Units are for.
 
Interesting bit, but it's still based on the rumours and on his belief that MS won't allow as much direct hardware access as Sony, which may not be true.
 