NV got the DS2 contract, according to BSN.

And they probably would have been able to sell it for less than the Wii goes for. Even the Xbox 360 was sold for less than the Wii for much of their competing lifespans thus far.

Actually, how is that a good thing? The fact that Nintendo manages to sell a machine which won't cost them much to build at such a relatively high price is something which, from a company POV, you can only be proud of. Wouldn't MS and Sony have loved to make money on each console they sold from day one, instead of spending billions on subsidizing hardware hoping to make up for it by selling software?
 
RAM was one of them (the Wii has much more RAM than the GCN), CPU clock speed too (Gekko was used quite a bit to help the GPU; some games like Luigi's Mansion do some of the character lighting on the CPU).
Ok, I totally neglected the RAM, as I was focused on the pipeline design, but you have a point there - the RAM architecture was largely re-designed to provide the GPU with access to a new pool of RAM (the former A-RAM, currently GDDR3). As for the clocks - everything was scaled up proportionally, so that does not really shift balances in the pipeline. Actually, from that same 'everything-was-scaled-up' POV, the new bank of RAM is a step backward from the original design, as the new RAM has disproportionately high access latencies, and thus is mainly suitable for GPU access (i.e. the CPU did not get to see a bump in its usable memory pool).

Even if they had some bigger bottleneck than that, they did not really need to solve it, not for the console they wanted to deliver :).
IMO, for an SD console the wii is a pretty well rounded machine, at least in what it does (apparently the things it does not do are outside the scope of this argument).
 
As I said, for whatever reason ... it happens. See most any review of Metroid Prime Hunters.

Did you read the rest of my post? I don't doubt that stuff stays still on the screen while something else prevents the geometry from changing but it's not the same phenomenon.
 
(i.e. the CPU did not get to see a bump in its usable memory pool).

It did: the quite big GDDR3 pool, if you can shift more and more GPU-bound data into it, leaves more breathing room for the latency-sensitive CPU in its low-latency RAM pool (1T-SRAM). Still, out of the three last-generation consoles, Gekko was the CPU with the best cache hierarchy... about 2x the L2 cache compared to XCPU (256 KB vs 128 KB), so even if it deals with a far higher latency, it should still work quite well with it compared to another CPU with far less L2 cache.

I'll agree with you that it is a quite nicely balanced design overall, but the biggest flaw was not fixed IMHO... Hollywood should have received some more e-DRAM for its embedded frame-buffer... 16-bit rendering and/or many games shipping with AA turned off is not nice for SD TVs either.
 
It did: the quite big GDDR3 pool, if you can shift more and more GPU-bound data into it, leaves more breathing room for the latency-sensitive CPU in its low-latency RAM pool (1T-SRAM). Still, out of the three last-generation consoles, Gekko was the CPU with the best cache hierarchy... about 2x the L2 cache compared to XCPU (256 KB vs 128 KB), so even if it deals with a far higher latency, it should still work quite well with it compared to another CPU with far less L2 cache.
True, the new memory setup should alleviate the situation with CPU-usable RAM availability, statistically. But it does not help with extreme cases: if CPU code utilized close to (the client-available portion of) the 24 MB on the Cube, it would not get any headroom on the Wii.

I'll agree with you that it is a quite nicely balanced design overall, but the biggest flaw was not fixed IMHO... Hollywood should have received some more e-DRAM for its embedded frame-buffer... 16-bit rendering and/or many games shipping with AA turned off is not nice for SD TVs either.
I agree. Not promoting AA to 'use freely' levels on the Wii was a mistake.
 
Unless translucency is not subject to state batching, the latter is just another sort from the POV of the drawing end of the pipeline. If you're going to sort by state, you can just as well sort by state + spatial order - that would not be any slower.

Right, but how is state sorting absolved from that? Under such conditions the pipeline should not attempt *any* sorting, or if draw order was not an issue but state is, cut the spatial key from the sort; basically, switch the sorting keyset from one including spatial info to one not including such (as we assume you still want state batching). Neither difficulty nor speed complexity would change.
Doing a spatial sort that makes any sense is going to require multiple additional passes through your database, and it is going to be slower. This may be fine on the desktop, but in the embedded space the CPUs still lack the grunt to waste on this sort of thing.
Most pipelines I've worked with do not make deliberate attempts to impose any state multiplicity discipline/methodology whatsoever. Even for such heavy hitting state elements like texture residency such pipelines would leave everything to the final stage - default state batching at draw.
Perhaps I'm misunderstanding your point here, but this reads like it reinforces my argument...
And I realize that you're speaking from the position of deferred rendering, where your hw offloads (part of) spatial sorting work from the client, but from where I stand I may not have the luxury to delegate that to the underlying hw. IMRs are still common, and opaque overdraw is not something to be taken lightly, right?
I think it depends on your target market; in the embedded space IMRs (of any worth) are pretty much non-existent and the dominant architecture is a fully deferred tiler ;)
The hardware I predominantly work with does not have spare bandwidth to burn. While input geometry multiplicity can be addressed through transform feedback buffers (subject to availability), the algorithm's texturing bandwidth appetite is just insurmountable - there's only so much you can drop your g-buffer's vector fidelity before it starts to show in the shading. That's why I'm sticking with ubershaders for the time being. That also saves me the need to explain to the artists what eats their precious bandwidth for 1:1 HD-res textures : )

Hey, we're agreeing here! LOL ;) I think feedback buffers only address the issue of shader clocks burnt on skinning etc; you're still left with a pretty big BW cost on resubmission, and the Z buffer read BW cost is huge. Funnily enough we do quite like content that has been "optimised" in this manner, although obviously we'd prefer it if engines skipped the multiple geometry passes where possible when building their g-buffers! G-buffer read bandwidth itself can also be all but eliminated on a tiler, but that's a different story :)

Cheers,
John.
 
I'd like to know more about the Mali architecture.

Early-Z tiler. I wouldn't hope for anything "programmable" in 3DS, which excludes a LOT of currently available GPU IP.

Although I'm hoping for something based on Flipper/Hollywood to be in 3DS.
Well I don't think it's AMD IP either, but non programmable and utterly boring is getting closer to the final result most likely. IMO far too many are expecting from the 3DS a "next generation" handheld; how about something more like a refresh in a fancier package?
 
Early-Z tiler. I wouldn't hope for anything "programmable" in 3DS, which excludes a LOT of currently available GPU IP.

Well I don't think it's AMD IP either, but non programmable and utterly boring is getting closer to the final result most likely. IMO far too many are expecting from the 3DS a "next generation" handheld; how about something more like a refresh in a fancier package?

A non programmable GPU in 2010. DEATH TO NINTENDO. :LOL:
 
IMO far too many are expecting from the 3DS a "next generation" handheld; how about something more like a refresh in a fancier package?

That's what I expect, but at the start of the thread you insisted that Nintendo had Tegra IP forthcoming.
 
That's what I expect, but at the start of the thread you insisted that Nintendo had Tegra IP forthcoming.

That was the rumor for the past few years, and I'm as certain as I can be that there were at least negotiations between Nintendo and NVIDIA. When exactly got the picture clearer and it started to show that the 3DS isn't a true next generation handheld?
 
That was the rumor for the past few years, and I'm as certain as I can be that there were at least negotiations between Nintendo and NVIDIA.

Guess that shows that rumors shouldn't be insisted upon, although naturally it hasn't been disproven yet. Negotiations sound pretty interesting; maybe there'll be something further down the line. Do you know for sure that these were about Tegra and not something possibly more general, like nVidia graphics in Nintendo's next console? Although it would be pretty funny if the Wii's successor used a Tegra, I'm not going to make a comment like that.

When exactly got the picture clearer and it started to show that the 3DS isn't a true next generation handheld?

Sorry, could you please rephrase this? I'm getting a massive parse error on this sentence :(
 
Else the 3DS is more a "refresh" than a next-generation console.

The 3DS is next gen from Nintendo's perspective at least, because it'll be BC with the NDS. 3D alone would need them to beef up their hardware, but they know they'll be competing with the likes of Apple and MS, and also the PSP2, which is probably coming soon. And Sony has taken a huge chunk of Nintendo's domination in handhelds, where others typically failed to make any dent.

I also hope Nintendo gets the message from NDS LL sales: there is a market for a bigger handheld.
 
The 3DS is next gen from Nintendo's perspective at least, because it'll be BC with the NDS.

No doubt it is for Nintendo and it will be marketed as such. Despite the long-winded pro and contra arguments for handheld consoles, I'd say that most of us could agree that Nintendo's success with any sort of console doesn't rely on each device's hw capabilities.


3D alone would need them to beef up their hardware, but they know they'll be competing with the likes of Apple and MS, and also the PSP2, which is probably coming soon. And Sony has taken a huge chunk of Nintendo's domination in handhelds, where others typically failed to make any dent.

I also hope Nintendo gets the message from NDS LL sales: there is a market for a bigger handheld.

Depends how and where you beef up which hw. At this point I'm fairly sure that it doesn't contain anything Tegra, but since Tegra was mentioned: the Tegra2 GPU sounds more like a die-shrunk Tegra1 GPU at twice the frequency than anything else (and yes, I wouldn't doubt some minor "cosmetic" changes, but nothing that would break that description). Yes, that's just one irrelevant example, but frequency is one way to "beef up" hw, and no, I don't know what the 3DS really will contain.

I don't doubt that the 3DS will do more than fine in terms of sales; I just don't think it'll rely on hw as advanced as the PSP2's, and no, I don't personally see a problem in that either.
 
Several developers that have experienced 3DS in its current form have reported, off the record, that it has processing capabilities that far exceed the Nintendo Wii and bring the device with abilities that are close to HD consoles such as PlayStation 3 and Xbox 360. According to several developer sources, the 3DS device is not using the NVIDIA Tegra mobile chipset, a rumor that's been floating around since 2009.

This came from IGN.

http://ds.ign.com/articles/109/1094930p1.html
 

Also at digitalfoundry:
digitalfoundry said:
According to our two independent, unconnected sources, the Nintendo 3DS - almost certain to be revealed at E3 - features a design totally divorced from the NVIDIA Tegra SoC (system on chip) initially thought to have been powering the DS successor. It's now thought that Nintendo has instead chosen a Japanese partner for the 3D acceleration hardware within the 3DS.
My guess is Broadcom. Considering they have been suppliers for Nintendo's wifi adapters, seems like a no brainer.
 