Fact: Nintendo to release HD console + controllers with built-in screen late 2012

The TEVs are programmable. They're not as flexible as DX9+ shaders but I've seen many developer statements claiming they're comparable to DX8.1 pixel shaders. There's no way that a game like The Conduit could be made otherwise.
The DX7 texture combination system had an upper limit of 4 layers that you could combine with various types of operations (add/mul/dot product/etc., constant multipliers/adds). DX8 pixel shaders have eight instruction slots (but only 4 texture slots). The DX8 system is more flexible (two operations for each texture sample), but it's still very limited.

DX7 already introduced support for cube maps, for dependent texture reads (EMBM) and for the DOT3 texture combiner. With those features and the 4-layer texture combination system, you can do many of the same things as with the DX8 shaders. Ten years ago my DX7 engine supported both diffuse and specular normal mapping (a multipass mix of EMBM and DOT3 texture layer tricks). Good-looking per-pixel normal mapping is doable in the pure DX7 fixed-function pipeline. With the proper tricks you can also do blur kernels and have nice-looking bloom effects; some PS2 games even had these.

Calling DX7-style fixed-function texture combiners programmable shaders is more a marketing trick than anything else. DX7 is capable of many things, but calling it programmable is a stretch. The Conduit could easily have been made with DX7 hardware.
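As a concrete illustration, here is a minimal Python sketch (not production code) of what the DX7 fixed-function DOTPRODUCT3 color op computes per pixel: both arguments are remapped from [0,1] to [-1,1], dotted, and the scalar result is replicated to RGB. Feeding it a tangent-space normal-map texel and a light vector encoded as a color is the basis of the fixed-function normal mapping described above.

```python
def dot3_combiner(arg1, arg2):
    """Emulate the DX7 fixed-function DOTPRODUCT3 texture op.

    arg1, arg2: RGB triples in [0, 1], e.g. a normal-map texel and a
    light vector encoded as a color (0.5 encodes a zero component).
    Returns the clamped dot product replicated to all three channels.
    """
    s = sum((2.0 * a - 1.0) * (2.0 * b - 1.0) for a, b in zip(arg1, arg2))
    s = max(0.0, min(1.0, s))  # combiner output is clamped to [0, 1]
    return (s, s, s)

# A normal pointing straight at the light gives full intensity:
up = (0.5, 0.5, 1.0)           # encodes the vector (0, 0, 1)
print(dot3_combiner(up, up))   # -> (1.0, 1.0, 1.0)
```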

I personally wouldn't call DX8 fully programmable either. DX8 shaders had a really limited instruction set, only 8 instruction slots and severe limits on how you can use the texture data you have sampled. Both the internal calculations and the data input/output are in fixed-point format (10 bits if I remember correctly, range from -4 to +4), limiting the usability even more. DX9 shaders can be up to 65536 instructions long, can fully utilize texture sampling results in all calculations, and can input/output and calculate results in floating-point formats. It's a huge difference.
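To make the fixed-point limitation concrete, here is a small Python sketch of what a clamped low-precision shader register does to a value, using the figures as recalled in the post above (10 bits, range -4 to +4; the exact PS 1.x register formats varied by hardware, so treat these numbers as illustrative):

```python
def fixed_point(x, bits=10, lo=-4.0, hi=4.0):
    """Clamp x to [lo, hi] and snap it to one of 2**bits levels,
    mimicking a low-precision pixel-shader register."""
    step = (hi - lo) / (2 ** bits - 1)
    x = max(lo, min(hi, x))
    return round((x - lo) / step) * step + lo

print(fixed_point(10.0))   # overflow is silently clamped to the top of the range
print(fixed_point(0.001))  # small values snap to the nearest ~0.0078 step
```

A float register has neither problem: large intermediates survive, and small values keep their precision, which is why the DX9 jump matters so much.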
 
What about it do you think is technologically infeasible in DX7?

I'm not a graphics expert (hence my question) but I'm pretty sure I have never seen a DX7 game carrying those visuals.



Anyway, all I wanted to say is that Flipper & Hollywood were considered to be a lot closer to a DX8 GPU (most notably NV2x) than a DX7 one, and that TEVs can perform the same operations as pixel shaders (as ERP stated, both are "color combiners"):
http://forum.beyond3d.com/showpost.php?p=968368&postcount=17
http://forum.beyond3d.com/showpost.php?p=968387&postcount=36
 
I'm not a graphics expert (hence my question) but I'm pretty sure I have never seen a DX7 game carrying those visuals.

Well, it's certainly loaded with a number of things: lots of characters, lots of foliage, a semi-large environment, and bloom, but those aren't necessarily a problem for a DX7 implementation. You'll notice significant LOD popping in the video as well. That it also runs at 480p would be a significant advantage over PC games and hardware of that era, but performance isn't really the question I was asking.

Unreal Engine 2 would be an appropriate example, I would think. Unreal Tournament 2003/2004 may have had DX8/9 renderers, but Epic never really pushed pixel/vertex shaders. The DX9 renderer was actually mostly experimental, related to updating the engine and paving the path towards UE3.
 
Honestly I can't believe the number of people proclaiming Nintendo had designed the most balanced consoles. Seriously?
I'm not seeing a lot of people saying that. This branch of the discussion came from Rangers saying Nintendo didn't have the design experience of IBM and ATI/AMD, and the subsequent point to systems made by Nintendo as opposed to components made by the other companies he expected to be better at designing whole consoles. I'd say last gen had serious issues with all three players, which really proves how hard it is to design a potent, cost-effective system, rather than how rubbish everyone is at that game!
 

Agreed. You can point to the N64, Saturn, Xbox, Xbox 360 and PS3 and see design flaws that, with 20/20 hindsight, would have been removed or redesigned by the manufacturers.

This discussion makes me wonder why a company like AMD (or Intel, with its limited IGPs), which designs chipsets, CPUs and GPUs, has never seriously tried to make a console.
 

Probably the fact that they're in the PC world has a lot to do with it, as well as the fact that making a console which you can sell for a profit, or at least without losing tons of money (see the PS3 and the XB360 in the beginning, for example), gaining marketshare for it, etc. without being a software company, would be hard enough.
It's a completely different thing to just get paid for chips than to sell the whole thing yourself, get games for it, build an audience, etc.
 
Sorry if this was posted before.

Is it possible that the Wii U is using a customised PowerPC A2 with 4 cores (if I remember correctly there was speculation/reports of eDRAM, etc.)?

Each core (an evolution of the PPU / Xenon cores?) has four threads with 8 MB of cache (eDRAM-like?), and the 16-core version is 428mm^2 at 45nm, with 1.43 billion transistors, at 2.3 GHz and 65 watts.


http://en.wikipedia.org/wiki/PowerPC_A2

http://www.eetimes.com/electronics-news/4087537/ISSCC-IBM-back-in-network-processor-game

http://www.theregister.co.uk/2010/02/09/ibm_wire_speed_processor/
 

Sure, but looking at the die shot, you'd still need significant re-engineering to cut things down. It'll be fairly large even with just 4 cores there and 2MB of L2 eDRAM. Of course, with the removal of the 12 cores, you'd still need to pad the leftover space so you get a rectangular die, so the amount of L2 you could potentially fit is significantly higher (perhaps 16MB with a naive rearrangement). That still doesn't make the chip particularly small though as more than half of the remainder of the chip is "uncore".

Miscellaneous bits: hardware-accelerated crypto, compression, RegEx, XML, a Host Ethernet Adapter/Packet Processor, a second memory controller, 4x10G Ethernet, 2x PCIe Gen2.

Who knows what might be kept or discarded. :p

Furthermore, the cores are designed for up to 2.3GHz, so they'd have to do quite a bit to get that even higher (if that is desired).
 
Yeah, so why did you bother pointing out the Gamecube example, when literally everybody in the business does it? Sure, Sony can design graphics chips, but their last high-end chip (the PS3's) was an off-the-shelf one, probably because it out-featured and outperformed whatever they could come up with at the time. Nintendo is the same way.
Cell processor off the shelf, by golly gosh. Put simply, what is Sony's per-annum R&D budget, $20 billion? Nintendo's, $100m? Nintendo do the ducking and diving. For instance, SCE wanted a motion controller released on PS2/PS3, but Sony's a very big place.
 
This discussion makes me wonder why a company like AMD (or Intel, with its limited IGPs), which designs chipsets, CPUs and GPUs, has never seriously tried to make a console.

Maybe Nvidia will, as they've been pushed out of the x86 chipset market, and slowly but surely integrated graphics are eating right into the mainstream. When all that's left for discrete GPUs is the high end, they're going to need something else to keep the money coming in. They have Tegra, but a more powerful variant would do just fine in a console, I'd think. Good relations with the game industry too; it would be pretty cool if they did.
 
The die size for Hollywood is 72mm^2 for the GPU+eDRAM, and 94.5mm^2 with MEM1 included; perhaps this isn't important now, but let me explain why I am bringing it up.

I doubt that Nintendo has helped develop any new processor with IBM; they always take existing processors with some twist in them, and I am sure that Xenon/Waternoose is the base for the chosen processor, but with a few changes, like the L2 SRAM being replaced by eDRAM. The original was 176mm^2 fabbed at 90nm; at 45nm the area should be reduced by at least 50% (88mm^2) without changes, and taking the L2 eDRAM into consideration it could easily be reduced further. The original processor had 168 million transistors with 1MB of L2; that L2 is about 48 million transistors, and implemented in eDRAM it would be about 8 million, i.e. 40 million transistors less and 24% less area. That gives about 67mm^2 for a final design with 1MB of L2, though 2MB could be an option too.
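The transistor arithmetic above can be sanity-checked with the usual rule-of-thumb cell sizes: 6 transistors per SRAM bit versus 1 transistor (plus a capacitor) per eDRAM bit. A quick Python check, noting that the post rounds slightly:

```python
bits = 1 * 1024 * 1024 * 8       # 1 MB of L2 cache, in bits
sram_t = 6 * bits                # 6T SRAM cell: ~50M transistors ("48 million" above)
edram_t = 1 * bits               # 1T1C eDRAM cell: ~8M transistors
saved = sram_t - edram_t         # ~42M transistors saved ("40 million" above)
print(sram_t, edram_t, saved)
print(round(saved / 168e6 * 100))  # ~25% of the 168M-transistor Xenon ("24%" above)
```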

The other part is the GPU. I am not on the RV7x0 bandwagon; I believe it is preferable for Nintendo to take a current GPU design, even if, when the console is released, the same situation occurs as with the RSX. I believe that AMD offered Nintendo the GPU inside Llano (400 stream processors, DX11, 600MHz); it seems to have the perfect TDP for the Wii U box and it is only 80mm^2. Nintendo could make a custom version with an integrated memory controller for the entire system, changing the RAM from DDR3 and GDDR4 to GDDR5.

I know that I could be wrong in some places; if I have written something wrong, let me know.
 

I really doubt that it will be only 1 or 2 MB of eDRAM; that doesn't fit with the "huge amounts" in the press release. I think the final chip will be somewhere between 80-100mm².


I would also say that something in the range of the Llano GPU portion (with a much better memory interface, of course) is highly likely (maybe a few more stream processors at a lower clock, but nothing to write home about).
 
I think it's highly unlikely to see a single-chip CPU+GPU in the first iteration of the console, considering both are coming from different companies.

Developers have confirmed that the specs aren't final yet; there's a new prototype with different specs coming this month, and until E3, developers were working with underclocked versions.

Furthermore, IGN's sources (supposedly developers) claimed the GPU "will feature a tweaked design but a similar speed to the HD 4850".
That's 800 VLIW5 shaders, 40 TMUs, 16 ROPs @ 625MHz and 63.55GB/s memory bandwidth.

I'm more inclined to believe the GPU has the same number of units as an RV770, but the earliest SDKs had the GPU underclocked to ~350MHz, with the final version having something closer to 500-600MHz.
 
Well, you could get all defensive about everyone disagreeing with your miss-after-miss rampage of decidedly wrong statements (Gamecube being a "Dreamcast + PS2", Flipper being somehow equivalent to the GS because it has eDRAM, the SH-4 being MIPS, Nintendo not having anyone to design a GPU, you being able to "design" a console... the list goes on and on) or... you could just accept that you were wrong and come out in a more responsible manner. It's your call.

You are free to prove me wrong...;)

If you look closer, you'll see that none of the people refuting your statements actually implied that the Wii U will be very powerful or very weak.

And where did I imply that the people doing the refuting were the same wishful thinking fanboys? Just because I put two different sentences into the same post doesn't imply they were directed at the same people.

BTW, regarding the "if I had enough money" statement... well, no sh..t, Sherlock.
With enough money you can buy a house. It doesn't mean you're competent enough in all civil construction areas to build a house with your own hands.

Exactly! Now apply that same logic to Nintendo and maybe you'll get it.;)

How about I make it simple for you...Sherlock.

Nintendo doesn't design anything inside the console, but guess what? They have a lot of money... so they outsource the engineering work to others: IBM, ArtX, MoSys, NEC, etc.

Thanks for proving my point...

Following the exact same line of thought, any person/company can get pretty much anything if they pay another person/company to do the job for them.

Exactly, even me....

Whether you can make a profitable business out of it is a whole other issue, and Nintendo does not pay other companies to do the motherboard design + component/spec choices for their consoles.

WTF does designing a motherboard have to do with profits and designing silicon?

As for the rest of your posts, I see that you've been thoroughly corrected regarding your DX7 vs DX8 Flipper claims...
 