That's hard to quantify. What measurable difference would that fourth core make, such that Joe Public could see a 3-core NGP game side by side with a 4-core game and point out what's better? If it's the difference between 6 cars and 8 cars in GT, perhaps they'd notice, but if it's the difference between 12 and 16, it'd only be the documented numbers that'd identify the loss.

I presume that 4th core is reserved for OS and anti-piracy measures... Isn't that a bit much? Taking away 25% of its horsepower?
Not if the whole platform gets excellent background services (a big if at the moment!).
Although not applicable to many rendering scenarios, TBDR can also get by with a single-buffered framebuffer, which is a further memory saving.
That was meant to be a feature of CLX in Dreamcast but I don't think it was ever used. (FWIW you need "frame" buffering for a couple of rows of tiles.) Besides, I don't think the memory savings justify the extra complexity on the game of having to guarantee each row was rendered in sufficient time.

I guess technically, if your display hardware is set up right, you don't even need a framebuffer, just enough of a scanline buffer to cover a single row of tiles...
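For a sense of the saving being traded off here, a quick back-of-envelope sketch, assuming Dreamcast's 640x480 output, 16-bit colour, and 32x32 tiles (the tile size and colour depth are assumptions for illustration, not confirmed CLX parameters):

```python
# Back-of-envelope: buffering "a couple of rows of tiles" versus a full frame.
# 640x480 is Dreamcast's output resolution; 16-bit colour and 32-pixel-high
# tiles are assumptions for illustration.

WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 2      # assumed 16-bit colour
TILE_H = 32              # assumed tile height
TILE_ROWS = 2            # "a couple of rows of tiles"

full_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
row_buffer = WIDTH * TILE_H * TILE_ROWS * BYTES_PER_PIXEL

print(f"Full frame buffer: {full_frame // 1024} KiB")  # 600 KiB
print(f"Two rows of tiles: {row_buffer // 1024} KiB")  # 80 KiB
```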
Oh, BTW, the Tomb Raider port we did featured a single-buffered mode, mainly because my graphics card at the time only had enough memory for 1 frame of 1024x768, so I made sure I could play the game at full res.
Devs have enough of a hard time hitting double and even triple-buffer timings! For the cost of a few MBs of screen buffer, I definitely agree with you.

Simon F said:
That was meant to be a feature of CLX in Dreamcast but I don't think it was ever used. (FWIW you need "frame" buffering for a couple of rows of tiles.) Besides, I don't think the memory savings justify the extra complexity on the game of having to guarantee each row was rendered in sufficient time.
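To put a rough number on "a few MBs of screen buffer": a minimal sketch assuming NGP's 960x544 display and 32-bit colour (the colour depth is an assumption):

```python
# Rough screen-buffer cost on NGP at its 960x544 resolution.
# 32-bit colour is an assumption for illustration.

WIDTH, HEIGHT = 960, 544
BYTES_PER_PIXEL = 4  # assumed 32-bit colour

per_buffer = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 ** 2)
for n in (1, 2, 3):
    print(f"{n} buffer(s): {per_buffer * n:.1f} MiB")
# -> roughly 2.0 / 4.0 / 6.0 MiB for single / double / triple buffering
```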
Oh gosh, this was back in the PCX1 / PCX2 days, and you don't see the scene being drawn because it only wrote complete tiles, across the PCI bus, into the 2D card's framebuffer. (Hmmm, I think I might have had a 2MB Matrox card at the time.)

Heh, neat... was that with a Kyro? Or did it amount to actually watching the scene get drawn? That'd probably break my eyes, but I presume you had this to otherwise test the resolution.
No, the worst you got was a discontinuity at a tile boundary if the refresh was reading as a tile got updated. Unless you were standing on the spot spinning around (so that each frame looked very different) it wasn't very noticeable. A bit like turning off vsync when running double buffered.
The confirmation of VRAM is very interesting. On a SoC, there really isn't any reason to have dedicated VRAM: if the GPU and CPU are sharing the same die, there isn't a good reason to burn extra pins on two separate memory pools.
That suggests two possibilities, both interesting:
a) The VRAM is actually on-die (such as eDRAM), and thus will be much smaller than 128MB.
b) It isn't a SoC: the GPU is on a separate die, and thus has its own memory interface and DRAM device.
Time differential means nothing here. They are different architectures and NGP's innards bear no relation to PS2's. I don't think there's any precedent for pairing SGX with eDRAM, and TBDR keeps external bandwidth use low anyway, so eDRAM doesn't make that much sense. Furthermore, your hopes of 128 MB of eDRAM are very wishful thinking. XB360 managed 10 MB, which wasn't enough to fit a whole frame but, due to cost considerations, was what they were limited to. So 6 years after PS2, the increase was from 4 MB to 10 MB. Another 5 years, and you think they can get a 10x increase in eDRAM on chip?! As a heads-up, IBM have just got 32 MB of eDRAM into their POWER7 server processors using a new technology. At about 10 Mbit/mm^2 @ 45nm, 128 MB of eDRAM would take up about 100 mm^2. The NGP GPU is probably something like 32 mm^2 @ 65nm if I'm reading these specs right (8 mm^2 per core) - someone correct me if wrong. But basically, as you can see, lots of eDRAM would be huge, expensive, and isn't going to happen.

They are using eDRAM with a PS2-like memory setup. It's been 11 years since the PS2 came out with 4MB of eDRAM; that's enough time for the tech to be at a price and level where getting 128MB of eDRAM now for a handheld would be like getting 4MB of eDRAM 11 years ago for a console.
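For reference, the arithmetic behind those area figures, using the post's own rough estimates (~10 Mbit/mm^2 eDRAM density at 45nm, ~8 mm^2 per SGX core at 65nm; neither is a confirmed number):

```python
# Sanity check of the eDRAM area estimate above, using the post's own
# rough figures rather than confirmed specs.

EDRAM_MB = 128
DENSITY_MBIT_PER_MM2 = 10   # quoted eDRAM density at 45nm
edram_area_mm2 = EDRAM_MB * 8 / DENSITY_MBIT_PER_MM2
print(f"128 MB eDRAM: ~{edram_area_mm2:.0f} mm^2")      # ~102 mm^2

CORES = 4                   # quad-core SGX in NGP
MM2_PER_CORE = 8            # post's per-core estimate at 65nm
print(f"Quad-core SGX: ~{CORES * MM2_PER_CORE} mm^2")   # ~32 mm^2
```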
Not a chance! The eDRAM bandwidth of PS2 is the least of your worries. The GPU would be impossible to emulate in real time as it's such an unusual architecture. NGP has no more chance of software emulation of PS2 than PS3 had - none.

Plus, that would give us hope for easy PS2 emulation on the NGP.
It doesn't, but looking back at the past 10 years, I'd argue that the DS lineup actually benefited from imposing this type of extra complexity on developers early on (likewise for PS2 - though the guarantee there was "only" on frame-time).

Simon F said:
Besides, I don't think the memory savings justify the extra complexity on the game of having to guarantee each row was rendered in sufficient time.
I think Sony already knew what's needed for PS2 emulation before finishing up the PS3. That's why extra chips were used.

Thanks for the info, so maybe not eDRAM, but I'm guessing that the 128MB of VRAM is going to be some kind of fast RAM. & I still think that NGP will have a better chance at PS2 emulation than the PS3, because Sony know what's needed to make PS2 emulation work & maybe designed it to work better. But then again, they can just use the code that they are using for the HD re-releases to just sell the games on PSN.
Unless you're implying that they will seriously revamp the GPU in NGP when, for some reason, they didn't with PS3, the very same issues arise (both are "fat state" architectures and won't respond well to PS2-like workloads).
Improbable, in the same way we expected RSX to have special sauce for emulation, and it turned out to be a pretty stock part. What's required for emulation is basically the Reality Synthesizer in there. I don't see how any other GPU could emulate it unless it's an extremely customised part, which would add considerable cost and complexity.

Maybe I have too much faith in time & research.
"43MP4+" - maybe that's what the + is for?