Old 10-Jan-2012, 10:56   #26
rpg.314
Senior Member
 
Join Date: Jul 2008
Location: /
Posts: 4,274
Send a message via Skype™ to rpg.314
Default

For a TBDR, picking the optimal tile size is a matter of LOTS of simulation and careful tradeoffs. If your tile size is O(1000) pixels, then you don't need eDRAM. If your tile size is O(10^5) pixels, then you probably need to do multipass geometry and frustum cull at draw-call granularity, and then you don't need a TBDR.
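
A rough back-of-the-envelope sketch of how tile size drives on-chip storage; the bytes-per-sample and MSAA figures here are my own illustrative assumptions:
Code:
# Rough on-chip storage estimate for a TBDR tile buffer.
# Assumptions: colour (4 B) + depth/stencil (4 B) per sample, 4x MSAA.
BYTES_PER_SAMPLE = 4 + 4
MSAA = 4

def tile_storage_bytes(width, height):
    """On-chip bytes needed to hold one tile's samples."""
    return width * height * MSAA * BYTES_PER_SAMPLE

for w, h in [(32, 32), (320, 320)]:  # ~O(10^3) vs ~O(10^5) pixels
    kb = tile_storage_bytes(w, h) / 1024
    print(f"{w}x{h} tile ({w * h} px): {kb:.0f} KB on chip")

# 32x32 tile   ->   32 KB: comfortably SRAM territory.
# 320x320 tile -> 3200 KB: at that size you're shopping for eDRAM anyway.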
rpg.314 is offline   Reply With Quote
Old 10-Jan-2012, 11:41   #27
Brimstone
B3D Shockwave Rider
 
Join Date: Feb 2002
Posts: 1,835
Default

Quote:
Originally Posted by AlStrong View Post
Forgive me if I don't put any weight on a RAM designer's projections about the needs of game rendering, as opposed to an educated guess based on current rendering tech requirements, which a game developer would know far more about. My question is asking what specifically is consuming bandwidth and what those figures are, not just throwing numbers around to fit some imaginary curve in order to push an agenda for selling a memory technology.


OK... let's see what John Carmack wants.

Quote:
“One of the most important things I would say is a unified virtual 64-bit address space, across both the GPU and the CPU. Not a partitioned space, like the PS3. Also, a full 64-bit space with virtualization on the hardware units – that would be a large improvement. There aren’t any twitchy graphics features that I really want; we want lots of bandwidth, and lots of cores. There’s going to be a heterogeneous environment here, and it’s pretty obvious at this point that we will have some form of CPU cores and GPU cores. We were thinking that it might be like a pure play Intel Larrabee or something along that line, which would be interesting, but it seems clear at this point that we will have a combination of general purpose cores and GPU-oriented cores, which are getting flexible enough that you can do most of the things that you would do on a CPU.” – id Software co-founder John Carmack on next gen consoles.
__________________
When God plays an online shooter he plays Shadowrun. He buys resurrection first round and selects Dwarf.

www.shadowrunshow.com
Brimstone is offline   Reply With Quote
Old 10-Jan-2012, 12:16   #28
Shifty Geezer
uber-Troll!
 
Join Date: Dec 2004
Location: Under my bridge
Posts: 30,847
Default

Quote:
Originally Posted by rpg.314 View Post
For a TBDR, picking the optimal tile size is a matter of LOTS of simulation and careful tradeoffs. If your tile size is O(1000) pixels, then you don't need eDRAM. If your tile size is O(10^5) pixels, then you probably need to do multipass geometry and frustum cull at draw-call granularity, and then you don't need a TBDR.
I agree with that, and would expect a TBDR to use SRAM rather than eDRAM. But if we do see a console with eDRAM, I expect it to be a TBDR, maybe with enormous tiles (which isn't really TBDR, I know). eDRAM working with full buffers will just need to be too large, or it'll be gimped.
__________________
Shifty Geezer
...
Flashing Samsung mobile firmwares. Know anything about this? Then please advise me at -
http://forum.beyond3d.com/showthread.php?p=1862910
Shifty Geezer is offline   Reply With Quote
Old 10-Jan-2012, 12:27   #29
rpg.314
Senior Member
 
Join Date: Jul 2008
Location: /
Posts: 4,274
Send a message via Skype™ to rpg.314
Default

I think there is a need for a big eDRAM that is exposed very flexibly. It can provide the bandwidth and size we need. Interposers seem like they won't be ready in time.
rpg.314 is offline   Reply With Quote
Old 10-Jan-2012, 12:35   #30
french toast
Senior Member
 
Join Date: Jan 2012
Location: Leicestershire - England
Posts: 1,634
Default

Can the resolution be scaled up from 1080p to 2560x1600, like consoles do now from 720p to 1080p?

If they do standardise on 1080p, then it would solve a lot of issues. Marketing-wise, though, they are going to have to differentiate the new consoles from the current 'HD' ones... which, as far as the consumer is concerned, have been doing '1080p' since 2005. Why should they upgrade for the same thing?

Judging by the way everyone is talking, eDRAM seems extremely expensive. What about 128MB of eDRAM?
Seven years ago the 360 had 10MB, after all...
french toast is offline   Reply With Quote
Old 10-Jan-2012, 12:44   #31
AlNets
Posts may self-destruct
 
Join Date: Feb 2004
Location: In a Mirror Darkly
Posts: 15,207
Default

Well, yes. We've already gone through marketing with HD and also motion-control gaming. I wouldn't be surprised if the emphasis is on features, apps and the like instead, so focusing solely on 1080p would be a huge mistake. Again, you can always do prettier pixels and more advanced rendering features, and that works against pushing raw numbers of pixels. Remember, at the end of the day, we're going to be saddled with a silicon budget that has limits in physical reality, so all this wishing for 1080p+ mandates and expectations is just going to hinder what level of graphics we see. Devs could have targeted 1080p this gen if they wanted to, but then we'd have PS2 graphics or worse (just look at how the HD ports of PS2 games fare at 1080p on PS3). That's hardly marketable for a generational shift.

The mass market isn't going to give two hoots what the real rendering resolution is. And this is bloody well off-topic enough, so let's get back to what the topic is about: eDRAM's technical pros and cons.
__________________
"You keep using that word. I do not think it means what you think it means."
Never scale-up, never sub-render!
(╯°□°)╯︵ □ Flipquad
AlNets is offline   Reply With Quote
Old 10-Jan-2012, 13:25   #32
Shifty Geezer
uber-Troll!
 
Join Date: Dec 2004
Location: Under my bridge
Posts: 30,847
Default

Quote:
Originally Posted by french toast View Post
Can the resolution be scaled up from 1080p to 2560x1600, like consoles do now from 720p to 1080p?
You can upscale to any resolution, and any higher-resolution TV will have upscaling built in anyway. I can't see any console supporting higher than 1080p output out of the box, except maybe for 4K movie playback, as that's a simple software feature.

Quote:
If they do standardise on 1080p, then it would solve a lot of issues. Marketing-wise, though, they are going to have to differentiate the new consoles from the current 'HD' ones.
They'll produce much better pictures! We've only had one generation of hardware selling on a resolution; other consoles have just been a progression, without making a song and dance about what resolution they were rendering at. And notably, next gen will actually be rendered at clean 1080p, instead of murky, sometimes upscaled 720p (especially some buffers, like reflection buffers). There's plenty of room for visual improvement without needing to chase a niche resolution.

Quote:
Judging by the way everyone is talking, eDRAM seems extremely expensive. What about 128MB of eDRAM?
Seven years ago the 360 had 10MB, after all...
eDRAM doesn't appear to scale according to Moore's law, due to costs I think. The largest amount of eDRAM yet seen is 32 MBs, on Power7. PS2 had 4 MBs; six years later, XB360 had 10 MBs, not enough to fit a full framebuffer. eDRAM is an expensive option for providing massive bandwidth. If it isn't a large enough amount of RAM to be useful, that BW gain counts for little, and if you don't need all that BW, it's not much use either. If next gen could get by on, say, 150 GB/s, and that was possible on a conventional bus at an affordable cost, eDRAM has no place. If the cost of RAM is going to cap a conventional bus at, say, 50 GB/s, and you need more, then eDRAM offers a solution, but we face the issue of how much: if we need tens of MBs, it'll be too costly. And if we need both BW and RAM capacity, perhaps it's time to switch to TBDR.
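
To put numbers on "not enough to fit a full framebuffer", a quick sketch (assuming 32-bit colour plus 32-bit depth/stencil per sample; real layouts varied per title):
Code:
# Framebuffer size vs. the 360's 10 MB of eDRAM.
def framebuffer_mb(w, h, msaa, bytes_per_sample=4 + 4):
    """Colour + depth/stencil storage for one render target, in MB."""
    return w * h * msaa * bytes_per_sample / 2**20

print(framebuffer_mb(1280, 720, 1))  # ~7.0 MB: fits in 10 MB
print(framebuffer_mb(1280, 720, 2))  # ~14.1 MB: needs tiling
print(framebuffer_mb(1280, 720, 4))  # ~28.1 MB: needs tiling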

Compare PS2 to XB: PS2 needed the BW as it rendered 'shader effects' through multipass rendering, while XB could get away with far less BW by computing effects in shaders. The same 4 MBs of eDRAM on XB would have whacked the price up considerably, to little gain except the beauty of PS2's particles. The expense of XB was due principally to it being rushed, and to poor contracts where the cost savings of process shrinks weren't passed on to MS, IIRC.
__________________
Shifty Geezer
...
Flashing Samsung mobile firmwares. Know anything about this? Then please advise me at -
http://forum.beyond3d.com/showthread.php?p=1862910
Shifty Geezer is offline   Reply With Quote
Old 10-Jan-2012, 14:45   #33
french toast
Senior Member
 
Join Date: Jan 2012
Location: Leicestershire - England
Posts: 1,634
Default

OK, then, it seems like eDRAM has always been a balancing act between memory controller width and the eDRAM itself.
If it's as expensive as that (the 360 only got a 6MB jump over six years from the PS2), it seems like you don't get the cost-reduction benefits from node shrinkage and repackaging in new iterations of the console, as you do with the usual components.

This is the same argument as with controller width, but seeing as PC graphics cards continually use wide controllers and go nowhere near eDRAM, perhaps that gives us the answer.
It seems to me that eDRAM was beneficial six years ago because wide memory controllers were harder to come by and more expensive; the HD 2900 XT jumped up to a rather insane 512-bit, then we went back down and standardised at 256-384 bit.

If the only reason we're not considering a wide bus is future cost reduction, it seems eDRAM is many times more expensive and just as hard to shrink.

So I'm going with a minimum of a 256-bit bus and 4GB of unified GDDR5... not a split pool.
My joker card is gonna be 3-4GB of XDR2... with maybe a large 32MB stash of L3 on the CPU.
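
For reference, a rough sketch of what various bus widths buy in peak bandwidth; the data rates are illustrative 2012-era figures, not any vendor's spec:
Code:
def peak_bandwidth_gbs(width_bits, data_rate_gtps):
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(128, 5.0))  #  80 GB/s
print(peak_bandwidth_gbs(256, 5.0))  # 160 GB/s
print(peak_bandwidth_gbs(512, 4.0))  # 256 GB/s: wide, but costly to route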
french toast is offline   Reply With Quote
Old 10-Jan-2012, 15:03   #34
rpg.314
Senior Member
 
Join Date: Jul 2008
Location: /
Posts: 4,274
Send a message via Skype™ to rpg.314
Default

Quote:
Originally Posted by Shifty Geezer View Post
eDRAM doesn't appear to scale according to Moore's law, due to costs I think. The largest amount of eDRAM yet seen is 32 MBs, on Power7. PS2 had 4 MBs; six years later, XB360 had 10 MBs, not enough to fit a full framebuffer. eDRAM is an expensive option for providing massive bandwidth. If it isn't a large enough amount of RAM to be useful, that BW gain counts for little, and if you don't need all that BW, it's not much use either. If next gen could get by on, say, 150 GB/s, and that was possible on a conventional bus at an affordable cost, eDRAM has no place. If the cost of RAM is going to cap a conventional bus at, say, 50 GB/s, and you need more, then eDRAM offers a solution, but we face the issue of how much: if we need tens of MBs, it'll be too costly. And if we need both BW and RAM capacity, perhaps it's time to switch to TBDR.
eDRAM scales according to Moore's law. Power7 has eDRAM on a logic process/die. The Xbox has eDRAM on a DRAM-ish process. Not comparable.
rpg.314 is offline   Reply With Quote
Old 10-Jan-2012, 15:33   #35
Shifty Geezer
uber-Troll!
 
Join Date: Dec 2004
Location: Under my bridge
Posts: 30,847
Default

Quote:
Originally Posted by rpg.314 View Post
eDRAM scales according to Moore's law.
Yeah. I meant it in the same way French Toast seemed to be using it, to explain why seven years on from XB360 should give us 150 MBs of eDRAM. Obviously eDRAM, as a tech based on transistors, will still see exponential growth as long as chips see a doubling of transistors every two years, but its actual application doesn't follow suit in the same way other uses of transistors - RAM amount and execution units - do. Otherwise XB360 would have been on more like 32 MBs of eDRAM, and we'd be contemplating 256-512 MBs of eDRAM in PS4, following on from 4 MBs in PS2.
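
For the sake of argument, here's what a naive two-year doubling would have predicted (my extrapolation, not anyone's roadmap):
Code:
# Naive Moore's-law extrapolation of eDRAM capacity from the PS2's 4 MB.
def doubling(base_mb, years, period=2):
    return base_mb * 2 ** (years / period)

print(doubling(4, 5))   # 2000 -> 2005: ~23 MB predicted; the 360 got 10
print(doubling(4, 12))  # 2000 -> 2012: ~256 MB predicted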

I don't know the ins and outs of eDRAM tech and its progression, though. AFAICT it's mostly because it's far less dense than normal DRAM that it's not popular, as you can fit more other stuff into the same transistor count. Regarding my current view of eDRAM in a console: the main advantage of eDRAM, BW, can be negated by alternative rendering techniques that use those transistors more efficiently as logic, and just work smarter.
__________________
Shifty Geezer
...
Flashing Samsung mobile firmwares. Know anything about this? Then please advise me at -
http://forum.beyond3d.com/showthread.php?p=1862910
Shifty Geezer is offline   Reply With Quote
Old 11-Jan-2012, 23:55   #36
AlNets
Posts may self-destruct
 
Join Date: Feb 2004
Location: In a Mirror Darkly
Posts: 15,207
Default

Quote:
Originally Posted by steampoweredgod View Post
nextgen xdr..
You really ought to have more substance to your post than this. This means absolutely nothing in the context of the thread. For starters, you fail to consider the economic feasibility of said technology, or even the implications for the memory controller; everyone's GPUs are built with GDDR5 in mind these days, so they'd have to do a lot more work to even consider XDR2 as a replacement. XDR2 is not some free ride - the I/O can take up a considerable amount of physical space. Hell, XDR2 isn't even being sampled for a real-world product. Do you actually have anything to add to the discussion, or are we supposed to read your mind? The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.
__________________
"You keep using that word. I do not think it means what you think it means."
Never scale-up, never sub-render!
(╯°□°)╯︵ □ Flipquad
AlNets is offline   Reply With Quote
Old 12-Jan-2012, 00:58   #37
Dominik D
Member
 
Join Date: Mar 2007
Location: Wroclaw, Poland
Posts: 723
Default

Quote:
Originally Posted by french toast View Post
Judging by the way everyone is talking, eDRAM seems extremely expensive. What about 128MB of eDRAM? Seven years ago the 360 had 10MB, after all...
It's about cost, which does not scale linearly with size. In simple terms: every wafer has spots which are "broken". If your chip's die covers one of those spots, it's most likely broken as well. The larger the die, the more probable it is that your die will be broken. The fewer working chips per wafer (the lower the yield), the higher the cost of production, and 12x the size in MB is pretty much 12x the size of the die. If you want to use a smaller node (smaller transistors), you'll end up with a technology which is more difficult to control, and yield will be even lower. You can't go up a node either, because your die will be so large that the clock will be "lagging" at certain points and you'll have problems synchronising your chip; it will also produce more heat and consume more power. 128MB would be prohibitively expensive, and probably close to impossible to produce, at this point in time.

At least that's how I understand it.
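
A toy version of that yield argument in code (the defect density is made up purely for illustration; real foundry numbers are confidential):
Code:
import math

# Classic Poisson yield model: yield = exp(-defect_density * die_area).
DEFECTS_PER_CM2 = 0.5  # illustrative only

def yield_fraction(area_cm2):
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

for area in (0.5, 1.0, 2.0):  # die area in cm^2
    print(f"{area} cm^2 die -> {yield_fraction(area):.0%} yield")

# 0.5 cm^2 -> 78%, 1.0 cm^2 -> 61%, 2.0 cm^2 -> 37%: the cost per
# good chip grows faster than linearly with die size.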
__________________
Shifty Geezer: I don't think the guy really understands the subject.
PARANOiA: To be honest, Shifty, what you've described is 95% of Beyond3D - armchair experts spouting fact based on the low-level knowledge of a few.

This posting is provided "AS IS" with no warranties, and confers no rights.

Last edited by Shifty Geezer; 12-Jan-2012 at 11:00. Reason: Denoisification
Dominik D is offline   Reply With Quote
Old 12-Jan-2012, 00:59   #38
fellix
Senior Member
 
Join Date: Dec 2004
Location: Varna, Bulgaria
Posts: 3,034
Send a message via Skype™ to fellix
Default

Quote:
Originally Posted by AlStrong View Post
You really ought to have more substance to your post than this. This means absolutely nothing in the context of the thread. For starters, you fail to consider the economic feasibility of said technology, or even the implications for the memory controller; everyone's GPUs are built with GDDR5 in mind these days, so they'd have to do a lot more work to even consider XDR2 as a replacement. XDR2 is not some free ride - the I/O can take up a considerable amount of physical space. Hell, XDR2 isn't even being sampled for a real-world product. Do you actually have anything to add to the discussion, or are we supposed to read your mind? The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.
The interface pads for XDR memory are actually fairly simple compared to any DDR-X standard, and take up much less die area. Just look at the die shots of the original Cell/B.E. and PowerXCell.
__________________
Apple: China -- Brutal leadership done right.
Google: United States -- Somewhat democratic.
Microsoft: Russia -- Big and bloated.
Linux: EU -- Diverse and broke.
fellix is offline   Reply With Quote
Old 12-Jan-2012, 01:56   #39
steampoweredgod
Naughty Boy!
 
Join Date: Nov 2011
Posts: 164
Default

If next-gen XDR delivers hundreds of GB/s (300-500 GB/s), eDRAM might not be necessary.

Quote:
The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.
Mythical? XDR's low latency was chosen for Cell's performance, IIRC. Backwards compatibility of a next-gen unified memory would demand similarly low-latency memory; how does GDDR5 compare latency-wise?

I would assume Rambus has at least tried to address the issues that might hamper bringing such a product to market. Unless they aren't serious about it being a viable future technology...

If we want to talk mythical, we'd be talking optical interconnects (which I expect for 2020s consoles).

Last edited by Shifty Geezer; 12-Jan-2012 at 11:01. Reason: Denoisification
steampoweredgod is offline   Reply With Quote
Old 12-Jan-2012, 11:31   #40
function
Senior Member
 
Join Date: Mar 2003
Posts: 3,118
Default

Quote:
Originally Posted by french toast View Post
This is the same arguement as with controller width, but seeing as pc graphics cards are continually using wide controllers and go no where near edram perhaps that gives us the answer.
PCs have to support multiple huge buffers simultaneously without tiling, and mainstream graphics are provided by integrated graphics with no dedicated bus at all (never mind 256-bit). eDRAM isn't an option for PCs; it wasn't in 2005 when the 360 launched, or in 2000 when the PS2 launched, either.

Quote:
If the only reason that we are not considering wide pin is because cost reduction in future it seems edram is many times more expensive and just as hard to shrink.
How do you work that out?
function is offline   Reply With Quote
Old 12-Jan-2012, 12:12   #41
french toast
Senior Member
 
Join Date: Jan 2012
Location: Leicestershire - England
Posts: 1,634
Default

I was under the impression that the wider the memory controller, the harder it is to scale down the whole motherboard (just from what I have picked up on these threads), and that is why console manufacturers are loath to use them.

They would much rather use a next-gen memory or eDRAM instead to get the bandwidth, as that is more cost-effective in the long run... although judging by what was said above, eDRAM seems off the table as well. I would wager either a big slab of XDR2, a 256-bit bus plus GDDR5, or an ace card... GDDR6??
I don't think they will go with a split memory architecture/pool.
french toast is offline   Reply With Quote
Old 12-Jan-2012, 13:16   #42
Crossbar
Senior Member
 
Join Date: Feb 2006
Posts: 1,821
Default

Scanned through the seminar topics of this year's ISSCC.

Found this one a bit intriguing.

Quote:
A 3D System Prototype of an eDRAM Cache Stacked Over Processor-Like Logic Using Through-Silicon Vias
M. Wordeman1, J. Silberman1, G. Maier2, M. Scheuermann1
1IBM T. J. Watson, Yorktown Heights, NY
2IBM Systems and Technology Group, Fishkill, NY
This solution lets you make your main CPU on a different (cheaper) process than the eDRAM, and you still get a really high-bandwidth connection to the eDRAM through the TSVs instead of through a massive number of pins. Still, this was a prototype, and I doubt that either MS or Sony wants to bet on such an unproven technology for their upcoming console launches.
Crossbar is offline   Reply With Quote
Old 12-Jan-2012, 15:07   #43
function
Senior Member
 
Join Date: Mar 2003
Posts: 3,118
Default

Quote:
Originally Posted by french toast View Post
I was under the impression that the wider the memory controller, the harder it is to scale down the whole motherboard (just from what I have picked up on these threads), and that is why console manufacturers are loath to use them.
That's my understanding too. If the 360 had gone with a 256-bit bus, they'd still be stuck with 8 memory chips, just like the launch systems, and it could have placed additional restrictions on how small they could make the system.

Quote:
They would much rather use a next-gen memory or eDRAM instead to get the bandwidth, as that is more cost-effective in the long run... although judging by what was said above, eDRAM seems off the table as well. I would wager either a big slab of XDR2, a 256-bit bus plus GDDR5, or an ace card... GDDR6??
I don't think they will go with a split memory architecture/pool.
I don't get the feeling that embedded memory is any more off the cards this generation than last generation. Nintendo appear to be using it despite going for a relatively low-power device that could possibly (probably?) be serviced by a 128-bit bus and GDDR5 in terms of raw bandwidth (at least it looks that way if you compare to the PS360 and the PC space).

My big concern about eDRAM is that it might discourage developers from using accurate, subsample-based forms of AA, meaning we end up with popping pixels, unnecessarily blurry/artefacty shader-only AA, and screen-space effects that are no better in motion than they are now (I'm looking at you, light shafts!).
function is offline   Reply With Quote
Old 13-Jan-2012, 16:49   #44
Humus
Crazy coder
 
Join Date: Feb 2002
Location: Stockholm, Sweden
Posts: 3,217
Send a message via ICQ to Humus Send a message via MSN to Humus
Default

Quote:
Originally Posted by manux View Post
I would make assumption EDRAM is good if it is more flexible than in xbox360 implementation and if the amount of it is sufficient.
For eDRAM to be truly useful, it should just be a fast memory area addressable like regular video memory. This way we can put some render targets in regular video memory and others in eDRAM, with no need to ever resolve or copy data between the two areas - unless of course the game finds that to be the most efficient approach. For deferred rendering we wouldn't have to fit the whole g-buffer setup in eDRAM. If we put, say, the first two g-buffers in eDRAM, and the depth buffer and another couple of g-buffers in regular video memory, we would still benefit from the extra bandwidth to the buffers that fit.
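
A hypothetical sketch of what that kind of flexible placement could look like; the API names and pool model here are entirely mine, not any real console SDK:
Code:
from dataclasses import dataclass
from enum import Enum

class Pool(Enum):
    VIDEO = "video"  # external GDDR: big, but slower
    EDRAM = "edram"  # on-die: small, but very high bandwidth

@dataclass
class RenderTarget:
    width: int
    height: int
    bytes_per_pixel: int
    pool: Pool  # hypothetical: the caller picks the backing memory

# Partial g-buffer placement: hottest targets in eDRAM, rest in video mem.
gbuffer = [
    RenderTarget(1920, 1080, 4, Pool.EDRAM),  # albedo
    RenderTarget(1920, 1080, 4, Pool.EDRAM),  # normals
    RenderTarget(1920, 1080, 4, Pool.VIDEO),  # depth
    RenderTarget(1920, 1080, 4, Pool.VIDEO),  # misc material params
]
edram_used = sum(rt.width * rt.height * rt.bytes_per_pixel
                 for rt in gbuffer if rt.pool is Pool.EDRAM)
print(f"eDRAM used: {edram_used / 2**20:.1f} MB")  # ~15.8 MB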

Quote:
Originally Posted by manux View Post
What about EDRAM/large caches to help cpu/gpu talk and divide work. It would be pretty useful to pass significantly sized buffers between computing units without going through main memory.
This generation we have seen people do graphics on SPUs, which in a way is mostly a workaround for a poor GPU, but I think we'll still see this sort of thing going forward. Ideally all memory should be accessible from all units with no bandwidth restrictions in any direction, using a unified address space. This way the app could make wise choices where to put resources based on required bandwidth, in sys mem (slow), vid mem (faster) or eDram (very fast). There could certainly be CPU tasks that would benefit from huge bandwidth.

Provided enough flexibility, any amount of eDram could be a good thing, with more being better of course. With the restrictions it put on Xbox360, it hindered more than it helped.
__________________
[ Visit my site ]
I speak for myself and only myself.
Humus is offline   Reply With Quote
Old 14-Jan-2012, 06:04   #45
Squilliam
Beyond3d isn't defined yet
 
Join Date: Jan 2008
Location: New Zealand
Posts: 3,172
Default

Would this kind of architecture make sense?

RAM <-> GPU <-> eDRAM* <-> CPU <-> RAM

*Or equivalent

Would it be beneficial to have the eDRAM used as a pass-through for data both from the GPU back to the CPU and vice versa? That way, both the CPU and GPU could benefit from the additional bandwidth, depending on the needs of the program's architecture.
__________________
It all makes sense now: Gay marriage legalized on the same day as marijuana makes perfect biblical sense.
Leviticus 20:13 "A man who lays with another man should be stoned". Our interpretation has been wrong all these years!
Squilliam is offline   Reply With Quote
Old 14-Jan-2012, 09:11   #46
AlNets
Posts may self-destruct
 
Join Date: Feb 2004
Location: In a Mirror Darkly
Posts: 15,207
Default

At that point, you might be concerned about I/O interfaces on the three chips and the impact on future die reductions/combining.

I'm not sure what the implications there are for latency or avoiding contention.

Actually.... Would it be possible to do this then:
Code:
                  RAM
                  ||
GPU <-> eDRAM <-> CPU
Not sure how that'd work out for passing information back and forth.... i.e. memory controller config.
__________________
"You keep using that word. I do not think it means what you think it means."
Never scale-up, never sub-render!
(╯°□°)╯︵ □ Flipquad
AlNets is offline   Reply With Quote
Old 14-Jan-2012, 11:13   #47
Shifty Geezer
uber-Troll!
 
Join Date: Dec 2004
Location: Under my bridge
Posts: 30,847
Default

Quote:
Originally Posted by Humus View Post
Provided enough flexibility, any amount of eDram could be a good thing, with more being better of course. With the restrictions it put on Xbox360, it hindered more than it helped.
Hmmm. Could you give examples of how you'd use eDRAM at different amounts, assuming bandwidth is always in excess? Let's say 10 MBs, 32 MBs and 100 MBs. Unless rendering in tiles, I'd have thought you'd need a minimum amount to fit a render target.
__________________
Shifty Geezer
...
Flashing Samsung mobile firmwares. Know anything about this? Then please advise me at -
http://forum.beyond3d.com/showthread.php?p=1862910
Shifty Geezer is offline   Reply With Quote
Old 14-Jan-2012, 14:02   #48
Humus
Crazy coder
 
Join Date: Feb 2002
Location: Stockholm, Sweden
Posts: 3,217
Send a message via ICQ to Humus Send a message via MSN to Humus
Default

Sure, you need to fit the entire render target. Or I suppose it would be possible to put a partial render target in eDRAM and the rest in video RAM, but I wouldn't expect that sort of flexibility. Still, let's say you run 1080p and deferred rendering, with AA through FXAA/MLAA/whatever. One g-buffer render target then takes 7.9MB, so even with only 10MB you could at least fit one. With 32MB you could fit four: either three buffers plus the depth buffer, or four buffers with the depth buffer in video memory. Or you put two buffers in eDRAM and the rest in video mem, and then put the FP16 light accumulation buffer in eDRAM. Or you can put the shadow maps in eDRAM, especially if you want many light sources with shadows. Or you put a colour-correction volume lookup texture in eDRAM, because it has such an irregular access pattern that it's constantly thrashing the texture cache; backed by eDRAM, it could act as a large higher-level cache. There are many possibilities.
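
The arithmetic behind those combinations, for anyone checking (4 bytes per pixel for a 32-bit target, 8 for FP16 RGBA):
Code:
W, H = 1920, 1080
rt32 = W * H * 4 / 2**20  # one 32-bit g-buffer target
fp16 = W * H * 8 / 2**20  # FP16 light accumulation buffer

print(f"32bpp target:     {rt32:.1f} MB")             #  7.9 MB
print(f"FP16 target:      {fp16:.1f} MB")             # 15.8 MB
print(f"4 x 32bpp:        {4 * rt32:.1f} MB")         # 31.6 MB -> fits in 32
print(f"2 x 32bpp + FP16: {2 * rt32 + fp16:.1f} MB")  # 31.6 MB -> also fits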
__________________
[ Visit my site ]
I speak for myself and only myself.
Humus is offline   Reply With Quote
Old 14-Jan-2012, 14:13   #49
Fafalada
Senior Member
 
Join Date: Feb 2002
Posts: 2,773
Default

Quote:
Originally Posted by AlStrong
Actually.... Would it be possible to do this then:
I'm not entirely clear about the diagram; are you suggesting a GPU with no access to RAM? That's pretty much the PS2, then.
__________________
"I see Subversion as being the most pointless project ever started."
Linus Torvalds
Fafalada is offline   Reply With Quote
Old 14-Jan-2012, 14:22   #50
AlNets
Posts may self-destruct
 
Join Date: Feb 2004
Location: In a Mirror Darkly
Posts: 15,207
Default

Quote:
Originally Posted by Fafalada View Post
I'm not entirely clear about the diagram, are you suggesting a GPU with no access to RAM? That's pretty much the PS2 then.
Hm... I was thinking of a unified memory space, with the CPU and GPU both having access to the eDRAM.
__________________
"You keep using that word. I do not think it means what you think it means."
Never scale-up, never sub-render!
(╯°□°)╯︵ □ Flipquad
AlNets is offline   Reply With Quote
