Predict: The Next Generation Console Tech

Status
Not open for further replies.
The blitter does seem a bit far-fetched, but the lighting stuff would actually be quite useful and plausible.

Recently I have been playing S.T.A.L.K.E.R.: Clear Sky, and the difference between static lighting and enhanced full dynamic lighting (DX10) is about 3x fewer FPS.

The hardware required for quality lighting is absurd, so maybe this is the FLOPs "cheat", so to speak.
 
The hardware required for quality lighting is absurd, so maybe this is the FLOPs "cheat", so to speak.

This is the circular logic coming back into play, though; let's not try to derive evidence or theories from sensationalistic rumors as they pertain to other rumors or concepts, some of which originated right here to begin with. Any hardware-derived benefit to lighting will require transistors; insofar as that is the case, the dust will have to settle before FLOPs and rendering efficiencies can be measured and compared. There's no reason to think that any one rumor is the key to another at this point, since the "quality" rumors can likely be counted on one hand with fingers to spare.
 
Rumor is that Oban is a 3D vertically stacked IC. The process they used is wafer-to-wafer (W2W) 3D integration.

ref: http://misterxmedia.livejournal.com/114179.html


The reason I believe this rumor is that Soitec holds lots of patents around this 3D stacking process, and Soitec has a pretty impressive relationship with AMD, IBM, and Microsoft.

(article from 2004: http://www.eetimes.com/electronics-news/4122077/SOI-set-to-drive-Xbox-PlayStation-says-Soitec-exec )

I know lots of people just don't want to believe MisterXMedia, BUT his rumors are looking quite real from my rigorously patent-backed perspective...
 
The Vita chip runs very cool (maybe a watt or two?), which is why they could do it. DRAM is sensitive to heat, and putting it on top of a 100W die is a major engineering problem. That's why everyone is suggesting 2.5D as the solution; it would work with the latest wide I/O memory standard (HBM) and provide amazing speed. I think cost is still up in the air, and so is yield. Nobody really knows.

I think a 4Gbit wide I/O memory die is somewhere around 100 mm², so having multiple stacks would require a pretty big interposer. Earlier someone posted a GlobalFoundries image of a 2.5D chip (with two small pieces to the right of the big die); it was speculated to be an AMD GPU with stacked memory, but I doubt it, because the memory parts are way too small in the picture.

Heat with 3D is definitely an issue even if there are multiple ways around it, but I'd be impressed if they manage to pull it off in a mass-produced consumer-level device so soon. 2.5D gives similar performance benefits in a much simpler (though bigger) layout, so I think they'll do that. Maybe two DRAM stacks then? Or a single stack, 8 high? It really depends on the kind of densities DDR4 will be at by launch.

The chip packaging part of Orbis is the most interesting part to me and is completely overlooked by flop and GB figures. It's this IC tech that'll keep Moore's law going for the next 5-10 years.
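To put the interposer-size concern above in concrete terms, here's a rough sketch. The numbers are purely illustrative (the ~100 mm² per DRAM stack figure is the estimate from the post; the 300 mm² logic die and 10% routing margin are assumptions, not known specs):

```python
# Rough 2.5D interposer sizing sketch. All figures are illustrative assumptions.
def interposer_area_mm2(logic_die_mm2, dram_stack_mm2, n_stacks, overhead=1.10):
    """Minimum interposer area: logic die plus DRAM stacks plus ~10% routing margin."""
    return (logic_die_mm2 + n_stacks * dram_stack_mm2) * overhead

# A hypothetical ~300 mm^2 APU with two ~100 mm^2 wide-I/O DRAM stacks:
area = interposer_area_mm2(300, 100, 2)
print(f"{area:.0f} mm^2 interposer")  # ~550 mm^2
```

Even with conservative numbers, the interposer ends up well over 500 mm², which is why multiple stacks make it "pretty big".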
 
Rumor is that Oban is a 3D vertically stacked IC. The process they used is wafer-to-wafer (W2W) 3D integration.

ref: http://misterxmedia.livejournal.com/114179.html


The reason I believe this rumor is that Soitec holds lots of patents around this 3D stacking process, and Soitec has a pretty impressive relationship with AMD, IBM, and Microsoft.

(article from 2004: http://www.eetimes.com/electronics-news/4122077/SOI-set-to-drive-Xbox-PlayStation-says-Soitec-exec )

I know lots of people just don't want to believe MisterXMedia, BUT his rumors are looking quite real from my rigorously patent-backed perspective...

Can we ban all MisterXMedia information from B3D? There is absolutely no genuine or worthwhile information coming from that imbecile.
 

Every chip maker has plenty of patents surrounding 3D stacking. To look at old Sony or Toshiba patents would be to draw a lot of incorrect conclusions, let's just put it like that. And by the way, they have plenty of 3D patents to boot. If you take patents alone as proof positive of actual real-world implementations, you're going to head down the wrong path.
 
I know lots of people just don't want to believe MisterXMedia, BUT his rumors are looking quite real from my rigorously patent-backed perspective...

Some gems from him:

Misterxmedia said:
Neogaf is full of gamers who don't know what is coming in the future. MS their enemy number 1. But don't be fooled by those angry opinions. They just angry because MS steeled the thunder from Sony this gen. They will always fight every little cons MS have to dead. That is their war. They have much more to prove to theirselfs. Lets leave them alone with each other. Sure they will find the truth inside a lie.

http://misterxmedia.livejournal.com/103483.html

Stop posting his stuff here :)
 
Heat with 3D is definitely an issue even if there are multiple ways around it, but I'd be impressed if they manage to pull it off in a mass-produced consumer-level device so soon. 2.5D gives similar performance benefits in a much simpler (though bigger) layout, so I think they'll do that. Maybe two DRAM stacks then? Or a single stack, 8 high? It really depends on the kind of densities DDR4 will be at by launch.

The chip packaging part of Orbis is the most interesting part to me and is completely overlooked by flop and GB figures. It's this IC tech that'll keep Moore's law going for the next 5-10 years.
Stacked DDR4 would provide the low performance of DDR4 at a higher price. If they go DDR4, I don't understand what they would gain from putting it on an interposer; it looks like a big waste of technology.
The only reason to put memory on the interposer is for the very wide I/O type (which isn't ready, but might be, with luck).
If the memory on the interposer would only provide 192 GB/s or less, cost more, or the GPU isn't fast enough to take advantage of the faster memory, they might as well use GDDR5.
 
Stacked DDR4 would provide the low performance of DDR4 at a higher price. If they go DDR4, I don't understand what they would gain from putting it on an interposer; it looks like a big waste of technology.
The only reason to put memory on the interposer is for the very wide I/O type (which isn't ready, but might be, with luck).
If the memory on the interposer would only provide 192 GB/s or less, cost more, or the GPU isn't fast enough to take advantage of the faster memory, they might as well use GDDR5.

The reason it needs to be on the interposer is that that's how the connection is made to the logic dies (APU, GPU). The whole point is to keep everything really close together, which is what makes Wide I/O and HBM possible.

It's about future costs too. Being stuck on a 256-bit controller with GDDR5 will add up in costs more over the long run than going with straight DDR4 from the start. It also doesn't need to draw as much power, so far fewer watts and much less heat. Less TDP from one part of the system means more headroom elsewhere.
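The bandwidth trade-off being debated here is easy to sanity-check: peak bandwidth is just bus width times per-pin data rate, and a wide-but-slow stacked interface can match a narrow-but-fast GDDR5 bus at lower clocks (and hence lower power). The specific pin rates below are illustrative assumptions, not leaked specs:

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate.
def peak_bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak bandwidth in GB/s for a memory interface."""
    return bus_bits / 8 * gbps_per_pin

gddr5   = peak_bandwidth_gbs(256, 6.0)   # 256-bit GDDR5 @ 6 Gbps/pin -> 192 GB/s
wide_io = peak_bandwidth_gbs(1024, 1.0)  # hypothetical 1024-bit wide I/O @ 1 Gbps/pin -> 128 GB/s
```

The wide interface reaches comparable bandwidth while toggling each pin at a sixth of the rate, which is where the power and heat savings come from.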
 
BOOM!!!
http://club.tgfcer.com/thread-6593085-22-1.html

The Sapphire HD 7970 Toxic Edition graphics card has 12 Hynix H5GQ2H24AFR GDDR5 memory chips soldered on each of the front and back of the PCB, forming a 6GB/384-bit memory configuration.

Some people say this memory is very expensive; you can just search Taobao for these chips and see what they cost instead of guessing wildly. I'm only talking about the cost of the video memory; since I'm not Microsoft, I can't see what the final product will actually use.
 
BOOM!!!
http://club.tgfcer.com/thread-6593085-22-1.html

The Sapphire HD 7970 Toxic Edition graphics card has 12 Hynix H5GQ2H24AFR GDDR5 memory chips soldered on each of the front and back of the PCB, forming a 6GB/384-bit memory configuration.

Some people say this memory is very expensive; you can just search Taobao for these chips and see what they cost instead of guessing wildly. I'm only talking about the cost of the video memory; since I'm not Microsoft, I can't see what the final product will actually use.

What?
 
Concerning RT:

1) It would require a LARGE amount of memory (several gigabytes) to store the entire scene to test for intersections, whether from primary or secondary rays. And the rendering engine can't use a delayed-load geometry set, since we need the entire scene resident. We aren't dealing with bucket rendering (i.e. offline).

2) How are you going to deal with aliasing? You'd need to cast several rays per pixel, and each intersection would have to evaluate the shaders every time.

3) What would your depth limit per ray type be before quitting? If you can't bounce at least 2-3 indirect rays, you won't get very good results. What if you had two refractive objects, one behind the other? Objects would suddenly have to have "thickness" (which means even more geometry).

4) RT direct lighting would only be beneficial if you had area lights, but those require even more samples. Doing specular lighting would almost certainly require importance sampling (firing rays from the lights as well as the materials) with PDFs and special sampling algorithms for both the light and the BSDF. That would be the only way to get rid of the noise.

5) Notice how the Kepler demo that Nvidia showed only had 3 objects in the scene. LOL! Not even close.

In short, if they came out with a viable hardware device for RT, the film industry would get it first. :D And I don't see that happening for at least 2-3 more generations.
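A back-of-envelope ray budget makes the points above concrete. The sample counts below (4 samples per pixel for anti-aliasing, 2 indirect bounces, 30 fps) are illustrative minimums implied by points 2 and 3, not any real engine's figures, and they deliberately ignore the extra shadow/light rays point 4 would add:

```python
# Rough ray budget for real-time ray tracing. All parameters are illustrative.
def rays_per_second(width, height, samples_per_pixel, bounces, fps):
    """Each sample traces one primary ray plus one ray per indirect bounce;
    shadow/light rays for area lights would add more on top of this."""
    rays_per_path = 1 + bounces
    return width * height * samples_per_pixel * rays_per_path * fps

# 1920x1080, 4 spp, 2 indirect bounces, 30 fps:
r = rays_per_second(1920, 1080, 4, 2, 30)
print(f"{r / 1e6:.0f} million rays/s")  # ~746 million rays/s
```

Three-quarters of a billion rays per second, each needing a full scene intersection test and shader evaluation, is the scale of the problem even before noise-free area lighting enters the picture.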
 
I think he's trying to say that GDDR5 is not expensive and that maybe Microsoft can acquire it. I think Microsoft is well prepared this time with RAM. Sure, it's slower, but it might be enough for 1080p long term, whereas the PS4 might hit a bottleneck long term even if it's faster. I hope they both thought their RAM configurations through thoroughly: all the peripherals, apps, and features that will come out, plus how games will improve over the years. The question is, is there enough RAM in the consoles for all that, and is it fast enough? Find out in the next episode of Dragon Ball Z.
 
FWIW, apparently Kaveri prototypes have 7750Ms in them, which are around 0.6 TFLOPS.

Also, I don't know how feasible it is to cram a 1.8 TFLOPS GPU into an APU, but I've always believed they'll go with APU + GPU in the end anyway.



I'm thinking the same thing here, something customised like: A10 APU (5800 → 384 shaders / 6 CUs / 800 MHz / 246 mm² = 614.4 GFLOPS) + HD 8770 Bonaire XT GPU (768 shaders / 12 CUs / 800 MHz / 160 mm² → 1228.8 GFLOPS) + 4 GB GDDR5 at 192 GB/s.

(APU + GPU without "special sauce" → ≈ 406 mm²)
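The GFLOPS figures in that spec sketch follow directly from GCN's peak-throughput formula: 64 shader lanes per CU, each doing 2 FLOPs per clock via fused multiply-add. A quick check of the numbers above:

```python
# Peak single-precision throughput for an AMD GCN part:
# 64 lanes per CU * 2 FLOPs per lane per clock (FMA) * clock rate.
def gcn_gflops(compute_units, clock_mhz):
    return compute_units * 64 * 2 * clock_mhz / 1000.0

apu = gcn_gflops(6, 800)    # 614.4 GFLOPS (the 6-CU APU above)
gpu = gcn_gflops(12, 800)   # 1228.8 GFLOPS (the 12-CU GPU above)
total = apu + gpu           # 1843.2 GFLOPS combined
```

Both numbers match the post's figures, so the combined part would land a little above the rumored 1.8 TFLOPS.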
 
Just a quick ray tracing question: would they need to repeat the lighting calculation for every frame, or could it be refreshed every X seconds, say?
 
I don't think anyone knows anything. I am just going to have to wait for the specs, because I don't believe much of what's out there right now. It's all garbage, and that is probably what Microsoft wants.

This forum is just like the rest of the Internet right now: some people claim they know something, but they really don't. Some people make the next Xbox 720 out to be crap and others make it out to be insane, and it's hard to tell what is true and what is false right now, so I am throwing everything out.
 
Just a quick ray tracing question: would they need to repeat the lighting calculation for every frame, or could it be refreshed every X seconds, say?

Every frame. The camera is always moving, and there are a lot of camera-dependent calculations in shading.

-M
 
I don't think anyone knows anything. I am just going to have to wait for the specs, because I don't believe much of what's out there right now. It's all garbage, and that is probably what Microsoft wants.

This forum is just like the rest of the Internet right now: some people claim they know something, but they really don't. Some people make the next Xbox 720 out to be crap and others make it out to be insane, and it's hard to tell what is true and what is false right now, so I am throwing everything out.

Sure, no one knows anything; MS is better than Apple at keeping secrets, and all this stuff that's leaking is a sneaky ploy by them to get Sony to underestimate their system.

Now, where did I leave my tinfoil hat...?

it's hard to tell what is true and what is false right now, so I am throwing everything out

Well, no one said it was going to be easy...
 