Intel, Integrated video and Eurasia

My thoughts exactly when I saw Anand's article. I would have posted something here to that effect but couldn't be arsed. :)

Certainly the current Intel GMAs are less than impressive, so you would hope that the well-specified PowerVR SGX cores would make a very good alternative for integration on the package.
 
Particularly when one considers that Intel needs D3D9.0 support for Windows Vista - there's possibly little point in developing on-die graphics that don't have D3D9 for Vista.

This has SGX written all over it.
 
intel already has Dx9 hardware :p

and besides. PowerVR technology sux. if it was as good as some people claim, it would be used in desktops too. But its not .
 
chavvdarrr said:
intel already has Dx9 hardware :p
To be more accurate, Intel already has SM2.0 hardware. Eurasia is SM3.0+ (and perhaps even DX10, but I would guess it to be more similar to Xenos, feature-wise).
and besides. PowerVR technology sux. if it was as good as some people claim, it would be used in desktops too. But its not .
I would appreciate it if you didn't post such things for no other reason than potentially creating a flamewar. This is even more annoying when your post is factually incorrect and provocative. Could you please make an effort to limit similar future postings?

Uttar
 
GMA900 doesn't have a geometry processor of any sort last time I checked.

I hear Intel is also preparing "handtops" with Windows Vista for the foreseeable future.
 
Uttar said:
I would appreciate it if you didn't post such things for no other reason than potentially creating a flamewar. This is even more annoying when your post is factually incorrect and provocative. Could you please make an effort to limit similar future postings?
Uttar
Sure, the post was too sarcastic. I'm just tired of reading year after year that PowerVR has great hardware... and these talks always become vapor... I'll believe PowerVR can make a video chip for desktops only after I see it, and even then I won't be sure :cry:
I still don't see where my post is factually incorrect. Last time I checked the DX9 specs, SM2 was considered "DX9 in hardware". Go to www.ati.com if you don't believe me ;)
As for the missing geometry processor - does anyone doubt that Intel can add it if they want? And we need to somehow use the second core of those iP-D CPUs after all :LOL:
 
Is it just me, or are the slides about the power management BS?

We have chips that can predict in advance when the user is going to demand more processor power? People wonder why I don't trust hardware companies.
 
chavvdarrr said:
Sure, the post was too sarcastic. I'm just tired of reading year after year that PowerVR has great hardware... and these talks always become vapor... I'll believe PowerVR can make a video chip for desktops only after I see it, and even then I won't be sure :cry:
PowerVR doesn't "make hardware". They design chip IP. This implies, among other things, that there is still work to be done by the companies licensing their technology to make an actual product using their chip design; it could be SoC integration or board creation, for example.
One of the many problems that have plagued them in the PC market is that even if a random manufacturer licensed their technology the day it was announced, it would quite likely no longer be high-end by the day it reached stores. Applying the "high-end creates the brand name for the low-end" strategy of NVIDIA and ATI just can't work for them with this business model, imo. In the PC market, that is; in the SoC mobile phone market, everyone has equivalently big delays, so it really isn't such a big problem.

I still don't see where my post is factually incorrect. Last time I checked the DX9 specs, SM2 was considered "DX9 in hardware". Go to www.ati.com if you don't believe me ;)
Agreed. My point was just that it isn't the same hardware generation at all, and would thus have to be compared to future Intel GPUs rather than current ones, if you wanted a fair comparison.

As for the missing geometry processor - does anyone doubt that Intel can add it if they want? And we need to somehow use the second core of those iP-D CPUs after all :LOL:
It has been said a number of times on this forum and in other places that Intel's GPU team doesn't like the concept of vertex texturing at all. It's hard for me to draw any real conclusion from that, but somehow I doubt they'd be willing to make an SM3.0 part with vertex shaders, for that and some other reasons.
Intel's strategy in the past was to just hit for features, not performance. With DX10, this won't be possible anymore, afaik.

bloodbob said:
Is it just me or is the slides about the power management BS?
I don't think so personally; powering it up/down only takes 1 to 10 cycles most of the time according to Intel, so considering this is an OoO processor with a 14-stage pipeline, it doesn't seem completely out of this world to me. It most definitely is an interesting use of speculative logic, though.


Uttar
 
Dave B(TotalVR) said:
http://www.anandtech.com/tradeshows/showdoc.aspx?i=2511

It occurred to me that, although this prototype is using Intel's own 855GM chip, Intel has licensed Eurasia, and on the core of a processor would be an ideal place for it.

One can only speculate; we still have to find out exactly what Intel plans to do with its PowerVR tech license.

Sorry Dave,

It may have been more appropriate to post it here, but the article has already been discussed in this thread: http://www.beyond3d.com/forum/showthread.php?t=22977
 
Uttar said:
I don't think so personally; powering it up/down only takes 1 to 10 cycles most of the time according to Intel, so considering this is an OoO processor with a 14-stage pipeline, it doesn't seem completely out of this world to me. It most definitely is an interesting use of speculative logic, though.
Uttar
So the chip is running at, say, 100 Hz? Seeing as the diagram is showing a ballpark of 1/10 of a second as opposed to 1 second on the old tech.
 
chavvdarrr said:
intel already has Dx9 hardware :p

and besides. PowerVR technology sux. if it was as good as some people claim, it would be used in desktops too. But its not .


No, the trouble is it's made in England, that is why it is not successful on the desktop.

But then need I remind you how much the KYRO II shat on the GF2MX whilst costing the same amount.
 
Dave B(TotalVR) said:
No, the trouble is it's made in England, that is why it is not successful on the desktop.
hmmm ?!

The Kyro line was fine. But nothing followed. Uttar is right that the current IP model makes life harder, but users want real cards. Delivering them is not our problem. Come and take our money!!!

Hey, we have icons for ATI, NV, S3 and XGI but not for PowerVR?! Add one please! ;)
 
Dave B(TotalVR) said:
No, the trouble is it's made in England, that is why it is not successful on the desktop.
Surely you jest.
Dave B(TotalVR) said:
But then need I remind you how much the KYRO II shat on the GF2MX whilst costing the same amount.
KYROII is and has been just as dead and irrelevant as the GF2MX for a long, long time now.
 
If you really want to have a fruitful debate then kindly stay on topic. Intel holds two licenses for the MBX family of IP cores and has licensed Eurasia too.

Now please pick up from there if there's anything you'd want to contribute or just leave it be until Intel announces the according products in the future.

Uttar,

It has been said a number of times on this forum and in other places that Intel's GPU team doesn't like the concept of vertex texturing at all. It's hard for me to draw any real conclusion from that, but somehow I doubt they'd be willing to make an SM3.0 part with vertex shaders, for that and some other reasons.
Intel's strategy in the past was to just hit for features, not performance. With DX10, this won't be possible anymore, afaik.

The gap between not having any geometry unit at all (hell, I even doubt their cores are capable of simple transformations) and reaching WGF2.0 compliance out of the blue is huge.

Having a scalable unified shader architecture licensed can become handy for whatever Intel wants it to. If you read deep enough between the lines in the related Eurasia/SGX announcements and especially the geometry related notes, it's easy to see how far they've gone. There's probably one of your old wet dreams hidden in there and I'm not sure if you've detected it yet. It starts with a :p
 
anaqer said:
Surely you jest.

Actually, no, it's a simple case of currency. NVIDIA makes more money because they are dollars through and through, but given that IMGTEC holds its wealth in pounds, it is totally at the mercy of the £/$ exchange rate. We all know computer hardware is cheaper in the USA.


KYROII is and has been just as dead and irrelevant as the GF2MX for a long, long time now.

Yes, and as Ailuros says, we should get back on topic ;)
 
The gap between not having any geometry unit at all (hell, I even doubt their cores are capable of simple transformations) and reaching WGF2.0 compliance out of the blue is huge.

Having a scalable unified shader architecture licensed can become handy for whatever Intel wants it to. If you read deep enough between the lines in the related Eurasia/SGX announcements and especially the geometry related notes, it's easy to see how far they've gone. There's probably one of your old wet dreams hidden in there and I'm not sure if you've detected it yet. It starts with a :p

Well, it looks to me like if you have Eurasia, you have vertex processing. The reason I think it is ideal is its small silicon area. Given this, it should **NOT** make a huge impact on chip yields for Intel. On top of that, given their excellent ability to produce highly optimized logic blocks for their processors, one could envisage the Eurasia core running at the full speed of the processor.

That's a scary thought, having a GPU running in the ~3GHz region. Couple this with an integrated memory controller - let's say dual-channel DDR400 as a minimum, probably higher. That's 6.4 GB/s. 1600x1200x32 at 60 fps requires about 440 MB/s for framebuffer writes. The question is, how much memory bandwidth will the texturing and scene composition require? Anybody's guess, but clearly this is a bandwidth-restricted system - a system where PowerVR would shine above its competitors.

Now I'd really like to see a prototype of that.
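For anyone who wants to check those numbers, here's the back-of-the-envelope arithmetic behind the 6.4 GB/s and ~440 MB/s figures (framebuffer writes only; texturing and scene composition are extra, as noted):

```python
# Back-of-the-envelope bandwidth check for the figures quoted above.
width, height, bytes_per_pixel, fps = 1600, 1200, 4, 60  # 1600x1200 @ 32bpp, 60 fps

frame_bytes = width * height * bytes_per_pixel   # one full frame
writes_per_sec = frame_bytes * fps               # framebuffer writes only

# Dual-channel DDR400: 2 channels x 8 bytes wide x 400 MT/s = 6.4 GB/s peak.
ddr400_dual = 2 * 8 * 400_000_000

print(f"frame size:  {frame_bytes / 2**20:.2f} MiB")
print(f"fb writes:   {writes_per_sec / 2**20:.0f} MiB/s")
print(f"bus peak:    {ddr400_dual / 10**9:.1f} GB/s")
```

That lands at roughly 440 MiB/s of writes against a 6.4 GB/s peak, so the framebuffer alone eats about 7% of the bus before any texture traffic.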


One final possibility: should an AIB become a reality, shunting tiles (before or after texture reads, so you can choose the bandwidth load balance) off the on-chip graphics core to the AIB over PCI Express could seriously boost the performance of the AIB. The beauty of the PVR architecture is that tiles can be packaged and sent around to different processing elements with ease. It appears to be the main selling point of Eurasia, in fact, along with its portability - you can do almost anything with those ALUs.
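The "tiles as independent work units" idea can be sketched in a few lines. This is a toy bounding-box binner illustrating the general TBDR concept only, not PowerVR's actual implementation; the 32-pixel tile size and the bucket layout are assumptions for the example:

```python
# Toy tile binner: sort triangles into screen-space tile buckets.
# Each bucket is an independent work unit that could, in principle,
# be rendered locally or shipped to another processing element.
from collections import defaultdict

TILE = 32  # assumed tile size in pixels

def bin_triangles(triangles, width, height):
    """Map each triangle (a list of (x, y) vertices) to every tile
    its screen-space bounding box touches."""
    buckets = defaultdict(list)
    for tri_id, verts in enumerate(triangles):
        xs = [x for x, _ in verts]
        ys = [y for _, y in verts]
        # Clamp the bounding box to the screen, then walk covered tiles.
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                buckets[(tx, ty)].append(tri_id)
    return buckets

tris = [[(0, 0), (40, 0), (0, 40)],            # straddles four tiles
        [(100, 100), (110, 100), (100, 110)]]  # fits in a single tile
print(bin_triangles(tris, 1600, 1200))
```

Once binned, each bucket carries everything needed to shade its own pixels, which is what would make farming tiles out over a link like PCI Express plausible in the first place.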
 
Well it looks to me like if you have Eurasia you have vertex processing. The reason I think it is ideal is because of its small silicon area. Given this it should make a huge impact on chip yields for Intel. On top of that, given their excellent ability to produce highly optimized logic blocks for making their processors one could envisage the Eurasia core running at the full speed of the processor.

I've lost you here....
 
Eurasia offers a unified shader similar to Xenos, doesn't it?

I think Dave's saying that this ought to lead to a smaller silicon area as no additional vertex shaders need to be added. Whether or not this is the case I don't know (would separate pixel & vertex shaders take up more space?), but I can certainly see the logic behind it.
 
I don't even know what target market he is referring to exactly, because 3GHz for a graphics core is just wishful thinking, and will be for years to come.

The Intel SoCs into which the 2700G (MBX Lite) has been integrated so far pair CPUs at over 600MHz with a graphics unit running at merely 75MHz.

Since the Eurasia/SGX announcements mention things like procedural geometry (and that the specifications exceed DX9.0), it doesn't take a wizard to assume that it's able to handle at least as complex geometry calls as required in WGF2.0, but that's not my question mark.
 