Old 30-Apr-2012, 06:26   #51
Ninjaprime
Member
 
Join Date: Jun 2008
Posts: 337
Default

Quote:
Originally Posted by 3dilettante View Post
The big chip would be targeting compute and professional graphics segments.
$1000 would be way too low for a top-end Quadro.
So you think it's a completely professional/compute product with no consumer desktop component? That was my original thought. Though the buzz seems to be "wait for BigK, it's going to blow away the 680", etc...
Ninjaprime is offline   Reply With Quote
Old 30-Apr-2012, 06:35   #52
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,487
Default

From how it's described, it seems like it is going to be tailored to fit those markets.
It might still fit into a high-end enthusiast single-GPU product as long as it doesn't lose to a single 680.
The transistor count should give it plenty to work with, and Nvidia has left the upper TDP range empty, which in these power-limited scenarios means a chip running in that range should be able to win.

The 690 card does mean that the big chip may not have the top gaming bracket.
__________________
Dreaming of a .065 micron etch-a-sketch.
3dilettante is offline   Reply With Quote
Old 30-Apr-2012, 07:32   #53
Davros
Regular
 
Join Date: Jun 2004
Posts: 11,082
Default

I wonder if NV will try a little experiment: release it as a Quadro-only product and see if the high-end gamers buy it.
__________________
Guardian of the Bodacious Three Terabytes of Gaming Goodness™
Davros is online now   Reply With Quote
Old 30-Apr-2012, 07:51   #54
AlphaWolf
Specious Misanthrope
 
Join Date: May 2003
Location: Treading Water
Posts: 8,143
Default

Quote:
Originally Posted by Davros View Post
I wonder if NV will try a little experiment: release it as a Quadro-only product and see if the high-end gamers buy it.
You could build a quad-SLI system for the price of a high-end Quadro, so I doubt it.
AlphaWolf is offline   Reply With Quote
Old 30-Apr-2012, 08:44   #55
Silent_Buddha
Regular
 
Join Date: Mar 2007
Posts: 10,491
Default

Quote:
Originally Posted by Dooby View Post
So, 690 wasn't BigK, it was dual 680. How utterly boring:
Pretty much exactly what I expected though. BigK is unlikely to show up before the 7xx series which is probably slated for the fall or winter quarter.

So 680 will be the top single chip solution while 690 will be the top card solution for the 6xx line.

Heck, it wouldn't even surprise me if BigK were relegated to the ultra-enthusiast (~1000 USD) segment when it launches in the 7xx series, with the chip's focus being on prosumer/professional/HPC markets; the consumer space will just be there for inventory bleed-off and/or salvage parts. With that, Nvidia would use smaller dies tailored for consumer use to fill everything from the 780 on down. I certainly wouldn't be surprised if Nvidia abandoned the big-die strategy for the consumer space.

Now to see what Nvidia comes up with in the lower segments.

Regards,
SB
Silent_Buddha is offline   Reply With Quote
Old 01-May-2012, 02:35   #56
Ryan Smith
Member
 
Join Date: Mar 2010
Posts: 170
Default

Quote:
Originally Posted by Davros View Post
I wonder if NV will try a little experiment: release it as a Quadro-only product and see if the high-end gamers buy it.
They would have to release GeForce drivers for it. The Quadro drivers aren't exactly performant (never mind the update schedule).
Ryan Smith is offline   Reply With Quote
Old 01-May-2012, 09:13   #57
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,715
Default

Quote:
Originally Posted by Silent_Buddha View Post
Heck, it wouldn't even surprise me if BigK were relegated to the ultra-enthusiast (~1000 USD) segment when it launches in the 7xx series, with the chip's focus being on prosumer/professional/HPC markets; the consumer space will just be there for inventory bleed-off and/or salvage parts. With that, Nvidia would use smaller dies tailored for consumer use to fill everything from the 780 on down. I certainly wouldn't be surprised if Nvidia abandoned the big-die strategy for the consumer space.
GK110 doesn't sound like it'll appear for desktop all that soon. If by that time 28nm yields/capacities, and by extension manufacturing costs, haven't normalized, it won't be good news for either AMD's or NVIDIA's desktop sales (well, it'll most likely be high margins, low volume).

As for NV abandoning the big-die strategy in some way for desktop, I wouldn't be much surprised either in the long run, but for the time being it doesn't seem likely that professional market sales (despite big margins) can absorb the R&D expenses for such a high-complexity chip.
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Ailuros is offline   Reply With Quote
Old 03-May-2012, 00:41   #58
iMacmatician
Member
 
Join Date: Jul 2010
Location: United States of America
Posts: 462
Default

I'm not sure if this has been pointed out before in the long Kepler thread… but someone at the SemiAccurate forums noted that the Kepler GPUs for the Oak Ridge upgrade will have 6 GB of memory. That seems to indicate the GK110 will have either a 384-bit bus or a 512-bit bus that's cut down to 384-bit on the particular cards they'll use.
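For what it's worth, here's a quick back-of-the-envelope check of which bus widths can actually land on 6 GB with uniform GDDR5 chips. Host-only code, builds with any C++ compiler (nvcc included); the 2 Gbit / 4 Gbit densities and the clamshell option are my assumptions about typical parts, not anything Nvidia has confirmed.

Code:
// Which bus widths can yield exactly 6 GB with uniform GDDR5 devices?
// Assumes one 32-bit channel per device, common 2 Gbit / 4 Gbit densities,
// and optional clamshell mode (two devices per channel).
#include <cstdio>

int main() {
    const int bus_widths[] = {256, 384, 512};
    const double chip_gbit[] = {2.0, 4.0};
    for (int b = 0; b < 3; ++b) {
        int channels = bus_widths[b] / 32;
        for (int c = 0; c < 2; ++c) {
            for (int per_chan = 1; per_chan <= 2; ++per_chan) {
                double gb = channels * per_chan * chip_gbit[c] / 8.0;
                if (gb == 6.0)
                    printf("%d-bit bus: %d x %.0f Gbit devices -> 6 GB\n",
                           bus_widths[b], channels * per_chan, chip_gbit[c]);
            }
        }
    }
    return 0;   // only the 384-bit configurations print; 256/512-bit give 4 or 8 GB
}

Only the 384-bit configurations hit 6 GB, which is presumably why the figure points at either a native 384-bit bus or a wider bus with a quarter of it disabled.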
iMacmatician is online now   Reply With Quote
Old 03-May-2012, 01:42   #59
trinibwoy
Meh
 
Join Date: Mar 2004
Location: New York
Posts: 10,102
Default

Quote:
Originally Posted by iMacmatician View Post
I'm not sure if this has been pointed out before in the long Kepler thread… but someone at the SemiAccurate forums noted that the Kepler GPUs for the Oak Ridge upgrade will have 6 GB of memory. That seems to indicate the GK110 will have either a 384-bit bus or a 512-bit bus that's cut down to 384-bit on the particular cards they'll use.
Nice find. I doubt they would saddle such a high-profile deployment with salvage chips, so maybe it's 384-bit.

Quote:
Originally Posted by Ailuros View Post
GK110 doesn't sound like it'll appear for desktop all that soon.
Damn them all to hell.
__________________
What the deuce!?
trinibwoy is offline   Reply With Quote
Old 03-May-2012, 01:57   #60
ninelven
PM
 
Join Date: Dec 2002
Posts: 1,460
Default

Meh, I imagine they could sell them at $799 and still make decent coin... The only reason I can see for delaying consumer availability would be supply constraints (which is probably an issue); why sell for $799 when you can sell the same chip for much, much more?
__________________
//
ninelven is offline   Reply With Quote
Old 03-May-2012, 14:40   #61
tunafish
Member
 
Join Date: Aug 2011
Posts: 408
Default

Quote:
Originally Posted by Ailuros View Post
As for NV abandoning the big-die strategy in some way for desktop, I wouldn't be much surprised either in the long run, but for the time being it doesn't seem likely that professional market sales (despite big margins) can absorb the R&D expenses for such a high-complexity chip.
The professional segment is no longer just "high-margin", it's also high-revenue. Last quarter, the "Professional Solutions" business unit (quadros and teslas) brought in $221M, while all the rest of their GPU products brought in $621M. There's no way, no way at all, that the GF100-based products were a third of their consumer sales. I'd wager that all GTX models aren't a third of their consumer sales. The consumer GPUs really aren't needed to support the professional business anymore.
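Just to put a number on the comparison (figures taken from the post as-is, I haven't re-checked the filings; host-only snippet):

Code:
// Ratio check on the revenue figures quoted above.
#include <cstdio>

int main() {
    const double professional = 221e6;   // "Professional Solutions" revenue, last quarter
    const double consumer     = 621e6;   // everything else GPU
    printf("professional / consumer = %.1f%%\n",
           100.0 * professional / consumer);   // ~35.6%, i.e. already over a third
    return 0;
}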
tunafish is offline   Reply With Quote
Old 03-May-2012, 17:23   #62
Silent_Buddha
Regular
 
Join Date: Mar 2007
Posts: 10,491
Default

Quote:
Originally Posted by tunafish View Post
The professional segment is no longer just "high-margin", it's also high-revenue. Last quarter, the "Professional Solutions" business unit (quadros and teslas) brought in $221M, while all the rest of their GPU products brought in $621M. There's no way, no way at all, that the GF100-based products were a third of their consumer sales. I'd wager that all GTX models aren't a third of their consumer sales. The consumer GPUs really aren't needed to support the professional business anymore.
In terms of unit volume, I wouldn't be surprised if the 580/570 still sold more units than their Quadro and Tesla variants.

However, in terms of revenue I wouldn't be surprised if the Quadro and Tesla variants brought in more revenue.

The consumer side still serves as a good avenue to bleed off excess supply (more wafers = cheaper per-wafer costs, I'd imagine) as well as a good area to dump salvage chips. And additional revenue, even at significantly lower margins, is still additional revenue. But it probably is true that they don't absolutely need to sell their big-die GPUs in the consumer market to make a profit anymore.

I suspect that's going to be the main function of the consumer version of GK110 (assuming there is one), as I wouldn't be at all surprised if GK110 were only marginally faster than GK114 (assuming there's a GK114) in the majority of gaming workloads. All pure speculation, of course. It may or may not end up like this.

Regards,
SB
Silent_Buddha is offline   Reply With Quote
Old 03-May-2012, 18:07   #63
Gipsel
Senior Member
 
Join Date: Jan 2010
Location: Hamburg, Germany
Posts: 1,450
Default

Quote:
Originally Posted by tunafish View Post
The professional segment is no longer just "high-margin", it's also high-revenue. Last quarter, the "Professional Solutions" business unit (quadros and teslas) brought in $221M, while all the rest of their GPU products brought in $621M. There's no way, no way at all, that the GF100-based products were a third of their consumer sales. I'd wager that all GTX models aren't a third of their consumer sales. The consumer GPUs really aren't needed to support the professional business anymore.
IIRC most of that revenue comes from the Quadro line. And you should have a look at what kinds of GPUs are sold there. That is quite a bit more than just the top-of-the-line model (they sell every crappy and slow GPU there too; they are just more expensive than the consumer versions). So I doubt that more than a third of the professional solutions revenue actually comes from GF100/110.

Last edited by Gipsel; 03-May-2012 at 18:13.
Gipsel is offline   Reply With Quote
Old 03-May-2012, 23:12   #64
swaaye
Entirely Suboptimal
 
Join Date: Mar 2003
Location: WI, USA
Posts: 7,336
Default

So perhaps BigK is going to be heavily slanted towards GPGPU then? That would be interesting to see...
swaaye is offline   Reply With Quote
Old 04-May-2012, 22:24   #65
Davros
Regular
 
Join Date: Jun 2004
Posts: 11,082
Default

Fermi was, wasn't it?
__________________
Guardian of the Bodacious Three Terabytes of Gaming Goodness™
Davros is online now   Reply With Quote
Old 05-May-2012, 01:34   #66
Grall
Invisible Member
 
Join Date: Apr 2002
Location: La-la land
Posts: 6,855
Default

Quote:
Originally Posted by swaaye View Post
So perhaps BigK is going to be heavily slanted towards GPGPU then?
Considering the brutal specs that are being bandied about, I think that's very much a given.

And yes, it will be very interesting to see...! I myself would like to see proper pre-emptive task switching support on GPUs. I don't know how much extra hardware that would require; probably quite a lot, since no moves in that direction have been made so far, but it would be quite helpful in a lot of situations if GPGPU is to move out of the fringe rut it's been stuck in so far.
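In the meantime, the usual way to keep a non-preemptive GPU responsive is to chop long-running work into short kernel launches so there is a yield point between them. A minimal CUDA sketch of that idea; the kernel, its body and the chunk size are made up for illustration, and this is only the common workaround pattern, not anything announced for BigK:

Code:
// Workaround sketch for GPUs without preemption: split one long-running kernel
// into short launches so the display and other contexts get a turn in between
// (a single monolithic launch would hold the device until it finishes).
#include <cuda_runtime.h>

__global__ void heavy_step(float *data, int offset, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        data[offset + i] = data[offset + i] * 1.0001f + 1.0f;  // stand-in workload
}

void run_chunked(float *d_data, int total) {
    const int chunk = 1 << 20;                         // ~1M elements per launch (tunable)
    for (int off = 0; off < total; off += chunk) {
        int n = (total - off < chunk) ? (total - off) : chunk;
        heavy_step<<<(n + 255) / 256, 256>>>(d_data, off, n);
        cudaStreamSynchronize(0);                      // explicit yield point between chunks
    }
}

Hardware preemption would make that kind of manual chunking unnecessary.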
__________________
"Du bist Metall!"
-L.V.
Grall is offline   Reply With Quote
Old 05-May-2012, 21:23   #67
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,487
Default

I thought the general directions for GCN and Nvidia's compute architectures had preemption planned as one of the next steps, if not buried somewhere in current hardware.
__________________
Dreaming of a .065 micron etch-a-sketch.
3dilettante is offline   Reply With Quote
Old 05-May-2012, 22:27   #68
lanek
Senior Member
 
Join Date: Mar 2012
Location: Switzerland
Posts: 1,228
Default

Quote:
Originally Posted by 3dilettante View Post
I thought the general directions for GCN and Nvidia's compute architectures had preemption planned as one of the next steps, if not buried somewhere in current hardware.
For Nvidia this is the case, but they are in a bad position with their 28nm chips (not their cards; the GK104 is excellent). For an unknown reason it seems they have not been able to complete the transition from Fermi to Kepler in this "first" (and I insist on "first") generation of 28nm...

Let's take the facts: the GK110 will be released before we ever see the initially planned Kepler GK100, and the GK104 is being pushed to its limit to serve as the flagship. The good parts of Kepler have been pushed in marketing like crazy, asking us to forget the bad.

The GK110 will appear, but after Nvidia's compute conference it is nearly clear that it is not especially (or maybe not at all) aimed at being a "mainstream" flagship (meaning a new flagship among gaming cards).

This will certainly lead to the birth of a hybrid card derived from the GK110 at the end of the year... something in between.

I followed the conference, and... nothing. I imagined that, as with the GCN conference in June 2011, we would get some solid info on what the so-called "BigK" architecture is... but nothing came. We were flooded with future possibilities, but nothing solid.

Last edited by lanek; 05-May-2012 at 22:38.
lanek is online now   Reply With Quote
Old 06-May-2012, 01:04   #69
rpg.314
Senior Member
 
Join Date: Jul 2008
Location: /
Posts: 4,274
Send a message via Skype™ to rpg.314
Default

Quote:
Originally Posted by 3dilettante View Post
I thought the general directions for GCN and Nvidia's compute architectures had preemption planned as one of the next steps, if not buried somewhere in current hardware.
IIRC, for AMD compute preemption is planned for 2013 and graphics preemption is planned for 2014.
rpg.314 is offline   Reply With Quote
Old 08-May-2012, 15:27   #70
Blazkowicz
Senior Member
 
Join Date: Dec 2004
Posts: 4,970
Default

Quote:
Originally Posted by tunafish View Post
The professional segment is no longer just "high-margin", it's also high-revenue. Last quarter, the "Professional Solutions" business unit (quadros and teslas) brought in $221M, while all the rest of their GPU products brought in $621M. There's no way, no way at all, that the GF100-based products were a third of their consumer sales. I'd wager that all GTX models aren't a third of their consumer sales. The consumer GPUs really aren't needed to support the professional business anymore.
Thanks. I was going to point out that they aim for relatively high volume, just as other vendors build big, non-consumer chips: POWER7, SPARC, etc.

I can see even African universities buying BigK cards. Just stuff one card in a cheap PC with 32 GB of memory, and add another card later to upgrade. It's easy.

Though I have to wonder which is easier to program for, or able to run more simulations and such: GPGPU or a small cluster?

For about the price of a Quadro 6000, you could in the near future run four consumer Haswell PCs, connected to each other with cheap dedicated 10Gb RJ45 Ethernet (one card with four ports in each PC).


Quote:
Originally Posted by rpg.314 View Post
IIRC, for AMD compute preemption is planned for 2013 and graphics preemption is planned for 2014.
Great, I've been wanting this to happen. I would like to run multiplayer gaming for at least four players from a single GPU, with multi-seat and thin clients.
I have a feeling this might require FireGL/Quadro drivers and Windows Server though; Linux hacking will get there, but with lower performance.
Blazkowicz is online now   Reply With Quote
Old 09-May-2012, 03:01   #71
3dcgi
Senior Member
 
Join Date: Feb 2002
Posts: 2,231
Default

Quote:
Originally Posted by Blazkowicz View Post
Great, I've been wanting this to happen. I would like to run multiplayer gaming for at least four players from a single GPU, with multi-seat and thin clients.
I have a feeling this might require FireGL/Quadro drivers and Windows Server though; Linux hacking will get there, but with lower performance.
Why not just use 4 lower cost GPUs if that is your use case?
3dcgi is offline   Reply With Quote
Old 09-May-2012, 11:02   #72
denev2004
Member
 
Join Date: Apr 2010
Location: China
Posts: 143
Send a message via MSN to denev2004 Send a message via Skype™ to denev2004
Default

Quote:
Originally Posted by 3dcgi View Post
Why not just use 4 lower cost GPUs if that is your use case?
I guess it's probably a driver problem. If you put them in a single computer, then you'd better just use one or two GPUs.
__________________
Well I'm not a native English speaker so there might be misuse through my words. I just hope it won't cause too much misunderstanding.
denev2004 is offline   Reply With Quote
Old 09-May-2012, 13:09   #73
Blazkowicz
Senior Member
 
Join Date: Dec 2004
Posts: 4,970
Default

Quote:
Originally Posted by 3dcgi View Post
Why not just use 4 lower cost GPUs if that is your use case?
Laziness, and it isn't really guaranteed to work for now. Nowadays you can even use a real graphics card from a virtual machine, on Intel server hardware or on AMD server or consumer hardware, but from what I've read it breaks down after one card, or some people can't get their storage card recognised, etc.; patch this if you run an AMD card, and so on.
The tech, or at least the software and support, is still in its infancy. I'm waiting it out a bit; I want to tinker with this, but I'm sure I'll hit quirks and hurdles and that it will drive me totally mad.
Blazkowicz is online now   Reply With Quote
Old 09-May-2012, 13:26   #74
Blazkowicz
Senior Member
 
Join Date: Dec 2004
Posts: 4,970
Default

Quote:
Originally Posted by denev2004 View Post
I guess it's probably a driver problem. If you put them in a single computer, then you'd better just use one or two GPUs.
I'm pretty sure it will be a Quadro feature.
Nowadays you can run one instance of Windows on one big PC and have it run the show for forty $100 thin-client terminals. The tech is fully included in XP Pro and 7 Pro, but you need a Windows Server edition with a lot of licensing.

So yeah, one BigK running the show for the whole household, beamed to desktops, tablets and laptops; you get what I mean. You could even stick it in the attic or basement, or just in the most badass desktop alongside another GPU: a private OnLive-style cloud network.
I'm not sure they will give that away and lose revenue on more, smaller consumer GPUs.

Well, if someday I have a big household with a nice income and many kids, I'll be tempted to do this and pay the big Microsoft and Quadro licenses, just to make a point.
Blazkowicz is online now   Reply With Quote
Old 09-May-2012, 14:14   #75
Psycho
Member
 
Join Date: Jun 2008
Location: Copenhagen
Posts: 681
Default

Quote:
Originally Posted by Blazkowicz View Post
I would like to run multiplayer gaming for at least four players from a single GPU, with multi-seat and thin clients.
I have a feeling this might require FireGL/Quadro drivers and Windows Server though; Linux hacking will get there, but with lower performance.
Why would you need preemption for that?
Our rendering servers deliver the highest throughput with two instances (of our application, i.e. one D3D9 context per instance) per graphics card, and adding another card with its own load barely affects performance (i.e. it scales nicely).
This is Windows 7 (for the particular machine with two cards, but 2008 R2 works the same) and Radeons.
For GeForces, things really slow down with several contexts per card though (and the CPU load is much higher, and the drivers break down after a few weeks, so that's not what we use in servers anyway...).

Sure, you could have more fine-grained multitasking than this with preemption, but for normal realtime loads it shouldn't be necessary. I don't know how fine-grained it is on the Radeons, but it's more than once per present() at least.

Quote:
Originally Posted by Blazkowicz View Post
Nowadays you can even use a real graphics card from a virtual machine, on Intel server hardware or on AMD server or consumer hardware, but from what I've read it breaks down after one card.
Yeah, there seem to be some problems with acceleration and using the second card from remote or service contexts; I will have another look at that soon. (Traditionally our app runs from an auto-logged-in local user to get the acceleration.)
Psycho is offline   Reply With Quote
