NV50 SPECS ???

MPI said:
No, a cache isn't addressable, they're supposed to be "transparent". Local memory (like, for instance, instruction reordering buffers, registers or whatever) isn't necessarily a cache.

What if you implement a cache on top of something that is addressable?
Does that make it not a cache anymore?
 
LeGreg said:
What if you implement a cache on top of something that is addressable?
Does that make it not a cache anymore?

Huh? All caches are on top of something addressable, usually the main memory space, but it can of course be hard drives or whatever.

But you still can't address the _cache_.

Or didn't I get the question right?
 
Just something to note .. I thought Nvidia weren't going to use the NV5X naming convention anymore .. or at least that's what I thought when the CEO talked in December.

US
 
RussSchultz said:
I think you're proffering a very narrow definition of 'cache'.

Yeah, maybe... But I think it's a good definition. I mean, the _word_ cache in and of itself _could_ mean just any temporary storage, but for 20-30 years it has meant pretty specific implementations of intermediate storage in the world of computer architecture. It's all semantics, of course, but there are so many other words to use (buffers, etc.) for the various local memory variants, so why muddle this one up? If you need one at all, that is... 'Local memory' fits the bill just fine, IMHO.

There's a reason the local memory of the SPUs in the Cell arch. isn't called a 'cache' by S/I/T, for instance.

Same for distributed/parallel computing: things would be very muddled if the terms 'local memory' and 'cache' were interchangeable. Best to use the accepted terms, I think.
 
I'd like to make a couple of comments on the original rumor:

I've come across some interesting new details on the NV50. Remember it can be just a rumor or dunno, so take it with a little grain of salt :)))

These are the Specs for Nvidia's NV50

# 375-400 million transistors on 90-nm process
# 550-600 MHz core
# 1600 MHz 256MB (support for up to 512MB) GDDR3
# Graphics Core 256-bit
# Memory Interface 256-bit
# 32 (24 real) pixel pipelines (on the Ultra model)
# 32 Vertex Shader Engines
# 51.2 GB/sec Bandwidth (GDDR3)
# Fillrate 12.8 billion texels/sec
# Textures per Pixel 32 (maximum in a single rendering pass)
# Nvidia's LMA-IV
# DirectX 9.0d UMI-23
# Release date Q3 2005.
1. First of all, I think we're all expecting a refresh to the NV40 in the first half of this year, so a new architecture in Q3 seems questionable. I'm willing to bet that nVidia will attempt to keep milking the NV4x architecture until WGF.

2. What the hell does a 256-bit graphics core mean? Seriously?

3. 32 (24 real) pixel pipelines could mean 32 pixel pipelines with 24 ROPs. That doesn't sound unreasonable given the (rumored, probably guessed) clock speeds.

4. Same number of vertex and pixel pipelines would imply unified pipelines. A wealth of more solid information to date has seemed to indicate that nVidia will not be implementing a unified architecture, at least in this next generation.

5. The Texels/sec calculation doesn't make any sense at all (closest to that would be 24 * 550MHz, but that's 13.2 BT/sec).

In sum, total and complete rubbish. I'm sure the Inquirer either has a story in the works, or has already posted on this.
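For what it's worth, the rumored numbers can be sanity-checked with a little arithmetic. A quick sketch (my own figuring, and it assumes the 1600 MHz is the effective GDDR3 data rate):

```python
# Sanity-check the rumored NV50 numbers.

# Bandwidth: effective data rate * bus width in bytes.
bandwidth = 1600e6 * (256 / 8) / 1e9   # GB/s
print(bandwidth)                        # 51.2 -- matches the rumor

# Fillrate: pipelines * core clock, in billions of texels/sec.
print(24 * 550e6 / 1e9)                 # 13.2 -- not the claimed 12.8
print(32 * 400e6 / 1e9)                 # 12.8 -- the only simple match
```

So the bandwidth figure is internally consistent, but 12.8 GTexels/sec would require something like 32 pipes at 400 MHz, which contradicts the rumored 550-600 MHz core.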
 
From a programmer's point of view, the GPU doesn't directly address any kind of memory. The cache I mentioned earlier is some kind of in-between stuff: it cannot be directly addressed, just like conventional video memory. Yet programmers have certain control over its usage, i.e. what kind of data should be stored in it, how frequently it should be updated, etc. The rumored 10MB eDRAM in the Xenon GPU is a specific implementation -- it can only contain framebuffer data, and the programmer has no control over it.
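A toy sketch of the distinction being drawn (my own illustration; the names `pin`/`evict_unpinned` are invented, not any real GPU API): the programmer can hint at what the in-between storage holds, but can never address it directly.

```python
class HintedCache:
    """Non-addressable staging storage: callers can hint at residency
    (pin/unpin) but reads still go through backing-store addresses."""
    def __init__(self, backing):
        self.backing = backing   # the addressable resource (e.g. VRAM)
        self.lines = {}          # internal storage, never exposed
        self.pinned = set()      # programmer-controlled residency hints
    def pin(self, addr):
        """Hint: keep this address resident (the 'certain control')."""
        self.pinned.add(addr)
        self.lines[addr] = self.backing[addr]
    def evict_unpinned(self):
        """E.g. between frames: only hinted data survives."""
        self.lines = {a: v for a, v in self.lines.items() if a in self.pinned}
    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.backing[addr]
        return self.lines[addr]

vram = {0: "texture", 1: "vertices"}
cache = HintedCache(vram)
cache.pin(0)
cache.read(1)
cache.evict_unpinned()
print(sorted(cache.lines))  # [0] -- only the pinned address remains
```

The Xenon eDRAM as described would be the opposite extreme: fixed-function, with no `pin`-style control at all.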
 
MPI said:
Yeah, maybe... But I think it's a good definition. I mean, the _word_ cache in and of itself _could_ mean just any temporary storage, but for 20-30 years it has meant pretty specific implementations of intermediate storage in the world of computer architecture. It's all semantics, of course, but there are so many other words to use (buffers, etc.) for the various local memory variants, so why muddle this one up? If you need one at all, that is... 'Local memory' fits the bill just fine, IMHO.

There's a reason the local memory of the SPUs in the Cell arch. isn't called a 'cache' by S/I/T, for instance.

Same for distributed/parallel computing: things would be very muddled if the terms 'local memory' and 'cache' were interchangeable. Best to use the accepted terms, I think.
I was actually thinking of "disk cache", or "font cache", or maybe even "texture cache", or TLB entries. None of these have anything to do with addressable memory, and they may use local memory, etc. They all, however, represent the idea that there is "closer"/"faster" temporary storage for the elements you're using.

In the discussion of memory architectures, of course, a cache is more specific and refers to exactly what you've presented. But that isn't the end-all-be-all of what a cache can be.
 
I certainly see what you mean, but I responded to a response (heh) in the context of local memory on a GPU, you know...

Furthermore, those caches you mention are usually also transparent and non-addressable, seen from one abstraction layer up. If we're talking about the disk cache, for instance, as in Linux or Windows, there's no way for the application to see whether the data it is requesting is coming from the OS disk cache or from the disk itself. Even if they are implemented in regular memory space, they basically function like a hierarchical cache; the 'addressability' refers not to the main memory space they are implemented in (which is one abstraction layer down) but to the disk's address space.
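A minimal sketch of that transparency property (my own illustration, not real OS code): the caller only ever uses backing-store addresses, and whether a read was served from the cache or the store is invisible to it.

```python
class BackingStore:
    """Stands in for the disk: slow but addressable."""
    def __init__(self, data):
        self.data = data
        self.slow_reads = 0
    def read(self, addr):
        self.slow_reads += 1     # count "slow" accesses
        return self.data[addr]

class ReadThroughCache:
    """Transparent layer: same read(addr) interface as the store;
    its own lines are not addressable from outside."""
    def __init__(self, store):
        self.store = store
        self.lines = {}          # internal only -- invisible to callers
    def read(self, addr):
        if addr not in self.lines:                  # miss: fetch once
            self.lines[addr] = self.store.read(addr)
        return self.lines[addr]                     # hit or fresh fill

disk = BackingStore({7: "sector data"})
fs = ReadThroughCache(disk)
print(fs.read(7))        # "sector data" -- fetched from the store
print(fs.read(7))        # "sector data" -- identical, served from cache
print(disk.slow_reads)   # 1 -- the second read never touched the "disk"
```

The cache has no addresses of its own; the only address space the caller sees is the backing store's, which is the point about abstraction layers above.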
 
NV50 Rumors

Nvidia has cancelled the "NV50" project.

Any information about nvidia's next generation chip is top secret.

Nvidia could be trying to keep things quiet to work on their project(s) without letting competitors get an idea of what they are going to produce, thereby having a "surprise" advantage.

The following is speculation:

They also could be dropping the NVXX model-number scheme (or specifically "NV50") and creating a new product.

Nvidia might also want to hold off because of...
Longhorn - They don't want to release a new (expensive) card that isn't Longhorn ready.
WGF 2.0 - Compatibility and features of "DirectX 10" in Longhorn may not be ready yet. (Shader Model 4.0)
Nvidia might be waiting for 3D apps, CPUs, and other hardware/software to catch up. The GeForce 6 series is currently fast enough to run most games, so there may not yet be demand for a new architecture. High-end video cards don't pay off as much as mainstream cards.
 
Re: NV50 Rumors

secured2k said:
Nvidia has cancelled the "NV50" project.
You seem quite certain of yourself. As far as I know, this was never put forth with any more credibility than the Inquirer can offer, i.e. none.

Nvidia might be waiting for 3D apps, CPUs, and other hardware/software to catch up. The GeForce 6 series is currently fast enough to run most games, so there may not yet be demand for a new architecture. High-end video cards don't pay off as much as mainstream cards.
Performance isn't the issue. I guarantee you that nVidia wants to always have the highest-performing part available. The reason why nVidia may not release a NV5x part this year would be that they can extend the performance of the NV4x parts. If anything, they would only be holding back on a new architecture because they feel that their current parts have enough technology: they want to be certain they always have leadership in performance.
 
Re: NV50 Rumors

secured2k said:
Nvidia has cancelled the "NV50" project.

Any information about nvidia's next generation chip is top secret.

Nvidia could be trying to keep things quiet to work on their project(s) without letting competitors get an idea of what they are going to produce, thereby having a "surprise" advantage.

The following is speculation:

They also could be dropping the NVXX model-number scheme (or specifically "NV50") and creating a new product.

Nvidia might also want to hold off because of...
Longhorn - They don't want to release a new (expensive) card that isn't Longhorn ready.
WGF 2.0 - Compatibility and features of "DirectX 10" in Longhorn may not be ready yet. (Shader Model 4.0)
Nvidia might be waiting for 3D apps, CPUs, and other hardware/software to catch up. The GeForce 6 series is currently fast enough to run most games, so there may not yet be demand for a new architecture. High-end video cards don't pay off as much as mainstream cards.



Nvidia has not cancelled NV50 for certain. That was only internet rumor-mongering by the Inquirer and its ilk. The R400 was officially confirmed to be cancelled and/or reworked into Xbox 2 graphics and the R600, but as for the NV50, Nvidia has said nothing.
 
NV47 first (spring-summer), NV50 later (winter-spring). I suspect that NV50 will be a WGF chip.

As for naming - is it really that important? NV47 is a new NV4x-class high-end chip, whether it will be known as NV48, G70, NV50 or whatever. NV50 is the next-generation NV core, probably WGF-compatible.
 
Chalnoth said:
I'd like to make a couple of comments on the original rumor:

I've come across some interesting new details on the NV50. Remember it can be just a rumor or dunno, so take it with a little grain of salt :)))

1. First of all, I think we're all expecting a refresh to the NV40 in the first half of this year, so a new architecture in Q3 seems questionable. I'm willing to bet that nVidia will attempt to keep milking the NV4x architecture until WGF.

2. What the hell does a 256-bit graphics core mean? Seriously?

3. 32 (24 real) pixel pipelines could mean 32 pixel pipelines with 24 ROPs. That doesn't sound unreasonable given the (rumored, probably guessed) clock speeds.

4. Same number of vertex and pixel pipelines would imply unified pipelines. A wealth of more solid information to date has seemed to indicate that nVidia will not be implementing a unified architecture, at least in this next generation.

5. The Texels/sec calculation doesn't make any sense at all (closest to that would be 24 * 550MHz, but that's 13.2 BT/sec).

In sum, total and complete rubbish. I'm sure the Inquirer either has a story in the works, or has already posted on this.

In addition: why GDDR3 for a 2006 product?

WGF 1.0, or WGF 2.0?

He obviously meant WGF2.0.

By the way, has anyone noticed this one?

The sessions explore the technical details and opportunities with advances in Microsoft DirectX 10, Microsoft ClearType, gamma, desktop window manager, the monitor stack, next-generation printing, Windows Color System, and the Windows Longhorn Display Driver Model.

http://www.microsoft.com/whdc/winhec/tracks2005/w05tracks.mspx#EIAA

WGF is being mentioned further down the line, of course... (credit to Demirug for pointing it out).
 
DaveBaumann said:
He obviously meant WGF2.0.

Not necessarily. It depends on what's been looked at to formulate the opinions.

Considering that even Intel's integrated accelerators might make WGF1.0 (or not? well at least say R3xx), I don't see why meaning just WGF1.0 makes any sense. What am I missing here?
 