More NV30 speculation!

My issue is that a bus running at 500 MHz SDR will provide more data than a DDR part clocked at 250 MHz. Yes, they both have roughly the same theoretical throughput, but DDR does have some small amount of overhead vs. SDR.
 
I must apologize to you demalion.
I take pride in staying out of, and not starting, bad fights, and I certainly failed at that here. Yes, that was a low blow. And yes, the "you lose me" phrase isn't a reason to give someone a punch. (It wasn't the reason for me either.)

I (quite possibly incorrectly) sensed that you were deliberately "not understanding" what I said to make it easier to argue against. And that's something I really dislike. I'll repeat that I could have been wrong, and if so, I'm sorry.

I've lately been in a state of mind that I should know is incompatible with internet discussions; sorry that you were the one to cross my path. That alone should be reason enough for me to leave the discussion.


But since you brought up "completely decoupled" in your last post, I'll make one more try at it, from a different angle.

It was just a small sidestep, and no biggie. The point was the difference that in synchronous communication, both sender and receiver are well aware that the symbols are to be sampled, and of the exact timing of the sampling. When sampling a time-continuous audio stream, only the receiver (sample-and-hold circuit, A/D converter, ...) cares about the sampling. The transmitter (a singer or whatever) doesn't try to generate "symbols" in sync with the sampling.

The reason I mentioned it was just to say that even if there is a difference (both sides vs. only one side caring about the sampling), it's still close enough to use the same terminology.
Somewhere else I've seen someone objecting to the memory/audio parallel, saying they were two different cases. And I got the feeling that this difference was his reason.
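To make the one-sided case concrete, here's a minimal sketch in Python (the 1 kHz tone and the NumPy details are just my assumptions for illustration):

[code]
import numpy as np

# The source is a function of continuous time -- a stand-in for the
# singer. It has no notion of symbols or of when it will be sampled.
def source(t):
    return np.sin(2 * np.pi * 1000.0 * t)  # assumed 1 kHz tone

fs = 44_100                                # the receiver's sample clock
sample_times = np.arange(0, 0.01, 1 / fs)  # instants the receiver picks
samples = source(sample_times)             # only this side "samples"

# In synchronous communication (SDR/DDR), by contrast, the sender
# drives a new symbol per clock edge precisely so the receiver can
# sample it -- both sides are built around the same sampling instants.
[/code]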
 
jb said:
My issue is that a bus running at 500 MHz SDR will provide more data than a DDR part clocked at 250 MHz. Yes, they both have roughly the same theoretical throughput, but DDR does have some small amount of overhead vs. SDR.

I'm not so sure. While today's DDR memories certainly do have some bandwidth overhead, I doubt that's the case for all similar technologies. That is, I'm fairly certain the overhead is essentially all on the memory side of things. There doesn't appear to be any bandwidth overhead in the basic signalling, meaning there needn't be any for, say, a point-to-point communication protocol.

And, as it is directly related to memory, listing 250 MHz DDR as 500 MHz is still closer to what you get than just saying 250 MHz.
 
I believe the main inefficiency of DDR vs. SDR is in the addressing. DDR transfers 2x the data per clock cycle of SDR, but only transfers address information at 1x SDR's rate. I remember seeing somewhere that this makes DDR only about 80% faster on average than SDR at the same clock speed, though it probably depends a lot on how you address your data (e.g. block writes would get much closer to 2x than random accesses).
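As a rough back-of-the-envelope sketch (the 128-bit bus width is assumed, and the ~1.8x factor is just the 80% figure above; nothing here is measured):

[code]
BUS_BITS = 128  # assumed bus width

def peak_gbs(clock_mhz, transfers_per_clock):
    """Theoretical peak bandwidth in GB/s."""
    return clock_mhz * 1e6 * transfers_per_clock * BUS_BITS / 8 / 1e9

print(peak_gbs(500, 1))    # 500 MHz SDR: 8.0 GB/s
print(peak_gbs(250, 2))    # 250 MHz DDR: 8.0 GB/s -- identical peak

# If addresses still arrive at only 1x the clock, random accesses
# can't keep the doubled data rate fed; the ~80%-faster-on-average
# figure corresponds to an effective factor of ~1.8 instead of 2:
print(peak_gbs(250, 1.8))  # ~7.2 GB/s effective, under that assumption
[/code]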
 
Basic said:
I must apologize to you demalion.
I take pride in staying out of, and not starting, bad fights, and I certainly failed at that here. Yes, that was a low blow. And yes, the "you lose me" phrase isn't a reason to give someone a punch. (It wasn't the reason for me either.)

I apologize in turn for responding in kind. We are all capable of slipping, and I could have exercised more self-control in my response.

I (quite possibly incorrectly) sensed that you were deliberately "not understanding" what I said to make it easier to argue against. And that's something I really dislike. I'll repeat that I could have been wrong, and if so, I'm sorry.

I've lately been in a state of mind that I should know is incompatible with internet discussions; sorry that you were the one to cross my path. That alone should be reason enough for me to leave the discussion.


But since you brought up "completely decoupled" in your last post, I'll make one more try at it, from a different angle.

I'll address your alternate explanation.

It was just a small sidestep, and no biggie. The point was the difference that in synchronous communication, both sender and receiver are well aware that the symbols are to be sampled, and of the exact timing of the sampling. When sampling a time-continuous audio stream, only the receiver (sample-and-hold circuit, A/D converter, ...) cares about the sampling. The transmitter (a singer or whatever) doesn't try to generate "symbols" in sync with the sampling.

Aha, so the singer and their voice, music, etc. are "completely decoupled" from the A/D converter, but the A/D converter and its clock are not decoupled at all from the 44.1 kHz, etc., output that I was discussing (the way I took your statements).

Your original comment was "This is very much like the case with DDR, the only difference being that the data you sample is sent in sync with your sampling, not completely decoupled as in the audio sampling case." Your use of the word "sent" in that statement, in the context of the statements directly preceding it ("There is no waveform repeated at 44100 times per second. And it does not refer to a physical clock running at 44.1 kHz driving the sampling unit."), seemed to be stating that data sampled at 44.1 kHz was completely decoupled from the clock of the A/D converter.

This was further confused by your use (or, rather, my understanding) of the term "audio stream"...to me, that meant sampled data, in the context of the resampling under discussion.

The reason I mentioned it was just to say that even if there is a difference (both sides vs. only one side caring about the sampling), it's still close enough to use the same terminology.

Hmm...I understand what you mean now, but I addressed your associated comments about sampling/resampling, etc. in that previous post.

Somewhere else I've seen someone objecting to the memory/audio parallel, saying they were two different cases. And I got the feeling that this difference was his reason.

Well, there are a lot of opportunities for confusion in trying to discuss it in this context. :-?
 
I doubt they'll opt for a multichip solution even for professional cards; probably only if they can't outperform the competition with a single-chip solution.

Other than that, if the triangle setup is shared between the chips, then a dual-chip setup could theoretically give twice, or close to twice, the performance. It's still not cost-effective for the time being.
 
What about having a combination of chips for certain tasks? Rampage was one of two chips to be incorporated on the 3dfx Spectre boards, along with a separate chip called Sage which performed all geometry operations. Are there any substantial advantages to an array of chips? 3DLabs used a chip array for their Wildcat III cards and they turned out great.
 
demalion said:
Basic said:
I must apologize to you demalion.
I take pride in staying out of, and not starting, bad fights, and I certainly failed at that here. Yes, that was a low blow. And yes, the "you lose me" phrase isn't a reason to give someone a punch. (It wasn't the reason for me either.)

I apologize in turn for responding in kind. We are all capable of slipping, and I could have exercised more self-control in my response.

I (quite possibly incorrectly) sensed that you were deliberately "not understanding" what I said to make it easier to argue against. And that's something I really dislike. I'll repeat that I could have been wrong, and if so, I'm sorry.

I've lately been in a state of mind that I should know is incompatible with internet discussions; sorry that you were the one to cross my path. That alone should be reason enough for me to leave the discussion.


But since you brought up "completely decoupled" in your last post, I'll make one more try at it, from a different angle.

I'll address your alternate explanation.

It was just a small sidestep, and no biggie. The point was the difference that in synchronous communication, both sender and receiver are well aware that the symbols are to be sampled, and of the exact timing of the sampling. When sampling a time-continuous audio stream, only the receiver (sample-and-hold circuit, A/D converter, ...) cares about the sampling. The transmitter (a singer or whatever) doesn't try to generate "symbols" in sync with the sampling.

Aha, so the singer and their voice, music, etc. are "completely decoupled" from the A/D converter, but the A/D converter and its clock are not decoupled at all from the 44.1 kHz, etc., output that I was discussing (the way I took your statements).

Your original comment was "This is very much like the case with DDR, the only difference being that the data you sample is sent in sync with your sampling, not completely decoupled as in the audio sampling case." Your use of the word "sent" in that statement, in the context of the statements directly preceding it ("There is no waveform repeated at 44100 times per second. And it does not refer to a physical clock running at 44.1 kHz driving the sampling unit."), seemed to be stating that data sampled at 44.1 kHz was completely decoupled from the clock of the A/D converter.

This was further confused by your use (or, rather, my understanding) of the term "audio stream"...to me, that meant sampled data, in the context of the resampling under discussion.

The reason I mentioned it was just to say that even if there is a difference (both sides vs. only one side caring about the sampling), it's still close enough to use the same terminology.

Hmm...I understand what you mean now, but I addressed your associated comments about sampling/resampling, etc. in that previous post.

Somewhere else I've seen someone objecting to the memory/audio parallel, saying they were two different cases. And I got the feeling that this difference was his reason.

Well, there are a lot of opportunities for confusion in trying to discuss it in this context. :-?


I do not believe that the memory/audio discussion of sampling is valid, because the word 'sampling' is used in two different contexts.

In the audio context, 44.1 kHz does not mean that the audio is sampled at that constant rate; rather, that is the maximum rate that can be sampled. Think of the 44.1 kHz as a carrier wave, in that the signal can be modulated at a maximum rate of 44.1 kHz. Middle C on a piano is 440 Hz, or in other words, you would observe (via an oscilloscope or frequency counter) that the waveform produced would "move" 220 times in a positive direction and then 220 times in a negative direction each and every second. (We will leave tonal quality for another discussion on another day. :)) If this were sampled, it would not take our maximum bandwidth of 44.1 kHz to represent this analog waveform digitally.

(By the way, 44.1 kHz was picked because that rate yields, via the Nyquist rule, an accurate sample of a 22.05 kHz analog waveform. This dovetails nicely into the accepted human hearing range of 20 to 20,000 Hz; never mind that most people cannot make out discrete tones over 13,000 Hz. Many commercial CD players use an "oversampling" technique in their playback DACs to make pseudo-bits that push the LSB further to the right, in some cases extending the 16-bit samples into a 24-bit word. DVD-Audio and SACD start out at higher sampling rates still. The theory of low-noise amplification is built on the same premise.)
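Spelling out the Nyquist arithmetic from that aside (just a sketch, reusing the 440 Hz example):

[code]
fs = 44_100      # CD sample rate, Hz
nyquist = fs / 2 # highest frequency the sampling can represent
print(nyquist)   # 22050.0 Hz -- just past the ~20 kHz hearing limit

f = 440.0        # the example tone
print(fs / f)    # ~100 samples per cycle: far below the limit
[/code]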

Memory specifications, on the other hand, are entirely in the digital domain, and therefore the quantization aspect (in the context of audio archival/reproduction) does not apply.
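For contrast, here's a minimal sketch of the quantization step that audio needs and memory never does (16-bit depth assumed, as on CD):

[code]
# Quantization: map a continuous amplitude in [-1.0, 1.0] onto one of
# 2**16 integer levels. A memory bus never does this -- its values are
# born digital.
def quantize_16bit(x):
    return max(-32768, min(32767, int(round(x * 32767))))

print(quantize_16bit(0.5))   # 16384
print(quantize_16bit(-1.0))  # -32767
[/code]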

As an aside: some posters in this thread related that, while the arguments may have been academically interesting, they were indeed boring. I would just like to point out that to some of us, the sampling of analog waveforms, their subsequent encoding/decoding, and amplification are fascinating subjects.


ed. spelling
 
sumdumyunguy said:
Middle C on a piano is 440 Hz, or in other words, you would observe (via an oscilloscope or frequency counter) that the waveform produced would "move" 220 times in a positive direction and then 220 times in a negative direction each and every second.
[nitpick]
Actually, 440 Hz would be 440 peaks and 440 troughs... What you are counting are only half cycles. (A full cycle is one up and one down.)
[/nitpick]
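A quick numerical check of the nitpick (a sketch; NumPy and the 44.1 kHz sample rate are my assumptions):

[code]
import numpy as np

fs = 44_100             # assumed sample rate, Hz
f = 440.0               # tone frequency, Hz
t = np.arange(fs) / fs  # one second of sample instants
x = np.sin(2 * np.pi * f * t)

# Each full cycle crosses zero twice, so counting up-moves and
# down-moves counts half cycles: a 440 Hz tone gives ~880 of them
# per second, not 220 + 220.
print(np.count_nonzero(np.diff(np.sign(x))))  # ~880
[/code]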
 
Johnathan256 said:
What about having a combination of chips for certain tasks? Rampage was one of two chips to be incorporated on the 3dfx Spectre boards, along with a separate chip called Sage which performed all geometry operations. Are there any substantial advantages to an array of chips? 3DLabs used a chip array for their Wildcat III cards and they turned out great.

And the dual-chip Spectre was estimated to cost twice as much as the single-chip version (one Rampage + Sage), somewhere around $500 if my memory serves me well. Rampage was to have around 30M transistors and Sage ~25M, with merely DX8.0 compliance.

Do you dare to estimate what a dual-chip setup with twice the complexity would cost?

Multi-chip solutions might be a thought for the professional market, but for PC graphics, even for the hardcore gamer, prices have to be kept within certain limits.
 
OpenGL guy said:
sumdumyunguy said:
Middle C on a piano is 440 Hz, or in other words, you would observe (via an oscilloscope or frequency counter) that the waveform produced would "move" 220 times in a positive direction and then 220 times in a negative direction each and every second.
[nitpick]
Actually, 440 Hz would be 440 peaks and 440 troughs... What you are counting are only half cycles. (A full cycle is one up and one down.)
[/nitpick]

You are indeed correct, kind sir. I say to you, "Kudos!" :)
 
Hmmm... good point. This might be why Nvidia chose to stay with the 128-bit bus for the memory. Also, the 0.13 µm process is a bit cheaper to produce even though it is smaller and more advanced. Plus, 3dfx was desperate for money at that time, which could have easily influenced prices.
 
I doubt 0.13 µm turns out that much "cheaper" with the current yields. NV most probably chose the specific manufacturing process because:

a) they needed the extra die space

b) 0.13 µm can yield higher clock speeds

Plus, 3dfx was desperate for money at that time, which could have easily influenced prices.

Huh? More chips on board = higher cost = higher price.

Think of the prices for the cards within the VSA-100 line and see how they scaled with the number of on-board chips.

The financial situation of the company was completely irrelevant to the pricing scheme.
 
Both graphics and CPU makers are looking at multi-core designs for CPUs/GPUs in the near future. The thinking behind it is that they could spread the heat across a greater area and get more performance out of each core. Something like that, anyway.
 
sumdumyunguy said:
Middle C on a piano is 440 Hz

Musician to the rescue... musician to the rescue! :)

Actually, Middle "C" is not 440 Hz. The first "A" above Middle "C" is 440 Hz. Hence the popular phrase (among musicians, anyway) 'A-440.'
 
Bigus Dickus said:
sumdumyunguy said:
Middle C on a piano is 440 Hz

Musician to the rescue... musician to the rescue! :)

Actually, Middle "C" is not 440 Hz. The first "A" above Middle "C" is 440 Hz. Hence the popular phrase (among musicians, anyway) 'A-440.'

That is correct :)

"Middle C" would be roughly 262 Hz.... give or take a little depending on your tuning. ;)
 
borntosoul said:
Both graphics and CPU makers are looking at multi-core designs for CPUs/GPUs in the near future. The thinking behind it is that they could spread the heat across a greater area and get more performance out of each core. Something like that, anyway.

Hopefully you don't mean hyperthreading for CPUs :eek:
 
Threading is quite distinct; the extra mileage you can get from it is limited. It runs into diminishing returns almost as fast as superscalar processing does.
 
Bigus Dickus said:
sumdumyunguy said:
Middle C on a piano is 440 Hz

Musician to the rescue... musician to the rescue! :)

Actually, Middle "C" is not 440 Hz. The first "A" above Middle "C" is 440 Hz. Hence the popular phrase (among musicians, anyway) 'A-440.'

Yup, Apollo 440 immediately comes to mind.
 