Pelly Opens his Site: PC Perspective!!!

digitalwanderer

You read it right! Pelly, late of [H] and nVnews and beloved friend of many, has opened a new site called PC Perspective and started out with a bang by putting up a new article entitled "Preparing for PCI-Express".

They've also put up a "Welcome Editorial" to introduce PC Perspective and you can find out a bit more by reading their "About" here.

Congratulations and best of luck to all involved in PC Perspective, I look forward to seeing what they do! 8)
 
nice :)

Our goal in creating PC Perspective is to become the reference of choice for all readers regarding the PC industry by offering the most unbiased and thorough perspective

It's always good to hear that :)
 
ClyssaN said:
It got even better for me when I clicked on his new forums and found out I was already a member! :LOL:

I'm still reading his PCI article, it's very in-depth but still understandable by a thicky like me! :D
 
Well, I've given him a bookmark; let's see if he can keep it. I will say it looks very nice for having just opened the doors. It doesn't have that half-finished look I usually expect from a brand-new entry.
 
I'm not sure I get this:

Moving to the next topic, we see that ATI is claiming that a truly native solution will have better reliability due to lower complexity and fewer failure points. In the case that it is a truly native solution developed around PCI-Express, this is certainly the case. However, to make things a bit more complicated and dramatic, a very reliable source has provided us with information which questions whether ATI's current solution has brought bridged complexity within the core. Supposedly, the following photos compare an AGP-based RV360 mobile core (M11) to a native PCI-Express RV380 mobile core (M24). Comparing the two images, we find it difficult to distinguish one core from the other. However, if you look at the left-most edge of each core you will see the only appreciable difference between the two. With no confirmation from ATI thus far on what the purpose of this section of logic is, we cannot make a judgement. If possible, we will update this article with an appropriate response from the company.

What do these images actually tell us? That there are some differences between the two cores? Ummmm, I'd kinda expect that from the pure fact that (presumably) one is PCI-Express and one is AGP - but instead the article decides to FUD the point without any clear clue as to what is going on there. Why even bring this up if you don't know what it is? Perhaps the "very reliable sources" suggested some things about adding "complexity" because these sources have their own agenda? Dunno, but bringing this up as it was, without actually concluding anything, looks like it's just FUDing.

Even if it is the case that there is an internal bridge, what's the issue? The bridge would be AGP to PCI-Express, not PCI-Express converted into AGP, so natively the interface would be PCI-Express; the AGP bridge would only operate for legacy AGP systems, and in PCI-Express systems the full benefits would be there. OK, so there may be a small amount of internal complexity in the chip, but that's an issue for ATI - externally, which is what I assume the "higher reliability and lower complexity" quote is about, it won't be an issue. This solution will also reduce complexity board-wise, and result in fewer board differences between the AGP and PCI-Express versions than NVIDIA's solution, which will require very different boards to accommodate the external bridge.

So, what do we have? Another article that doesn't attempt to explore what PCI-Express can actually do for us in the future and also manages to spread some unfounded FUD. Great start.
 
Easy there tiger...put down the flamethrower and read the entire article...

I'll try to address your concerns in order:

What do these images actually tell us? That there are some differences between the two cores? Ummmm, I'd kinda expect that from the pure fact that (presumably) one is PCI-Express and one is AGP - but instead the article decides to FUD the point without any clear clue as to what is going on there. Why even bring this up if you don't know what it is? Perhaps the "very reliable sources" suggested some things about adding "complexity" because these sources have their own agenda? Dunno, but bringing this up as it was, without actually concluding anything, looks like it's just FUDing.

I believe you missed the obvious couching by the blatant disclaimers throughout that paragraph, such as "supposedly", "no confirmation", and "cannot make a judgement". Now one might say, if you cannot make a judgement, why bother showing it? Well, with Ryan and me each having degrees in Electrical and Computer Engineering, I think we are more than qualified to comment here.

This solution will also reduce complexity board-wise, and result in fewer board differences between the AGP and PCI-Express versions than NVIDIA's solution, which will require very different boards to accommodate the external bridge.

NVIDIA dictates whether a board will be AGP or PCI-Express by either soldering this chip on or not soldering it on. Aside from that, the only appreciable difference between AGP and PCI-Express boards is their physical interface. In ATI's case, they have a specific chip for each...

So, what do we have? Another article that doesn't attempt to explore what PCI-Express can actually do for us in the future and also manages to spread some unfounded FUD. Great start.

Evidently, you missed the following...

Although the immediate success of ATI’s native implementation remains to be seen, there are some very exciting possibilities on the horizon which could take advantage of the enormous bi-directional bandwidth of PCI-Express. With this bandwidth at their disposal, developers can now begin exploring methods of offloading the CPU by bringing some tasks to the VPU. One example we were provided with is assigning the handling of physics calculations to the VPU instead of the CPU. Here, the architecture of the VPU will likely be able to process these calculations faster than the CPU would. As a result, games which were once CPU-bound now have more headroom and performance should increase accordingly. In addition, developers can also explore methods of controlling artificial intelligence with the VPU instead of the main processor. Again, this will likely add a healthy boost in performance to a taxing game engine. Despite being excellent additions to gaming, we must remember that this is all a work in progress. No titles on shelves today can take advantage of these features, so we are once again faced with a timing issue.

as well as

Taking a few steps back and looking at the big picture, we realize that neither vendor disputes the benefits nor potential performance increases PCI-Express brings to the industry. Rather, the main point of issue here seems to be when these appreciable performance increases will become evident. As it stands, the only application which we currently know of which will have any noticeable and immediate performance enhancements from PCI-Express is HDTV editing. Although this is definitely a legitimate application and an area which will surely be capturing more interest over the next few years, this is hardly a mainstream area of focus. What our audience wants to know is, "will framerates for the latest and greatest games be contingent upon having a native PCI-Express card?" Like all things in life, there is no simple answer as the real answer is dictated by your upgrade plans and time of reference.

To further ensure that this article was fair, we have a conclusion which clearly outlines the fact that the statements made were based upon the information we had been given as boards were not yet available. A follow-up with these boards is promised before we make any judgements...
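
For reference, here's a quick back-of-the-envelope sketch of the raw link numbers behind the "enormous bi-directional bandwidth" claim (these are the commonly published figures for AGP 8x and first-generation PCI-Express x16; real-world throughput will be lower once protocol overhead is counted):

Code:
# Back-of-the-envelope comparison of AGP 8x vs. first-gen PCI-Express x16.
# Figures are published raw-link rates, not measured throughput.

# AGP 8x: 32-bit bus, ~66.66 MHz clock, 8 transfers per clock, half duplex
agp_8x_mb_s = 66.66e6 * 8 * 4 / 1e6            # ~2133 MB/s total

# PCI-Express 1.0: 2.5 GT/s per lane, 8b/10b encoding, full duplex
pcie_lane_mb_s = 2.5e9 * (8 / 10) / 8 / 1e6    # 250 MB/s per lane, each direction
pcie_x16_per_dir = pcie_lane_mb_s * 16         # 4000 MB/s per direction
pcie_x16_total = pcie_x16_per_dir * 2          # 8000 MB/s aggregate

print(f"AGP 8x:             {agp_8x_mb_s:6.0f} MB/s (half duplex)")
print(f"PCIe x16, per dir:  {pcie_x16_per_dir:6.0f} MB/s")
print(f"PCIe x16, combined: {pcie_x16_total:6.0f} MB/s (full duplex)")

In short, the jump is from roughly 2.1 GB/s shared in one direction at a time to 4 GB/s in each direction simultaneously.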
 
Just a couple comments:

I also don't get the reason / purpose behind posting a side-by-side of two cores, and then implying that ATI has perhaps just incorporated a "bridge" on die.

I mean, RV380 is supposed to be RV360, but with a different interface. How different are the cores "supposed" to look?

Also, ATI has in fact stated that they WILL be offering bridged solutions, although they claimed it would be limited to the "reverse" implementation. That is, bridging future PCI-E parts to work with AGP slots.

So ATI will, in fact, be able to use a "single chip" for both slots, just like nVidia. It's just that in the short term ATI will have native PCI-E solutions, and nVidia won't.
 
If your degree in Electrical & Computer Engineering lets you understand NVIDIA's approach of 16x AGP by eliminating the AGP slot, why can't you understand that eliminating two chip packages and the PCB wire traces can achieve even higher performance?

It's common to dissect a competitor's product, but feeding such information is...
 
Good work Pelly. :)

Guys, can we start another thread to critique Pelly's articles?
We are congratulating him in this thread. :)
 
LoL....so many forums....so little time...

Plyy...

why can't you understand that eliminating two chip packages and the PCB wire traces can achieve even higher performance?

Actually, we never say that a native solution won't offer better performance when running an application that takes advantage of the additional bandwidth PCI-Express offers. In today's applications, it is unlikely that we will see any appreciable advantages over a hybrid solution done well...

In the end, this should really be a dead issue for forums until we have cards in hand and benchmarks posted...We shared our opinions but noted several times that the physical cards could prove us wrong. Our opinions were based upon all the information we were provided (some the public has not yet seen)...

Let's just sit back and wait.....the answer will be out soon enough and then we'll all be able to debate with logic, reason, and fact...

Thanks K.I.L.E.R for the post...appreciate the support... :D
 
...

nice article pelly... :)

hope the transition goes smoothly... sounds like the site is gonna be quite nice once everything's converged!! :D
 
I was under the impression that M24 would be a next-gen R420 derivative? Looks to me like it'll just be a speedbump on a PCIE interface, and should perhaps be called M12.
 
I believe you missed the obvious couching by the blatant disclaimers throughout that paragraph, such as "supposedly", "no confirmation", and "cannot make a judgement". Now one might say, if you cannot make a judgement, why bother showing it? Well, with Ryan and me each having degrees in Electrical and Computer Engineering, I think we are more than qualified to comment here.

Eh? Your electrical engineering degree tells you how to decode the logic from groups of 130nm transistors on a picture that size and then understand what it's actually doing? Apparently not, seeing as you've already put massive disqualifications on it. The image tells you nothing, and it tells us nothing with regards to the bus interface - conclusions have been suggested for no appreciable reason, hence it looks like FUD. Your "I've got a degree in EE" comment here is just more FUD; otherwise you would have been sure of what you were writing and not saying "supposedly" and "cannot make a judgement" etc.

NVIDIA dictates whether a board will be AGP or PCI-Express by either soldering this chip on or not soldering it on. Aside from that, the only appreciable difference between AGP and PCI-Express boards is their physical interface. In ATI's case, they have a specific chip for each...

Leaving an inch between the GPU core and the bus interface. Ever questioned why most boards don't do that? You can rest assured that the native PCI-Express versions won't have a large gap, with the associated traces, between the interface and the GPU, just like their current AGP boards don't. They already have to make different board layouts for their bridged AGP GPUs.
 
Your electrical engineering degree tells you how to decode the logic from groups of 130nm transistors on a picture that size and then understand what it's actually doing?

In a word....."Yes"....Though I obviously cannot go through and name each individual component, I certainly know what is going on...how this would be difficult for an EE/CE with any interest in graphics cards is beyond me...

In addition, I have the original full-sized images to reference as well...time to move on to something else here man...


Leaving an inch between the GPU core and the bus interface.

I understand that this introduces latencies...but (as the article suggests)...there are no current applications (aside from HDTV editing) that take advantage of the full bandwidth of PCI-Express. By the time there are, you will certainly see native solutions by both vendors. Until then, it is doubtful we'll see any applications that can exploit the benefits a native solution has...

End of NVIDIA page in article
What we have seen thus far is that much of ATI's criticism of NVIDIA's approach is based upon the use of a generic bridged solution and not NVIDIA's HSI solution. Specifically, these higher-latency claims target an architecture with a 32B request size. The 32B request size not only amplifies the effect of latency within the system, it can also constrain the overall effective bandwidth of the design. As such, it is important to note that NVIDIA's architecture supports a 64B request size, which minimizes the effect of any introduced latencies dramatically and allows for the maximum possible bandwidth.
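
To illustrate the amortization argument in that quoted paragraph, here's a toy model (the link rate and per-request latency below are made-up assumptions purely for illustration, not measurements of any real bridge or GPU):

Code:
# Toy model: how a larger request size amortizes a fixed per-request latency.
# link_gb_s and latency_ns are illustrative assumptions only.

def effective_bandwidth(request_bytes, link_gb_s=4.0, latency_ns=250.0):
    """Sustained throughput when every request pays a fixed latency cost."""
    transfer_ns = request_bytes / link_gb_s      # bytes / (GB/s) == ns on the wire
    total_ns = transfer_ns + latency_ns          # add the per-request overhead
    return request_bytes / total_ns              # bytes per ns == GB/s

for size in (32, 64, 128):
    print(f"{size:>3} B requests -> {effective_bandwidth(size):.2f} GB/s effective")

With those made-up numbers, doubling the request size from 32B to 64B roughly doubles the sustained throughput, which is the effect the quoted passage claims for NVIDIA's 64B requests over a generic 32B design.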
 
In a word....."Yes"....Though I obviously cannot go through and name each individual component, I certainly know what is going on...how this would be difficult for an EE/CE with any interest in graphics cards is beyond me...

So, you've studied the cell design of AGP and PCI-Express and you can clearly identify that there is a proprietary bridge design in there as well? If you really could, your statements wouldn't sound so unsure of it. Either you are sure of what you are looking at and you write that, or you aren't, as your wording suggests.

I understand that this introduces latencies...but (as the article suggests)...there are no current applications (aside from HDTV editing) that take advantage of the full bandwidth of PCI-Express.

The latencies are introduced from the inclusion of the bridge – I'm talking about the different board designs required for the non-bridged version in comparison to the bridged version. You said the difference would be whether they choose to solder a bridge on or not, but that's not the case, because the trace lengths for the non-bridged part would be excessively long. Two significantly different board designs are likely to be required for the bridged vs. non-bridged designs (as is already the case for the current range of NVIDIA's AGP GPUs).
 