Do you think there is collusion in the industry?

skilzygw

Newcomer
Exhibit A)
Dual Core - how convenient they both release around the same time

Exhibit B)
Graphics - Minor jumps ahead from one brand to another

Exhibit C)
I'm spent. Goodnight!


I was seriously asking this more about the CPU sector than anything else. I mean, why does it seem like they are always around the same performance even if at different clocks?
Why no radical shift by one vendor? Is it just a matter of Intel or some other semiconductor maker shrinking the process so they can go quad core, faster, etc.?
 
I think there is informal collusion, and I think there was between NVIDIA and ATI.

I have gone over why I think this in previous posts. I do not think they meet together and plan; I think that just happens when you have two players. They have little incentive to start a huge price war, for example, if they are each making good, stable profits. Intel has a greater reason to, as AMD has been eating away at them.
 
skilzygw said:
Exhibit A)
Dual Core - how convenient they both release around the same time

Until the 90 nm node, dual-core x86 processors would have been completely impractical. Intel barely kept up with AMD's release by using a somewhat kludgy multi-chip package; that was not a sign of coordination with AMD's more polished product.

Exhibit B)
Graphics - Minor jumps ahead from one brand to another

Both vendors go as far as process technology and their design expertise can take them. Both ATI and NVIDIA are the best in their field, which means that barring some mistake (FX vs 9700 anyone?), they will approach a similar ceiling for every hardware generation.

Memory is often a limiting factor shared by both. Cards with either ATI or NVIDIA chips will still use memory modules from a very limited pool of manufacturers such as Samsung, and board partners do not want to go hog wild with exotic layouts.


I was seriously asking this more about the CPU sector than anything else. I mean, why does it seem like they are always around the same performance even if at different clocks?

Performance is similar despite clock speed differences because of trade-offs in design.

The P4 is an aggressively speculative speed demon that is actually very good at straight-line execution with optimal software. However, it suffers larger penalties from branching and poorer performance in suboptimal situations.

The K8 is relatively conservative compared to the P4, and it is more robust when dealing with funky code. The price is that it cannot be driven to the clock speeds the P4 can reach. AMD was lucky that Prescott turned out to be thermally limited, since it would have scaled beyond 4.5 GHz otherwise.
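A rough way to see why "funky code" hurts a deep, speculative pipeline more is a toy benchmark like the sketch below: the same data-dependent branch is timed over shuffled data (hard to predict) and sorted data (almost perfectly predictable). This is only an illustrative sketch, not anyone's actual benchmark; the array size, threshold, and timing method are arbitrary choices, and absolute numbers depend entirely on the CPU and compiler.

```c
/* Toy branch-predictability benchmark (illustrative sketch only).
 * The same loop runs over shuffled and then sorted data; with sorted
 * data the branch is almost always predicted correctly, with shuffled
 * data a deeply pipelined core eats a misprediction penalty on roughly
 * half the iterations. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static long sum_over_threshold(const int *data, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)          /* data-dependent branch */
            sum += data[i];
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    clock_t t0 = clock();
    long shuffled = sum_over_threshold(data, N);   /* unpredictable branch */
    clock_t t1 = clock();

    qsort(data, N, sizeof *data, cmp_int);
    clock_t t2 = clock();
    long sorted = sum_over_threshold(data, N);     /* predictable branch */
    clock_t t3 = clock();

    printf("shuffled: %ld (%.3f s)\n", shuffled, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("sorted:   %ld (%.3f s)\n", sorted,   (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(data);
    return 0;
}
```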

There are still compute-intensive applications that only care about clock speed, while others have room for efficiency improvements.

Regardless of the approach, performance improvements take work, money, and risk. Designers must weigh possible performance and feature gains against uncertainties in how future manufacturing processes and software environments will work out. A high-level chip design is set down years before it ever reaches silicon. A lot can go wrong with that.

AMD and Intel have sucked out most of the easy performance gains, so only incremental improvements can be expected. This is particularly true with regard to memory latency. Most of the time, processors are stalled waiting for RAM accesses to go through. Unsurprisingly, the various chips are running from essentially the same pool of RAM types.
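One way to see how dominant memory latency is: a pointer chase forces every load to wait for the previous one to come back from RAM, while a linear walk over the same data lets prefetching hide most of the latency. The sketch below is illustrative only; the array size is an arbitrary choice meant to be much larger than any on-chip cache of the era.

```c
/* Pointer-chasing latency sketch (illustrative only).
 * The dependent chain stalls the core for roughly a full memory round
 * trip per element; the linear walk over the same array streams along
 * with the help of hardware prefetching. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)   /* 16M entries (~128 MB), far larger than caches */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);

    /* Build a random single-cycle permutation (Sattolo's algorithm),
     * so each hop lands somewhere unpredictable. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Dependent chain: every load waits on the previous one. */
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++)
        p = next[p];
    clock_t t1 = clock();

    /* Independent linear walk over the same data. */
    size_t sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += next[i];
    clock_t t2 = clock();

    /* Printing p and sum also keeps the loops from being optimized away. */
    printf("pointer chase: %.3f s (end %zu)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, p);
    printf("linear walk:   %.3f s (sum %zu)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(next);
    return 0;
}
```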

Why no radical shift by one vendor? Is it just a matter of Intel or some other semiconductor maker shrinking the process so they can go quad core, faster, etc.?


Semiconductor scaling is still king when it comes to x86 chips, which rely on massive volumes to pay for design teams to beat their heads and throw transistors against one of the most inelegant and frustrating ISAs still in existence.

Radical shifts may or may not pay off. Even if they do, it is unlikely they will provide a definitive advantage against a competitor that played it safe. Low-hanging fruit in single-threaded execution is basically all gone.

Multicore to a point will provide much better performance scaling, since two cores will provide twice the resources on concurrent code. A single core that could match that would probably be four times as large and extremely hot, since a lot of performance improvements suffer from quadratic (or worse) increases in complexity and power consumption.
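The "four times as large" intuition is often summarized as Pollack's rule, the empirical observation that single-core performance grows roughly with the square root of the area spent on it. The back-of-the-envelope sketch below assumes exactly that rule plus perfectly parallel code for the dual-core case; both are simplifying assumptions, not claims from the post above.

```c
/* Back-of-the-envelope core-size comparison, assuming Pollack's rule
 * (single-core performance ~ sqrt(die area)) and perfectly parallel
 * code for the dual-core case. Purely illustrative. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Dual core: twice the area of one baseline core,
     * roughly twice the throughput on concurrent code. */
    double dual_area = 2.0;
    double dual_perf = 2.0;

    /* Single core scaled up until it matches 2x performance:
     * perf = sqrt(area)  =>  area = perf^2 = 4x the baseline. */
    double big_perf = 2.0;
    double big_area = pow(big_perf, 2.0);

    printf("dual core: area %.1fx, perf ~%.1fx\n", dual_area, dual_perf);
    printf("big core:  area %.1fx, perf ~%.1fx\n", big_area, big_perf);
    return 0;
}
```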

At the same time, transistor scaling is massively expensive. Fabs and the research into them are multi-billion dollar investments, so there better be something for them to produce when they are up and running. Pretty much all vendors outside of Intel must cooperate to maintain research and manufacturing growth.

Future nodes will also no longer allow designers to ignore power considerations. The fastest transistor will no longer be the coolest running or the smallest, the smallest will not necessarily be the fastest or coolest, and the coolest running will not be the fastest or smallest.

As a result, pushing single-core silicon harder becomes increasingly difficult. Multicore gets around this for now because it de-emphasizes clock speed, which can carry a roughly cubic increase in power draw, and instead leverages density improvements that are easier to come by.
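The "roughly cubic" figure follows from the dynamic power relation P ~ C * V^2 * f: since supply voltage usually has to rise roughly in step with frequency, chasing clock speed costs something close to f cubed. The voltage and frequency figures in the sketch below are invented purely for illustration.

```c
/* Dynamic CMOS power scales as P ~ C * V^2 * f. If voltage must rise
 * roughly in proportion to frequency, power grows ~ f^3.
 * The baseline voltage/frequency pair is hypothetical. */
#include <stdio.h>

int main(void)
{
    double base_f = 2.0, base_v = 1.2;          /* hypothetical 2 GHz @ 1.2 V */
    double scales[] = { 1.0, 1.25, 1.5, 2.0 };

    for (int i = 0; i < 4; i++) {
        double s = scales[i];
        double f = base_f * s;
        double v = base_v * s;                  /* crude V ~ f assumption */
        double rel_power = (v * v * f) / (base_v * base_v * base_f);
        printf("%.2f GHz: relative power ~%.2fx (i.e. ~%.2f^3)\n", f, rel_power, s);
    }
    return 0;
}
```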
 
AMD and Intel target exactly the same application space, with the same instruction set and low-level compatibility restrictions. Small wonder they track each other within fairly narrow percentages in spite of Intel generally being a bit ahead on the lithography curve.

While they are both talking multi-core, an AMD executive was cautious when he predicted how effective adding additional cores would actually be. He remarked that the true value in the x86 market was the existing billions of lines of x86 code, and that it would be prudent to take that inertia into account when projecting forward. Simply put, going to 4+ cores is going to give negligible benefit for the overwhelming majority of code, at the cost of substantially larger dies, with the lower yields per wafer and higher costs that follow from that. Otherwise, adding cores is one place where Intel should be able to leverage being ahead of AMD on the process curve, and they might, but even if they do, it probably won't mean much for the overall market. Too expensive.
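That caution is essentially Amdahl's law: if most existing x86 code has only a modest parallel fraction, extra cores buy very little. The sketch below just evaluates the standard formula; the parallel fractions chosen are hypothetical, picked to show how quickly the returns flatten out for mostly-serial code.

```c
/* Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p
 * running on n cores. The fractions below are hypothetical examples. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double fractions[] = { 0.2, 0.5, 0.9 };
    int cores[] = { 1, 2, 4, 8 };

    for (int i = 0; i < 3; i++) {
        printf("parallel fraction %.0f%%:", fractions[i] * 100.0);
        for (int j = 0; j < 4; j++)
            printf("  %d cores -> %.2fx", cores[j], amdahl(fractions[i], cores[j]));
        printf("\n");
    }
    return 0;
}
```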

ATI and nVidia are in a different competitive situation, in that their products are programmed via an API and they are relatively free to change things around at the hardware level. Additionally, while they must maintain a means of backwards compatibility, they have a faster-evolving code base, giving larger opportunities for architectural innovation. Still, assuming similar/identical process tech (both fab primarily at TSMC), similar engineering skill and similar priorities of 3D performance vs. power management vs. video quality enhancement vs..., they should come up with parts that are in more or less the same ballpark. And they do. With the same tasks to perform, the same economies of production, and the same target audience, it's only reasonable that they come up with similar solutions. Add that both have to be reasonably in sync with Microsoft and developer trends, and the two can easily be seen to walk in a lockstep that smells of duopoly. But while you could reasonably assume that they at least partly act in a cartel-like manner, it's not necessarily so; there are other forces that would align their schedules.
 