Design lessons from PS2 applied to PS3

dcforest

Newcomer
Every engineering design has weaknesses. Since engineers have to make cost, size, power, performance, and time-to-market tradeoffs, you can never design something to be "perfect" or the "best". Often, weaknesses aren't fully apparent at the design and test stage, but become visible during actual customer usage.

With the benefit of 20/20 hindsight, we can reflect on the design weaknesses of the PS2 and see how Sony is addressing those weaknesses in the PS3. This is not a criticism of the PS2 architecture, which was state-of-the-art at the time of its design and delivery, but rather a look at how to learn from and build upon past designs. Note: All information is from publicly available sources.

CELL vs. EE
------------------

FACT: EE core CPU was a MIPS 64-bit derivative clocked at 300MHz
FACT: CELL contains a Power 64-bit derivative clocked at 4+GHz

EE Weakness: The Core CPU in the EE was slower than state-of-the-art PC microprocessors of the time (Intel was shipping a 500MHz Pentium III in Feb. 1999 and demoed a 1GHz Pentium III on Feb. 23, 1999) and ended up being slower than either the XBOX core (733 MHz Celeron) or the GameCube (PowerPC 750 485MHz). Note that the XBOX & GameCube CPU's did ship later than the PS2.

CELL Fix: The 4+GHz clock rate of the PPU is competitive with the fastest shipping PC microprocessors (Pentium 4 3.8GHz or Athlon 64 2.6GHz). (Note: While it is silly to compare microprocessors based solely on clock rate, Power/PowerPC vs. x86 comparisons have been extensively done for the past 5+ years based on actual application code, so relative comparisons can be made). The CELL PPU is a simplified in-order design, so it won't have quite the performance of a fully out-of-order design, but the balance between number of instructions in flight vs. clock rate seems to have been well managed. XBOX 360 and Revolution are expected to use Power/PowerPC based derivative microprocessors and are unlikely to be clocked faster than the CELL PPU.

FACT: EE core CPU has a 16K Instruction and 8K Data cache with a 16K scratchpad and zero L2 cache
FACT: CELL PPU has a 32K Instruction and 32K Data cache and 512K L2 cache

PS2 Weakness: The small size of the L1 caches on the EE and the lack of an L2 cache made the actual performance of the MIPS-based core much lower than it should have been.

CELL Fix: CELL has a proper L2 cache, which at 512K will be sufficient to extract good performance from the PPU core. CELL has an integrated memory controller, which will allow for lower-latency access to memory. AMD has used an integrated memory controller in the Athlon for some time, which has contributed to its performance advantage vs. Intel. The combination of a properly sized L2 and an integrated memory controller will yield performance dividends.

FACT: EE had 2 vector units VU0 & VU1, but they were architecturally different (i.e., different resources and instruction sets).
FACT: CELL has 8 SIMD units (SPE's), all identical in resources and instruction sets

EE Weakness: Because VU0 & VU1 were programmed differently and implemented different instruction sets, creating code and reusing that code proved to be more difficult than expected.

CELL Fix: The SPE's in CELL all use a consistent programming model; any SPE code can execute on any of the 8 SPE's. The instruction set and large register file have been designed to be more easily programmed in a higher-level language (e.g., C, C++). The SPE's can also be virtualized.

FACT: EE had very small cache/scratchpads, 8K total for VU0 and 32K for VU1
FACT: CELL uses 256K local store for each SPE

PS2 Weakness: The small size of the scratchpads for VU0 and VU1 made scheduling of data more difficult and made effective bus utilization harder than expected.

CELL Fix: CELL has greatly increased the size of the local store to avoid the problems of the VU's. The original CELL patents referenced a 128K local store, which was upped to 256K during the implementation phase.
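
To make the scheduling point concrete, here is a rough C sketch of the double-buffered streaming pattern that a small local store forces on the programmer. The dma_get()/dma_wait() helpers and the chunk size are hypothetical stand-ins (simulated here with memcpy), not the actual SPE DMA interface; the point is simply that a bigger local store allows bigger chunks, which makes it much easier to hide transfer latency behind compute.

```c
/* Sketch of double-buffered streaming into a small local store.
 * dma_get()/dma_wait() are hypothetical stand-ins for a real DMA engine
 * (simulated with memcpy), and the code assumes the total data length is a
 * whole number of chunks. */
#include <stddef.h>
#include <string.h>

#define CHUNK (16 * 1024)   /* bytes per buffer; two buffers must fit on-chip */

static void dma_get(void *dst, const void *src, size_t n, int tag)
{
    (void)tag;              /* real hardware would queue this transfer by tag */
    memcpy(dst, src, n);
}

static void dma_wait(int tag) { (void)tag; }   /* real hardware would block on the tag */

static void process(float *data, size_t n)     /* stand-in compute kernel */
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

static void stream(const char *src, size_t total)   /* total = multiple of CHUNK */
{
    static float buf[2][CHUNK / sizeof(float)];      /* stand-in for the local store */
    size_t done = 0;
    int cur = 0;

    dma_get(buf[cur], src, CHUNK, cur);              /* prefetch the first chunk */
    while (done < total) {
        int next = cur ^ 1;
        if (done + CHUNK < total)                    /* kick off the next transfer... */
            dma_get(buf[next], src + done + CHUNK, CHUNK, next);
        dma_wait(cur);
        process(buf[cur], CHUNK / sizeof(float));    /* ...and compute while it is in flight */
        done += CHUNK;
        cur = next;
    }
}

int main(void)
{
    static char input[4 * CHUNK];                    /* pretend this is main memory */
    stream(input, sizeof input);
    return 0;
}
```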

FACT: EE had a relatively slow I/O bus to the graphics processor (GS). The EE to GS bus delivered 1.2 GB/s, both downstream and upstream.

FACT: CELL uses Rambus FlexIO technology to deliver 44.8GB/s downstream and 32GB/s upstream between CELL and the GPU. (Note: some of the FlexIO lanes may be used for other I/O, not GPU traffic, but the majority will be used for the GPU).

PS2 Weakness: The EE to GS bus (GIF) was relatively slow at 1.2GB/s. It was half the speed of the internal EE bus and much slower than the 3.2GB/s memory bandwidth. This greatly limited how much triangle and texture information you could upload to the GS every frame.

CELL Fix: The FlexIO bandwidth to the GPU will be greater than the bandwidth from XDR memory (25GB/s) ensuring the CPU to GPU bus does not become a bottleneck as it was in the PS2.
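
Some rough back-of-the-envelope arithmetic (my own, assuming a 60fps target and that the quoted peak figures were sustainable, which they won't quite be in practice) shows what those bus numbers mean per frame:

```c
/* Per-frame budgets derived from the quoted peak bandwidth figures.
 * Assumes a 60 fps target and sustained peak rates; real numbers will be lower. */
#include <stdio.h>

int main(void)
{
    const double fps        = 60.0;
    const double gif_gbs    = 1.2;    /* EE -> GS bus (PS2)              */
    const double flexio_gbs = 44.8;   /* CELL -> GPU, quoted FlexIO peak */
    const double xdr_gbs    = 25.0;   /* XDR memory, quoted              */

    printf("PS2 EE->GS    : %6.1f MB per frame\n", gif_gbs    * 1024.0 / fps);  /* ~20 MB  */
    printf("PS3 CELL->GPU : %6.1f MB per frame\n", flexio_gbs * 1024.0 / fps);  /* ~765 MB */
    printf("PS3 XDR memory: %6.1f MB per frame\n", xdr_gbs    * 1024.0 / fps);  /* ~427 MB */
    return 0;
}
```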


GPU vs. GS
------------------

FACT: Sony used the in-house developed GS for graphics processing. The GS contained an embedded DRAM frame buffer (4MB) and supported a fairly limited set of graphics processing features.

FACT: Sony has partnered with NVIDIA to deliver a GPU for the PS3.

GS Weakness: The GS had very high pixel fill rate, with a limited set of graphics functions/effects. Some effects could be done in software, but others were impractical due to performance considerations. GS did not offer vertex processing as that was expected to be done by the EE.

CELL Fix: Sony has partnered with NVIDIA to deliver a GPU that has the same rich functionality (vertex/pixel shaders, anti-aliasing, scaling, etc.) as today's latest PC GPU's. CELL will be able to focus on gameplay, with the GPU doing the graphics processing.

PREDICTION: PS3 GPU will not use embedded DRAM. The PS3 will use external high-speed memory for best performance.

FACT: None of the highest performing graphics cards produced by NVIDIA or ATI during the past 5 years have used embedded DRAM.

COMMENTARY: Graphics cards for PC's can afford to pay for the highest performing GPU and memory combination. ATI and NVIDIA have looked at embedded DRAM multiple times over the past several years, but an embedded DRAM design has always yielded lower performance than the alternative external memory design. Embedded DRAM does offer two significant advantages: 1) cost and 2) power consumption. The power consumption advantage is why you see embedded DRAM in most mobile/portable designs (e.g., PSP, DS, PDA's, cell phones). The cost advantage is why you often see it in consoles (PS2, Gamecube, XBOX 360). NVIDIA was able to use an external memory design (NV2A) in the XBOX to yield better graphics than PS2 and they will use the same technique in PS3.

PREDICTION: PS3 GPU will offer better performance than the XBOX 360 GPU. The difference will not be 2X or some other very large number, but rather 20-50% faster depending on the application.

COMMENTARY: Both ATI and NVIDIA have been reasonably close to each other on performance of their latest generation of PC graphics cards. ATI is faster on some benchmarks (e.g., HL2) and NVIDIA on others (e.g., Doom 3). The GPU's for the XBOX 360 and PS3 will be based on the designs of the upcoming PC GPU's. ATI, NVIDIA, Microsoft and Sony will all do their best to deny that the console GPU's are based on the same microarchitecture as the PC GPU's, but that will be the case. Just as the NV2A in the XBOX was a derivative of the NV20 and NV25 designs, so the XBOX 360 GPU will be a derivative of the R520/R600 and the PS3 GPU will be a derivative of the G70. In both cases, ATI and NVIDIA will make significant modifications to the designs to adapt them for consoles, but fundamentally they will be based on the respective PC GPU. The PS3 GPU will be a half-generation ahead of the XBOX 360 GPU timewise (i.e., it will ship later) and it will take advantage of the tremendous bandwidth of the CELL architecture (both Rambus memory and FlexIO).


SUMMARY: Sony has clearly learned from the design decisions it made for PS2 and has made appropriate improvements to the PS3 design. Again, no design is "perfect" or the "best" and the PS3 design has had to make its share of tradeoffs (cost, size, power, performance, & time-to-market) as well. Over time, we will learn where the "weaknesses" and "bottlenecks" in the PS3 architecture are and learn what improvements Sony should make in PS4.
 
"EE Weakness: The Core CPU in the EE was slower than state-of-the-art PC microprocessors of the time (Intel was shipping a 500MHz Pentium III in Feb. 1999 and demoed a 1GHz Pentium III on Feb. 23, 1999) and ended up being slower that either the XBOX core (733 MHz Celeron) or the GameCube (PowerPC 750 485MHz). Note the XBOX & GameCube CPU's did ship later in time than the PS2."

The Megahertz Myth is still alive? Seriously?
 
FACT: None of the highest performing graphics cards produced by NVIDIA or ATI during the past 5 years have used embedded DRAM.

Well, that is true... for now. Seems the R500 will have eDRAM though.

It might be external to the chip also... and if that is the case I really REALLY hope ATi brings something like that to the PC. It would probably need to be a little bigger, BUT 256GB/s of effective bandwidth would be a nice solution to some of the bottlenecks preventing more standard use of AA, etc...
 
Who wrote this crap? You, or did you copy and paste it from some website or such?

dcforest said:
CELL Fix: The 4+GHz clock rate of the PPU is competitive with the fastest shipping PC microprocessors (Pentium 4 3.8GHz or Athlon 64 2.6GHz). (Note: While it is silly to compare microprocessors based solely on clock rate
It's not just silly pal, it's downright fraudulent to do so.

Power/PowerPC vs. x86 comparisons have been extensively done for the past 5+ years based on actual application code, so relative comparisons can be made).
No, they can't! Just because comparisons between processors with the same instruction sets have been done doesn't mean comparisons with THIS multi-GHz powerpc processor will be comparable to THOSE multi-GHz x86 processors!

Or do you think just because VIA's CPUs are x86-compatible, their performance is comparable to AMD and Intel chips? :LOL:

The CELL PPU is a simplified in-order design, so it won't have quite the performance of a fully out-of-order design
Um pal, you have any idea what you're talking about here? ;) That it's in-order rather than out of order means performance might be vastly lower.

but the balance between number of instructions in flight vs. clock rate seems to have been well managed.
This is just gobbledygook that doesn't really mean anything, as in-flight instruction count is something that applies to out-of-order processor designs. Having higher in-flight numbers typically means the CPU has a greater choice of instructions to pick and choose from when searching for something to execute. In-order processors don't need stuff like this, so they don't use it; it would just be a waste of transistors to implement a re-order buffer, for example, in an in-order processor, which IS THE WHOLE POINT of going with an in-order design anyway: you want something that is light and trim on the transistor count!

In-flight instruction count should hence pretty much mirror the number of processor pipeline stages from cache to write-back buffers, times number of pipelines (two, I believe), with the additional catch that the LOWER the number the higher the relative performance (meaning less branch miss penalty).

Besides, in-flight instruction count hasn't been announced for the cell PPC core.

XBOX 360 and Revolution are expected to use Power/PowerPC based derivative microprocessors and are unlikely to be clocked faster than the CELL PPU.
Final clockspeed for production cell chips hasn't been announced, so that's a little early to say.

CELL has an integrated memory controller, which will allow for lower-latency access to memory. AMD has used an integrated memory controller in the Athlon for some time, which has contributed to its performance advantage vs. Intel. The combination of a properly sized L2 and an integrated memory controller will yield performance dividends.
What are you talking about? The EE has an integrated memory controller also, whilst Gekko hasn't, yet the GameCube has some of the fastest main memory access times in any computer system save supercomputers.

FACT: EE had 2 vector units VU0 & VU1, but they were architecturally different (i.e., different resources and instruction sets).
No, that's not a fact.

While they're somewhat different architecturally (VU1 has bigger on-chip memory and a few more execution units, and is independent of the main CPU), they both run the same instruction set.

FACT: EE had a relatively slow I/O bus to the graphics processor (GS). The EE to GS bus delivered 1.2 GB/s, both downstream and upstream.
While what you say is essentially true, I'm not sure what conclusions you want to draw from it, as the bus speed isn't really any kind of major problem in the PS2; the machine tends to bottleneck in the CPU much sooner than it does at the EE interface... So it's sort of a moot point really, other than to use as a comparison to show PS3 will be much faster - which everybody expects anyway. ;)

PREDICTION: PS3 GPU will not use embedded DRAM. The PS3 will use external high-speed memory for best performance.
That statement isn't even coherent, much less logical, as eDRAM is the alternative that offers the highest performance (by a potentially VAST margin).

FACT: None of the highest performing graphics cards produced by NVIDIA or ATI during the past 5 years have used embedded DRAM.
Yeah, so what? That doesn't have anything to do with anything, and besides, FACT: ATi designed X360 GPU with eDRAM, so what do you say about that huh? :LOL:

COMMENTARY: Graphics cards for PC's can afford to pay for the highest performing GPU and memory combination. ATI and NVIDIA have looked at embedded DRAM multiple times over the past several years, but an embedded DRAM design has always yielded lower performance than the alternative external memory design.
Aw, now you're just making shit up.

Embedded DRAM does offer two significant advantages: 1) cost and 2) power consumption.
Actually, the main reason to include eDRAM in a GPU is bandwidth. Just look at the GS for an example of that; have you ever heard of any other chip with a 150MHz memory bus that offers 48GB/s of aggregate read/write bandwidth?
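
For reference, that 48GB/s number is just bus width times clock. The 1024-bit read + 1024-bit write + 512-bit texture split is the commonly quoted breakdown of the GS eDRAM interface, so treat the exact split below as an assumption rather than gospel:

```c
/* Sanity check: aggregate eDRAM bandwidth = total bus width x clock.
 * The 1024/1024/512-bit split is the commonly quoted breakdown, assumed here. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 150e6;              /* GS clock                     */
    const int    bus_bits = 1024 + 1024 + 512;  /* read + write + texture buses */

    printf("GS eDRAM aggregate: %.0f GB/s\n", clock_hz * bus_bits / 8.0 / 1e9);  /* 48 GB/s */
    return 0;
}
```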

The cost advantage is why you often see it in consoles (PS2, Gamecube, XBOX 360).
Um, it really isn't any cheaper to go with eDRAM, as it makes the GPU die vastly bigger. Just look at a shot of the GS die, depending on revision, 50-60% of the core is DRAM. Bigger chip = more expensive chip due to fewer chips/wafer and more chips lost due to defects.

NVIDIA was able to use an external memory design (NV2A) in the XBOX to yield better graphics than PS2 and they will use the same technique in PS3.
:LOL: It's not because of the external MEMORY that xbox games typically have better graphics, pal! :LOL:

PREDICTION: PS3 GPU will offer better performance than the XBOX 360 GPU.
Don't you rather mean, "WISHFUL THINKING: PS3 GPU..."

We have no facts to base such a guess upon, it's much too soon for that. While cell has been demo'd and its features have been (fairly) well explained, nothing at all has been said about the GPU save that it's based on a next-gen NV design (from the PC world, presumably) and that NV is designing it together with Sony. So any GUESSES that it will be faster/slower/shorter/taller than ATi's chip in x360 are just that: GUESSES.

SUMMARY: Sony has clearly learned from the design decisions it made for PS2 and has made appropriate improvements to the PS3 design.
LOL, I could have summarized it in fewer words for you: "PS3 is newer than PS2, therefore it is faster". The End! :D
 
I personally think that what he stated is for the most part true. At least for what we know about it. Why take offense to his take on the design of the PS3?
 
Great first post, dcforest! Overall, I think your conjectures were well thought out. The asymmetry of the VU's must have made MIPS budgeting on the PS2 a royal pain. If your initial design assumptions don't pan out, it's going to be very hard to re-partition what you've already developed for VU0 and VU1.

Some may like to selectively nit-pick your conjectures, but it's much easier to criticize than to propose.
 
Shrike_Priest:

My point was not that Megahertz is the only key to performance, but rather that it is an important metric along with how many cores and/or functional units you put in a design, and that Sony focused more on having competitive MHz in CELL than they did in EE. The trend in CPU design today is to move to multi-core. Most of the multi-core designs are slightly lower clock rates than single-core designs due to thermal considerations, not because MHz doesn't matter anymore. All multi-core designers still try to achieve the maximum MHz they can in a given silicon process.

To give an example, CELL has 1 PPE and 8 SPE's: 9 cores in total that yield 256 Gigaflops of Single Precision performance at 4GHz (per the Sony/IBM/Toshiba presentation). What if Sony had decided that MHz no longer matters and decided to make the CELL clock rate 2x the EE (600MHz)? All of a sudden, the performance doesn't look very impressive. Now, that's an extreme example, but hopefully you get my point. Sony focused very heavily on having a high clock rate in CELL (the Sony/IBM/Toshiba presentation noted it was a fully custom design) to go with the multi-core design in order to yield the best performance.
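
To put numbers on that example: the headline single-precision figure is just cores x flops-per-cycle x clock, so cutting the clock cuts the figure proportionally. The 8 flops/cycle per SPE below is the usual 4-wide fused multiply-add assumption, and I'm ignoring the PPE's own VMX contribution for simplicity:

```c
/* Headline single-precision throughput = SPEs x flops/cycle x clock.
 * Assumes 8 flops/cycle per SPE (4-wide fused multiply-add); the PPE's VMX
 * unit is ignored for simplicity. */
#include <stdio.h>

int main(void)
{
    const int    spes            = 8;
    const int    flops_per_cycle = 8;
    const double full_clock      = 4.0e9;   /* the announced 4 GHz          */
    const double slow_clock      = 0.6e9;   /* the hypothetical 2x-EE clock */

    printf("At 4 GHz  : %6.1f GFLOPS\n", spes * flops_per_cycle * full_clock / 1e9);  /* 256.0 */
    printf("At 600 MHz: %6.1f GFLOPS\n", spes * flops_per_cycle * slow_clock / 1e9);  /*  38.4 */
    return 0;
}
```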


Acert93:

If the R520 could use EDRAM to give ATI better performance than NVIDIA, I'm sure ATI would do so. I would be very surprised if the R520 (when it shows up for the PC) has any EDRAM, or the G70 for that matter.

Guden Oden:

You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.
 
You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.

The hell? His points are quite valid if I may say so myself....
 
Paul said:
You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.

The hell? His points are quite valid if I may say so myself....

He just said that he did make some valid points. If you look at the first line from his post though, he insults the post right from the start. If dcforest doesn't want to converse with him, then that's his right.
 
ondaedg said:
Paul said:
You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.

The hell? His points are quite valid if I may say so myself....

He just said that he did make some valid points. If you look at the first line from his post though, he insults the post right from the start. If dcforest doesn't want to converse with him, then that's his right.

That's just a silly excuse for having been thoroughly owned and not wanting to get owned any further.
 
dcforest said:
My point was not that Megahertz is the only key to performance
Actually it isn't any key at all, just as max revs/min of an engine isn't any indication of how fast the top speed of a car is. Actually, that's a bit of a bad analogy as with the car, the faster the engine spins the faster the car will go (assuming it's not stuck in mud or snow or such :D), while it's perfectly possible for a CPU to cycle at tremendous speed not doing anything at all whilst waiting for data to arrive...

but rather that it is an important metric along with how many cores and/or functional units you put in a design
These are all non-metrics of performance, but of course having more on a spec sheet looks more impressive to a layperson, so you might call them marketing metrics if you like. They aren't PERFORMANCE metrics however.

If the R520 could use EDRAM to give ATI better performance than NVIDIA, I'm sure ATI would do so.
1 word: die-size... Or is that two words?

Anyway, those chips will be large enough as it is, and the need to support much greater resolutions on a PC chip part means it'd be unrealistic to squeeze in 32+ MB of DRAM along with a top of the line GPU all on one die. That it would mean a sizeable performance increase if the part was designed around taking advantage of the eDRAM is indisputable, but no matter how big the speed increase, it would simply cost too friggin much to manufacture the chip if it included enough memory for PC resolutions.

On a console, hardware will be targeted towards a particular maximum TV resolution, and software will be targeted towards the hardware. PCs are too general in nature; neither will be possible. Perhaps future advances in semiconductor fabrication will allow on-chip memory also in the PC arena, but it seems that space will be used for more functional units instead if the current trend continues (and there are no signs of it slacking off I might add).
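
A rough estimate (my own arithmetic, assuming 4 bytes of colour plus 4 bytes of Z/stencil per sample and ignoring textures, the front buffer and any compression) of why PC resolutions blow any realistic eDRAM budget:

```c
/* Rough framebuffer footprint: width x height x samples x (colour + Z/stencil).
 * Assumes 4 bytes of colour and 4 bytes of Z/stencil per sample; textures,
 * front buffer and compression are ignored. */
#include <stdio.h>

static double fb_mb(int w, int h, int samples)
{
    return (double)w * h * samples * (4 + 4) / (1024.0 * 1024.0);
}

int main(void)
{
    printf("640x480,   no AA: %5.1f MB\n", fb_mb(640,  480,  1));   /* ~2.3  MB */
    printf("1280x720,  4xAA : %5.1f MB\n", fb_mb(1280, 720,  4));   /* ~28.1 MB */
    printf("1600x1200, 4xAA : %5.1f MB\n", fb_mb(1600, 1200, 4));   /* ~58.6 MB */
    return 0;
}
```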

an ad hominem argument
Calling a post "crap" isn't an ad-hominem argument. Sorry, but you simply made too many factual and/or deduction errors to call your post anything else. If it's you who wrote it originally, that is. I mean, where on earth did you get the idea xbox gets better graphics from having off-chip memory? :oops:

and thereby disqualified yourself from further discourse in this thread.
:LOL: You don't seem qualified enough for "discourse" anyway. *shrug*
 
The guy's first post, and you guys jump over him.

Give the guy some slack, or at least be polite with your responses.

Speng.
 
dcforest said:
...
SUMMARY: Sony has clearly learned from the design decisions it made for PS2 and has made appropriate improvements to the PS3 design. Again, no design is "perfect" or the "best" and the PS3 design has had to make its share of tradeoffs (cost, size, power, performance, & time-to-market) as well. Over time, we will learn where the "weaknesses" and "bottlenecks" in the PS3 architecture are and learn what improvements Sony should make in PS4.

Okay, this is how I basically see this. PS1 was Sony's first entry into the market and was obviously very successful at becoming market leader.

However, Sony's success allowed them to also take more risks with exotic architectures for PS2. They did some great things with the architecture but made compromises too. I believe now that they have 2 generations under their belt, they will be more experienced at getting a better balance for PS3.

A simple analogy would be that if you had a beginning (PS1) and an end goal (PS3), and a middle (PS2), drawing a straight line with these three points would be difficult! :p i.e. the middle, PS2, takes the scenic route but gets to its destination, PS3, safely! :p
 
However, Sony's success allowed them to also take more risks with exotic architectures for PS2. They did some great things with the architecture but made compromises too. I believe now that they have 2 generations under their belt, they will be more experienced at getting a better balance for PS3.
the same could have been said about nintendo when they were preparing to launch the n64. but in the end nintendo's quest for the perfect balance led to compromises like cart-based media and the tradeoff of processing sound via the cpu. nintendo had a great track record in hardware compromise up until that point just like sony has, but they fumbled.

i'm not predicting the downfall of sony. in fact after watching the xb360 launch i think i heard sony exhale. i'm just pointing out that anything can happen between generations.
 
Not disagreeing with you Jaws... but what you said got me thinking.

I would look at the PS3 CPU and Xbox 360 CPU as really the first steps in a new era of sorts--the multi-threaded era. Just as the PS1, N64, and SS were the first 3D-era consoles (we now enter the 3rd), this next generation will mark the first console era with multicore as the standard (I say standard because they are not the first, of course). This is a really big turn in how developers and hardware makers look at a system and games.

In that regard, I believe we will see a lot of inefficiency and bottlenecks that were not anticipated, just as we saw in the first 2 gens of the console 3D era. It will take a while of actually seeing how developers are able to USE hardware designed with the concepts of multithreading and streaming... once there is a good understanding of what people are doing with these chips and seeing where they excel and where they fall short, we will see them change them up to better meet their actual uses.

Right now I see big pros and cons to the CELL and xCPU.

CELL is obviously a new concept and really designed from the ground up to be treated as a streaming processor. Of course, as a first entry there are a lot of unknowns (will the SPE cache sizes be big enough? Will the SPEs be utilized or will there be too many stops? Was the focus on FP a good one?). It will obviously do a lot of game-oriented tasks like physics and animation very well.

xCPU has the benefit of having 3 standard PPC cores and fairly familiar VMX units. I think having standard cores can be a big advantage in certain situations (no need to pigeonhole the code into stuff an SPE can handle), and the 115GFLOPs is not bad. But the negative is pretty clear: This is like 3 cores glued together. Unlike the CELL, which is really designed around the concept of streaming, the xCPU seems to be more of an evolution of the standard CPU pushed into a multithreaded environment.

In the end I think CELL-like designs are probably the better idea (Intel has the same concept on their roadmaps). It will be interesting, at least to me, to see what is better in the short term. While it will vary game to game, it will be interesting to see which chip is most efficient, less of a pain, and can minimize the negatives while capitalizing on the positives in different situations.

So we can connect the points... each new console is one step closer to a vision.

I think that is really true about the PS3. The CELL is really a token first step in many ways (one dang fine one at that). If I were MS, I would be more worried about PS4. What happens when Sony cranks up the speed to 8GHz in 6 years and is able to fit 8 (eight!) 1:8 4th generation CELLs into the PS4?

The design TOTALLY works because a single CELL is designed around the concept of streaming, and in principle should not lose as much efficiency as we are seeing in normal multicore chips. And developers get the benefit of basically working on the same machine with PS4--just with a ton more SPEs to use. So when they are pushing an 8-SPE PS3 to the max in 5 years, they will be ecstatic when Sony hands them a machine with SPEs that are not only 2x as fast each, but number 64 instead of 8. That machine will be in the 4TFLOPs range (hypothetically, with that type of setup). And I am sure there will be a lot of tweaks and improvements on the design.

This will be a big advantage in development times and good software should come quickly... not to mention backwards compatibility will be pretty easy.

In that regard, I look at the PS3 as the glimmer in the eye... the storm will hit with PS4. And I am not sure how MS will respond. The CELL will be like a platform, so tools will be more of a gradual extension and not a break and transition.
 
see colon said:
However, Sony's success allowed them to also take more risks with exotic architectures for PS2. They did some great things with the architecture but made compromises too. I believe now that they have 2 generations under their belt, they will be more experienced at getting a better balance for PS3.
the same could have been said about nintendo when they were preparing to launch the n64. but in the end nintendo's quest for the perfect balance led to compromises like cart-based media and the tradeoff of processing sound via the cpu. nintendo had a great track record in hardware compromise up until that point just like sony has, but they fumbled.

i'm not predicting the downfall of sony. in fact after watching the xb360 launch i think i heard sony exhale. i'm just pointing out that anything can happen between generations.

I agree with your points. But I do think the blunder of the N64 days is less likely as the industry has matured since those days. And no one can afford to be as stubborn as Nintendo was then...
 
Does Sony think they have lessons to draw from the PS2?

After all, it was wildly successful in the face of not one but two very well-financed competitors.

Isn't their public stance that the PS2 hasn't come close to being fully tapped by most games yet?

Are their design considerations being guided by a desire to correct what they perceive to be the PS2's weaknesses?

Or are they guided by the goal of fitting in as much technology as they can into some manufacturing budget, along with attention to key strategic goals (e.g. Blu-Ray)?
 
dcforest said:
Guden Oden:
You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.

Just to be sure, he didn't attack you personally; he criticised your post. That's fine and dandy no matter which 'discourse' department you're coming from. ;)

And Oden, give this guy a break would you?! It's his first post. Okay, a few bits weren't perfect, but do you see many first posts like this? Isn't Beyond3D meant to be a *Good Community*? ;)
 
JF_Aidan_Pryde said:
dcforest said:
Guden Oden:
You make some interesting points, but unfortunately you decided to start your post with an ad hominem argument and thereby disqualified yourself from further discourse in this thread.

Just to be sure, he didn't attack you personally; he criticised your post. That's fine and dandy no matter which 'discourse' department you're coming from. ;)

And Oden, give this guy a break would you?! It's his first post. Okay, a few bits weren't perfect, but do you see many first posts like this? Isn't Beyond3D meant to be a *Good Community*? ;)

Yep, no ad hominem to be found. And I think that Guden was so harsh only because he thought this was pasted from some disreputable forum.
 