How would you react if the PSX3 spec read 240~480 GFLOPS?

Morris here, though I go to Hunterdon a lot. ^_^

Back on things, though, why would a PS3 need to care about PC gaming at all? As I recall, consoles not only don't have to worry about PC games; they are also more lucrative. Certainly it'd be great to see CELL somehow able to emulate them, but its own brand of gaming would be one of the main ATTRACTIONS to adopting it for other uses, should the software and support be there. Being unable to play PC games wouldn't suddenly make people gasp and say "why, no!" (Meanwhile, we'll probably be seeing more games like FF11 in this generation and the next emphasizing console/PC gaming online anyway.)

You'd also be surprised just how many people wouldn't notice the difference between MS Office and Open Office if they didn't see the boot-up screen. (A few might register some visual differences.) Personal users don't care that much and don't use that much from the programs, and so long as the right one loads easily, works easily, and doesn't fail utterly in the tasks they have for it, the general public won't care.

CELL likely isn't going to get picked up by any OEMs or make any large thrusts instantly, but it can establish itself where its backers are already quite present, and in those areas (likely to be pushed primarily by IBM) where the architecture provides the most advantages. Everything else "depends." Depends on how well-received the chips are, how supported they get, how adopted they may be at home... Hell, a lot in MANY areas may depend on just what Longhorn is carting along behind it and how much growth and exposure alternatives get in the meanwhile.

Lotta funky going on, lots that CAN go on, and little information right now as to what direction anyone is taking. So while it's fun to conjecture, things are going to stay at "depends" for quite some time yet.
 
...

Did Alpha have many problems with FX!32 because it didn't natively support 80-bit floats like x86 does? Of course not.
FX!32 didn't exactly blow away native x86 in benchmarks. People will not put up with second-rate x86 performance, as Itanium and Transmeta have proven.

As for the PPC 615, it WAS specifically designed for x86 compatibility.

Also, PC CELL could differentiate itself from other CELL implementations
What PC CELL????

One is, in part, the Emotion Engine ( > VS 3.0 ), so?
So it is INCOMPATIBLE with DX code.

You can then go and expose CELL more through OpenGL ( DOOM III "engine"/Quake IV "engine" codepath if they approached John Carmack nicely ? )
How many OpenGL games do you know??

or emulate DirectX using CELL's APUs and rasterizing logic: again, the shaders can be JIT-recompiled and optimized by the driver for CELL
Easier said than done. Try emulating PSX2 successfully first, then we will talk.

You sure seem overly optimistic today Deadmeat.
I can be confident because of the accuracy of my past predictions.
 
Re: ...

Vince said:
DeadmeatGA said:
I can be confident because of the accuracy of my past predictions.

LMAO! I'll post again when I stop laughing.

I do not mean to be offensive, but if his modesty were water, we could turn the Sahara desert into a large-scale, fertile paradise.

Deadmeat,

First,

How accurate was your prediction that all developers were going to acknowledge the failure you saw in the garbage-like PlayStation 2 architecture and jump ship to the Xbox and GCN?

Let me answer for you, you were not very accurate.

And that is just the tip of the iceberg, not the exception that proves the rule.

Also, easier said than done? A guy already did it: a single coder, and his software rasterizer, optimized for x86, runs 100x faster than Microsoft's own Reference Rasterizer (and he didn't even test it on a Pentium 4 EE at 3.2 GHz).
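The kind of single-coder software rasterizer being discussed boils down to a surprisingly small core loop. Here is a minimal, hypothetical sketch of half-space triangle rasterization in Python, purely illustrative and not that coder's actual implementation; a real x86-optimized rasterizer would use incremental edge stepping and SIMD rather than re-evaluating edge functions per pixel:

```python
# Hypothetical sketch of a software rasterizer core: fill a triangle by
# testing each pixel center against the triangle's three edge functions.

def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram (a->b, a->p); its sign says which
    # side of the directed edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(w, h, v0, v1, v2):
    """Return the set of (x, y) pixels whose centers lie inside the triangle."""
    covered = set()
    min_x = max(0, int(min(v[0] for v in (v0, v1, v2))))
    max_x = min(w - 1, int(max(v[0] for v in (v0, v1, v2))))
    min_y = max(0, int(min(v[1] for v in (v0, v1, v2))))
    max_y = min(h - 1, int(max(v[1] for v in (v0, v1, v2))))
    area = edge(*v0, *v1, *v2)  # twice the signed triangle area
    if area == 0:
        return covered          # degenerate triangle covers nothing
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge functions agree in sign with the area.
            if area > 0:
                inside = w0 >= 0 and w1 >= 0 and w2 >= 0
            else:
                inside = w0 <= 0 and w1 <= 0 and w2 <= 0
            if inside:
                covered.add((x, y))
    return covered
```

The per-pixel work here is a handful of multiply-adds, which is exactly why this kind of loop maps well onto wide SIMD units, whether x86 SSE or, in principle, something like an APU.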

About the Alpha and IPF comment: FX!32 is a respected technology the Alpha guys invented; the failure of the Alpha business was not technical, but due to management and lack of funding.

Oh, and IPF is moving from specific hardware support for x86 to a FULL software emulation layer for x86, which goes against your conclusions.

What is PC CELL? If you had noticed the use of "could," you would understand that I was implying that if a PC-optimized implementation of the CELL architecture were designed and launched, we could tweak certain parts of the microprocessor to ease the code-morphing/JIT recompilation process, all without negating the standardized nature of the APUs.
 
...

How accurate was your prediction that all developers were going to acknowledge the failure you saw in the garbage-like PlayStation 2 architecture and jump ship to the Xbox and GCN?
Necessity of survival. No one likes to work on a broken architecture like PSX2, but Sony's marketing men and developer buyout fund are saving the day....

Also, easier said than done? A guy already did it: a single coder, and his software rasterizer, optimized for x86, runs 100x faster than Microsoft's own Reference Rasterizer (and he didn't even test it on a Pentium 4 EE at 3.2 GHz).
Meaning what, some coder's software DX pipe runs 100x faster than MS's own reference DX pipe??? That kind of gain is fairly common in software engineering. Big deal; software pipes can never match the raw performance of a hardware pipe anyway.

About the Alpha and IPF comment: FX!32 is a respected technology the Alpha guys invented; the failure of the Alpha business was not technical, but due to management and lack of funding.
Same deal with PSX2 in opposite sense. The PSX2 as an architecture is a disaster, but the management and multi billion-dollar funding saved it. Had it been a MS or Nintendo console, it would not have survived.

If you had noticed the use of "could," you would understand that I was implying that if a PC-optimized implementation of the CELL architecture were designed and launched, we could tweak certain parts of the microprocessor to ease the code-morphing/JIT recompilation process, all without negating the standardized nature of the APUs.
1. JIT code is slow; FX!32 was slow, Transmeta is slow, and Java JIT-compiled code is slow.
2. The APU is a good deal slower than the Pentium 4 on general-purpose code. Emulating faster hardware with slower hardware is not a good idea.
 
Necessity of survival. No one likes to work on a broken architecture like PSX2, but Sony's marketing men and developer buyout fund are saving the day....


This necessity of survival, marketing, and developer buyouts are nothing new. If you don't count on these from the start, don't even enter this business...

But still, we have some great developers having fun with this freak of a machine....

software pipes can never match the raw performance of a hardware pipe anyway.


not in 2019 with quantum computing 8)
 
Necessity of survival. No one likes to work on a broken architecture like PSX2, but Sony's marketing men and developer buyout fund are saving the day....

Hehe, nobody has to work on a PSX2 because it doesn't exist.... :p Also, developers usually aren't swayed by marketing men, and I don't know too many who have been "bought out" as you say...

Big deal, software pipes can never match the raw performance of a hardware pipe anyway.

Depends on how you define a hardware and a software pipe... ;)

Had it been a MS or Nintendo console, it would not have survived.

You have that little faith in MS and their capital? Astonishing!

1. JIT code is slow; FX!32 was slow, Transmeta is slow, and Java JIT-compiled code is slow.

Actually, FX!32 was quite fast once several passes had been run, because it was basically running native by then... Also, I've seen Java code occasionally outrun compiled C code (mainly on string-parsing stuff), chiefly because it could perform optimizations at run time based on information that is not available to statically compiled code.
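The warm-up behavior Archie describes can be sketched as a toy translation cache: interpret a piece of code a few times, then keep a compiled form around for every later run. Everything below (the class name, the hotness threshold) is hypothetical and only illustrates the mechanism, not how FX!32 actually worked:

```python
# Toy model of FX!32-style profile-then-translate execution: code starts on
# the slow path and, once "hot," runs from a cache of translated versions.

class TranslatingRuntime:
    HOT_THRESHOLD = 2  # runs before we bother "translating" (compiling)

    def __init__(self):
        self.cache = {}        # source text -> compiled code object
        self.hot_counter = {}  # source text -> times executed

    def run(self, source, env):
        self.hot_counter[source] = self.hot_counter.get(source, 0) + 1
        code = self.cache.get(source)
        if code is None and self.hot_counter[source] > self.HOT_THRESHOLD:
            # "Translate" the hot code once and reuse it from now on,
            # analogous to FX!32 translating hot x86 code to native Alpha.
            code = self.cache[source] = compile(source, "<hot>", "exec")
        if code is not None:
            exec(code, env)    # fast path: pre-translated
        else:
            exec(source, env)  # slow path: re-parse and compile every time
        return env
```

After a few calls with the same source, every subsequent run skips the parse/compile step entirely, which is the sense in which such systems get "basically native" over time.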
 
I agree with Deadmeat that PSX2 might be architectural garbage, but the PlayStation 2 isn't, so let's stay on point.

Team Silent has great coders, but they are human, not gods; they cannot pull the proverbial rabbit out of the hat. I suggest you "play" Silent Hill 3 on a decent TV and keep in mind that the hardware it runs on is 1+ years older than your beloved Xbox.

I think I know what you are going to say, but I will just avoid stating the obvious this time.

FX!32 was hardly slow, and when you have code that repeats certain loops over and over, the time from interpreting the code to having it basically recompiled and native on your architecture is not that large; overall it is something we can dismiss (as "t" grows large enough).
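The "as t grows large enough" argument is simple amortization arithmetic: a one-time translation cost T is paid off once the loop runs often enough. A back-of-the-envelope helper, with purely illustrative numbers:

```python
def break_even_iterations(translate_cost, t_interp, t_native):
    """Smallest iteration count n at which paying the one-time translation
    cost wins: translate_cost + n * t_native < n * t_interp."""
    return translate_cost / (t_interp - t_native)

# Example (made-up units): translation costs 1000, an interpreted iteration
# takes 10, a translated iteration takes 2 -> translation pays off after
# 125 iterations, and every iteration beyond that is pure gain.
```

For any hot loop, n quickly dwarfs the break-even point, which is why the up-front translation cost can be dismissed in the long run.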

Archie already covered this point though, so I am not adding anything new there.

Also, by what seems to be nVIDIA's definition of nVFLOPS (the number of FLOPS an architecture would need to achieve to do in software what the NV30's FP pipes do in hardware), if you reach or beat 200 GFLOPS you should be able to emulate what the NV30 does in real time.

512 GFLOPS or more for a CELL chip that would retail at around $399-499 each is not that impossible (warning [I know you, Deadmeat ;)]: do not infer the price and performance of the CELL processor used in the PlayStation 3 from what I just stated).

Achieving 200 GFLOPS out of a peak of 512 is less than 50% efficiency (roughly 39%, to be exact).
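For what it's worth, the 39% figure checks out; a quick sanity check of the arithmetic (the GFLOPS numbers are this post's own assumptions, not measured values):

```python
peak_gflops = 512           # assumed peak for the hypothetical CELL chip
target_gflops = 200         # quoted nVFLOPS target for NV30-class output
efficiency = target_gflops / peak_gflops  # 200/512 = 0.390625, about 39%
```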

Would users be so unhappy to have a CELL processor that achieves NV30 levels of performance in software? (It would be an even more positive situation than that: things like texture filtering would still be done in hardware by the visualization-optimized PEs, which would leave more OPS and FLOPS for other tasks.)

On normal modern processors, this coder's "virtual" DirectX implementation was achieving decent fps performance; you would agree that in terms of graphics-processing performance a CELL processor is potentially much faster and more efficient than a common Athlon XP or Pentium 4C?

You probably will not acknowledge that, but such reluctance is a given when talking about you and SCE-related products.

That processor would allow DX9 levels of performance without needing a specialized DX9 GPU: two birds with one stone, and Longhorn would be happy too.

With DirectX becoming more and more generalized and GPUs evolving ever further in the direction of CPUs, it is not difficult to see why processors like CELL could be the ones that enable software developers to go back to software rendering instead of the hardware-implemented features found in most GPUs, which, even by the admission of people like John Carmack, are only a stop-gap solution until CPUs get fast enough to resume their role in the rendering process.

You might not agree with that, but I already see architectures like the NV30 as quite neat general-purpose CPUs (not as fast as a Pentium 4 in some tasks, but faster in others [some company implemented a database program basically on the NV30, and it ran faster than on that company's Pentium 4 system])...

Imagine an NV50 evolved along the lines of the NV30 architecture and you won't see something MUCH dissimilar from CELL.
 
Hmm, seriously, a CELL PC is VERY optimistic IMHO. CELL workstations and PDAs are more realistic, though I expect at best we will see CELL in Sony's main hardware and home electronics (TV/DVD/BR).

Don't forget that by the time CELL goes into mass production, PC hardware will have already evolved. Bigger CPUs, bigger bits, bigger speed, bigger bandwidth, etc. The Athlon XP, P4C, and NV30 will be eeeeeeh-sloooooooow by then.
 
Chip architectures can slip just about anywhere they either make sense to, or are supported enough in. Most consumers don't know Intel from AMD... there's a good chance they recognize Apple a lot more. "Windows" and "Pentium" are, of course, pretty intrinsic at this point, but they still don't really know what program they use and how--just the icon/button to click and how to do a few things inside. Give them something that "works" and is in general recognizable, and plenty of consumers won't really care where it comes from--they just need to spend some time getting used to it. Throw a word processor, an email client and a web browser on something and BAM! That's a "computer" to most people. (Ok, maybe it needs Solitaire on there somewhere, too. :p ) Everything else is just headroom. Extra features people go "neat!" at, and then only use if it's easy enough or they have reason to do more than once a month.

Certain companies would go after certain features on the chip, then build the software they need for their purposes, but "Joe Consumer" doesn't program squat--they just use what is available. Hell, many seem to think "Microsoft" people and "Apple" people log onto different internets. Folks are pretty oblivious to hardware AND software, and they basically use A) what is easier to understand, and B) what is right in front of them. (And many times C) what is cheapest.) If a machine sits by their TV and lets them play cool games, and also tosses in some recording functions, some PVR use, some music/video playing and storage, and can also let them use a browser and answer email...? Well hey, why not?

The software end just needs to EXIST and be easily adopted by the public, and that just needs support from interested parties. There's plenty of tech leakage from one sector or another, and everything else just goes with the flow, or perhaps provides a bit of a boost to get things moving faster.

There are certainly no "nevers"--nor probably even "highly unlikelies"--just companies, platform dedication, and properly reading (and guiding) the public.
 