Playstation 3: Hardware Info and Price

Arwin said:
But much more importantly, the Cell architecture is way more suited for showing webpages than traditional computers are. It's one of the core features the Cell was designed for. Rendering and downloading a page is something that can best be done in a number of different threads. Heck, Cells will be used for heavyweight servers, never mind a few clients. It's all about streaming data from multiple channels. The Cell was built to work together with other Cells over networks, including the internet, from its very core.

I agree with Shifty Geezer. And further, I put the rumour/PR/buzz/bullshit that said Cells would help each other over the network and the internet in the same category as Saddam Hussein buying PS2s to drive missiles, or the Pentium 4 accelerating the internet. There's nothing magical in the Cell that makes it more network friendly. It has a NUMA link so you can put another Cell next to it, like two Opteron 2xx, but that's irrelevant.

And likely, only the PPE would be used in a heavy web/streaming scenario. The PPE is worse than any recent single-core PC CPU. If you need to chew through a big number of such threads, you want a Sun Niagara (eight cores, 4-way SMT, and only one FPU) rather than a PPE (one core, 2-way SMT) with seven SPEs made for lots of SIMD FP32 computations.
 
Blazkowicz_ said:
And likely, only the PPE would be used in a heavy web/streaming scenario. The PPE is worse than any recent single-core PC CPU. If you need to chew through a big number of such threads, you want a Sun Niagara (eight cores, 4-way SMT, and only one FPU) rather than a PPE (one core, 2-way SMT) with seven SPEs made for lots of SIMD FP32 computations.
These days Intel is trying to integrate a TCP/IP offload engine into its chipsets, and AMD is adding a co-processor interface to its CPUs.

http://www.research.ibm.com/journal/rd/494/kahle.html
Real-time responsiveness to the user and the network

From the beginning, it was envisioned that the Cell processor should be designed to provide the best possible experience to the human user and the best possible response to the network. This “outward” focus differs from the “inward” focus of processor organizations that stem from the era of batch processing, when the primary concern was to keep the central processor unit busy. As all game developers know, keeping the players satisfied means providing continuously updated (real-time) modeling of a virtual environment with consistent and continuous visual and sound and other sensory feedback. Therefore, the Cell processor should provide extensive real-time support. At the same time we anticipated that most devices in which the Cell processor would be used would be connected to the (broadband) Internet. At an early stage we envisioned blends of the content (real or virtual) as presented by the Internet and content from traditional game play and entertainment. This requires concurrent support for real-time operating systems and the non-real-time operating systems used to run applications to access the Internet. Being responsive to the Internet means not only that the processor should be optimized for handling communication-oriented workloads; it also implies that the processor should be responsive to the types of workloads presented by the Internet. Because the Internet supports a wide variety of standards, such as the various standards for streaming video, any acceleration function must be programmable and flexible. With the opportunities for sharing data and computation power come the concerns of security, digital rights management, and privacy.
 
Sis said:
TCP/IP has, what, a 10% bandwidth overhead and negligible execution overhead?
Intel doesn't think so.
http://www.intel.com/technology/ioacceleration/
Intel® I/O Acceleration Technology (Intel® I/OAT) moves data more efficiently through Intel® Xeon® processor-based servers for fast, scaleable, and reliable network performance.

Performance

A primary benefit of Intel® I/OAT is its ability to significantly reduce CPU overhead, freeing resources for more critical tasks. Intel® I/OAT uses the server’s processors more efficiently by leveraging architectural improvements within the CPU, chipset, network controller, and firmware to minimize performance-limiting bottlenecks. Intel® I/OAT accelerates TCP/IP processing, delivers data-movement efficiencies across the entire server platform, and minimizes system overhead.
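
For what it's worth, a quick back-of-the-envelope calculation puts the "10%" figure above in context for the bytes-on-the-wire side. This is only a sketch: it assumes a standard 1500-byte Ethernet MTU and headers without options, and it ignores ACK traffic, retransmissions and link-layer framing.

```python
# Rough bandwidth overhead of TCP/IP headers on a bulk transfer.
# Assumes a standard 1500-byte Ethernet MTU and headers without options;
# ACKs, retransmissions and link-layer framing are ignored.

MTU = 1500          # bytes per IP packet on Ethernet
IP_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

payload = MTU - IP_HEADER - TCP_HEADER          # 1460 bytes of user data
overhead = (IP_HEADER + TCP_HEADER) / MTU       # header share per packet

print(f"Payload per packet: {payload} bytes")
print(f"Header overhead:    {overhead:.1%}")    # ~2.7%
```

So for bulk transfers the header cost is closer to 3% than 10%; what I/OAT and TCP offload engines actually go after is the per-packet CPU work (interrupts, checksums, buffer copies), not the bytes on the wire.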
 
Sis said:
Sure: "Intel® I/OAT uses the server's processors more efficiently". The client is doing a fraction of the kind of work that a server would be, and I thought we were talking about the PS3 as a client browser.
Oh, you meant it only for clients? Since Arwin's original argument was about servers and Blazkowicz_ brought up Niagara, which is obviously a server processor, I thought your second sentence was just an additional question.

Anyway, P2P "clients" such as BitTorrent act as both client and server. Or Xbox Live participant nodes, if you like a more gaming-related example.

EDIT: one more example: the PS3 acts as a DLNA server that sends media to the PSP and other home-network appliances.
 
one said:
Oh, you meant it only for clients? Since Arwin's original argument was about servers and Blazkowicz_ brought up Niagara, which is obviously a server processor, I thought your second sentence was just an additional question.

Anyway, P2P "clients" such as BitTorrent act as both client and server. Or Xbox Live participant nodes, if you like a more gaming-related example.

EDIT: one more example: the PS3 acts as a DLNA server that sends media to the PSP and other home-network appliances.
Arwin's statement was "But much more importantly, the Cell architecture is way more suited for showing webpages than traditional computers are... Rendering and downloading a page is something that can best be done in a number of different threads." This is not about web servers, but web browsing.

Your other point is interesting, though I would wager that the TCP/IP overhead of participant nodes in an online game is still a fractional cost of the overall performance. Wringing any extra perf out of it will likely not generate noticeable or relevant gains. Even from the DLNA standpoint, how many clients are you expecting to hit the PS3? But again, in this scenario it certainly wouldn't hurt to have that overhead removed, particularly if DLNA serving + simultaneous gameplay is enabled.

No, I just don't see any need for anything beyond the PPE for client-based tasks. But certainly introducing DLNA into the mix clouds the issue.
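
As a crude sanity check of that wager (every figure below is an assumption for illustration, not a measurement of any real title, and the cost factor is just the classic "1 CPU-Hz per bit/s" TCP/IP rule of thumb):

```python
# Crude estimate of the TCP/IP processing load of an online-game client.
# Packet rate, packet size and the 1 Hz-per-bit/s rule of thumb are all
# assumed figures for illustration only.

packets_per_sec = 30        # assumed client update rate
bytes_per_packet = 200      # assumed payload plus headers
cpu_hz_per_bit = 1.0        # classic "1 GHz per Gbit/s" rule of thumb

bits_per_sec = packets_per_sec * bytes_per_packet * 8
cycles_per_sec = bits_per_sec * cpu_hz_per_bit

ppe_clock = 3.2e9           # PS3 PPE clock
share = cycles_per_sec / ppe_clock

print(f"Game traffic:    {bits_per_sec / 1e3:.0f} kbit/s")
print(f"Networking load: {share:.4%} of one 3.2 GHz core")   # ~0.0015%
```

Even if those assumptions are off by a couple of orders of magnitude, the networking cost stays a tiny fraction of one core; the picture only really changes for the streaming/DLNA-server cases mentioned above.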
 
Blazkowicz_ said:
Another example: geeks put Linux on their 486 DX33 to make a file/printer/firewall/schmoo server.
So I think I am officially lost in this thread. Can you restate how the PS3 is relevant to this comment? If a 486 is good enough for a file server, what does the PS3 bring to the table? And are others suggesting that the PS3 is now going to be used in a server environment?
 
Sis said:
Arwin's statement was "But much more importantly, the Cell architecture is way more suited for showing webpages than traditional computers are... Rendering and downloading a page is something that can best be done in a number of different threads." This is not about web servers, but web browsing.
But Arwin added after those statements
Arwin said:
Heck, Cells will be used for heavyweight servers, never mind a few clients. It's all about streaming data from multiple channels. The Cell was built to work together with other Cells over networks, including the internet, from its very core.
suggesting Cell's server abilities can be leveraged to aid web browsing.
 
Shifty Geezer said:
But Arwin added after those statements
suggesting Cell's server abilities can be leveraged to aid web browsing.
Ah, ok. I thought Arwin was using that to back up the statement that Cell will be good at multithreading tasks. I didn't realize this thread had turned into a discussion of the viability of using the PS3 as a web server.
 
Blazkowicz_ said:
Another example: geeks put Linux on their 486 DX33 to make a file/printer/firewall/schmoo server.
You too have switched to the client-usage argument, so it's not about "a heavy web/streaming scenario" now? ;) For even the simplest client usage, the network load has to be as small as possible, since you play games on the same CPU, as Sis writes.
Sis said:
Your other point is interesting, though I would wager that the TCP/IP overhead of participant nodes in an online game is still a fractional cost of the overall performance. Wringing any extra perf out of it will likely not generate noticeable or relevant gains. Even from the DLNA standpoint, how many clients are you expecting to hit the PS3? But again, in this scenario it certainly wouldn't hurt to have that overhead removed, particularly if DLNA serving + simultaneous gameplay is enabled.
TCP/IP is just one of the protocols involved. If you think we are going to receive the same kind of online gaming service for the next 10 years as is provided now, the networking workload may remain a fractional cost. But even today 100Mbit FTTH (fiber-to-the-home) service is available in some areas of the world, and what is expected of network gaming will grow richer along with DRM platforms and media encoding formats in the future.
 
Sis said:
So I think I am officially lost in this thread. Can you restate how the PS3 is relevant to this comment? If a 486 is good enough for a file server, what does the PS3 bring to the table? And are others suggesting that the PS3 is now going to be used in a server environment?

I just want to say that you don't need a nuclear explosion simulator to move a few million zeroes and ones around.
 
Sis said:
It's official: I have now heard it all. The Cell actually makes internet browsing better.

So internet browsing is a single-threaded affair, then, that does not benefit at all from a multi-core approach? How much internet/browser-related programming have you done? At least some?

Wait--I spoke too soon. Now I've heard it all. The day a gigabit connection speeds up internet browsing is the day we have HD on-demand.

Instead of speaking, or listening, try learning how to read. I said that with a gigabit connection, the PS3 is pretty decently future-proofed. I didn't say it would help now, although there are in fact people with internet connections that exceed 100Mbit/s (try talking to students in a university dorm), so a 1000Mbit/s connection would already help. But when it comes to future-proofing, note that in my area, the maximum speed offered by just my provider went up from 4Mbit last year to 20Mbit this year.

Of course not all webservers are going to provide this speed, and if everyone surfed at this speed no one would actually reach that speed any of the time, but things are changing and fibre-optic networks are rapidly expanding.

Anyway, it would behove you to give me some credit (thanks to the other people in this thread that did) and not immediately assume my brain is one-celled. ;)
 
Outside of compute farms you are not going to see CELL in servers, least of all in webservers. Context-switching SPEs is extremely expensive, so virtualizing them is, for all practical purposes, impossible. That makes them 100% unfit for server workloads.

Cheers
 
Arwin said:
So internet browsing is a single-threaded affair, then, that does not benefit at all from a multi-core approach? How much internet/browser-related programming have you done? At least some?



Instead of speaking, or listening, try learning how to read. I said that with a gigabit connection, the PS3 is pretty decently future-proofed. I didn't say it would help now, although there are in fact people with internet connections that exceed 100Mbit/s (try talking to students in a university dorm), so a 1000Mbit/s connection would already help. But when it comes to future-proofing, note that in my area, the maximum speed offered by just my provider went up from 4Mbit last year to 20Mbit this year.

Of course not all webservers are going to provide this speed, and if everyone surfed at this speed no one would actually reach that speed any of the time, but things are changing and fibre-optic networks are rapidly expanding.

Anyway, it would behove you to give me some credit (thanks to the other people in this thread that did) and not immediately assume my brain is one-celled. ;)

Meh, my university sucks; dorms are capped at 1Mbit per second (easily reached), and there appears to be a 10Mbit/s cap (usually 6 to 8Mbit/s sustained) on the wireless.
 
Arwin said:
So internet browsing is a single-threaded affair, then, that does not benefit at all from a multi-core approach? How much internet/browser-related programming have you done? At least some?
Nope, haven't done any browser work. But I do know that the Cell is not needed to speed up a web browser. A web browser is likely just as efficient with software threads as hardware threads, since the threads are waiting for server responses. If they aren't waiting for server responses, then they're rendering, which would be a serial operation, AFAIK. And as I said elsewhere in this thread, the PPE is perfectly suitable for that task.
Instead of speaking, or listening, try learning how to read. I said that with a gigabit connection, the PS3 is pretty decently future-proofed. I didn't say it would help now, although there are in fact people with internet connections that exceed 100Mbit/s (try talking to students in a university dorm), so a 1000Mbit/s connection would already help. But when it comes to future-proofing, note that in my area, the maximum speed offered by just my provider went up from 4Mbit last year to 20Mbit this year.
Of course not all webservers are going to provide this speed, and if everyone surfed at this speed no one would actually reach that speed any of the time, but things are changing and fibre-optic networks are rapidly expanding.
At work, transferring large files from machine to machine, I've never seen the 100Mbit connection maxed out. Nowhere close, in fact. Future-proofing is great and I'm all for it. I mean, it can't hurt, right? But let's not act like it helps in any meaningful way.
Anyway, it would behove you to give me some credit (thanks to the other people in this thread that did) and not immediately assume my brain is one-celled. ;)
I think I've rightly called out your post. In my opinion, your assertions are wrong. But for what it's worth, I wasn't questioning your intelligence, only your optimism.
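
A minimal sketch of that point about browser threads mostly waiting on the network (the URLs and the thread-per-resource layout below are placeholders, not how any real browser is structured): the workers spend nearly all of their lifetime blocked on I/O, so ordinary software threads on one core are enough, and extra hardware threads or SPEs would have nothing to chew on.

```python
# Sketch of "downloading a page in a number of different threads":
# each worker blocks on the network for almost all of its lifetime.
import threading
import urllib.request

# Placeholder list standing in for a page's images, scripts and stylesheets.
urls = [
    "http://example.com/",
    "http://example.com/style.css",
    "http://example.com/logo.png",
]

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        print(f"{url}: {len(body)} bytes")
    except OSError as exc:
        print(f"{url}: failed ({exc})")

threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
```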
 
Gigabit certainly helps at my workplace... it cuts Ghost times from 15-20 minutes to 3-5 minutes on images that are a few gigabytes big.
 
Tahir2 said:
Gigabit certainly helps at my workplace... it cuts Ghost times from 15-20 minutes to 3-5 minutes on images that are a few gigabytes big.
What are the specs of the system on the 100Mbit machines versus the 1000Mbit machines? Or did you just swap out NICs and see the increase?
 
Umm, it's not quite as simple as that... we use networking to load OSes onto customer machines using the Windows OPK software and Ghost.

The specs vary widely; the Gigabit controllers are generally faster... today I even used an old 10Mbit card. That was a bit painful when dealing with such large transfers.
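
Rough arithmetic on why the jump between Fast Ethernet and Gigabit is that large; the image size and the effective-utilisation figure are assumptions, and real Ghost runs are further limited by disk speed, decompression and multicast setup:

```python
# Rough transfer-time comparison for a disk image over 100Mbit vs Gigabit.
# Image size and effective link utilisation are assumed figures.

image_gb = 4                          # "a few gigabytes"
image_bits = image_gb * 8e9

for name, link_bps in [("100 Mbit", 100e6), ("Gigabit", 1000e6)]:
    effective = 0.6 * link_bps        # assume ~60% effective throughput
    minutes = image_bits / effective / 60
    print(f"{name}: ~{minutes:.0f} min on the wire")
# -> ~9 min over 100 Mbit, ~1 min over Gigabit, before Ghost/disk overhead
```

The gap between these idealised wire times and the 15-20 versus 3-5 minutes reported above is plausibly the disk, decompression and Ghost overhead on the machines at each end.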
 