How was PS2 thought to work ???

If I were a PS2 or GC developer, I would put a lot more of my resources into finding a way of doing detail texturing as efficiently as on Xbox. Is that possible? And if it is, why isn't it done?
Killzone does that, AFAIK.
 
marconelly! said:
If I were a PS2 or GC developer, I would put a lot more of my resources into finding a way of doing detail texturing as efficiently as on Xbox. Is that possible? And if it is, why isn't it done?
Killzone does that, AFAIK.

yeah but we still don't know to what extent. even halo, a launch title for xbox, already had detail texturing that made its textures look better than pretty much everything i've seen on other consoles. don't know whether ps2 has enough resources (mainly memory really) to replicate that kind of detail...
 
Detail texturing has limited uses. That's the main reason you don't see it in that many games. It's not that it can't be done on PS2 or Cube (and I know TS2 at least has it on Cube). Of course the larger amount of RAM in the Xbox certainly also makes a difference.
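For reference, one common form of detail texturing is a "modulate 2x" combine of a base texel with a tiled greyscale detail texel (this is the operation Direct3D exposes as MODULATE2X; the sketch below is illustrative plain Python, not any console's actual pipeline):

```python
def modulate2x(base, detail):
    """Combine one base texel with one detail texel (values in 0..1).
    The detail map is centred on 0.5 so it is brightness-neutral on
    average: values above 0.5 brighten the base, below 0.5 darken it."""
    return min(1.0, max(0.0, base * detail * 2.0))

# A flat base texel picks up high-frequency variation from a tiny
# tiled detail map without needing a larger base texture:
base_texel = 0.6
detail_map = [0.25, 0.75, 0.5, 0.6]   # 0.5 entries leave the base unchanged
shaded = [modulate2x(base_texel, d) for d in detail_map]
print(shaded)
```

Because the detail map tiles at a much higher frequency than the base texture, the surface gains close-up variation at the cost of only a tiny extra texture.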
 
function said:
So Shenmue 2 quite blatantly didn't push many (or possibly any) of the DC's advanced features. It may not even have used mip mapping anywhere, just as the very first Naomi game I saw, House of the Dead from 1998 seemed to lack. Texture aliasing central. (I know mip maps take 33% more memory, but with experience this is obviously something that Sega's developers chose to use heavily).
Agreed.
IIRC, house of the dead was using point sampling! :oops: Why?!! Well I was told that some of the developers preferred the look of point sampling over texture filtering(!). Either that or there was a flawed belief that there was a performance benefit to using it <shrug>

Sadly I never saw Shenmue 2, but Shenmue 1 didn't even use the DC's hardware support for shadows! (I thought at first it did but then saw anomalies which proved it didn't. They appeared to be burning extra polys explicitly drawing them.) I suspect this was because Shenmue was originally targeting a previous system.
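The 33% figure mentioned in the quote above falls straight out of the geometric series of mip levels; a quick sketch:

```python
# Memory cost of a full mip chain relative to the base level.
# Each level has 1/4 the texels of the one above, so the chain
# sums to 1 + 1/4 + 1/16 + ... which converges to 4/3 of the base.

def mip_chain_texels(base_size):
    """Total texels for a square texture plus its full mip chain."""
    total, size = 0, base_size
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 256
overhead = mip_chain_texels(base) / (base * base) - 1.0
print(f"mip chain overhead: {overhead:.1%}")  # close to 33%
```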

london-boy said:
no reason to call him a fonboi...
Be fair! I said 'troll', because of his deliberate baiting attempts. A fanboi (far more insulting IMHO) is surely just ignorant of all systems other than his favoured machine and is unwilling to change that state of affairs. It appeared to me that YPO did not fall into that category.

YPO said:
"I don't believe that I said that DC was superior at multi-texturing - only questioned the assumption that DC was inferior"

So what exactly are you saying? That the 2 machines are equal in that aspect? That's the only other choice left.
No, not at all. They are, for want of a better word, orthogonal. I personally would like a machine with large amounts of brute force fillrate (eg PS2 style) with an elegant and rich set of blending functions (eg DC).
Like I said, I'm no techie so I can't argue with you on the technical front.
... this sounds like more Princess Bride to me :D ...
Maybe DC has more hardwired function or whatnot and the PS2 has brute force. In the end of the day, the games that are out showed PS2's better at it. Not only in multi-texture effects but shadows, particles, etc.
We keep going in circles here.
1) The DC development ended too early to see what could have been done in later games. As I've said, and I feel with some justification that this is the case, I'm pretty qualified to know what is "under the bonnet" (hood for those stateside) in the CLX2.
2) I have not said that DC is superior to PS2. The PS2 has N* as much silicon so it would "bleedin' well want to be better". How much better in terms of visual "wow" is an uncertain factor. Obviously developers should play to the strengths of a system and avoid the weaknesses if possible.
3) As for multi-texturing, there seems to be some assumption in some postings (not necessarily from yourself) that because it wasn't used much in DC it was therefore difficult to use. I know of a multi-texturing PC game that was ported to the DC in a weekend.

"One reason for the frequent use of multi-texturing on PS2 is that it is rather lacking in the texture memory department. One way of increasing the apparent texture information is to use multi-texturing avoiding correlation between the applied textures."

That's just weak.
I was giving an example of why its use might have been accelerated on the PS2. There is no reason why the same could not have been used on DC if so desired - the bump mapping HW support in DC is one such example.
Crazyace said:
as well as the cool per pixel sorting ( very slow, but amazing that it is there at all ) It isn't perfect though..
What makes you think that? (Both in terms of the speed and your "not perfect" comment)? The sorting is N* faster than the texturing and it was deterministic in the ordering.

function said:
I think the comment about "scratching the surface" meant that there were unused (or largely unused) features that could have had a big impact on how games looked, rather than the DC actually being able to secretly outperform a PS2. [icon_wink.gif]
That is exactly what I meant. There wasn't a magic "turbo" register that enabled 10 more texturing engines. :)

Fafalada said:
Simon wrote:
No, 2 tris|passes for trilinear. The Aniso 'mode 1' was single pass but needed more cycles.

So I was right on this one after all [icon_razz.gif] Anyway, the question I wanted answered is whether or not you had to setup multiple triangles to draw multiple texture passes? What I'm getting at here, is that if you could really do single pass multitexture you could do quite a passable 'software' trilinear without messing with multiple triangle setup...
Nah it had to be 2 tris.

BTW, Thanks for the other info.
 
It is a pity IMG/SEGA didn't have polybump at the time. With all due respect to the uber-skillz of console devs, I think most weren't quite ready to write their own automatic conversion tools from high-poly to low-poly models with normal maps at the time.
 
cybamerc said:
Detail texturing has limited uses. That's the main reason you don't see it in that many games. It's not that it can't be done on PS2 or Cube (and I know TS2 at least has it on Cube). Of course the larger amount of RAM in the Xbox certainly also makes a difference.


Actually this was discussed some months ago here

Extract:
Squeak said:
We’ve seen detail textures in most games on xbox and in some games on Gamecube, but curiously never on PS2, is it impossible for some reason?
Tagrineth said:
Probably takes too much memory to be used effectively on PS2...
Squeak said:
I hardly think that it’s memory limitations holding PS2 back from doing detail textures.
What I regard as detail textures, are small, high-resolution 8 bit or 4 bit monochrome textures, with alpha for fade in out and blending.
I don’t think S3TC would do much good on such a texture, as there wouldn’t be much bit reduction to be done. S3TC has big limitations with alphablending too.
Fafalada said:
Anyway, you're right about it not being a memory problem.
Detail mapping is one of those things that just fit very nicely on VUs, including stuff that can't be done efficiently on other hw like drawing only relevant detail layers as dictated by distance etc.
However...
if you're not already using finely tesselated geometry, you stand a good chance of running into precision problems with UVs on GS(which can result in swimming texture artifacts and worse), so you face forcing your artists to tesselate low poly stuff (say, walls) by hand or complicate your shaders by including automatic subdivision as required... neither of which is particularly desired.
Although it's also true that above kinda fits with GS just generally liking small polygons much better then big ones anyhow... - but it doesn't change the fact that it's an annoying thing :p

I don't agree about detail texturing having limited uses. It can be used on almost any surface with advantage, if done right.

But then, detail texturing isn't just one technique, but one name for a variety of methods to get extra detail without the cost of a hi-res texture.
Take a look at Enclave on xbox for example, its textures are made in an entirely different way from say, Halos. They look very detailed, until you get close to them, then they appear smutty and "dirty", with clearly distinguishable black "detail" texels.
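Fafalada's precision point above can be made concrete with a sketch. Assuming texture coordinates snap to a 4-bit fraction (the GS's UV registers are commonly described as carrying 4 fractional bits), a small rounding error in a base-map coordinate gets multiplied by the tiling factor of the detail layer:

```python
# Sketch: per-vertex UV quantization error amplified by detail tiling.
# With 4 fractional bits, coordinates snap to 1/16-texel steps; a
# detail layer tiled N times over the same polygon sees that error
# scaled by N, which is what shows up as swimming texels.

FRAC_BITS = 4
STEP = 1.0 / (1 << FRAC_BITS)   # 1/16 texel

def quantize(u):
    return round(u / STEP) * STEP

u = 0.53                    # intended base-map coordinate, in texels
err = abs(quantize(u) - u)  # at most half a step (1/32 texel)

tiling = 16                 # detail map repeated 16x across the polygon
detail_err = err * tiling   # same error, measured in detail-map texels

print(f"base error {err:.4f} texels -> detail error {detail_err:.4f} texels")
```

A ~0.03-texel error in the base map becomes nearly half a texel in the detail map, which is why finely tessellated geometry (shorter interpolation spans) tames the artifact.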
 
Simon F said:
IIRC, house of the dead was using point sampling! :oops: Why?!! Well I was told that some of the developers prefered the look of point sampling over texture filtering(!). Either that or there was a flawed belief that there was a performance benefit to using it <shrug>

HOD2 uses point sampling most of the time, but it also uses bilinear in some particular cases, so yes, i'm inclined to think it was a pursued effect (say, it gave them the feeling of 'dirt noise' -- a thing specifically coded for in SH3). btw, Simon, there's nothing wrong with devs choosing to use an 'inferior' filtering mode for a base texture - there are purposes for that too. and a sidenote inferring from that: i, for one, would be majorly pissed off if some 'too clever' hw/api switched to aniso behind my back when i had specifically asked for point sampling (on a base tex, that is).
 
IMO the only thing that really separates xbox games from PS2 games (and to a lesser extent GC games) is detail texturing. Sure some xbox games use a little bump-mapping, and others multisample AA, but not to such an extent as to really make a difference in overall appearance.
It's the detail texturing that makes xbox games look so much better/different, again IMO.
If I were a PS2 or GC developer, I would put a lot more of my resources into finding a way of doing detail texturing as efficiently as on Xbox. Is that possible? And if it is, why isn't it done?
While that's a nice observation, I believe the primary concern is working with the target platform - not trying to mimic the look of another platform.
Which isn't to say detail texturing isn't a good idea on the said platforms ;), both stand to benefit from memory saving it offers given their 'lack' of memory relatively speaking.
Anyway, it's not that PS2 games never used detail texturing, it's mostly high profile titles that steer away from it - eg. Drakkan PS2 was definitely doing it, but how many people here even as much as saw the game, let alone played it? (feel free to go back to your previous notion of accusing big names of incompetence if you wish :LOL:)

london-boy said:
dont know whether ps2 has enough resources (mainly memory really) to replicate that kind of detail...
Performance issues aside, detail texturing is foremost a memory saving technique.

Simon,
I also tend to think HOTD used point sampling on purpose - partially because R5 on PS2 did the same thing for car textures (but only car textures) magnification - very much points to an artistic choice (well either that, or it was a bug they didn't catch given how rushed the game was :LOL: ).
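A rough back-of-envelope illustrates the memory-saving claim; the sizes below are illustrative, not taken from any actual game:

```python
# Rough memory comparison at 8 bits per texel: a single high-res
# texture vs. a low-res base plus a small tiled detail map that
# supplies the high-frequency content.

def texels(w, h):
    return w * h

hi_res = texels(1024, 1024)                            # 1 MB at 8bpp
base_plus_detail = texels(256, 256) + texels(64, 64)   # 68 KB at 8bpp

print(hi_res // base_plus_detail)  # roughly a 15x saving
```

The trade-off, of course, is that the detail layer repeats, so it only works for surfaces whose fine structure is noise-like or periodic.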
 
Squeak:

> I don't agree about detail texturing having limited uses.

It works for noise and repeating patterns but it's no substitute for a hi-res texture.

> But then, detail texturing isn't just one technique

I've never heard of any alternative definition?

> Take a look at Enclave on xbox for example, its textures are made in
> an entirely different way from say, Halos.

They are just regular color maps. Perhaps they have more detail but that doesn't necessarily relate to texture size.

> then they appear smutty and "dirty", with clearly distinguishable
> black "detail" texels.

Er... they just have high contrast to make the details stand out more but they're still just regular base textures.
 
cybamerc said:
Squeak:
1:
> I don't agree about detail texturing having limited uses.

It works for noise and repeating patterns but it's no substitute for a hi-res texture.


2:
> But then, detail texturing isn't just one technique

I've never heard of any alternative definition?

3:
> Take a look at Enclave on xbox for example, its textures are made in
> an entirely different way from say, Halos.

They are just regular color maps. Perhaps they have more detail but that doesn't necessarily relate to texture size.

> then they appear smutty and "dirty", with clearly distinguishable
> black "detail" texels.

Er... they just have high contrast to make the details stand out more but they're still just regular base textures.
1: Of course not, but if you can't afford a hi-res texture, it's better than just plain blur.

2:Maybe I didn’t put that right? What I meant was that it can be used in many different ways, for example on things far away, close up, with or without base texture, in different colours or resolutions and so on.

3:Well, you’ve got to admit that there is an unusual amount of black or very dark texels in the maps? Could be a result of S3TC compression, but doesn’t really look like the kind of artefacts that S3TC would produce.
Or, it could be a YUV like compression that was talked about earlier.
 
Squeak:

> Well, you’ve got to admit that there is an unusual amount of black or
> very dark texels in the maps?

I think they just boosted the black levels to add a bit of relief to the textures. A lot has been said about the texturing in Enclave but I think it's obvious that the textures aren't particularly hi-res because, as you mention yourself, when up close to them they get blurry. Or rather, they may be hi-res but if so they are stretched across large surfaces.

They do have a lot of detail however. I think a lot of them are based on photographs. And that's probably why the various texels stand out so much.
 
PS2 can be a competent multitexturer with its raw speed when resources are allocated for it; DOT3-type bump mapping could even be used as a showcase effect in a level of some game. On the other hand, multitexturing was one of the principles of PowerVR2DC's design.

The Dreamcast was designed to be able to sustain a majority of its hardwired blending, filtering, and anti-aliasing effects together. If it truly could only sustain a minority of effects, as many of the system's games seemed to indicate, the PowerVR2DC design would be exceedingly non-optimal: wasting die area, logic, and transistor budget that could otherwise have been used to add pipelines and fillrate to the chip. ImgTech's engineers weren't loading the chip with impractical features to spice up the spec sheet for marketability in some hype-driven consumer marketplace; they were delivering to SEGA for evaluation, whose engineers would gauge its true performance through testing.

I don’t think the DC comparisons that were being made in this thread were in direct relation to the PS2. They were assertions about what the DC was capable of, in relation to its own performance.

cthellis42:
I'm just skeptical of anyone who thinks there was magic coming... Slow advancements and some new tricks--same as always.
It's understandable to think Dreamcast had potential for only gradual improvement left, as is usual with established systems. However, much of its feature set was ahead of its time, and developers would only have been able to deliver the graphical look originally intended by the hardware's design once they became familiar with utilizing those hardware-accelerated features. Conventions of the typical DC look, which too often didn't push much past bilinear and mip-mapping as mentioned, would be defied with more natural filtering helping to hide characteristic mip lines, and potentially even hardware support for anti-aliasing helping to smooth raw images. Through bump mapping or any sort of possible texture combining, object detail could've been given a new dimension in shading with bumps, highlights, or deterioration effects, showing off a dynamic personality independent from the base texture. These new properties would define a generation of software distinct from the previous one.
I mainly figure if the DC had more "free" capabilities, even if they were really HARD to program for, ONE of the developers would have been pushing insanely for it, just to show off to the others. And that, of course, would leave a title of note sitting around for us to look at and take into account while continuing to ponder lamentables...
Surprisingly, the system did provide some examples from its notoriously neglected feature set. Le Mans used anisotropic filtering; note how the grass textures alongside the track retain a consistent integrity at all distances from the camera... no characteristically jarring blur levels in view. Self-shadowing appeared in titles like World Series Baseball 2K1 and 2K2, Crazy Taxi 1 and 2, and Sonic Adventure 2, and better shading became more prevalent all the time as general modifier volumes were put to use. Multitexturing was shown at both 30 and 60 fps, with racers like the aforementioned Le Mans and Tokyo Xtreme Racer 2 demonstrating it as expected.

Dot product bump mapping was even used in some Dreamcast games like the Tomb Raider titles (I think the effect saw use mainly in Windows CE games... perhaps the Kamui libraries didn't facilitate it as well with regards to memory management?) As lazy as the versions were otherwise, Eidos was proud to brag in the press releases to get picked up by media sites:

"Features not possible before on console versions will be available through your TV, including bump mapping, environment mapping and volumetric fogging."
http://www.computerandvideogames.com/r/?page=http://www.computerandvideogames.com/news/news_story.php(que)id=10973
"State of the art graphics: single-skin technology, bump-mapping, environment-mapping."
http://www.sega.com/pc/catalog/SegaProduct.jhtml?PRODID=209
But as most point out, many times providing new "stuff" will come at a cost elsewhere, and so the net effect would be...? Slow, gradual steps, I imagine. Same as always.
The trend in later games was actually showing the opposite. Engines like Le Mans's, for instance, were achieving ever higher polygon counts, lighting complexity, and even audio quality once they finally became familiar with and put to use the advanced multitexturing, anisotropic filtering, etc. Potential is locked into hardwired features, and it's clear from the trend that developer unfamiliarity - not design compromise - was the limiting factor for why that potential went largely untapped in the DC's young library.

zidane1strife:
Well the one thing that really did it for me, with regards to DC gphx... was the draw distance, most games I played featured draw distance similar to that of previous gen systems...

It was as if we had beefed up the gphx of old gen. games...
Unfortunately, a lot of DC games actually were little more than accelerated PlayStation/N64 titles. For an enlightening look at the kind of resource allocation such titles got, read the Postmortem on the popular Dreamcast version of Tony Hawk's Pro Skater at Gamasutra:

"We figured that since for this project we had three programmers (Sean hadn't come on board yet), a Playstation dev system, and a lot of DC familiarity, we have it running in just one month."

"Early on we decided we weren't going to use the Dreamcast's wonderful floating-point math because we didn't want to introduce errors into the code before we even had it up and running."

"We estimated how much we'd have to do to get past limitations in the code that prevented the Dreamcast from hitting its full stride and it ran out into several weeks. Since we weren't doing a fighting game, all that 60FPS would have gotten us would have been a nice marketing bullet point."

"With new power comes new responsibilities, such as fully taking advantage of all those polygons by implementing extra joints in the characters (at the shoulders and necks for example), weighted vertices, and cloth and hair systems. Our other Dreamcast games have these features but we were locked into the animations from the Playstation version of Tony Hawk so we couldn't add shoulders nor could we move pivots for more natural bends. And the skaters in Tony Hawk go into extreme poses while performing stunts. For instance, when they crouch really low on their skateboards their knees get pointy, and when they put their arms over their heads their shoulders dip and they look like balloon animals. We did the best we could by hacking in weighted vertices on the knees but the hack made the shoulders look even worse so we left them the way they were."

"We had similar problems with tools -- we never did fully understand Neversoft's Max plug-in and were limited in what we could do to improve the levels because of it -- as well as with some of the source code."

http://www.gamasutra.com/features/20000628a/fristrom_01.htm (might need to register to view)

wazoo:
Yes, I was referring to this thread. Anyway, if you put so much faith into that, then you have to trust their claim about 20M pol/sec for GPC on the PS2, which puts the PS2 well above the DC
In polygon rate, sure. Anisotropic filtering wasn’t used on their PS2 follow-up, and progressive scan was lost I believe. There were also some other complaints regarding the image quality for the way it displayed.

YPO:
Strangely their spec for the PS2 version stated 2 million pps.
That wouldn’t be unprecedented. PS2 ports of DC games commonly result in downgrading. The DC has local access (no streaming needed) to strong texturing capabilities and more room to spare for the framebuffer from its display RAM. It will naturally be more optimal for certain workloads. The PS2 versions typically sacrifice vertical resolution to 448 lines and lose the progressive scan capabilities.
I think you've left out the keyword "engine" here Deadmeat. Just like the GT3 engine was capable of 10 million pps when the actual game didn't push that number.
Melbourne House’s games used multitexturing, which meant geometry was resent, so it makes sense that the engine would push more polygons than could be seen in on-screen geometry.
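The framebuffer pressure behind this exchange is easy to see from the numbers. A sketch of the GS's 4MB budget, assuming 32-bit colour, a Z buffer also at 4 bytes per pixel, and a front/back/Z allocation; real titles often used 16-bit formats precisely to claw this memory back:

```python
# GS eDRAM budget sketch: why 640x448 was a common PS2 target.
VRAM = 4 * 1024 * 1024  # 4 MB of GS embedded DRAM

def framebuffer_bytes(w, h, bpp=4, buffers=3):
    """front + back + Z buffers by default, all at bpp bytes/pixel."""
    return w * h * bpp * buffers

for h in (448, 480):
    used = framebuffer_bytes(640, h)
    print(f"640x{h}: {used // 1024} KB used, "
          f"{(VRAM - used) // 1024} KB left for textures")
```

At 640x480 the leftover texture window shrinks by another ~240 KB versus 640x448, which is one plausible reading of why ports dropped lines rather than texture quality.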

marconelly!:
I'd say you haven't taken a look at DC games in a while, because right now, there are games in every genre on PS2 that outrun their respective competitors on DC in every single area imaginable. Maybe DC had better IQ and textures on average, but if you compare top tier games (which only makes sense IMO) I think there's no argument, really.
Even among the best programmed PS2 games, full frame rendering and proscan will be compromised at times. Standard inclusion can’t be consistent with engines like the Baldur’s Gate: Dark Alliance type, a top example of PS2 development surely. Developer Lost Boys frankly stated that RAM limitations forced them to choose between their textures and being able to include those IQ standards in the much-hyped Killzone. There were not such contentions in display memory on DC - great textures never came at the expense of framebuffer IQ concerns, and progressive scan was standard for the whole library (and actually supported in the games along with native VGA.) And even in the best PS2 games like GT3 and SH3, it’s soundly behind the DC in getting rid of annoying aliasing through texture filtering and mip-mapping usage.

Fafalada:
It's a reasonable argument, but begs the question why DC which happens to be even more memory limited didn't resort to it more often.
I wonder if DC developers didn’t find it easier to get their desired texturing budget than PS2 devs who have to worry about efficiently maintaining as large a stream of imported textures as possible and making sure they’re the right ones needed for rendering.
 
I have a question about the 4MB VRAM of the GS. What do you exactly send there ??? Display Lists and textures ? Or just framebuffers already build by VU1 computation ?
 
ShinHoshi said:
I have a question about the 4MB VRAM of the GS. What do you exactly send there ??? Display Lists and textures ? Or just framebuffers already build by VU1 computation ?



nonono... the frame buffer IS contained in the 4MB. the VU1 only sends through transformed vertices and other things including textures (not sure about that actually. do the textures go through the VU or do they have another path? path3?)
 
and SH3, it’s soundly behind the DC in getting rid of annoying aliasing through texture filtering and mip-mapping usage.

Well, IIRC SH3's texture aliasing was only slightly serious in one area (the alley.)

Most areas featured nice IQ, etc.
 
nonono... the frame buffer IS contained in the 4MB. the VU1 only sends through transformed vertices and other things including textures (not sure about that actually. do the textures go through the VU or do they have another path? path3?)

So the framebuffer is constructed in the 4MB VRAM from the information (DL and vertex coordinates, textures and texture coordinates) that you send there after previous calculation. Right?

And then, what's the point of having a front and back framebuffer ?
 
And then, what's the point of having a front and back framebuffer ?

for double buffering.

So the framebuffer is constructed in the 4MB VRAM from the information (DL and vertex coordinates, textures and texture coordinates) that you send there after previous calculation. Right?

if by constructed you mean written to and updated, then yes. the FB is always there to be flushed to the output stream.

I can imagine if you wanted to swap it out to main RAM it would be tough (and pointless) to implement (though I don't think it's possible to write back from the GS).
 
Thanks for the explanation, really.


for double buffering.

Double buffering ??? So the first buffer is used as a temporary one and the second for the final results of the frame? I mean, you use the rear framebuffer as scratch and then update the front, and when you are finished the GS paints this front framebuffer.

Is it this way?
 
don't know the specifics for the PS2 but the front buffer is flushed to the output stream whilst updates can be done on the back buffer.

traditionally you'd prolly have 2 full-size buffers and alternate the pointer between them to specify the active drawing buffer; I'm not clear whether this is the case for the PS2 tho.
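The pointer-swapping scheme described above can be sketched like this (plain Python standing in for what the hardware does with its display and draw buffer base addresses):

```python
# Minimal double-buffering sketch: render into the back buffer while
# the front buffer is scanned out, then swap the roles each frame.

class DoubleBuffer:
    def __init__(self, size):
        self.buffers = [bytearray(size), bytearray(size)]
        self.front = 0  # index of the buffer currently displayed

    @property
    def back(self):
        return self.buffers[1 - self.front]  # the hidden drawing buffer

    def draw(self, data):
        self.back[:len(data)] = data         # render off-screen

    def swap(self):
        self.front = 1 - self.front          # flip at vsync

fb = DoubleBuffer(4)
fb.draw(b"\x01\x02\x03\x04")  # frame N drawn while frame N-1 displays
fb.swap()                     # frame N becomes visible
print(fb.buffers[fb.front])
```

The display never shows a half-drawn frame because writes only ever touch the hidden buffer; the "swap" is just changing which buffer the output stage reads from.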
 