It seems the benchmarks work in mysterious ways.
Someone was going to figure this out sooner or later. There will be more of these kinds of 'tests' appearing from time to time.
I do agree with you on the fill rate aspect now. I don't believe the bottleneck is straight computational performance either, though. Note how the 1070 is more than twice as fast as the 1060 despite having less than 50% more shader resources on an identical architecture. It seems the benchmarks work in mysterious ways.
Slug is distributed as a static library that has a basic C++ interface, and full source is included with every license. Slug is platform agnostic and runs on Windows, Mac, Linux, iOS, Android, and all major game consoles. It can be used with Vulkan, Direct3D 10/11/12, OpenGL 3.0–4.6, Metal, and WebGL2. The API is fully documented in the Slug User Manual.
The primary function of Slug is to take a Unicode string (encoded as UTF-8), lay out the corresponding glyphs, and generate a vertex buffer containing the data needed to draw them. When text is rendered, your application binds the vertex buffer, one of our glyph shaders, and two texture maps associated with the font. One texture holds all of the Bézier curve data, and the other texture holds spatial data structures that Slug uses for efficient rendering.
Slug can render each glyph as a single quad. This has the advantage that it's easy to clip to an irregular shape or project onto a curved surface in a 3D environment. It also means that vertex buffer storage requirements do not depend on the complexity of the glyphs. There is also an optional optimization, using a tighter polygon with between three and six vertices, that can be enabled to increase speed at moderate and large font sizes.
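To make the per-pixel approach concrete, here is a minimal CPU-side sketch (my own illustration, not Slug's actual implementation, which uses precomputed banding data stored in textures and the GLSL shader linked below): coverage is decided by casting a horizontal ray from the pixel and summing signed crossings with the outline's quadratic Bézier segments.

```python
# Simplified sketch of shader-based glyph coverage: a point is inside a
# glyph outline if a horizontal ray from it has a nonzero winding number
# against the outline's quadratic Bezier segments. Illustrative only.

def bezier_x(p0, p1, p2, t):
    """Evaluate the x component of a quadratic Bezier at parameter t."""
    u = 1.0 - t
    return u * u * p0[0] + 2.0 * u * t * p1[0] + t * t * p2[0]

def winding_contrib(p0, p1, p2, px, py):
    """Signed crossings of the ray {x > px, y = py} with one segment."""
    # Solve y(t) = py, which is quadratic in t:
    # a*t^2 + b*t + c = 0 with the coefficients below.
    a = p0[1] - 2.0 * p1[1] + p2[1]
    b = 2.0 * (p1[1] - p0[1])
    c = p0[1] - py
    roots = []
    if abs(a) < 1e-12:
        if abs(b) > 1e-12:          # segment is linear in y
            roots = [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc >= 0.0:
            s = disc ** 0.5
            roots = [(-b + s) / (2.0 * a), (-b - s) / (2.0 * a)]
    w = 0
    for t in roots:
        # Half-open interval avoids double-counting shared endpoints.
        if 0.0 <= t < 1.0 and bezier_x(p0, p1, p2, t) > px:
            dy = 2.0 * a * t + b    # crossing direction: sign of dy/dt
            w += 1 if dy > 0.0 else -1
    return w

def point_inside(segments, px, py):
    """Nonzero winding rule over all quadratic segments of the outline."""
    return sum(winding_contrib(p0, p1, p2, px, py)
               for (p0, p1, p2) in segments) != 0
```

A straight edge can be represented as a degenerate quadratic whose control point is the edge's midpoint, so a whole outline fits this one segment type. The real shader does the same crossing count per fragment, but against curve data fetched from the font's textures.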
This is an engineer's version of an infinite number of monkeys banging away on typewriters to recreate Shakespeare. I didn't know this library uses the GPU for rendering text. It is shader based.
https://sluglibrary.com/
http://jcgt.org/published/0006/02/02/
The paper and source
http://jcgt.org/published/0006/02/02/paper.pdf
http://jcgt.org/published/0006/02/02/GlyphShader.glsl
Launch PS3s had PS2 hardware (in Europe).
At least the 60GB model had it (not sure about the 40GB one, or the 80GB).
But some definitely had it; I have one of those too.
The 40GB didn't have card readers and some other items, I believe.
Japan and US models had full PS2 hardware for BC. The EU launch model had partial hardware. I don't remember if it was the PS2 CPU or GPU that got removed at EU launch.
Every PS3, even the ones without any PS2 hardware, received full software BC; with hacks you could use it for every title.
Yeah, the CPU was removed on CECH-C/E, as well as the Rambus memory.
ps2_netemu (the .self that was used for PSN Classics) was far, far away from full BC even with community configuration patches. Even for the stuff that is fully playable, there might be various audio/visual glitches.
Sony has been a long-time contributor to LLVM. It's a shame Microsoft doesn't also use LLVM, because that would lead to better performance for all code produced by the compiler.
Some compiler stuff from SN Systems.
LLVM traditionally compiles faster; it does not necessarily make the code run faster.
Well, I disagree. Competition is good also for compilers. Compile times definitely do not translate to runtime performance, but I've not seen any comparison of code performance between the two compilers used by Sony and Microsoft. What I'm saying is: the more people using, and contributing to the improvement of, compiler tech, the better for everybody.
So everyone should be using the GNU toolsets like GCC? No, I'm very happy for anybody not concerned with code performance to use something else and not contaminate the code base of something that is.
Not surprised. The methodology of Gamer's Nexus has been questioned by some, and not only because of thermal pads. We could see in his setup that the cables were preventing the shield from being correctly positioned.
Hm... weird, he is getting better temps than Gamer's Nexus, topping out at 97°C on the memory when covering the console with a blanket. My guess is his results won't get much traction, since they are good.