Xbox Series X [XBSX] [Release November 10 2020]

My first result using Google:
https://www.gov.uk/government/publications/project-griffin/project-griffin

" Project Griffin is the national counter terrorism awareness initiative for business produced by NaCTSO to protect our cities and communities from .."

lol :LOL:


Well, using my google-fu...

Could it be this?

Project Griffin??

Escaping the Build Trap: How Effective Product Management Creates Real Value
By Melissa Perri

An excerpt from the book

[image: excerpt from the book]


This Project Griffin was a Netflix project from when they wanted to build a streaming device; it was canned.

@eastmen Are you referring to the streaming stick that was coming last gen, was canned, is still alive, revamped for xCloud, etc.?

PS: yes, I'm bored
 
Griffin:
a mythical creature with the head and wings of an eagle and the body of a lion, typically depicted with pointed ears and with the eagle's legs taking the place of the forelegs.
What does Griffin symbolize:
In symbolism, the griffin combines the symbolic qualities of both the lion and the eagle. It is the king of birds and lord of the air united with the king of beasts and lord of the earth. One legend involving griffins is the Ascension of Alexander the Great.
If we focus on the thematic element of combining the best qualities of two items, I am going to make a wild guess, with my broadest hopes, that Griffin is an Xbox Game OS for Windows 10 PC hardware. Or it's a Windows OS for Xbox Series X hardware.

But that already had its own project name, and it remains to be seen if that will see the light of day any time soon.
 
If we focus on the thematic element of combining the best qualities of two items, I am going to make a wild guess, with my broadest hopes, that Griffin is an Xbox Game OS for Windows 10 PC hardware. Or it's a Windows OS for Xbox Series X hardware.
Due to the way the Prophet formulated the phrase, it's improbable that it's the OS.
And when he wrote it, he was touching his nose with his left thumb, so the options are Microsoft Golf 2021 or something linked to Gear.
 
Due to the way the Prophet formulated the phrase, it's improbable that it's the OS.
And when he wrote it, he was touching his nose with his left thumb, so the options are Microsoft Golf 2021 or something linked to Gear.

Yeah, it's probably not the OS thing.
 
More info on SFS: Mark S. Grossman is actually a former employee of AMD who has been doing research on texturing and streaming since at least 1993(!). Refer to the patents "GPU TILE TEXTURE DETAIL CONTROL" and "Texture level tracking, feedback and clamping system for graphic processors" for more in-depth info. This is an effort spanning decades.
 
MS has a partial solution that's different. At first I thought it was virtual texturing in hardware, but your post describes it as selective MIP loading, which is different. Virtual texturing allows a small fraction of a whole texture, maybe a 64th, to be loaded in order to draw that part of the texture onto the visible geometry. MS's solution loads the entire texture, but at a smaller MIP level if the largest isn't needed (most of the time): the whole texture sits in RAM to draw the small part that's actually needed, just at a lower MIP level rather than full resolution.

With a 4096x4096 texture of a building at moderate range where only a small part of it is visible, virtual texturing may load a 128x128 tile, whereas MS's solution would load the 1024x1024 MIP level and draw part of that texture.

It's a solution that sits halfway between loading the whole texture and virtual texturing's loading of a single tile within the texture. I would also guess it increases texture sizes on disk (a full MIP chain adds about a third on top of the base level), as you'd need the MIP levels to be prebaked, whereas typically I think these are derived from the source texture when loaded and kept in RAM.
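To put rough numbers on that example (a sketch assuming uncompressed 4-bytes-per-texel textures; block compression shrinks the absolute figures but not the ratios):

Code:
def texture_bytes(width, height, bytes_per_texel=4):
    return width * height * bytes_per_texel

full    = texture_bytes(4096, 4096)  # whole top-level texture
mip     = texture_bytes(1024, 1024)  # the 1024x1024 MIP level MS would load
vt_tile = texture_bytes(128, 128)    # the single tile virtual texturing loads

print(f"full texture: {full / 2**20:.2f} MiB")     # 64.00 MiB
print(f"1024^2 MIP:   {mip / 2**20:.2f} MiB")      # 4.00 MiB
print(f"128^2 tile:   {vt_tile / 2**20:.2f} MiB")  # 0.06 MiB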

It goes beyond that. MS's solution has the capability to load only part of a texture, i.e. only a fragment of a MIP level rather than the entire asset. So there are savings not only on distant objects but on objects that are close as well.

This should lend itself well to virtual texturing.
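As a rough sketch of what loading only a fragment buys (assuming D3D12's 64 KiB tiled-resource granularity; the number of visible tiles is made up for illustration):

Code:
TILE_BYTES = 64 * 1024                  # D3D12 tiled-resource tile size
mip_bytes = 1024 * 1024 * 4             # the 1024^2 RGBA8 mip from earlier
tiles_in_mip = mip_bytes // TILE_BYTES  # 64 tiles in the whole level

visible_tiles = 6                       # hypothetical: only a corner on screen
loaded_kib = visible_tiles * TILE_BYTES // 1024
print(f"{tiles_in_mip} tiles total; loading {visible_tiles} of them = "
      f"{loaded_kib} KiB instead of {mip_bytes // 2**20} MiB")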

Regards,
SB
 
It goes beyond that. MS's solution has the capability to load only part of a texture, i.e. only a fragment of a MIP level rather than the entire asset. So there are savings not only on distant objects but on objects that are close as well.

This should lend itself well to virtual texturing.

Regards,
SB

Soooo, it's virtual texturing. That's literally what virtualized texturing is: just loading sparse texture data. Also, the idea of "only load the mips you need!" is kinda reckless. Anisotropic filtering, repeated texture patterns at varying distances... technically two mips should contribute to any drawn texel unless the screen-to-texel ratio is exactly perfect. Maaaybe there are some mips that aren't needed at the moment. Sure, cache them out decompressed to reserved SSD space, but a lot of the time the worst-case scenario is pretty bad, and you always optimize for the worst case.

It's like with the SSD speed: everyone's suddenly coming up with ideas the offline guys have dealt with for decades, which is to say "in core" versus "out of core" rendering. The problem with "out of core" (i.e. out of fast memory) rendering is that there's always the potential to stall, and a stall is the enemy of realtime. Honestly, perhaps proper virtualized texturing with the low mips always resident is just the way to go for every engine. Then you have understandable control over the whole pipeline, you can raytrace all you want into the lower mips without worrying about whether they're there (and hey, it's a reflection or diffuse GI, who's gonna notice?), and your primary view can have all the fancy 4K+ texturing you want.
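A minimal sketch of the two-mips point: standard isotropic mip selection takes log2 of the screen-space texel footprint, and unless that lands exactly on an integer, trilinear filtering blends the two neighbouring levels (anisotropic filtering only widens the set further).

Code:
import math

def mip_levels_sampled(texels_per_pixel):
    lam = math.log2(texels_per_pixel)    # the LOD value
    lo, hi = math.floor(lam), math.ceil(lam)
    frac = lam - lo                      # trilinear blend weight toward hi
    return lo, hi, round(frac, 2)

print(mip_levels_sampled(1.0))  # (0, 0, 0.0)  -- the one perfect case
print(mip_levels_sampled(3.0))  # (1, 2, 0.58) -- two mips contribute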
 
Microsoft's solution is virtual texturing with sampler feedback for accurate mip and tile selection, plus some hardware filters to blend between a low-resolution mip and a high-resolution mip in case the high-resolution mip is not loaded in time for the current frame. So they have some guarantee of the low-quality mip arriving on time, and then blend in the high-quality mip if it's late, so you don't notice pop-in. It should be overall more efficient at making sure they don't waste memory on pages they don't need.
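As a toy illustration of that blend (the names and per-texel framing here are mine, not the actual D3D12 sampler feedback API): sample the always-resident low-res mip, and fade the high-res mip in once it has streamed.

Code:
def lerp(a, b, t):
    return a + (b - a) * t

def sample_with_fallback(high_texel, low_texel, fade):
    """fade ramps 0 -> 1 over a few frames once the high-res mip lands."""
    if high_texel is None:    # high-res mip not streamed in yet
        return low_texel
    return lerp(low_texel, high_texel, fade)

print(sample_with_fallback(None, 0.25, 0.0))  # 0.25  (low mip only)
print(sample_with_fallback(0.9, 0.25, 0.5))   # 0.575 (mid-fade)
print(sample_with_fallback(0.9, 0.25, 1.0))   # 0.9   (fully high-res)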
 
@eastmen Are you referring to the streaming stick that was coming last gen, was canned, is still alive, revamped for xCloud, etc.?

No, read my other post.

Also, something big is coming at the end of summer / beginning of fall, and something else in time for the holidays that is not Xbox Series X.
 
@3dilettante
Just wondering what your thoughts are on the architecture makeup of the Xbox Series X?

Aside from some customizations, I expect PS5 to look exactly like this, with 1 dual compute unit disabled for redundancy per shader engine, going from 20 dual compute units to 18 in total.
As for the 64-bit memory controllers, there are 2 feeding each shader engine, and each serves 1 shader array. So right now it's pairing a 64-bit memory bus with 5 dual compute units. For PS5, I suspect one array will have 5 and the other 4.
There is going to be 1 fixed-function section per shader array.
64 x 4 is exactly the 256-bit bus to memory.

With a 320-bit bus, what the heck does XSX actually look like?
Will it be an 80-bit bus feeding 7 dual compute units per shader array?
Or does the 64-bit memory controller sort of mess things up, so they have to make a half shader engine for another 64-bit memory controller?

WRT the 5500 XT: it is 1 shader engine with 6 dual compute units per shader array. I'm not sure if that helps; it looks like you can increase the number of CUs per shader array while keeping the 64-bit bus per shader array, but it only has a 128-bit bus to memory overall. This makes sense.

So 320?
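To make the awkwardness concrete, here is just the arithmetic from the post above (assuming 16-bit GDDR6 channels and 64-bit controller groupings): 320 bits gives five 64-bit controllers, which doesn't split evenly across four Navi 10-style shader arrays.

Code:
BUS_BITS = {"5700 XT": 256, "5500 XT": 128, "XSX": 320}

for name, bits in BUS_BITS.items():
    mcs = bits // 64        # 64-bit memory controller groupings
    channels = bits // 16   # 16-bit GDDR6 channels
    print(f"{name}: {bits}-bit bus = {mcs} x 64-bit MCs, {channels} channels")
# 5700 XT: 4 MCs / 16 channels, 5500 XT: 2 / 8, XSX: 5 / 20 --
# five controllers over four shader arrays is the sticking point.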

There could be an L2 slice per 16-bit channel. The 64x4 arrangement with the 5700 still amounts to a slice per channel. However, the Xbox One X has more memory channels than slices, with the additional channels distributed among existing memory controller groupings. Once in the domain of the GPU, there's a crossbar that allows any L2 slice to service any client in the GPU (primarily one of the L1s), not a specific engine.
Having more L2 slices could provide more internal bandwidth for the GPU, but that may make such a change more pervasive because the L1 in RDNA is subdivided to match the L2 quadrants. Adding a slice to the L1 would give more CU bandwidth, but adds complexity in a densely packed area of the chip.
On the other hand, the fabric outside of the L2 could distribute the extra channels among the same internal L2 slice arrangement.
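A quick sketch of the two arrangements described above (slice counts inferred from the slice-per-channel logic, not from any die shot):

Code:
channels = 320 // 16       # 20 GDDR6 channels on a 320-bit bus

# Option A: one L2 slice per channel, 5700-style (256/16 = 16 slices there).
slices_a = channels        # 20 slices -> more internal L2 bandwidth, but the
                           # RDNA L1 is subdivided to match the L2, so the
                           # change ripples into the L1.

# Option B: keep a 5700-like slice count and let the fabric spread the
# extra channels among the same slices, Xbox One X-style.
slices_b = 16
extra_channels = channels - slices_b

print(slices_a, slices_b, extra_channels)  # 20 16 4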
 