Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Status
Not open for further replies.
That is why this rumor doesn't come across as being from anyone technical and has a higher probability of being a troll.

You don’t have to be technical to see or find out about these things. I was just a salesman in a small indie shop when a Sony rep visited and showed games in development for the PS1 at the back of the shop, including Crash Bandicoot before it had even been heard of.
 
And yet that is a level above and beyond what Sony used on the 4Pro.

The point is: Microsoft isn't going to completely drop off their level of improvements.
Since Microsoft didn't offer user-replaceable internal storage, perhaps this is an additional data point suggesting Sony's next-gen won't offer upgradeable primary storage either. Even minor concessions like a grommeted HDD bay might be too fiddly for a process Sony wanted as straightforward and error-proof as possible. A custom solution with enthusiast-class or better performance wouldn't have a broad base of compatible replacement hardware for users to draw on, and the upside of hardening the design against arbitrary hardware added by people of unpredictable skill would be limited.

SSDs have controller electronics that include licensed, standards-based patented technology that vendors have to pay for but that may be redundant in a console. If you can remove that, roll your own controller, and do it for less, then that's a cost argument. Generally speaking, removing layers of controller/arbitration increases performance and gives you more control over how the solid-state cells are used.
If Sony's going with a scheme like the one in that patent, fingers crossed that the bespoke architecture manages the flash at least as competently as the IP that SSD controllers all carry for that purpose.

That patent simplified the controller by having an operating mode that looked at wide swaths of SSD and file-system functionality and said it would do them as little as possible (wear-leveling, sub-block writes, garbage collection, writing to the file-archive data at all most of the time, file locking, quality of service while file-archive traffic is ongoing, and an unclear level of sophistication for ECC besides retry loops). Even then, some amount of storage still won't be subject to the simplified translation-table scheme, and it's unclear how performance might come back down to earth if the unspecified on-die storage is exceeded and data/tables need to be paged in and out of a controller without a DRAM buffer.
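For illustration, here's a minimal sketch (in Python, with names and the block granularity entirely made up by me, not taken from the patent) of the kind of flat, read-mostly translation table such a simplified controller might use in place of a full FTL:

```python
# Hypothetical sketch: game data is written once at install time and is
# read-mostly afterwards, so a flat coarse-grained translation table can
# stand in for a full FTL (no wear-leveling, no garbage collection, no
# sub-block writes). All names here are illustrative, not from the patent.

BLOCK_SIZE = 128 * 1024  # assumed coarse 128 KiB translation granularity

class ReadMostlyTranslationTable:
    def __init__(self):
        self.table = {}  # logical block index -> physical block index

    def install(self, logical_idx, physical_idx):
        # Done once at install time; the table is then effectively immutable.
        self.table[logical_idx] = physical_idx

    def lookup(self, logical_offset):
        # A read is one table lookup plus an offset within the block --
        # no per-write metadata, journaling, or remapping to consult.
        idx, within = divmod(logical_offset, BLOCK_SIZE)
        return self.table[idx] * BLOCK_SIZE + within

ftl = ReadMostlyTranslationTable()
ftl.install(0, 42)          # logical block 0 lives at physical block 42
print(ftl.lookup(4096))     # -> 5509120, i.e. 42*131072 + 4096
```

The catch the paragraph above points at: a table this simple only works while it fits in on-die storage; once it has to be paged, the latency advantage erodes.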

Then there's the risk of doing the job poorly at times, which going by the various established players is not uncommon and where I'm not sure how much Sony's hoping to roll its own. Flubbing something like translation tables, firmware, or drive maintenance could brick or freeze things up pretty well.

What if we're looking at it completely the opposite way (runtime compression, then decompression on the GPU)?
https://software.intel.com/en-us/articles/fast-cpu-dxt-compression
Although if the patent cited before is used, the accelerator seems like it's one or more specialized units. It's also doing decode and tamper checking, which may be among other things decryption and signature checking at the same time. Whether those are as easily handled by a GPU or if a GPU is suited to do all of them at the same time is unclear.
 
Very funny, but I was looking for a document about photon mapping and decided to search for "Photon mapping Sony Interactive Entertainment".

https://www.linkedin.com/in/carlovloet

And I discovered this LinkedIn profile: a guy from Imagination/Caustic Graphics working for Sony since 2016... Maybe nothing to do with ray tracing in the PS5, but interesting.

I directly managed Carlo for well over a year. During that time, I got to know Carlo rather well. I would strongly recommend him for any application-level position which leverages his considerable skills in 3D Graphics, in general, and Ray Tracing, in particular. When it comes to GLSL shader development, Carlo is absolutely our team's "go to" guy. Carlo is truly passionate, cares deeply about his work, and collaborates effectively with other teams. I watched him lead, design, produce, and develop one of Imagination's first large-scale, publicly demoed ray-tracing applications. This application has become a key part of Imagination's customer-facing demo portfolio. Without Carlo, this would never have happened.

EDIT: Sony has a patent referencing a kd-tree construction patent
http://www.freepatentsonline.com/10255650.html
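For context on what that patent is about: kd-trees are a common acceleration structure for ray tracing, and a textbook median-split builder (to be clear, this is the generic algorithm, not the method claimed in the patent) looks roughly like this:

```python
# Minimal median-split kd-tree over 3D points, purely as background for what
# "kd tree construction" means in a ray-tracing context. This is the
# textbook algorithm, not the patented one.

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 3                      # cycle through x, y, z
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                # split at the median point
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3, 1), (5, 4, 2), (9, 6, 3), (4, 7, 4), (8, 1, 5)])
print(tree["point"])   # -> (5, 4, 2): root splits on x at the median
```

Production builders use surface-area heuristics and bounding boxes of triangles rather than raw points, which is presumably where a patentable construction method would come in.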
 
Even though it's a retro look, you're mistaking controller ports for Kinect sensors.
@Shifty meant that he is going to stay offline and unavailable for the time being until the rumours pass; he doesn't handle gossipmongering very well.
 
It seems like the new Anaconda is going to be called Xbox Infinite (a name rumoured quite a few years ago, which people seem to have always liked). Price revealed. And some important details on how to develop games on both systems.

https://pastebin.com/NUg1AqZm

Xbox Infinite (Anaconda)

Die - 352mm^2

GPU - 11.1TFLOP/s
Details: 64 CU, 8 disabled, 56 active, 1548MHz.

CPU - 8 cores, 16 threads, 3.3GHz.
Details: Zen 2.

Memory - 24GB GDDR6, 4GB DDR4.
Details: Samsung’s K4ZAF325BM-HC14 clocked at 3300MHz, 13.2Gb/s, 12 chips, 384-bit, 634GB/s. 24GB GDDR6 is available for developers, 3GB DDR4 dedicated to the OS, 1GB DDR4 dedicated to the SSD.

Storage - 256GB NVMe SSD, 2TB HDD
Details: Players don’t have access to the SSD; the 2TB drive is replaceable, and an external drive works too. The OS manages the SSD for caching. Microsoft is using machine-learning algorithms to analyze games while they are being played on the development kit: the algorithm keeps track of which blocks are being used and which blocks are most likely to be used next while developers play the games for thousands of hours. The OS keeps on the SSD only the blocks relevant to the player's last known position in the game. For example, if the player is in level 3, the OS won't load level 6 to the SSD until the player reaches level 5. The SSD also keeps a compressed memory snapshot when a game is closed, for fast-launching the game back to the spot the player left it. Developers have some control over what is stored on the SSD by marking a block's priority level if they wish to do so. If a player hasn't touched a game for a while, its memory snapshot and/or data will get dumped from the SSD if necessary.

Cooling - vapor chamber

External media drive - Blu-ray optical drive

Price - 499$
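As a sanity check on the quoted memory figures: peak bandwidth is just the per-pin data rate times the bus width, divided by 8 bits per byte, and the numbers above are self-consistent:

```python
# Peak bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8
def bandwidth_gbs(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

print(bandwidth_gbs(13.2, 384))   # -> 633.6, matching the quoted ~634 GB/s
```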

------------------------------------------------------------------

Xbox Infinite Value (Lockhart)

Die - 288mm^2

GPU - 4.98TFLOP/s
Details: 40 CU, 4 disabled, 36 active, 1081MHz.

CPU - 8 cores, 16 threads, 3.3GHz.

Memory - 18GB GDDR6, 4GB DDR4.
Details: 2600MHz, 10.4Gb/s, 9 chips, 288-bit, 374GB/s.

Storage - 120GB NVMe SSD, 1TB HDD

Cooling - Blower fan

External media drive - None

Price - 299$

------------------------------------------------------------------

Some of the recommendations from Microsoft to developers:
- Develop your game to the Xbox Infinite as a lead platform.
- Xbox Infinite Value was built to run even sub-4K Xbox Infinite games in Full HD.
- Xbox Infinite Value version is allowed to run above 1080p, but it isn't allowed to have better graphical features, higher fidelity or frame-rate than the Xbox Infinite.
- Microsoft recommends using any leftover headroom on the Xbox Infinite Value GPU to increase resolution
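The SSD caching behaviour the rumour describes (priority-tagged blocks, eviction of stale snapshots and data) could be sketched as a priority-plus-recency eviction policy. Everything below is my own guess at how such a mechanism might work; the class name, block names, and tie-breaking rule are invented, not from the rumour.

```python
# Toy sketch of the rumoured SSD caching policy: each cached block carries a
# developer-set priority and a last-touched tick; when the cache is over
# capacity, the lowest-priority, least-recently-touched block is dumped first.
import itertools

class SSDCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}                 # block_id -> (priority, tick)
        self._tick = itertools.count()

    def touch(self, block_id, priority=0):
        self.blocks[block_id] = (priority, next(self._tick))
        while len(self.blocks) > self.capacity:
            # Evict by lowest (priority, tick): low priority loses first,
            # ties broken by least recent use.
            victim = min(self.blocks, key=self.blocks.get)
            del self.blocks[victim]

cache = SSDCache(capacity_blocks=2)
cache.touch("level3_geometry", priority=5)
cache.touch("old_game_snapshot", priority=1)
cache.touch("level4_textures", priority=5)   # forces an eviction
print(sorted(cache.blocks))   # -> ['level3_geometry', 'level4_textures']
```

The interesting claim in the rumour is that the priorities themselves would be learned from telemetry during development rather than hand-set, with manual marking only as an override.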
 
While on the train today I cooked up some baseless next-gen console specs which would make the consoles similar enough that 3rd-party developers could use these features (SSD, RT), but at the same time different enough to provide gaming-forum war ammunition for console fanboys.

Yes, the train ride was boring. What do you think: which console would be perceived as the best?

Some of the specs, like the memory bandwidth, are based on previous console generations. Unlike dedicated GPUs, which go below 50 GB/s per TFLOP (Polaris, Turing) or even below 40 GB/s per TFLOP (Pascal, Vega 10), it seems that for consoles with a unified memory architecture at least 50 GB/s per TFLOP is desirable.

Code:
xbox 360   ( 22 GB/s | 0.355 TFLOPs | 61.97 GB/s per TFLOP | doesn't include eDRAM but includes CPU FLOPs)
xbox one   ( 68 GB/s | 1.310 TFLOPs | 51.91 GB/s per TFLOP | doesn't include ESRAM bandwidth)
ps4        (176 GB/s | 1.843 TFLOPs | 95.50 GB/s per TFLOP)
ps4 pro    (218 GB/s | 4.198 TFLOPs | 51.93 GB/s per TFLOP | + delta color compression)
xbox one x (326 GB/s | 6.001 TFLOPs | 54.32 GB/s per TFLOP | + delta color compression)
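Those ratios are easy to recompute; a quick Python check, using 4.198 TFLOPs for the PS4 Pro (36 CUs at 911 MHz) and the bandwidth figures as given:

```python
# Recompute GB/s per TFLOP for each console from its memory bandwidth and
# peak compute. PS4 Pro compute taken as 36 CU x 911 MHz = 4.198 TFLOPs.
consoles = {
    "xbox 360":   (22,  0.355),   # excludes eDRAM BW, includes CPU FLOPs
    "xbox one":   (68,  1.310),   # excludes ESRAM bandwidth
    "ps4":        (176, 1.843),
    "ps4 pro":    (218, 4.198),
    "xbox one x": (326, 6.001),
}

for name, (gbs, tflops) in consoles.items():
    print(f"{name:10s} {gbs / tflops:6.2f} GB/s per TFLOP")
```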

Since Sony says CPU bandwidth reduces GPU bandwidth disproportionately [0], and considering that Zen 2 is much more powerful than Jaguar, bandwidth could become even more important. On the other hand, if Navi is more bandwidth-friendly, that could even out the increased bandwidth usage from the CPU, and staying only slightly above 50 GB/s per TFLOP could still be enough.

For clocks I looked at the current-gen consoles as well as their refreshes and compared each clock to its closest desktop counterpart.

Code:
Xbox One   - 85% of the HD 7790 clock (853 vs. 1000 MHz)
PS4        - 80% of the HD 7870 clock (800 vs. 1000 MHz)
PS4 Pro    - 81% of the RX 480 base clock (911 vs. 1120 MHz)
           - 72% of the RX 480 boost clock (911 vs. 1266 MHz)
Xbox One X - 93% of the RX 480 boost clock (1172 vs. 1266 MHz)
           - 87% of the RX 580 boost clock (1172 vs. 1340 MHz)

For yield-boosting measures I looked at what percentage of the CUs on each die were active.

Code:
Xbox One   - 86% of CUs active (12 from 14)
PS4        - 90% of CUs active (18 from 20)
PS4 Pro    - 90% of CUs active (36 from 40)
Xbox One X - 91% of CUs active (40 from 44)

Other specs are looser, though chosen with at least some consideration, so no 16c/32t CPU running at 4 GHz with a 20 TFLOPs GPU and 1 TB of 3D XPoint or whatever; some of the specs are still totally unrealistic just to make it more fun.

Xbox Streaming Thing
  • AMD Picasso APU
  • 99$

Xbox Lockhart
  • targets 1440p
    • should look good on a 1080p TV and upscaled to 4k as well
  • targets value oriented consumers
  • chiplet based APU
    • better reuse with datacenter
  • 1 standard Zen 2 chiplet as CPU with 8c/16t clocked at 2.8 GHz
    • lower clock for parity with datacenter which loves efficiency
    • also leaves more power for the GPU
    • 100 MHz more than Stadia so xCloud looks better to the average internet user
  • customized Navi 20 (I mean datacenter Navi) GPU chiplet which also includes the IO die functionality for the CPU
    • kinda similar to the Xbox 360, where the GPU contained the Northbridge
    • 28 CUs after yield boosting measures (~90% of native 32 CU die)
    • ~1500 MHz
    • ~5.3 TFLOPs
    • AMD solution for RT
      • maybe only increased caches and new RT instructions like were mentioned in the RT thread for more flexibility but less peak performance compared to fixed function?
  • 12 GB GDDR6 with 12 Gbps @ 192bit
    • 288 GB/s
    • 54.32 GB/s per TFLOP
  • 4 GB DDR4 dedicated to the OS and apps
  • 1 TB M.2 NVMe SSD
    • QLC drive like Crucial P1 (Micron supplied the DDR for OG Xbox and Xbox One) or Intel 660p
    • ~1.8 GB/s - 2 GB/s peak read
    • storage extension possible with USB 3.2 Gen 2x2 (USB C port) or USB 3.2 Gen 2 (USB A port)
      • if USB 3.2 Gen 2x2 drive (practical peak ~1.5 GB/s) is fast enough game can be played from it
      • otherwise needs to be installed to internal drive first?
  • 399$
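For reference, the TFLOP figures in these sketches follow from the GCN/Navi peak-FP32 formula: CUs × 64 shader lanes × 2 ops per clock (FMA) × clock speed. A quick check of the Lockhart guess above, which comes out a shade over the quoted ~5.3:

```python
# Peak FP32 throughput for a GCN/Navi-style GPU:
# TFLOPs = CUs * 64 shader lanes * 2 ops per FMA * clock (MHz) / 1e6
def tflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1e6

print(round(tflops(28, 1500), 2))   # -> 5.38, the "~5.3 TFLOPs" guess above
```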

PS5
  • monolithic APU
  • customized Zen 2 with 8c/16t clocked at 3.2 GHz
    • double the clock of PS4
    • same clock as PS3
    • AVX-512F
    • shared inclusive L3 cache instead of L3 victim cache private to each CCX
    • some customizations made for backwards compatibility
  • customized Navi 10 GPU
    • 48 CUs after yield boosting measures
    • ~1500 MHz
    • ~9.2 TFLOPs
    • includes RT hardware from Imagination
      • I'm a PowerVR fanboy since I had a Kyro II in my PC
      • Sony has used PowerVR multiple times in the past
      • RT related ex Imagination employee now works for Sony as was pointed out on resetera
  • 16 GB of GDDR6 with 16 Gbps @ 256bit
    • 512 GB/s total bandwidth
    • 55.65 GB/s per TFLOP
  • 4 GB DDR4 dedicated to the OS and apps (depends on how much is not latency sensitive and maybe can even be swapped out to SSD)
  • custom 1 TB SSD
    • soldered to the motherboard (from a customer perspective I don't really like it)
    • raw speed can compete with PCIe 4.0 x4 NVMe SSDs (4.8-5.0 GB/s peak read)
    • plus customizations to hardware and software like those mentioned in the Sony patents which were recently posted on resetera
    • if external storage is attached then games need to be installed to the SSD first to be played
    • if a new game is started (no savegame) it can be played before installation is complete like with downloads
  • backwards compatible with PS1, PS2, PS3, PS4 and Vita
    • especially for PS Now server usage
    • for PS3 maybe not all games at first as the emulator will get better over time
    • would allow local installation of digital BC games (which seems to be what the majority prefers if looking at the recent Sony report)
  • 499$

Xbox Anaconda
  • targets native 4k
  • targets enthusiasts
  • chiplet based APU
    • better reuse with datacenter
  • 1 standard Zen 2 chiplet as CPU with 8c/16t clocked at 2.8 GHz
    • lower clock for parity with datacenter which loves efficiency
    • also leaves more power for the GPU
    • 100 MHz more than Stadia so xCloud looks better to the average internet user
  • customized Navi 20 (I mean datacenter Navi) GPU chiplet which also includes the IO die functionality for the CPU
    • kinda similar to the Xbox 360, where the GPU contained the Northbridge
    • 56 CUs after yield boosting measures (~90% of native 64 CU die)
    • 1675 MHz
    • 93% of 1800 MHz which is what I expect at most for desktop Navi (to be honest I'm skeptical of even that...)
      • good vapor chamber cooling solution similar to the Xbox One X
      • Hovis method like Xbox One X for better efficiency across components
    • ~12 TFLOPs
      • the magical marketing barrier with double the TFLOPs of the Xbox One X
  • 16 GB HBM2 with 2.4 Gbps @ 2048 bit
    • 614 GB/s
    • 51.17 GB/s per TFLOP
    • native 3072bit design (half of NEC SX-Aurora Tsubasa [1] with 6144 bit HBM2) with 1 dummy chip (like Titan V) so only 2048 bit usable
    • 2 4-Hi chips with 2 GB or 2 8-Hi with 1 GB per layer
    • maybe usage of InFO_MS? Considering the name, who is better suited to use it than MS? ;P
  • 4 GB DDR4 dedicated to the OS and apps
  • 1 TB M.2 NVMe SSD
    • QLC drive like Crucial P1 (Micron supplied the DDR for OG Xbox and Xbox One) or Intel 660p
    • ~1.8 GB/s - 2 GB/s peak read
    • storage extension possible with USB 3.2 Gen 2x2 (USB C port) or USB 3.2 Gen 2 (USB A port)
      • if USB 3.2 Gen 2x2 drive (practical peak ~1.5 GB/s) is fast enough game can be played from it
      • otherwise needs to be installed to internal drive first?
  • 599$ / 699$ (with 2 TB SSD)

Xbox Anthem v3 (xCloud)
  • chiplet based APU
  • 4 standard Zen 2 chiplets as CPU each with 8c/16t clocked at 2.8 GHz
    • allows running up to 4 streams at the same time as each stream gets its own CPU chiplet
    • if unused (e.g. due to a single 4k stream) could be allotted to other Azure tasks
    • lower clock for parity with datacenter which loves efficiency (could still be too high for 4 chiplets...)
    • also leaves more power for the GPU
    • 100 MHz more than Stadia so xCloud looks better to the average internet user
  • customized Navi 20 (I mean datacenter Navi) GPU chiplet which also includes the IO die functionality for the CPU
    • kinda similar to the Xbox 360, where the GPU contained the Northbridge
    • CUs & clocks = for max reuse every combination of CUs and clocks which allows for 12 TFLOPs
  • 24 GB HBM2 with 1.6 Gbps @ 3072 bit
    • 615 GB/s
    • 51.25 GB/s per TFLOP
    • 3 4-Hi chips with 2 GB or 3 8-Hi with 1 GB per layer
  • 4k streams => 1
  • 1440p streams => 2
  • 1080p streams => 4 (reduced texture quality to fit 4 games into 24 GB?)
  • other Azure usages
    • machine learning
    • virtual desktop
    • fast storage server (GPU decompression thanks to unified memory pool)
    • --- ...

Again, I don't expect the consoles to look like this (even a Voynich-manuscript-like scribble with RCCs is more realistic), but I would really like such differences. So many possible arguments each console faction could make in the console wars...



[0] Slide 13: http://rdwest.playstation.com/wp-content/uploads/2014/11/ParisGC2013Final.pdf
[1] https://en.wikichip.org/wiki/nec/microarchitectures/sx-aurora
 
It's as if one of you/us wrote up that Pastebin rumor, taking bits from the B3D posts about hybrid setups and MS fast-start, as well as my desire for an entirely optional external optical drive.
 
Consoles targeting enthusiasts should not have performance lower than 10 TFLOPs, or they will become very hard to advertise. In fact, 12 TFLOPs should be the minimum, as it is twice the TFLOP number of the Xbox One X.

And I really don't think a $599 console can sell well; it may become a huge waste of investment.
 
That Navi can reach higher clock speeds than before kind of makes the Gonzalo leak more believable to me. Just a pity you can't get the CU count from that. I hope for 44-48 CUs in next gen, minimum.
 
Well, the PS4 Pro devkit has 40 CUs, so anything less than that, or only slightly more, would be downright embarrassing. Also, a 40 CU Navi clocked at 1800 MHz would yield 9.2 TF; with the 1.25x efficiency gain we get the equivalent of 11.5 TF of Vega.
Another way to look at it: the RX 5700 is 10% stronger than the 2070 and launching in two months, so I expect a PS5 launching in 1.5 years to at least match that or be a little weaker.
In before someone tells me that's not how it works.
 
I'm reasonably confident now that it's going to be over 10 TFLOPs, unless there's a lot of other hardware, such as ray-tracing tech, also on the APU.
 
I’m actually more in the under-10TF camp now. Smaller die and a higher clock. Anandtech estimates the die size at around 275mm^2; add a CPU to that and you’re already around 350mm^2.

My speculation is that the main differences between the current Navi and Zen 2 launching this year and the PS5 next year will mainly be continued power-efficiency enhancements.
 
Xbox Infinite (Anaconda)

Die - 352mm^2

GPU - 11.1TFLOP/s
Details: 64 CU, 8 disabled, 56 active, 1548MHz.

CPU - 8 cores, 16 threads, 3.3GHz.
Details: Zen 2.


They would need some miraculous gains in transistor density compared to Radeon VII and Zen 2 if they were to put 64 CUs (331mm^2 on Radeon VII) and an 8-core Zen 2 (80mm^2 chiplet) inside a 352mm^2 chip, considering the previous two don't even include the transistors needed for the I/O "uncore".
 
I'm sure, given the desire from many to see a departure from GCN, that RDNA is more likely a marketing nomenclature than a truly significant change.

I’m actually more in the under-10TF camp now. Smaller die and a higher clock. Anandtech estimates the die size at around 275mm^2; add a CPU to that and you’re already around 350mm^2.

My speculation is that the main differences between the current Navi and Zen 2 launching this year and the PS5 next year will mainly be continued power-efficiency enhancements.

I'm expecting a die size closer to 400mm^2 than 350mm^2.
 
With a 25% increase in Navi IPC, that 8-10 TF PS5 would still be something of a beast.

300-350mm^2 would be my guess too. Not expecting anything above that, unless it's from a top-end $$$ SKU (and even then perhaps not). For a single-unit approach like the PS5, I really can't see something substantially larger than the Vega 7 being a goer.
 