Some power users will want to add a 4 TB drive costing as much as the whole console.
You know us so well.
I don't expect a 60 CU GPU; I expect anything from 42-48 CUs. I just don't see next gen having the same CU count as the Xbox One X.
I don't know if this is really true. Personally I think the only console hardware that launched already outclassed by PC hardware has been the Xbox One and PS4. Sure, PC hardware has outpaced consoles, but if you look back at the PC hardware available in, say, 1994, the PS1 would have been considered high end. The 3Dfx Voodoo was at least a year away, PC hardware lacked the advanced video decompression of the PS1, and its lighting and geometry engine was very impressive for its time. The PS2 was mighty impressive: its pixel fillrate was insane, as was its geometry engine. Xbox launched with what amounts to a GeForce 3, alongside the launch of the GeForce 3, and even then, Xbox had twice the vertex shaders. The 360 had a tessellator, unified shaders, 4-sample-per-pixel-per-cycle MSAA, and competitive fillrate. And the PS3... well, Cell was impressive on paper. If you go back further, consoles' ability to scroll backgrounds and draw sprites destroyed what was available on PCs when they launched.
Anyway, consoles have launched with high-end hardware; they just don't anymore. I think it's partly because the PS3/360 generation was so long that even midrange parts were a huge upgrade, plus PC hardware's pace, especially at the ultra high end, has really gotten out of hand. I think traditionally (since the launch of 3D hardware) the highest-end PC graphics cards have been close to the price range of a console at launch. An X800 XT was roughly $450 when the 360 launched, IIRC, and a 20 GB 360 was $399.99. Now we've got ultra-high-end PC graphics cards at $1300+. There's no way a console can launch at that price unless they want to get 3DO'd (and the 3DO also launched with impressive hardware for its time).
Nah, consoles mostly didn't match high-end PC hardware around the time of release. The PSX was towards the high end, yes, but I dunno really; how much could a fast P1 166 MHz do on its CPU alone...
The PS2 was impressive, yes, but high-end PC matching? I dunno either. A high-end PC between March and autumn 2000 would be a P3 1 GHz / Athlon 1.4 GHz, 256 MB or more main RAM, and a GeForce 2 GTS to Ultra 64 MB. The PS2 kinda needed that fillrate, though, to get things really done.
The GeForce 3 launched spring 2001, the Xbox in November 2001. NV2A was high end for the time but constrained in areas where the GF3 wasn't; NV2A had twin vertex shaders. The Radeon 8500 also had twin vertex shaders, and was probably more advanced than NV2A. For CPU, the 733 MHz P3 was outdated by then, as was 64 MB total RAM.
I wouldn't say directly that the OG Xbox matched a high-end PC, except perhaps in its GPU department, but I wonder how an R8500 fared.
The X360 was close to the X1900 launch, which I think had more raw power to play with, along with much more RAM. The PS3's hardware came a year too late to be matching high-end parts; Intel C2Q quad cores were around, as was the 8800 GTX. The 360 seemed more capable overall.
For the '80s and early '90s, it depends on whether you consider the Amiga etc. to be PCs; those outclassed consoles.
All in all, I think the highest-end PC parts outclassed consoles by a large margin for the most part, but at a much higher price; sometimes consoles were close or matching, like the PS1. I agree that consoles seemed more 'high-end' back then compared to now. They used more exotic hardware as well. Whatever can be said about the PS4/XOne, they had a large amount of RAM, while older gens had tiny amounts of RAM.
Anyway, I think it's getting off-topic with this ^^
The problem is that the oversupply is going to be temporary.
Oversupply is definitely temporary, but NAND makers won't stagnate their offerings either. Adoption of smaller nodes will get them higher density per layer, and all makers are planning on increasingly taller stacks of layers.
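To put rough numbers on that multiplicative scaling, here's a toy calculation; the per-layer densities and layer counts are made-up illustrations, not any vendor's actual roadmap:
[code]
# Toy model: NAND die capacity scales with density per layer times layer count.
# All numbers below are illustrative assumptions, not real vendor figures.
def die_capacity_gbit(gbit_per_layer, layers):
    return gbit_per_layer * layers

print(die_capacity_gbit(4, 64))   # baseline stack -> 256 Gbit die
print(die_capacity_gbit(4, 96))   # taller stack alone -> 384 Gbit
print(die_capacity_gbit(5, 96))   # denser layers + taller stack -> 480 Gbit
[/code]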
Use the entire 1 TB SSD as a cache for games from either external or optical - least recently used is the first to be ejected - with an option to 'lock' a game in. Digitally downloaded (DD) games are locked by default and need to be removed manually, with OS prompts for the user to OK cleanup when appropriate.
Covers all use cases and preferences, minimum user input required, minimises unnecessary writes!
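Roughly what I'm picturing, as a sketch; the capacity, game sizes and lock rules are just placeholders, not anything Sony or MS have described:
[code]
# Sketch of the proposed SSD game cache: least recently played is evicted first,
# unless a game is "locked" (digital purchases locked by default).
from collections import OrderedDict

CACHE_BYTES = 1_000_000_000_000  # assume a 1 TB SSD used entirely as cache

class GameCache:
    def __init__(self, capacity=CACHE_BYTES):
        self.capacity = capacity
        self.used = 0
        self.games = OrderedDict()  # game_id -> (size, locked), oldest first

    def play(self, game_id, size, locked=False):
        if game_id in self.games:
            self.games.move_to_end(game_id)   # mark as most recently played
            return
        self._make_room(size)
        self.games[game_id] = (size, locked)
        self.used += size

    def lock(self, game_id, locked=True):
        size, _ = self.games[game_id]
        self.games[game_id] = (size, locked)

    def _make_room(self, size):
        # Evict unlocked games, least recently played first, until the new one fits.
        for gid in list(self.games):
            if self.used + size <= self.capacity:
                break
            gsize, locked = self.games[gid]
            if not locked:
                del self.games[gid]
                self.used -= gsize
        if self.used + size > self.capacity:
            raise RuntimeError("Not enough unlocked space; prompt user to clean up")
[/code]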
And while you're at it, distribute the games with an emulator/hiberfile-type "save state" at the start screen, so the game is ready to play after about three seconds. You'll need them anyway for your cloud streaming instances!
I see several problems with that:
Forget what I said. I think you have the better solution. I am also wary of the constant writing to the SSD wearing it down. With 1 TB you can have 8-10 games with instant loading, and the OS can prompt for swapping of games if needed.
I only hope, though, that Sony will not go for SSD + HDD as some of the "leaks" are saying. Just give us a 1 TB ultra-fast SSD and leave an empty slot for a SATA 4.0 (or NVMe) drive for future expansion.
Nah, consoles mostly didn't match high-end PC hardware around the time of release.
Even if high-end PCs had more raw power than the console, what they delivered in games was inferior. Games targeting the console instead of a weaker base PC spec (far less scaling up of assets and resources back then; higher res with tearing or juddering, more shadow-casting lights and AA were about it) could do way, way more thanks to a large install base of a solid 'lowest spec' and low rendering overheads. When you then couple that with what the console hardware was actually capable of, comparing end-of-lifecycle games versus what the best launch PC had to offer, it's obvious why consoles competed so strongly versus PC in gamers' minds, producing better games on cheaper (less flexible, and you couldn't do your homework on them either) hardware than PCs.
According to Cerny, their custom solution is faster than any available SSD. As it's custom, they are not dependent on the decade-old tech mandatory in off-the-shelf HDDs, and even SSDs. And to achieve the speeds displayed in the Spider-Man demo, you'll need smart caching strategies that wouldn't be as efficient on slower, regular off-the-shelf SSDs.
It's either smart caching with a custom SSD + slower HDD, or the usual loading strategies using a 1 TB SSD.
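The chunk-level version of that "smart caching" idea might look something like this sketch; the chunk granularity, capacity and eviction rule are assumptions on my part, not anything Cerny has described:
[code]
# Sketch: stream assets from a slow HDD, but keep recently read chunks on the
# fast (custom) SSD so repeat reads are served quickly. Sizes are made up.
ssd_cache = {}                # chunk_id -> bytes held on the fast SSD
SSD_CACHE_CHUNKS = 4096       # assumed cache capacity, in chunks

def read_chunk(chunk_id, hdd_read):
    """Return asset data, serving from the SSD cache when possible."""
    if chunk_id in ssd_cache:
        return ssd_cache[chunk_id]            # fast path: custom SSD
    data = hdd_read(chunk_id)                 # slow path: spinning disk
    if len(ssd_cache) >= SSD_CACHE_CHUNKS:
        ssd_cache.pop(next(iter(ssd_cache)))  # crude eviction: drop oldest entry
    ssd_cache[chunk_id] = data                # promote to SSD for next time
    return data

# Example: first read hits the HDD, a repeat read comes straight from the SSD cache.
data = read_chunk(42, hdd_read=lambda cid: b"...asset bytes...")
[/code]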
Which defeats your request; if you can add a 'slow' consumer-grade device, it must deal with caching and swapping in and out. If it has that baggage, what is the gain of a 1 TB SSD versus a smaller one and a 1 TB HDD out of the gate?
Ok, so it's a custom SSD. Is it possible for that custom SSD to be 1 TB? That's what I was going for. Or is it too cost prohibitive?
The additional HDD or SSD can be added later.
- Speed: According to Cerny, their custom solution is faster than any available SSD. As it's custom, they are not dependent on the decade-old tech mandatory in off-the-shelf HDDs, and even SSDs. And to achieve the speeds displayed in the Spider-Man demo, you'll need smart caching strategies that wouldn't be as efficient on slower, regular off-the-shelf SSDs.
What are you talking about? I've shown in this thread that the speeds required for the Spider-Man fast-travel demo are nothing special. They only loaded anywhere from 600 MB to 2.25 GB, which is easily hit by SSDs of today. What's really needed is faster decompression and CPU support.
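Quick back-of-the-envelope on those figures; the transition time is my assumption, since I don't have a measured number for it:
[code]
# Raw read throughput needed for the quoted Spider-Man fast-travel loads.
# Data sizes (0.6-2.25 GB) are from this thread; the 1-2 s transition is assumed.
for loaded_gb in (0.6, 2.25):
    for seconds in (1.0, 2.0):
        print(f"{loaded_gb} GB in {seconds:.0f} s -> {loaded_gb / seconds:.2f} GB/s")
# ~0.3-2.25 GB/s: the low end is SATA SSD territory, the high end needs NVMe,
# but either way decompressing the data that fast is the harder problem.
[/code]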
Xbox Scarlett: XB1X GPU with a higher clock rate, 7 nm, 7 TFLOPs, Zen 2 CPU, 12 GB GDDR5 memory, $400 price.
Xbox Anaconda: 2x XB1X GPU, 14 TFLOPs, Zen 2 with more cores, maybe Zen 3, 16-24 GB of fast RAM, $500-600 price.
The best cost-saving and power-efficient solution, and technically possible.
Sony & MS have an agreement... on Azure servers. Does the agreement extend unofficially beyond that?
Why would it?
What makes you think anybody on here would know that?
I'm a believer that for it to be cost-effective, Lockhart and Anaconda need to be the same SoC, with Lockhart being a salvaged chip.