Alternative distribution to optical disks: SSD, cards, and download

To answer my own question...

I did some more digging, and surprisingly the answer is not in their Specification Sheet but in their User Manual, Section 2.1, Specification Summary Table. With 14 heads and 7 discs, that puts it at 1.429 TB per platter. [ http://www.seagate.com/www-content/product-content/ironwolf/en-us/docs/100804010a.pdf ]
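As a quick sanity check on that per-platter figure (a minimal calculation, assuming the drive in question is the 10 TB model, since 1.429 × 7 ≈ 10; that model number is my assumption, not stated above):

```python
# Back-of-the-envelope check of the per-platter figure.
# Assumption: this is the 10 TB IronWolf model (not stated outright above).
total_capacity_tb = 10
platters = 7    # 14 heads -> 2 heads per platter -> 7 platters
heads = 14

print(f"{total_capacity_tb / platters:.3f} TB per platter")  # ~1.429 TB
print(f"{total_capacity_tb / heads:.3f} TB per surface")     # ~0.714 TB
```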

Still quite a ways to go to hit 2TB per platter. :(

Yeah, I was just about to respond until I saw you had already found the answer. Next year, if all goes well, they hope to have something out that increases areal density by ~5-10% with TDMR (Two Dimensional Magnetic Recording); however, those drives are going to be significantly more expensive than regular PMR and SMR drives. Instead of one read head per track, TDMR uses an array of heads. That brings increased internal bandwidth, and it needs a more robust controller both to handle that bandwidth and to do the calculations required to reconstruct the data on a track from the signals of those multiple heads.
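For anyone curious what "reconstructing the track from multiple heads" means in practice, here's a toy sketch (my own illustration, not Seagate's actual signal processing): with two readers at slightly different offsets over a narrow track, each reading is a different mix of the target track and its neighbour, and knowing the mixing weights lets the controller solve for the target signal and cancel the inter-track interference.

```python
# Toy TDMR-style readback: two readers, each picking up a different blend of
# the target track and the neighbouring track (weights are hypothetical).
w = [(0.8, 0.2),   # reader 0: mostly target, a little neighbour bleed
     (0.6, 0.4)]   # reader 1: offset further, more bleed

target, neighbour = 1.0, -1.0                      # "true" magnetic signals
r0 = w[0][0] * target + w[0][1] * neighbour        # what reader 0 sees
r1 = w[1][0] * target + w[1][1] * neighbour        # what reader 1 sees

# Solve the 2x2 system (Cramer's rule) to recover the target-track signal.
det = w[0][0] * w[1][1] - w[0][1] * w[1][0]
recovered = (r0 * w[1][1] - w[0][1] * r1) / det
print(recovered)   # 1.0 -- neighbouring-track interference cancelled
```

Doing that in real time across every track is exactly the kind of extra controller work described above.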

Seagate is still cautiously optimistic about introducing HAMR-based drives into the market sometime in 2018, but they will be significantly more expensive per GB than PMR-based drives. They have to heat the platter to ~450 degrees centigrade, and a gold-based alloy is pretty much required at this point. Part of the challenge is coming up with a gold-based alloy that can be repeatedly heated to that temperature hundreds of thousands of times without losing its properties, not to mention integrating a device like that into a computing environment. They don't expect consumer versions of those drives for many years after that.

Anandtech did a relatively short article on it. http://www.anandtech.com/show/10470...near-future-speaking-with-seagate-cto-mark-re

Basically there isn't much on the horizon with regards to expanding cheap mechanical storage. Just about everything being investigated currently is going to incur significant costs.

Regards,
SB
 
Switch load times are far from the instantaneous experience of old ROM carts.

Eurogamer fucked up hard. There is no UHS-III; what there is is UHS-I Class 3 and UHS-II. They used two UHS-I cards, one a Class 2 and one a Class 3.

Anyway, the Switch's speed is limited by what Nintendo picked. They could have added a UHS-II reader, which offers Class 3 (U3) speeds of up to 270MB/s. That would have greatly upped the speed from the 95MB/s that UHS-I Class 3 cards top out at.
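To put some rough numbers on it (the 13GB game size below is just an assumed example; the bus speeds are the ones mentioned above):

```python
# Raw transfer time for a hypothetical 13 GB game image at each bus speed.
game_size_mb = 13_000   # assumed game size, not a figure from this thread

for label, speed_mb_s in [("UHS-I (~95 MB/s)", 95),
                          ("UHS-II (~270 MB/s)", 270)]:
    print(f"{label}: {game_size_mb / speed_mb_s:.0f} s to read the full image")
# UHS-I (~95 MB/s): 137 s
# UHS-II (~270 MB/s): 48 s
```

In practice a game never reads its whole image in one go, so the real-world gap would be smaller, but the headroom is clearly there.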
 
Presumably there was a reason they didn't, and the only reason I can think of is cost. I guess carts that'd run at that speed would have been just too expensive.
 
Anyway, the Switch's speed is limited by what Nintendo picked. They could have added a UHS-II reader, which offers Class 3 (U3) speeds of up to 270MB/s. That would have greatly upped the speed from the 95MB/s that UHS-I Class 3 cards top out at.
From the figures it's evident that regardless of how quickly data can be pulled from a card or cartridge, there will still be a delay while data is decompressed and data structures are built in RAM before the game is playable.
 
Presumably there was a reason they didn't, and the only reason I can think of is cost. I guess carts that'd run at that speed would have been just too expensive.

Perhaps, but they could have simply offered the microSD download as the better alternative.

From the figures it's evident that regardless of how quickly data can be pulled from a card or cartridge, there will still be a delay while data is decompressed and data structures are built in RAM before the game is playable.
It really isn't, because they only tested two cards, and we have little information about either of them. I would have liked to see more than two cards tested, covering a wider range of speeds.
 

Prices must be somewhat inflated at the moment, because you can just about buy a 32GB SD card at retail for 10 bucks, so I can't believe an indie game really needs a 10-buck card.

You also need to keep in mind there are only two million Switches or so out there and only a handful of games. Prices would probably be (a lot) lower if you had 100+ million consoles and hundreds of games out there.

Obviously flash cards will probably always be more expensive than disks, but if the price premium can be brought down to a couple of dollars I can see it being a viable alternative.
 
From the figures it's evident that regardless of how quickly data can be pulled from a card or cartridge, there will still be a delay while data is decompressed and data structures are built in RAM before the game is playable.

This is self-evident to people who game on PC. While SSDs offer dramatically faster sequential and random reads, load times aren't improved to a similar extent. There's only so much you can do to speed up load times, even if you could read the data off the storage medium instantaneously.
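A toy model makes the point (all numbers below are assumptions for illustration, not measurements): total load time is roughly the storage read time plus a fixed chunk of CPU-side work for decompression and building data structures, and only the first part shrinks when the drive gets faster.

```python
# Illustrative load-time model: read time + fixed CPU-side cost.
data_mb = 3_000          # hypothetical data read for a level
cpu_side_seconds = 15    # hypothetical decompress/build cost, storage-independent

for label, speed_mb_s in [("HDD (~100 MB/s)", 100),
                          ("SATA SSD (~500 MB/s)", 500),
                          ("NVMe SSD (~2,500 MB/s)", 2500)]:
    total = data_mb / speed_mb_s + cpu_side_seconds
    print(f"{label}: {total:.0f} s total load time")
# HDD: 45 s, SATA SSD: 21 s, NVMe: 16 s -- a 25x faster drive only cuts the
# load time by ~3x because the CPU-side cost doesn't shrink with it.
```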

Considering that the flash carts aren't an order of magnitude slower than relatively fast SD cards, it comes as no surprise that load times aren't dramatically improved.

However, that said, they are also significantly faster than any existing optical media. Imagine for a second that the Nintendo Switch was using optical distribution media like the original PSP. Load times would be extremely long in comparison. If the Switch had an internal physical HDD on which to store games, load times again would be longer, but not as dramatically so as optical media.

All of this is why I find it laughable that some people expect to see a console released with 20+ GB of memory in the next 5-10 years, much less be able to use that with anything resembling reasonable load times in games.

Regards,
SB
 
All of this is why I find it laughable that some people expect to see a console released with 20+ GB of memory in the next 5-10 years, much less be able to use that with anything resembling reasonable load times in games.
A lot of the memory used by games isn't data loaded from storage, but data generated dynamically by the game engine itself, or memory used for buffers.
 
There is a console released this year with 12GB of memory, so finding it laughable that there will be one with 20GB in the next ten years is laughable.
 
There is a console released this year with 12GB of memory, so finding it laughable that there will be one with 20GB in the next ten years is laughable.

We're already approaching one-minute load times in some games, and that's with at most 6 GB of memory being used. There aren't any affordable storage technologies on the horizon that combine large capacity with faster speeds. We saw greatly increased load times moving from the last generation to the current one because storage technology didn't advance much in anything other than capacity. Doubling or tripling the amount of memory used in games is going to incur a large cost in terms of load times, regardless of whether the data is dynamically generated (CPU speeds and IPC aren't going to be dramatically increasing within a given power envelope) or loaded from the storage medium.

Streaming technology and virtual texturing could certainly help. But virtual texturing reduces your memory requirements, which negates the benefit of having more memory.

I could certainly see using more RAM to cache data to reduce load times from relatively slow storage medium, but that isn't going to result in a large increase in graphics fidelity.

So yeah, a console maker could put 20+ GB into a machine in the next 5-10 years, but I don't see the point of doing so. And if there isn't a point in doing it, why would they do it?

Yes, perhaps laughable was too strong a word to use. There could be some unforeseen breakthrough in the next few years that would make a 20+ GB console make sense.

Regards,
SB
 
(CPU speeds and IPC aren't going to be dramatically increasing within a given power envelope)

There is a low-hanging fruit: the Jaguar is really slow. It's in smartphone and netbook territory now, just with more cores, and it shows that you can do quite nice things with a slow CPU. The previous generation's Pentium 4-ish PowerPC was an even lower-hanging fruit.
Jaguar saved silicon area, not only power. Its predecessor, the E-350, was a cut-down Athlon 64.

Zen+ on 7nm might be area-efficient enough: you'd be leaping to Haswell-like or better IPC and gaining an L3 cache.
 
There is a low-hanging fruit: the Jaguar is really slow. It's in smartphone and netbook territory now, just with more cores, and it shows that you can do quite nice things with a slow CPU. The previous generation's Pentium 4-ish PowerPC was an even lower-hanging fruit.
Jaguar saved silicon area, not only power. Its predecessor, the E-350, was a cut-down Athlon 64.

Zen+ on 7nm might be area-efficient enough: you'd be leaping to Haswell-like or better IPC and gaining an L3 cache.

Agreed, improving on the IPC of Jaguar would still be possible, but would it be enough to make that large of a difference? Even on PC, with powerful CPUs and PCIe drives capable of 1,500+ MB/s sequential transfer rates (an order of magnitude higher than the drives included with current consoles, not to mention much better random access times), load times are creeping up. Interestingly, in some situations a Samsung 950 Pro (a PCIe drive capable of 2,200+ MB/s sequential reads) is actually slower than a Samsung 850 Pro for loading games. And when it's faster, it's nowhere near the 3-4x decrease in load times that the read speed differential would suggest. Again, not unexpected for people familiar with how storage subsystems work (lots of variables, including how the firmware is tuned).

I do expect more work from developers to optimize and reduce dead time in games, but I'm not expecting anything dramatic. Right now it's mostly hidden behind lengthy non-interactive or semi-interactive cut-scenes. Streaming helps, but you have to be careful to avoid pop-in if the storage subsystem can't stream fast enough, and performance hiccups if you stress the storage subsystem too much during gameplay. Some games still go with a traditional load screen (Bloodborne and Dark Souls 3, for instance).
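A rough way to think about the pop-in risk (every number here is an illustrative assumption): compare how fast the player can uncover new world data against the slice of drive bandwidth the engine can actually spare for streaming.

```python
# Toy streaming-budget check -- all values are illustrative assumptions.
player_speed_m_s = 10     # how fast the player can move through the world
data_per_metre_mb = 5     # unique texture/geometry data per metre travelled
stream_budget_mb_s = 40   # drive bandwidth reserved for streaming (the rest
                          # is left for audio, save I/O, etc.)

required_mb_s = player_speed_m_s * data_per_metre_mb
print(f"required: {required_mb_s} MB/s, available: {stream_budget_mb_s} MB/s")
if required_mb_s > stream_budget_mb_s:
    print("streaming can't keep up -> pop-in, or a hitch while it catches up")
```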

Regards,
SB
 
Loading times are much less of a problem with the current generation thanks to "suspend" modes. It's kind of overlooked, but the times where I actually have to wait a long time to play a game have been reduced considerably, simply because the game I usually play is ready when I turn on the console. Besides that, I'm not that concerned about future consoles' storage demands and transfer rates; I think it's evident from the various tests with SSD drives etc. that the current bottleneck isn't in the hard drives as such, but in other places (compression, encryption, file systems, whatever), something that can and will be improved for Gen Next.
 
We're already approaching one-minute load times in some games, and that's with at most 6 GB of memory being used. There aren't any affordable storage technologies on the horizon that combine large capacity with faster speeds. We saw greatly increased load times moving from the last generation to the current one because storage technology didn't advance much in anything other than capacity. Doubling or tripling the amount of memory used in games is going to incur a large cost in terms of load times, regardless of whether the data is dynamically generated (CPU speeds and IPC aren't going to be dramatically increasing within a given power envelope) or loaded from the storage medium.

Streaming technology and virtual texturing could certainly help. But virtual texturing reduces your memory requirements, which negates the benefit of having more memory.

I could certainly see using more RAM to cache data to reduce load times from relatively slow storage medium, but that isn't going to result in a large increase in graphics fidelity.

So yeah, a console maker could put 20+ GB into a machine in the next 5-10 years, but I don't see the point of doing so. And if there isn't a point in doing it, why would they do it?

Yes, perhaps laughable was too strong a word to use. There could be some unforeseen breakthrough in the next few years that would make a 20+ GB console make sense.

Regards,
SB
NAND, my friend.

M.2 NVMe drives are a thing now. On the low end you have 2,050MB/s reads. As an end user I can get a 256GB one for $100 (sometimes less with sales), and a 512GB one goes as low as $150.

If we assume 6,000MB of memory and we can read off the drive this gen at 100MB/s, that's one minute to fill that RAM, correct?

If we double that to 12GB of memory but go to a 2,000MB/s read speed, that's 6 seconds to fill the RAM. 20 gigs would be 10 seconds.

Load times would be almost instantaneous in comparison to today's speeds.

Even if MS or Sony went with an SD-type setup, you can get 300MB/s out of SD cards now. So you would still cut load times greatly, filling 12 gigs in 40 seconds, 20 seconds less than it takes to fill 6 gigs on today's consoles.
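Here's the same back-of-the-envelope math in one place, so the numbers above are easy to check:

```python
# Reproducing the fill-time figures from this post.
def fill_seconds(ram_mb, read_mb_s):
    """Time to fill RAM once at a given sustained read speed."""
    return ram_mb / read_mb_s

print(fill_seconds(6_000, 100))     # 60 s -- current gen: 6 GB at ~100 MB/s
print(fill_seconds(12_000, 2_000))  # 6 s  -- 12 GB from a low-end NVMe drive
print(fill_seconds(20_000, 2_000))  # 10 s -- 20 GB from the same drive
print(fill_seconds(12_000, 300))    # 40 s -- 12 GB from a 300 MB/s SD card
# This is raw transfer time only; as discussed above, the decompress/build
# work on the CPU side still gets added on top.
```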
 