Microsoft Surface tablets

This might be a more useful comment from him in that thread.

Peter Bright said:
The conditions were controlled by Microsoft, and there are certainly other conditions in which the iPad's display looks better. But the conditions they were having us test were reasonably representative of real-world conditions, with realistic viewing distances... and the Surface's display was consistently better, in spite of having only one third the number of pixels.

Will be interesting to see how that holds up on the 26th.
 
This might be a more useful comment from him in that thread.

Thanks for posting. When I'd originally seen the article he hadn't posted that followup yet.

It's interesting that he went as far as to say "consistently better." My personal suspicion of the quality difference was "not as much worse as you'd expect, particularly in bright conditions." I find myself skeptical of such a strong statement, but as you said, we'll see in a week.
 
Yeah, I feel you. Even the new iPad's 264 ppi is too little for me now that I've seen the HTC Butterfly J with its 440 ppi display. Who can live with such subpar screens? I certainly can't! :rolleyes:
I use my gadgets 95% for reading text. I suspect I'm not the only one. The iPhone <4, iPad 1 and 2 were excellent devices, but the clunky character rendering always bothered me. It doesn't on iPhone >=4 and iPad 3 and I'm not going back.

440 ppi sounds great, but, as always, there are those pesky diminishing returns. I believe we're already there for modern phones, tablets and one laptop, but since I haven't seen 440 ppi on an LCD, that's just a feeling. I take your word for it that 440 ppi is amazing and something to look forward to.
 
I keep asking myself who in this day and age would want a tablet based on Tegra 3 (T30)?

Why didn't they go for Snapdragon, like Windows Phone 8? It perplexes me.
I don't think it matters. A Tegra 3 is fast enough for the most common use cases, just like I feel the A4 is still pretty fast on an iPhone 4 when I compare it to the A6.
 
Maybe the Surface is also using sub-pixel font rendering, something the iPad does not?

I don't know if i-devices use any sub-pixel font rendering, but if they don't, how hard would it really be to implement it?

With adequate sub-pixel font rendering a lower resolution might not be an issue while zooming into text. However, anyone will have a very hard time convincing me that there won't be any difference at all in 3D applications, in textures for example.

On the other hand, I've no idea what kind of performance drop anisotropic filtering comes with in the majority of cases on the Tegra 3's ULP GeForce. If it's reasonable, it's not really an issue either, IMHO.

On a note unrelated to the topic, I'm curious to see if NV considered tiling after all for the GPUs in their upcoming SoCs. The majority of competing GPUs are tile-based, and considering that applications are usually vsynced, the typical drop even for 4xMSAA is small in the majority of cases.
 
Will be interesting to see how that holds up on the 26th.
To be honest, the iPad does suffer from reflections on the glass, so lower reflectivity can make quite a difference. But higher contrast will not hide the screen door effect, and for the range of what I consider "realistic viewing distances" 147 ppi is not entirely sufficient to do that, either.
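As a back-of-the-envelope check (my own numbers, not anything from the linked tests, and assuming the usual ~1 arcminute 20/20 acuity figure), the density at which individual pixels stop being resolvable is roughly one pixel per arcminute of viewing angle, which works out as follows:

    import math

    def ppi_at_acuity_limit(distance_inches, arcmin_per_pixel=1.0):
        # PPI at which one pixel subtends `arcmin_per_pixel` arcminutes
        # at the given viewing distance; ~1 arcmin is the usual 20/20 figure
        pixel_pitch = distance_inches * math.tan(math.radians(arcmin_per_pixel / 60.0))
        return 1.0 / pixel_pitch

    for d in (12, 15, 18, 24):  # plausible phone/tablet viewing distances
        print(f'{d}" -> ~{ppi_at_acuity_limit(d):.0f} ppi')
    # 12" -> ~286 ppi, 15" -> ~229 ppi, 18" -> ~191 ppi, 24" -> ~143 ppi

So at a typical 15-18" tablet distance, 147 ppi is indeed below that limit while 264 ppi sits comfortably above it; only at arm's length and beyond do the two converge.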
 
Maybe the Surface is also using sub-pixel font rendering, something the iPad does not?
x86 tablets work the same way as any x86 computer (the only difference being the control method, assuming you don't connect a USB mouse & keyboard to your tablet, or a touchscreen monitor to your desktop PC). Sub-pixel font rendering works fine on both Windows 7 and Windows 8 (tested on a Samsung Slate tablet).

WOA (Windows on ARM) is of course very different, as it's designed only for ARM-based tablets and is tightly integrated into the device (not sold separately). But I would assume that Microsoft implements a similar sub-pixel font rendering system on WOA as well.
 
WOA (Windows on ARM) is of course very different, as it's designed only for ARM-based tablets and is tightly integrated into the device (not sold separately). But I would assume that Microsoft implements a similar sub-pixel font rendering system on WOA as well.
I assume the iPad doesn't have it because it takes too many CPU (or GPU?) resources? But I've no idea how much it really takes or how it's done. If it's too much for an A5, it's probably also too much for a Tegra 3?
 
I assume the iPad doesn't have it because it takes too many CPU (or GPU?) resources? But I've no idea how much it really takes or how it's done. If it's too much for an A5, it's probably also too much for a Tegra 3?

It's not CPU intensive at all, AFAIK. Windows Phone 7 has it, and it runs on relatively low-end ARM SoCs.
 
It's not CPU intensive at all, AFAIK. Windows Phone 7 has it, and it runs on relatively low-end ARM SoCs.
In sub-pixel rendering, do you still align the characters to pixel boundaries? That is: the intra-character rendering is sub-pixel aware, but the placement as a whole is not? This way, you can still easily use the GPU to blit them into position once you have them rendered to a cache.

(BTW: I'm taking this from memory that the iPad doesn't have it, but I may well be wrong on that one too...)
 
In sub-pixel rendering, do you still align the characters to pixel boundaries? That is: the intra-character rendering is sub-pixel aware, but the placement as a whole is not? This way, you can still easily use the GPU to blit them into position once you have them rendered to a cache.

(BTW: I'm taking this from memory that the iPad doesn't have it, but I may well be wrong on that one too...)

Here's how it's supposed to work:
http://www.grc.com/ctwhat.htm
http://msdn.microsoft.com/en-us/library/hh237264(v=vs.85).aspx

The iPad, iPhone, and other devices besides Microsoft's (PCs/tablets running Windows 7+ and Windows Phones) don't have it, AFAIK.
Some more talk about it over at http://www.displaymate.com/news.html
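For anyone wondering what those links boil down to in practice, here's a deliberately simplified sketch (my own, not Microsoft's actual filter) of the core idea: rasterize the glyph at 3x horizontal resolution and treat each pixel's R, G and B stripes as three separately addressable coverage samples. Real ClearType adds a wider filter and gamma correction on top of this to keep colour fringing in check.

    import numpy as np

    def subpixel_render(coverage_3x):
        # coverage_3x: H x (3*W) glyph coverage in [0, 1], rasterized at
        # 3x horizontal resolution; returns H x W x 3 per-channel coverage,
        # assuming an RGB horizontal stripe layout
        h, w3 = coverage_3x.shape
        w = w3 // 3
        # crude 3-tap box filter across neighbouring subpixels to soften
        # the colour fringes raw subpixel coverage would otherwise produce
        padded = np.pad(coverage_3x, ((0, 0), (1, 1)), mode="edge")
        filtered = (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0
        # consecutive subpixel triples become the R, G, B weights of one pixel
        return filtered[:, :w * 3].reshape(h, w, 3)

    def composite_on_white(rgb_coverage):
        # black-on-white text: output colour = white dimmed by per-channel coverage
        return np.round((1.0 - rgb_coverage) * 255.0).astype(np.uint8)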
 
I assume the iPad doesn't have it because it takes too much CPU(/GPU?) resources? But I've no idea how much it really takes and how it's done. If it's too much for an A5, it's probably also too much for a Tegra 3?

I can't imagine why the GPU wouldn't be capable of a form of spatial anti-aliasing, which is what ClearType actually is. While font AA shouldn't cost much on just about any of the recent small-form-factor GPUs, guess whether the SGX or the ULP GeForce would use more bandwidth and memory footprint for it.

Besides, the majority of the linked articles point out that there's no "ideal" default ClearType setting, and that's exactly why Microsoft includes a ClearType tuning application in its OSs, so the user can pick what looks best to his eye on the display he's actually using. I occasionally even stumble upon users who have ClearType turned off on LCD/TFT displays because it supposedly "blurs" text for their taste. Not really true, but hey, to each his own.
 
And yet, despite that, PCs still radically outsell Macs. They have done a bunch of things with the Surface to convince consumers to buy them. Suicidally low pricing or an ultra-high-resolution display are simply not among them. Whether or not those resonate with consumers as a whole remains to be seen.

That's a broad and vague statement, since PCs come in different price ranges and form factors. Hardly relevant to this specific argument. Apple has a 90% market share in the premium laptop segment and gets almost all the profit, so at the same price point Macs destroy PCs. I could easily use a similarly vague statement and include iPads in the discussion, which according to NPD made Apple the world's largest mobile PC supplier this quarter, beating HP.

I have seen both iPads and screens that use similar technology to the Surface's screen, and we have hard numbers on the properties of the two screens. Many people have also seen Surface's screen and have spoken positively of its clarity.

So I'd say my argument sounds a lot better than, "But the resolution is lower so it must suck."

http://www.displaymate.com/news.html#11

DisplayMate did the same comparison you did and according to them:

"The Windows ClearType 768p display on the Asus Netbook was significantly sharper than the iPad 2 768p display but also significantly less sharp than the new iPad 3 1536p display."

And I'm pretty confident that when people get the devices and can do unbiased testing, they will come to the same conclusion. But then again, common sense would tell you this: if ClearType rendering made a 768p screen as sharp as a 1536p screen, why did Microsoft use a 1080p screen for the Pro? Why drive up cost and hurt battery life? Funny how, when someone asked them that same question on Reddit, they ignored it.

You also seem to be implying that I've made some grand statement that Surface is going to immediately eat away all of iPad's market share. My point was significantly simpler than that. The single bullet point of resolution is simply not as important as you (and some others in this thread) are making it out to be.

That single bullet point becomes important when MS is already facing an uphill battle. When John Doe walks into the store, he is looking at an iPad which he knows from his friends and its overall reputation is a high-quality product; he knows it has all the apps, he knows it has iTunes, and he knows it has the higher-resolution screen. Apple doesn't need to convince people to buy iPads; they already wait in line to do it. Microsoft is the one who needs to convince people they should buy its product over what "everyone else" is buying.
 
That's a broad and vague statement, since PCs come in different price ranges and form factors. Hardly relevant to this specific argument. Apple has a 90% market share in the premium laptop segment and gets almost all the profit, so at the same price point Macs destroy PCs. I could easily use a similarly vague statement and include iPads in the discussion, which according to NPD made Apple the world's largest mobile PC supplier this quarter, beating HP.

And yet somehow, Apple's share in premium laptops doesn't come from competing on price.

http://www.displaymate.com/news.html#11

DisplayMate did the same comparison you did and according to them:

"The Windows ClearType 768p display on the Asus Netbook was significantly sharper than the iPad 2 768p display but also significantly less sharp than the new iPad 3 1536p display."

The Asus netbook they tested may have the same resolution and ClearType as the Surface, but it does -not- have the same low-reflectivity glass. So no, they didn't do the same comparison at all.

You seem to be under the mistaken impression that anyone here has argued that you don't increase the overall potential visible quality of a display by increasing resolution. Or that "ClearType is the answer." Or whatever. Obviously, if you have the hardware space to put in a higher-res display, it's a perfectly fine thing to do. They have said they didn't feel it was going to be viable in RT, and they did other things to cover the gap. The Pro is targeted as a larger, heavier device, and thus what can fit in it is different.


That single bullet point becomes important when MS is already facing an uphill battle. When John Doe walks into the store, he is looking at an iPad which he knows from his friends and its overall reputation is a high-quality product; he knows it has all the apps, he knows it has iTunes, and he knows it has the higher-resolution screen. Apple doesn't need to convince people to buy iPads; they already wait in line to do it. Microsoft is the one who needs to convince people they should buy its product over what "everyone else" is buying.

And for the 10th time, the entire point is that you are massively overstating the importance of resolution in overall display quality, particularly in the degree to which it matters to "John Doe." It's easy to write an amusing anecdote about what you think "everyone" is going to care about. Some people will care, others won't.

Microsoft is doing plenty of things to "convince people they should buy their product", but throwing in a very high res display that was likely to be too expensive and too heavy to be viable for the design goals of RT isn't one of them.
 
With adequate sub-pixel font rendering a lower resolution might not be an issue while zooming into text. However, anyone will have a very hard time convincing me that there won't be any difference at all in 3D applications, in textures for example.
Frankly, it's more the other way around. As the note about "resolution isn't everything, contrast factors in just as much" is getting at, high-contrast, razor-sharp edges are where this stuff matters, i.e. black text on a white background and the like. In games, where textures are filtered and post-processing often blurs things afterwards, you're not really going for that look, and your assets are not nearly as high contrast. In addition to the lack of GPU power, most 3D applications don't render at native resolution on high-DPI displays like the iPad's because, frankly, beyond a certain point you can't see the difference and it's just wasted time. If you have good multisample anti-aliasing, there's really no need to shade at those high frequencies visually.

Besides, the majority of the linked articles point out that there's no "ideal" default ClearType setting, and that's exactly why Microsoft includes a ClearType tuning application in its OSs, so the user can pick what looks best to his eye on the display he's actually using.
Right, but that's because different displays can have different subpixel arrangements for the color components. If you control the display (as in the Surface), there is no need for end-user configuration. You can directly use the known subpixel arrangement to generate the anti-aliased edge.
 
It doesn't seem to make much difference if you also take into account the paragraph that followed the quoted one above:

With adequate sub-pixel font rendering a lower resolution might not be an issue while zooming into text. However, anyone will have a very hard time convincing me that there won't be any difference at all in 3D applications, in textures for example.

On the other hand, I've no idea what kind of performance drop anisotropic filtering comes with in the majority of cases on the Tegra 3's ULP GeForce. If it's reasonable, it's not really an issue either, IMHO.

If I had the dilemma between 1280 with high-quality anisotropic filtering vs. 2048 with no AF, I'd obviously pick the first for better overall texture sharpness, especially at farther viewing distances, IF the performance drop for AF isn't all too big. I haven't yet seen any benchmarks with AF on vs. off to show how each GPU behaves with it, but since the majority of those GPUs have either one or two TMUs, I wouldn't suggest that AF is free.

In the case where AF costs too much, I'd rather have 2048 with no AF than 1280 with no AF, or any other resolution higher than the latter. Multisampling obviously has nothing to do with textures.
 
In sub-pixel rendering, do you still align the characters to pixel boundaries? That is: the intra-character rendering is sub-pixel aware, but the placement as a whole is not? This way, you can still easily use the GPU to blit them into position once you have them rendered to a cache.

AFAIK, yes.
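To make that concrete, here's a rough sketch of what such a caching arrangement could look like (my reading of the scheme described above, not how DirectWrite or Windows actually implements glyph caching; rasterize() and advance() are hypothetical stand-ins for whatever font engine supplies the glyphs):

    import numpy as np

    glyph_cache = {}  # (char, size) -> cached H x W x 3 per-channel coverage

    def draw_text(canvas, text, pen_x, pen_y, size, rasterize, advance):
        # canvas: H x W x 3 uint8 image; rasterize/advance are placeholders
        for ch in text:
            key = (ch, size)
            if key not in glyph_cache:
                # the subpixel-aware rendering happens once, inside the glyph
                glyph_cache[key] = rasterize(ch, size)
            cov = glyph_cache[key]
            x, y = int(round(pen_x)), int(round(pen_y))  # snap placement to whole pixels
            h, w = cov.shape[:2]
            region = canvas[y:y + h, x:x + w].astype(float)
            # black-on-background composite; a GPU would do this as a textured quad
            canvas[y:y + h, x:x + w] = (region * (1.0 - cov)).astype(np.uint8)
            pen_x += advance(ch, size)                   # advances can stay fractional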
 