NVIDIA shows signs ... [2008 - 2017]

Also, given that most exploits involve some kind of social engineering for privilege escalation, it's a pretty dumb statement. Stuff like sudo/UAC won't help you if the user clicks yes.

I have been running Windows 7 since the RC without AV. I ran XP without AV for about 6 years and lost my machine 2 or 3 times to viruses. A lot of it is about where you go and what you do.

The problem is that if you get caught out, your machine is not just rooted: it can be used to relay spam, take part in botnet attacks, and your personal data (mail logons, bank details, credit card numbers, etc.) can all be compromised, which moves a virus from trashing your machine to trashing important parts of your daily life.

So yeah, you can avoid a lot of problems by being careful, but the one time you get fooled is the time you really need that AV to pop up and stop some scumbag from emptying your bank account. Nowadays, with zero-day attacks against Windows or Adobe, rootkits, fake anti-virus apps, etc., it's easier than ever to get infected if you don't run some sort of reputable AV.

Security isn't all or nothing, it's lots of layers, every one making you that bit more secure.
 
Cross-posted from the Tegra thread...

Looks like Nvidia and their troubled Tegra lost yet another client due to lack of performance: http://www.engadget.com/2010/09/13/boxee-box-ditches-nvidias-tegra-2-for-intel-ce4100-pre-orders/

At a rendezvous in San Francisco, Avner Ronen told us the decision to abandon Tegra 2 was about performance and nothing more: "The major problem we had with the Tegra 2 was support for high-profile HD playback," he said. "You can do high-profile VC-1 with Tegra 2, but not H.264." It was a problem of bitrate, he told us, and while NVIDIA's dual-core Tegra T20 was apparently not up to the task, the team had internally tested Intel's CE4100 decoding streams at up to 90 megabits per second.

San Francisco, Calif., Sep 13, 2010 - Today at the Intel Developer Forum (IDF), D-Link and Boxee announced the upcoming Boxee Box by D-Link is now powered by the Intel® Atom™ processor CE4100. Additionally, starting today, US customers will be able to pre-order the Boxee Box by D-Link exclusively through Amazon.com at http://amzn.to/boxeeboxbydlink. Units will begin shipping in November 2010 in the US, Canada, EU and Australia.

The award winning Boxee Box by D-Link, first released in December 2009, received critical acclaim from the industry and is highly anticipated by consumers. The product is the first media device capable of delivering both free and premium TV shows and movies, videos, music, and photos from the Internet with support for full 1080p high-definition (HD) and 5.1 surround sound for popular digital media formats.

The Boxee Box uses the Intel Atom CE4100, Intel's system-on-a-chip designed for TV and Internet integration. It provides performance, audio visual and graphic capabilities to enable Internet-driven applications, video, personal media, advanced user interfaces and electronic program guides. The new chip lets the Boxee Box deliver a wide range of applications and ensures support for 1080p content, no matter the source.
 
Removing the NEON units from the Cortex CPUs does not seem to have paid off for NVIDIA.
 
Removing the NEON units from the Cortex CPUs does not seem to have paid off for NVIDIA.
It has absolutely nothing to do with that and everything to do with the dedicated video decode blocks. None of the high-end application processors use NEON in any way for video processing; there will likely be low-end chips that end up using it for VGA/D1-level video decode/encode though.
 
It has absolutely nothing to do with that and everything to do with the dedicated video decode blocks. None of the high-end application processors use NEON in any way for video processing; there will likely be low-end chips that end up using it for VGA/D1-level video decode/encode though.

Well, do you think that a pair of NEON units could have helped to fix (through software means) such a video decoding deficiency without destroying Tegra 2's power budget?
That is what I was saying. More processing punch might allow you to fix some mistakes... I am unclear whether two vector units could have helped that design loss turn into a design win. The fact of the matter is that, as it stands, H.264 has decoding issues on Tegra 2, which lost them a customer. If something like being able to use NEON could have helped to do in software what the video decode blocks could not, they might still have that customer... or maybe not, depending on what having those twin NEON units might have done to power consumption.
 
*snicker*

Note that according to our sources there’s also a rather particular GeForce 210 or GT 220 on the market. This is an interesting case which reveals certain doubtful sales practices that are far too common at the low end of the ranges. This card is equipped with a cut-down GT216 with 24 shader units and 8 texture units. NVIDIA sells this version off cheap as the GeForce 210 as long as partners accept to buy x number of “true” GeForce 210 512 MB cards (based on the GT218 here) at the same time. As these 512 MB cards aren’t particularly attractive to some, NVIDIA sells its card based on the cut-down GT216 at a slightly higher price, but as a GeForce GT 220.
 
Well, do you think that a pair of NEON units could have helped to fix (through software means) such a video decoding deficiency without destroying Tegra 2's power budget?
That is what I was saying. More processing punch might allow you to fix some mistakes... I am unclear whether two vector units could have helped that design loss turn into a design win. The fact of the matter is that, as it stands, H.264 has decoding issues on Tegra 2, which lost them a customer. If something like being able to use NEON could have helped to do in software what the video decode blocks could not, they might still have that customer... or maybe not, depending on what having those twin NEON units might have done to power consumption.
H.264 in general is fine; 1080p High Profile specifically is not, simply because the dedicated hardware isn't strong enough. Interestingly, the specs I heard never implied Tegra 2 could run 1080p High Profile at 10 Mbps, so I'm not sure what Boxee was thinking, unless NV claimed they could boost it through software optimisations but failed to do so.

In theory NEON could assist there, but: 1) The video block likely expects a full stream and reads directly from memory, I rather doubt it'd be flexible enough to benefit from NEON handling one part of the process or a small percentage of full frames. 2) It'd have a huge power penalty, 2x1GHz NEON isn't exactly cheap compared to fixed-function logic. 3) It just wouldn't be enough anyway. With CE4100, Boxee can do Blu-ray 2.0 with >40Mbps H.264 and dual-stream.
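
To put a very rough number on point 3 (purely a back-of-envelope sketch: the cycles-per-macroblock figure below is an assumed round number for illustration, not anything measured on Tegra 2 or any real decoder): 1080p30 works out to roughly 245,000 16x16 macroblocks per second, so even a modest assumed software cost per macroblock eats most of a 1 GHz core before high-bitrate entropy decoding is even counted.

Code:
// Back-of-envelope only: cycles_per_mb is an assumed, hypothetical figure for
// illustration, not a measurement of any real H.264 decoder or of Tegra 2.
#include <cstdio>

int main() {
    const double mb_per_frame = (1920.0 / 16.0) * (1088.0 / 16.0); // 1080p rounded up to 16x16 macroblocks
    const double fps = 30.0;
    const double mb_per_sec = mb_per_frame * fps;                  // ~245,000 macroblocks/s

    const double cycles_per_mb = 3000.0;  // assumed software cost (motion comp, deblocking, entropy decode)
    const double core_hz = 1.0e9;         // one 1 GHz Cortex-A9 core

    const double load = mb_per_sec * cycles_per_mb / core_hz;
    printf("Macroblocks per second: %.0f\n", mb_per_sec);
    printf("Estimated load on one 1 GHz core: %.0f%%\n", load * 100.0);
    return 0;
}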

It's very possible that not bothering with NEON will publicly hurt NVIDIA eventually (everyone else seems to include it, although interestingly the A15 also has an option without NEON), but this doesn't appear to be related to it. We'll see.
 
Is that really any surprise? NV has been doing such practices for some time, which is ironic given the complaints leveled against Intel for said practices... (i.e. bundling)

Intel's bundling was to discourage OEMs from using competitors' complementary products. Also, I don't recall Intel getting flak for bundling; it was for kickbacks/bribes, IIRC. If they were offering CPU X at a discount if some amount of CPU Y was bought, then it would be similar to what Nvidia is doing - which has nothing to do with the competition.
 
Intel's bundling was to discourage OEMs from using competitors' complementary products. Also, I don't recall Intel getting flak for bundling; it was for kickbacks/bribes, IIRC. If they were offering CPU X at a discount if some amount of CPU Y was bought, then it would be similar to what Nvidia is doing - which has nothing to do with the competition.

IIRC (and correct me if I am wrong), I thought NV's major beef with Intel (besides the whole QPI bus thing) was that Intel was bundling Atom + chipset together, so that if an OEM wanted to build an Ion + Atom system, they HAD to buy Intel's bundled Atom + chipset and then add NV's ION to that (thus adding to cost), instead of simply being able to buy Atom and then add ION. Really not much of a difference... Intel was saying that in order to buy W (Atom) you had to buy X (chipset), just as NV was saying that in order to buy Y (Fermi/GF2x0) you had to buy Z (240, 250/220).
 
Is that really any surprise? NV has been doing such practices for some time, which is ironic given the complaints leveled against Intel for said practices... (i.e. bundling)

NVIDIA is not in a dominant position, so they can do that. It's sleazy, but I doubt you could find a judge who would deem it illegal.
 
Really not much of a difference...

There is a very large difference when you are talking about a complementary and necessary component like a chipset. The bundled chipset discouraged OEMs who wanted Atom from considering chipsets from other vendors. How does a bundled GPU+GPU deal from Nvidia discourage OEMs from purchasing AMD GPUs? They aren't complementary or related at all.

And this wasn't a Fermi deal. This was a deal to get hobbled GT216s cheap if you bought GT218s. That's hardly a move that squeezes AMD out of the market - as the article states, people could easily walk away from it because hobbled GT216s aren't some hot item, unlike an Atom CPU.
 
NVIDIA not only shows signs of strain, its future is pretty bleak too. NV will either abandon the general GPU market or, most likely, will be bought by IBM or a similar heavyweight. Think about it:

1. Nvidia has already lost integrated chipsets in motherboards.

2. They will lose the massive low-end graphics market too; integrated GPUs in CPUs will make sure of that. Both Intel and AMD are launching products as we speak.

3. Nvidia has already been behind in the mid/high-end market for almost two generations. They can still catch up, but is it worth it? No doubt they'll try, but AMD isn't sleeping, while Nvidia will be losing money (or making pennies at most). NV is a rich company, but their funds won't last forever.

4. The HPC and GPGPU market - Nvidia has already lost this battle, and I think they know it too; it's just that JHH's ego won't accept it. My guess is that within 3 to 5 years there won't be more than traces of NV products in it.

Read about Intel's Knights Ferry:
http://www.pcworld.com/businesscente..._32_cores.html
http://www.intel.com/pressroom/archi...100531comp.htm

Not only does Intel have vastly more resources and influence in the corporate market, their product is actually much more attractive to developers and customers. Devs can very easily port their specialized software (or write for it), while customers get a high-performance product in a bundled deal - Xeon + GPUs in one.

AMD will get a small part of this market too, and even their chances are much better than Nvidia's. The green goblins have lost HPC; I don't see any way around it. All they can do is prolong the agony, not win the battle. CUDA will never be as attractive as what Intel is offering.

5. Most GPUs for games today go to consoles, and as far as I know, Nvidia doesn't have a single next-gen contract.

6. Tegra and similar products are getting canceled or postponed. Yes, Nvidia can still claw their way into this market, but the competition is very strong there too.

What's left for Nvidia?

Only the professional workstation market. This will keep NV afloat for a while, but it won't last long. Why? Because NV needs its consumer discrete market to offset the costs required for R&D. Think about all the previous GPU makers in this market - none of them maintained market share (or even survived) when their only income was professional GPUs, not one of them.

AMD is increasing its share in it too, but in the long run it won't be the only threat to NV - Intel will launch its Knights Ferry Xeon + Larrabee combination for this market too, and soon. What do you think software powerhouses will embrace more, CUDA or an x86 GPU platform from Intel? I think we all know the answer.

Conclusion:

In the long run Nvidia doesn't have a chance of surviving in the consumer GPU market, and they will either find their niche market or be bought out - and the sooner the better for them, since their stock's value will only go down from now on.
 
You are a bit fast. No one has a contract for a next-gen console yet, so who says they won't get one or more?
It's possible, but the rumors so far point to AMD, Intel and IBM. The only rumor about NV I heard was from BSN about Tegra last year, but given how "reliable" BSN is, what Tegra's acceptance looks like (customers canceling it, etc.), and considering that AMD and Intel can offer more attractive solutions (Fusion, Intel's Knights tech, etc.), it's a bit unlikely NV wins a next-gen console contract. Even if Nvidia does win, it won't help them that much.
 
Nvidia lost HPC and GPGPU? To Intel because of products not coming to market for a couple of years?
Have you had a look at what some companies presented at GTC this year?
http://blogs.nvidia.com/ntersect/2010/09/cancer-research-and-supercomputing.html#more
http://blogs.nvidia.com/ntersect/2010/09/gpu-technology-conference-day-1-recap.html See video from about 50 seconds out.

And I don't think it's anywhere near as clear cut as Intel trouncing CUDA. CUDA is getting quite a foothold in research institutes and many HPC companies, where the cost of a bit of extra CUDA training is easily offset by the potential gains. And I don't think this will change until Intel releases actual hardware, by which time the CUDA user base will have had time to grow even further.

Also, what makes you think the Intel solution will be so easy to use just because it runs on an x86 CPU/GPU? Programming models will differ, synchronisation will have to be handled, and it will be a very new architecture which will probably take time to mature.
Not to mention that we have yet to see Intel make a GPU suited to GPGPU tasks. (At least I have not heard of anyone trying to do GPGPU on any of Intel's current GPU offerings.)

So I think you're a bit too fast to call this one. Though I do agree with you that Nvidia is going to have their work cut out for them!
 
If you can gain more performance for the same cost, then a language change will be the last problem in large HPC projects (at least from the PDF it seems they are aiming it at the realm of supercomputers, as a Xeon + coprocessor).
With Intel's funds and superiority in fab technology, Nvidia will have it hard (if not impossible) in the far future.
 
@MrGaribaldi, I agree NV will keep gaining share for the next couple of years, but after that they'll nosedive, hence only traces of their products 5 years from now, IMO. NV isn't making any money in this market at the moment (actually they are losing a lot); this should change in the future, but by then Intel will have a stronghold with much more attractive solutions.

Intel is launching next year, and devs and bigger customers are already working on it. What we have is a vastly more influential company, with vastly easier and more universal tech to work with, versus a relatively small company with proprietary and limited tech that hasn't even achieved critical acceptance or profitability in this market. Who do you think wins in the long run? To me the question isn't who wins, but how long it will take for Intel to overshadow NV.
 
@MrGaribaldi, I agree NV will keep gaining share for the next couple of years, but after that they'll nosedive, hence only traces of their products 5 years from now, IMO. NV isn't making any money in this market at the moment (actually they are losing a lot); this should change in the future, but by then Intel will have a stronghold with much more attractive solutions.
Ah, OK. I think you have too much confidence in Intel. They have a very good track record creating multi-core CPUs, but I am not sure that this will easily transfer to many-core CPUs. Sure, they will get there, but how long it will take, I do not know.

Intel is launching next year, and devs and bigger customers are already working on it.
Yup, my bad. Misread the PcWorld article.

What we have is a vastly more influential company, with vastly easier and more universal tech to work with, versus a relatively small company with proprietary and limited tech that hasn't even achieved critical acceptance or profitability in this market.
I see where you are going, but I'm not sure I buy the "easier and more universal tech" part.
Programming a CUDA core is actually quite easy; the problem is splitting the work up in such a way that it fits the hardware (see the sketch just below). And I don't see how the new Intel chips will make that easier. In fact, the constraints of the memory model in CUDA can be seen as a plus, since they make it very hard to get deadlocks or stale data.
We already see problems today writing and debugging programs on multicore CPUs, with race conditions, deadlocks, etc. making it hard to write efficient, working code. Now add in the problem of cache thrashing with 50 cores pulling different data into a coherent cache, and I don't really see the ease with which we will write programs for this new hardware.
Sure, it will be worked out in time (Intel is already working on this with TBB, Intel Thread Checker, etc.), but this will give Nvidia more time to adapt and find new solutions.
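
As a purely illustrative aside (the kernel and variable names below are made up, not from any real project), this minimal CUDA sketch shows how trivial the per-thread code usually is and how the real decisions sit in the grid/block decomposition and the host-side data movement:

Code:
// Minimal CUDA sketch: element-wise scaling of a vector. The kernel body is
// trivial; the work is in choosing the decomposition and moving the data.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // The "splitting up": map the problem onto blocks of 256 threads each.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d, 2.0f, n);

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);  // expect 2.000000

    cudaFree(d);
    delete[] h;
    return 0;
}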

Who do you think wins in the long run? To me the question isn't who wins, but how long it will take for Intel to overshadow NV.
As it looks right now, I agree with you that Intel seems to be the most likely winner in the long run. But I feel it is too early to say Nvidia is dying.
If Nvidia were to come to market with a solution with both their CUDA cores and integrated multi-core ARM chip(s), giving us a system on a card (if not chip), they could very well turn things around.
 
If Nvidia were to come to market with a solution with both their CUDA cores and integrated multi-core ARM chip(s), giving us a system on a card (if not chip), they could very well turn things around.
I thought Nvidia would go down a similar road two years ago; do you remember their talks with VIA? It would have been a very interesting partnership, but NV burned that bridge. Nor would it have been powerful enough for high-end games, and the same probably goes for ARM chips. Fusion products look much more attractive: a modular CPU with GPU means customers can get whatever power they want. AMD is probably the front runner for next-gen consoles as well.
 