Nvidia is now offering "official" Customer Support at Nzone.

ChrisRay

Hello everyone. I apologize if some consider this spam, but I think it's newsworthy. I just wanted to let everyone know that Nvidia is now offering some level of customer-support forum interaction. The purpose is to let Nvidia support users who don't have access to larger AIB partners, like EVGA, that provide additional support of their own.

The new support staff, "Peter S" and "Manuel G", will be helping users with technical assistance. This is one of the reasons we decided to merge SLI Zone and NZone. More information on the merger will become available over time.

There are also more changes coming to improve Nvidia's interaction with the community, which I'll be discussing at a later date.

Here's the FAQ on the subject.

http://forums.nvidia.com/index.php?showtopic=108983


Q: What is the best way to get official technical support for a product based on NVIDIA technology?

A: The best way is always to check with the product provider first. They are the only ones that can handle warranty and RMA related issues. If you bought a graphics card for example, contact the manufacturer of the card. If it came as part of a computer system (such as a Dell or HP), contact them first. If it’s an NVIDIA branded product, like 3D Vision, the NVIDIA support pages on www.nvidia.com are your best bet. Don’t forget the NVIDIA online knowledgebase as well, our research shows that over 98% of the people that visit our support website find the answer to their question there.



Q: What types of issues can we help with?

A: We’ll do our best to help with technical problems, driver questions and other technology related issues. Sometimes we’ll help right here within the thread, sometimes we may redirect you to the appropriate support group, or ask for more information either via PM or within the thread.



Q: What types of questions can’t we help with?

A: Manuel and I are part of the NVIDIA Customer Care team which is a part of NVIDIA Software Engineering. We are not a part of marketing or sales, so we can’t really answer questions about future products, business decisions, legal actions, conspiracy theories, naming conventions and so on. We can’t recommend one partner over another, or make specific product brand recommendations. We tend to be Windows-centric since that’s the bulk of our customer base although Manuel knows his way around a Mac pretty well too and of course he has a direct link to our Mac development and QA groups. We have other forums and support channels for Linux issues and OpenCL/CUDA developers.



Q: What about the existing forum mods and admins?

A: We have a great team of volunteers here and we couldn’t run these forums without them. Nothing changes, they are still here and happy to get involved as usual.



Q: Will you reply to every question and post?

A: We suspect that would be impossible due to volume. We’ll do our best and read everything we can, but Manuel will be picking and choosing where he feels he can offer the most value. We can't guarantee a reply to every question. Be patient as well, we hope to respond pretty quickly but can’t guarantee quick responses. Manuel will also be trying to repro issues, pinging internal groups, filing and tracking bugs, creating reports and many other things other than reading and writing forum posts.



Q: How can I best word a post to get an answer?

A: Ask a technical question, provide information about your system and the steps required to reproduce the issue. I am instructing Manuel to not waste a lot of time on threads that contain profanity or have dissolved into flame wars.



Q: Why doesn’t NVIDIA provide a toll-free number for technical support?

A: In a sense we do. Almost all of our partners that create products based on NVIDIA technology provide toll free support. For NVIDIA branded products, we also provide a direct toll-free phone number for those customers as well.



Q: What will you do with the information you gather on these forums?

A: First and foremost, we’ll try our best to help resolve issues as quickly as we can. We will try to repro legitimate and well documented issues, file internal bugs when it makes sense, help set priority on known issues and create internal reports that track the hottest issues. We take customer feedback very seriously and implement your suggestions whenever it makes sense to do so. For example, providing regular web drivers for notebooks was driven by the NVIDIA Customer Care team based on customer requests.



NVIDIA is committed to providing the best technical support experience that we can. Having an official Forum Technical Advisor is one step. We are also making improvements to our technical support systems and will be rolling out new support technology in the near future. We are looking forward to working with you to make the ownership of an NVIDIA based product as painless as possible.

Best regards,

--Peter S

Director, NVIDIA Customer Care
 
Very nice, I bet those 2 will be very busy :) Thankfully like most support forums, the general public usually answers most questions and performs troubleshooting steps with posters.
 
Would you like to ask Nvidia a question?

As a member of the user group, I have been responsible for watching trends and delivering feedback to Nvidia, and along with the other user group members I have been trying to get Nvidia to interact more freely with the community.

In an attempt to do this, we will now field several questions a week that Nvidia will attempt to respond to. At Nzone, Amorphous and I will go over the questions and try to get the most prevalent and relevant ones answered.

Nvidia will also be supplying a spot on Nzone ((the page is being built)) for the answered questions.


http://forums.nvidia.com/index.php?showtopic=109093&st=0#entry600985

Update: Please post in the Nzone thread. This is important for this to work, because I cannot coordinate the questions and feedback alone.


Greetings everyone. I would like to take this chance to invite Nvidia customers and enthusiasts to ask Nvidia a question.

1) Is a recent trend or development on your mind?

2) Have a question about Nvidia hardware?

3) Have a question about The Way It's Meant to be Played?


Amorphous and I will be collecting questions for Nvidia to answer. Please keep in mind that not "all" questions are going to be answered; we will try to pick and choose the best questions and the most supported subjects. There are, of course, some limitations. We will not be able to field questions about unreleased products or products Nvidia does not support ((for example, Radeon questions)). Also, be aware that if we feel a question has already been answered, we will link you back to the existing answer.

This is an exciting chance for us to help you communicate your questions, concerns, and feedback to Nvidia, and it will help Nvidia interact better with the community. Nvidia will be providing a spot to "answer" these questions in the near future; I will update/amend this post once it's available. We will try to submit 3 to 5 questions a week, and if this is successful, Nvidia is committed to continuing it. The number of answers received will also depend on the number of questions asked.


Remember: Amorphous and I will be closely monitoring this thread. Do not troll here; it will not be allowed.


Final note: This is not a "debate" thread. If you wish to debate the answers that are received, please do so in another thread. This post is specifically for asking questions and receiving answers, not for arguing about or debating them. You are free to discuss the answers anywhere else in this community or another.
 
Here's a support question, why do updated NVAPI.dll's block PhysX on machines that have an AMD card installed? :p

In all seriousness, it's a good thing Nvidia is providing at least some level of support to its customers.
 
Here's a support question, why do updated NVAPI.dll's block PhysX on machines that have an AMD card installed? :p
I actually looked up my nZone login details just to ask that, but then something stopped me from just being a dick. ;)

It is a good thing, and I wish them luck with it. Any attempts like this should be encouraged and not abused, even I can figure that out.
 
Yay about time, good work Chris (of course the credit is really all mine for going on at chris about nzone :D)

/goes off to make sure the official dudes know that "show only applications installed on this computer is broken yet again"
 
Nvidia answered its first round of questions... and I hand-picked the first 4 questions with Amorphous. So if you don't like the questions, well, talk to me about what you'd like to see. I tried to pick some of the tougher questions and criticisms Nvidia has received. We will be doing this weekly, trying to answer at least 2-3 questions a week, maybe more depending on time constraints.



1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.




Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and G200 before, Fermi has two personalities: graphics and compute. We chose to introduce Fermi’s compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi’s compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.



Also, it’s important to note that our reason for focusing on compute isn’t all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:

· Physical simulation – whether using PhysX, Bullet or Direct Compute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.

· Advanced graphical effects – compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic

· Artificial intelligence – compute shaders can be used for artificial intelligence algorithms in games

· Ray Tracing – this is a little more forward looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA’s ray tracing engine uses CUDA.



Compute is important for all of the above. That’s why Fermi is built the way it is, with a strong emphasis on compute features and performance.
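To give a concrete feel for the kind of data-parallel work those bullets describe (this sketch is not from NVIDIA's post, just an illustration), here is a hypothetical 3x3 box blur written in plain Python. Every output pixel is computed independently of the others, which is exactly what lets a compute shader assign one thread per pixel when running the same effect on a GPU:

```python
# Hypothetical example: a 3x3 box blur, the kind of post-processing
# effect the bullets above say a compute shader can accelerate.

def box_blur(image):
    """Average each pixel with its 3x3 neighborhood (edges clamped).

    `image` is a list of rows of grayscale values. Each output pixel
    depends only on the input image, never on other output pixels,
    so all pixels could be computed in parallel.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

if __name__ == "__main__":
    img = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
    blurred = box_blur(img)
    print(blurred[1][1])  # 1.0: the bright pixel is spread across its 3x3 neighborhood
```

A GPU version would replace the two outer loops with a grid of threads, one per output pixel; the inner averaging stays the same.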



In addition, we wouldn’t be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:

· PhysX and 3D Vision technologies

· The Way it’s Meant to be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.

· LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, Blizzcon, and Quakecon to name a few recent ones). Attached are some links to videos from those events.

http://www.slizone.com/object/slizone_eventsgallery_aug09.html

http://www.nzone.com/object/nzone_quakecon09_trenches.html

http://www.nzone.com/object/nzone_blizzcon09_trenches.html

http://www.nzone.com/page/nzone_section_trenches.html



We put our money where our mouth is here.



Finally, Fermi has plenty of “traditional” graphics goodness that we haven’t talked about yet. Fermi’s graphics architecture is going to blow you guys away! Stay tuned.



2. Why has NVIDIA continued to refresh the G92? Why didn't NVIDIA create an entry-level GT200 piece of hardware? The constant G92 renames and reuse of this aging part have caused a lot of discontent amongst the 3D enthusiast community.



Jason Paul, GeForce Product Manager: We hear you. We realize we are behind with GT200 derivative parts, and we are doing our best to get them out the door as soon as possible. We invested our engineering resource in transitioning our G9x class products from 65nm to 55nm manufacturing technology as well as adding several new video and display features to GT 220/210, which put these GT200-derivative products later in time than usual. Also, 40nm capacity has been limited, which has made the transition more difficult.



Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let’s take an example – what is the higher performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or Best Buy retail shelf probably won’t know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that a 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to technical press that these new brands were based on the G92 core so enthusiasts would know this information up front.



3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi gpus? What is NVIDIA’s official stance in allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?



Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can’t really give PhysX away for “free” for the same reason why a Havok license or x86 license isn’t free—the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.



4. Is NVIDIA fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as appears to be the current trend to many 3D Vision users? For example, a lot of games have major issues with shadows while running 3D Vision. Can profiles fix these issues, or are we going to have to rely on developers to implement 3D Vision-compatible shadows? What role do developers play in having a good 3D Vision experience at launch?


Andrew Fear, 3D Vision Product Manager: NVIDIA is fully committed to 3D Vision. In the past four driver releases, we have added more than 50 game profiles to our driver and we have seeded over 150 3D Vision test setups to developers worldwide. Our devrel team works hard to evangelize the technology to game developers and you will see more developers ensuring their games work great with 3D Vision. Like any new technology, it takes time and not every developer is able to intercept their development/release cycles and make changes for 3D Vision. In the specific example of shadows, sometimes these effects are rendered with techniques that need to be modified to be compatible with stereoscopic 3D, which means we have to recommend users disable them. Some developers are making the necessary updates, and some are waiting to fix it in their next games.



In the past few months we have seen our developer relations team work with developers to make Batman: Arkham Asylum and Resident Evil 5 look incredible in 3D. And we are excited now to see new titles that are coming – such as Borderlands, Bioshock 2, and Avatar – that should all look incredible in 3D.

Game profiles can help configure many games, but game developers spending time to optimize for 3D Vision will make the experience better. To help facilitate that, we have provided new SDKs for our core 3D Vision driver architecture that lets developers have almost complete control over how their game is rendered in 3D. We believe these changes, combined with tremendous interest from developers, will result in a large growth of 3D Vision-Ready titles in the coming months and years.

In addition to making gaming better, we are also working on expanding our ecosystem to support better picture, movie, and Web experiences in 3D. A great example is our support for the Fujifilm FinePix REAL 3D W1 camera. We were the first 3D technology provider to recognize the new 3D picture file format taken by the camera and provide software for our users. In upcoming drivers, you will also see even more enhancements for a 3D Web experience.


5) Could Favre really lead the Vikings to a Superbowl?



Ujesh Desai, Vice President of GeForce GPU Business: We are glad that the community looks to us to tackle the tough questions, so we put our GPU computing horsepower to work on this one! After simulating the entire 2009-2010 NFL football season using a Tesla supercomputing cluster running a CUDA simulation program, we determined there is a 23.468% chance of Favre leading the Vikings to a Superbowl this season.* But Tesla supercomputers aside, anyone with half a brain knows the Eagles are gonna finally win it all this year! :)



*Disclaimer: NVIDIA is not liable for any gambling debts incurred based on this data.
 
Even though I don't game in 3d I'm happy to hear that they'll be updating and further fixing the problems with it.

But I'm mostly excited about the AO mention. Good to hear they're still supporting it. I feared for a while there that they were dropping it. Hopefully we'll see the existing games fixed up and lots more added.

Thanks for the updates.
 
This week's questions and answers. Got Jen-Hsun to respond to one ;)

Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?


Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. And we’ve encapsulated our algorithm expertise into engines, such as the Optix ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world class graphics and parallel computing experts in our Content Technology whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build Geforce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build Geforce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are visual computing focused.

For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or to just make your computer interaction lively. It surely gives me great joy to know what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.

Jensen


Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?

Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.



For now, the only working solution for GPU accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs that we do that on are not the most important part of the story to developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers and generally anything that helps complete their development cycles.




Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology?



Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :p). We’ll share more details when we introduce Fermi’s graphics architecture shortly!
 
Tier? Tiering?

or Tiring? I don't understand. I'm also tired. :p
 
"Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology? Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys ). We’ll share more details when we introduce Fermi’s graphics architecture shortly!"

Well that's good to finally know. At least it'll end all the speculation. Thanks for the update.
 
hardware dedicated to tessellation ≠ a dedicated tessellator

you can have some caches dedicated just to tessellation, but that doesn't mean the tessellation itself is done on dedicated hardware (I'm not saying this is or isn't the case, but the statement itself doesn't clarify much)
 
This comment was funny:

"It surely gives me great joy to know what started out as “the essential gear of gamers for universal domination” is now off to really save the world."

And Tom Petersen wrote two paragraphs and managed not to answer either of the two questions. If anything, this provides good entertainment value. ;)
 
Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :p). We’ll share more details when we introduce Fermi’s graphics architecture shortly!

Ha, I'm sure it does ;) As an aside, I've got a draft of our Fermi thing doing the rounds at the moment, hopefully out this week, where I'll talk about what I meant a bit more.

Great thread, thanks Chris and the NVIDIAns who're answering.
 