Nvidia shows signs in [2018]

If there is one lesson AMD has taught Intel, repeatedly, it is that you NEVER let your foot off the gas.
Exactly. Sharks who stop swimming drown.*

Also, by waiting too long to replace Pascal, or even to hint at a replacement, you may well end up with holdouts who were sitting on Maxwell-generation (or even older) cards, who originally wanted a bigger performance upgrade than Pascal offered, buying a Pascal card anyway because the wait simply got too long or their old card died. When Pascal's successor finally arrives, they're no longer in the market for a new GPU; those are lost customers.

*Actually, no, they don't; we know that now. Still a good figure of speech, though.
 
Pretty sure nVidia is limited by engineering resources too, not ambition. They are fighting on multiple fronts (quite successfully on all of them), so it isn't such a big surprise that one year they may prioritize AI and HPC over gaming architectures.
 
Although there is synergy in GPU R&D across GeForce/Quadro/Tesla, I cannot see these ever being fully disconnected from each other, given the timeframes involved in feeding R&D results back between the various product teams; sharing that work helps reduce pressure, cost, and resource demands.
One senior Nvidia engineer mentioned a while back that Tensor Cores were brought into R&D early in V100's development from a Tesla perspective, but the technology also has relevance and synergy with GameWorks/OptiX.

But that also puts pressure on keeping product cycles reasonably on schedule, because it would be very difficult to stretch the launch window between the segments too far; case in point, we have seen neither a next-generation GeForce nor next-generation Tesla/Quadro GPUs.
The risk of messing this up too badly can be seen, as others have said, with Intel.

That said, I'm not sure Intel will ever learn from others. Take their 10nm debacle: IHVs told them repeatedly that their approach went against what the industry knew worked, and only now do they finally admit they have problems, despite having had plenty of time to rectify things and plenty of heads-up from others.
 
Nvidia Hitting On All GPU Cylinders
May 11, 2018
“The largest inference opportunity for us is actually in the cloud and the datacenter,” explained Huang. “That is the first great opportunity. And the reason for that is there is just an explosion in the number of different types of neural networks that are available. There is image recognition, there is video sequencing, there is video recognition, there are recommender systems, there is speech recognition and speech synthesis and natural language processing. There are just so many different types of neural networks that are being created. And creating one ASIC that can be adapted to all of these different types of networks is just a real challenge.”

Public clouds are starting to ramp up Tesla V100s in their infrastructure, and we suspect that some are also buying Pascal Tesla P100s to meet demand. This, too, is driving up sales.

The DGX line of hybrid CPU-GPU appliances is also adding to the datacenter revenues, and Huang said that it was now “a few hundred million dollar business.” We take this to mean that the annualized run rate for DGX system sales (including to Nvidia itself, so far its biggest customer) is running at a few hundred million dollars. The datacenter business as a whole has an annualized run rate of $2.8 billion based on the fourth quarter and has trailing twelve month sales of $2.22 billion, so DGX might represent 15 percent of datacenter revenues at this point, which is one reason why the DGX line exists and why Nvidia is not afraid to compete against its OEM, ODM, and cloud partners in servers.
https://www.nextplatform.com/2018/05/11/nvidia-hitting-on-all-gpu-cylinders/
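The article's 15 percent estimate squares with the run rate it quotes; a quick back-of-the-envelope check (mine, using only the figures above):

```python
# Quick check on the article's arithmetic, using its own figures.
datacenter_run_rate = 2.8e9        # $ per year, annualized from the quarter
dgx_share = 0.15                   # the article's estimate
print(datacenter_run_rate * dgx_share)   # -> 4.2e8, i.e. "a few hundred million"
```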
 
Nvidia’s ‘infinite resolution’ patent could change gaming forever
In a patent filing released on Thursday, June 7, Nvidia describes a technology that could fundamentally alter the way games look, feel, and perform. Nvidia calls it “infinite resolution,” and it’s effectively a clever way of using vector graphics to replace static textures in games. Let’s dive into what that means and why it could be a big deal.
...
Currently, developers package games with a series of these textures, one for each resolution the game runs at, and one for each detail setting at each resolution. This requires a lot of storage space, and it means that today’s games have a ceiling or a maximum resolution.
...
Nvidia’s solution would fix these issues. Instead of packaging games with a massive set of static textures, games built using Nvidia’s technology would include only a single set of texture information, not the actual textures themselves. Effectively, each in-game texture would be drawn in real time from instructions the developers include in the game. Your computer would use its processing and graphics rendering horsepower to do the heavy lifting here.
...
To be clear, Nvidia has been working on this for quite a while, but this latest patent filing suggests the company could be close to bringing it to market.
https://www.digitaltrends.com/computing/nvidia-infinite-resolution-patent-filing/
 
So they started using PDF or SVG technology... ;)
 
Sounds like procedural stuff that isn't a good fit for many purposes, but is excellent for some.
 
Details are here:

We propose a new texture sampling approach that preserves crisp silhouette edges when magnifying during close-up viewing, and benefits from image pre-filtering when minifying for viewing at farther distances.

During a pre-processing step, we extract curved silhouette edges from the underlying images. These edges are used to adjust the texture coordinates of the requested samples during magnification. The original image is then sampled -- only once! -- with the modified coordinates.

The new technique provides a resolution-independent image representation capable of billions of texels per second on a mid-range graphics card.

Link: http://research.nvidia.com/publication/infinite-resolution-textures
Paper: http://research.nvidia.com/sites/default/files/pubs/2016-06_Infinite-Resolution-Textures/inret.pdf
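The paper's actual pipeline is more involved (it extracts curved silhouette edges in a pre-process and runs on the GPU), but the core sampling trick is easy to picture. Here is a toy NumPy sketch of that idea, with a single straight edge standing in for a silhouette curve; the function names and the snapping rule are my own simplification, not code from the paper.

```python
import numpy as np

def bilinear_sample(img, u, v):
    """One bilinear tap into `img` at continuous UV coords in [0, 1]."""
    h, w = img.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def sample_with_silhouette(img, u, v, p0, p1, texel):
    """If (u, v) falls within one texel of the edge p0->p1 (UV space),
    push it away from the edge so the bilinear footprint no longer
    straddles it, then sample the original image exactly once."""
    p = np.array([u, v], dtype=float)
    a, b = np.array(p0, dtype=float), np.array(p1, dtype=float)
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    nearest = a + t * ab                     # closest point on the edge
    offset = p - nearest
    dist = np.linalg.norm(offset)
    if 0.0 < dist < texel:                   # inside the blur band: snap out
        p = nearest + offset / dist * texel
    return bilinear_sample(img, p[0], p[1])
```

The payoff is what the abstract describes: under magnification the edge stays crisp because no sample ever blends across it, while ordinary pre-filtered sampling is untouched everywhere else.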
 
Nvidia demos AI method to convert 30fps video into 480fps slow-motion video
Researchers from Nvidia have developed a method that uses AI to interpolate video frames. This makes it possible to convert a standard recording at, say, 30fps into a slow-motion video at 240 or 480fps. Check out the video below the fold; it's pretty impressive.
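Nvidia's result comes from a trained neural network (the work they published as "Super SloMo"), so nothing this simple is what they demoed; but for a rough feel of the underlying idea, here is the classical flow-based version of frame interpolation using OpenCV: warp both neighbouring frames toward the intermediate time along dense optical flow, then blend.

```python
import cv2
import numpy as np

def interpolate(frame_a, frame_b, t):
    """Synthesize an in-between frame at time t in (0, 1) by warping
    both real frames along dense optical flow and blending them.
    Classical stand-in for Nvidia's learned approach, artifacts and all."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Approximate backward warps toward time t (real methods refine these).
    map_ax = (grid_x + t * flow[..., 0]).astype(np.float32)
    map_ay = (grid_y + t * flow[..., 1]).astype(np.float32)
    map_bx = (grid_x - (1 - t) * flow[..., 0]).astype(np.float32)
    map_by = (grid_y - (1 - t) * flow[..., 1]).astype(np.float32)
    warped_a = cv2.remap(frame_a, map_ax, map_ay, cv2.INTER_LINEAR)
    warped_b = cv2.remap(frame_b, map_bx, map_by, cv2.INTER_LINEAR)
    return cv2.addWeighted(warped_a, 1.0 - t, warped_b, t, 0.0)

# 30fps -> 240fps means 7 synthetic frames between each real pair:
# mids = [interpolate(a, b, i / 8) for i in range(1, 8)]
```

Occlusions and fast motion are exactly where a naive blend like this falls apart, which is where the learned method earns its keep.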
 
Not bad, but far from perfect or even good enough; there's horrible artifacting on the hockey clip, at least.
Yeah, that was probably the most problematic clip, but it is pretty impressive nonetheless.
 
NVIDIA invites journalists for Gamescom
This year’s Gamescom starts on August 21st. It’s a multi-day event focusing on new gaming titles. We have been told NVIDIA is *allegedly* sending invitations to the press for this very event. We can’t give you any date yet because, as far as we know, it isn’t a single date. It could happen before Gamescom, at Gamescom, or shortly after.

What’s important: nowhere does the invitation mention a new GeForce series or new hardware. What is promised is a hands-on presentation of the latest PC gaming titles. That’s it. Yet still, no one would offer a free trip to Europe if it weren’t important.
https://videocardz.com/newz/nvidia-invites-press-for-gamescom
 
Bosch, Daimler, Nvidia Seal Robotaxi Pact

A German automotive tag team of tier one automotive supplier Bosch and OEM Daimler announced Tuesday their choice of Nvidia as the AI platform for development of a robotaxi scheduled for mass production in the early 2020s.
...
Specifically, the two German companies — Bosch and Daimler — will join to deploy Nvidia’s Drive Pegasus platform for “machine-learning methods in generating vehicle-driving algorithms,” according to Bosch. Nvidia will provide its Drive Pegasus based platform, including “high-performance AI automotive processors along with system software,” Bosch added.
...
Meet Drive Pegasus. Nvidia boasts that Pegasus, consisting of two Xavier SoCs and two yet-to-be announced “next-generation” GPUs, delivers 320 TOPS (trillions of operations per second) to “handle diverse and redundant algorithms.” More important, Shapiro stressed, “Pegasus offers the most energy efficient solution at one TOPS per watt.”

Asked about the new GPU inside Pegasus, Shapiro declined to provide details, noting that the company has not disclosed it yet.


[Image: inside the Xavier SoC]
https://www.eetimes.com/document.asp?_mc=RSS_EET_EDT&doc_id=1333462
Edit: Correct link.
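Taking the quoted marketing numbers at face value, the implied power budget is simple arithmetic (my back-of-the-envelope, not an official spec):

```python
# Implied platform power from the figures quoted above
# (back-of-the-envelope only; not an official specification).
claimed_tops = 320.0            # trillions of operations per second
claimed_tops_per_watt = 1.0     # "one TOPS per watt"
print(claimed_tops / claimed_tops_per_watt)   # -> 320.0 (watts)
```

Roughly 320 W for the whole platform, which may help explain why Pegasus is aimed at robotaxis rather than consumer cars.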
 
So previously the safety features weren't functional?
Mobileye's safety model is currently only a proposal, and based on what I've read it's far from perfect. I think Nvidia is focusing on implementing checks and balances similar to the Doer/Checker approach (see the sketch after the quote below).

The media event gave Intel/Mobileye an opportunity to show how far its team has advanced in AV development and to publicly explain a car safety concept called “the Responsibility-Sensitive Safety (RSS) model.” The goal, of course, is AVs that behave responsibly on public roads.

Industry observers applauded Mobileye’s willingness to discuss its AV strategy. However, this demo opened the door to a host of issues that the AV industry, from Waymo to Intel/Mobileye, has yet to explore, examine, and test before L4 and L5 vehicles can hit the road without hitting something else.
...
Separately, the public AV demo in Jerusalem inadvertently allowed a local TV station’s video camera to capture Mobileye’s car running a red light. (Fast-forward the video to 4:28 for said scene.)

According to Mobileye, the incident was not a software bug in the car. Instead, it was triggered by electromagnetic interference (EMI) between a wireless camera used by the TV crew and the traffic light’s wireless transponder. Mobileye had equipped the traffic light with a wireless transponder — for extra safety — on the route that the AV was scheduled to drive in the demo. As a result, crossed signals from the two wireless sources befuddled the car. The AV actually slowed down at the sight of a red light, but then zipped on through.

While Mobileye isn’t blaming the TV crew for this blunder, it’s a reminder that the industry is still in the experimental stage.
https://www.eetimes.com/document.asp?doc_id=1333308
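To make the Doer/Checker idea concrete, here is a generic sketch (my construction; not Nvidia's or Mobileye's implementation): a complex planner proposes an action, and a small, independently written monitor can only veto it, here with an RSS-flavoured headway rule.

```python
from dataclasses import dataclass

@dataclass
class Action:
    accel: float    # m/s^2, negative means braking
    steer: float    # radians

def doer_plan(sensors) -> Action:
    """Stand-in for the full perception/planning stack (the 'doer')."""
    return Action(sensors["desired_accel"], sensors["desired_steer"])

def checker_ok(sensors, action: Action) -> bool:
    """The 'checker': small, verifiable rules. Here: keep at least a
    crude braking distance to the lead vehicle unless already braking."""
    min_gap = sensors["speed"] ** 2 / (2 * 6.0)   # assumes 6 m/s^2 braking
    return sensors["gap_to_lead"] > min_gap or action.accel < 0

def control_step(sensors) -> Action:
    proposed = doer_plan(sensors)
    if checker_ok(sensors, proposed):
        return proposed
    return Action(accel=-6.0, steer=0.0)          # veto: fall back to braking
```

The sensor keys and the braking constant are placeholders; the point is the structure: the checker never plans, it only rejects plans that leave the verified safety envelope.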
 
So previously the safety features weren't functional?

'Functional safety' is automotive jargon. (I've worked some six years in the dreaded automotive field.)

So they are advertising that they offer features for 'functional safety', i.e. some sort of support for code that is critical and needs to run safely (safety functions).
I personally (thankfully) didn't deal with much safety stuff during my automotive stint, and I've yet to figure out whether these features help in any way, or whether it's all just there for regulatory bodies to check and for trials to be based upon.

Later edit: What was posted above about Mobileye is not related to functional safety, if I may remark. The Mobileye thing seems to target self-driving of some flavour, while the 'functional safety features' are just a bullet point for the general-purpose ARM core.
It's just that any core sold into the automotive market has to support functional safety in some way, lest it lose marketing battles.

Even later edit: https://en.wikipedia.org/wiki/ISO_26262 is the Wikipedia page describing the functional-safety ISO standard.
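For a flavour of what such features mean in practice: the classic hardware mechanism is lockstep cores whose outputs are compared every cycle, so a transient fault in one core is caught before it propagates. A software analogue of the same idea, purely as an illustration (no vendor's actual API), is to run a safety function redundantly and compare:

```python
def redundant(func):
    """Run a safety function twice and compare before trusting the
    result; in real systems the two channels would be diverse
    (different cores, compilers, or algorithms) so that a transient
    hardware fault can actually produce a mismatch."""
    def wrapper(x):
        a = func(x)
        b = func(x)
        if a != b:
            raise RuntimeError("channel mismatch: enter safe state")
        return a
    return wrapper

@redundant
def brake_pressure(pedal_position: float) -> float:
    # Hypothetical safety function: map pedal travel to brake pressure.
    return max(0.0, min(1.0, pedal_position * 1.25))
```

ISO 26262 itself doesn't mandate any particular mechanism; it grades components by ASIL level, and redundancy, monitoring, and lockstep are the usual ways to get there.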
 