PS. Using a Johnny-come-lately's definitions over the established ones (AMD and NVIDIA agree on what threads are) makes little sense to me and will generally just cause confusion. Also, I don't think Intel's chosen definitions make a lot of semantic sense to begin with.
Logic fails to understand your statement!
"The Johnny come lately definitions are those of AMD/Nvidia. The words they are using have been in use for 20+ years and have well-defined meanings and wide understanding within the EE and CE communities."

In the context of GPUs (and this is still the 3D Architectures & Chips forum) Intel is the latecomer to this party ... language is defined in context. Regardless, strands & fibers are entirely new terms with no history and, as I said, counter-intuitive definitions ... and their highly Larrabee-specific use of the term threads is very much debatable.
What NVIDIA calls threads are threads even in the traditional sense. From the kernel program's point of view they execute independently, branch independently and share a memory space.
Their scheduling works wildly differently than on traditional SMP machines, but meh.
NVIDIA chose that, in this context, threads would refer only to the threads of execution of the kernel (and not to the threads of the SIMD program, nor to the different contexts of the SIMD programs the hardware can switch between with vertical multithreading), and Intel went the other way ... in this context NVIDIA was first with that decision.
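To make that concrete: below is a minimal CUDA sketch of the model being described (the kernel name and the data are made up for illustration, not taken from any vendor material). Each of the 1024 "threads" takes its own branch based on its own element and reads and writes the same global memory space, which is why, from the kernel program's point of view, they behave like classical threads even though the hardware issues them in warps.

```
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each "thread" (in NVIDIA's sense) follows its own
// control flow and touches a shared (global) memory space.
__global__ void classify(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // per-thread index
    if (i >= n) return;

    if (in[i] < 0.0f)             // threads branch independently...
        out[i] = -in[i];
    else
        out[i] = in[i] * 2.0f;    // ...and all share the same memory space
}

int main()
{
    const int n = 1024;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (i % 2) ? -1.0f : 1.0f;

    classify<<<(n + 255) / 256, 256>>>(in, out, n);   // 1024 "threads"
    cudaDeviceSynchronize();

    printf("out[0]=%f out[1]=%f\n", out[0], out[1]);
    cudaFree(in); cudaFree(out);
    return 0;
}
```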
One of the big hype items for G300 is the support of more than one thread active per chip! They call it a kernel, but they really mean thread.
MfA said: "In the context of GPUs (and this is still the 3D Architectures & Chips forum) Intel is the latecomer to this party ... language is defined in context."

I tend to agree with the Khronos stance on this point, which is that it's a bad idea to use terms that already have a defined meaning in another domain, particularly as the GPU domain tries to merge into the HPC domain. That's why they called them "work groups" and "work items" rather than warps and threads.
MfA said: "Regardless, strands & fibers are entirely new terms with no history and, as I said, counter-intuitive definitions ... and their highly Larrabee-specific use of the term threads is very much debatable."

That's the point though - it's important to create a new term for a new concept rather than confusing it with a pre-existing term. (FWIW, "fiber" is a pre-existing term in some OSes, including Win32, and it's a similar concept to the current usage, albeit not always exactly the same.)
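For what it's worth, the OpenCL names line up with the existing CUDA ones fairly directly; here is a small sketch of the same index computation with the corresponding OpenCL C built-ins noted in comments (the kernel and launch shape are illustrative, not from either spec):

```
#include <cstdio>
#include <cuda_runtime.h>

// Terminology mapping (x dimension only):
//   OpenCL work-item   ~ CUDA thread
//   OpenCL work-group  ~ CUDA thread block
//   (warp/wavefront remains a hardware detail with no OpenCL-level name)
__global__ void indexing_demo(int* out)
{
    int local_id  = threadIdx.x;                      // get_local_id(0)
    int group_id  = blockIdx.x;                       // get_group_id(0)
    int group_dim = blockDim.x;                       // get_local_size(0)
    int global_id = group_id * group_dim + local_id;  // get_global_id(0)

    out[global_id] = global_id;   // one work item / thread per element
}

int main()
{
    int* out;
    cudaMallocManaged(&out, 256 * sizeof(int));
    indexing_demo<<<4, 64>>>(out);   // 4 work groups of 64 work items
    cudaDeviceSynchronize();
    printf("out[200] = %d\n", out[200]);   // prints 200
    cudaFree(out);
    return 0;
}
```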
MfA said: "What NVIDIA calls threads are threads even in the traditional sense. From the kernel program's point of view they execute independently, branch independently and share a memory space."

Not true: there are much more complicated rules with respect to "warps" that are semantically important.
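One concrete example of those warp rules (a hazard sketch, assuming a pre-Volta GPU without independent thread scheduling; not something to ship): a spinlock that would merely be slow with 32 real, independently scheduled threads can hang when the 32 contenders are lanes of a single warp, because the lane that owns the lock may never be scheduled to run the release while its diverged warp-mates spin.

```
#include <cstdio>
#include <cuda_runtime.h>

// Illustration only: intra-warp lock contention.
__global__ void warp_lock_hazard(int* lock, int* counter)
{
    while (atomicCAS(lock, 0, 1) != 0) { }  // spin until we own the lock
    *counter += 1;                          // critical section
    atomicExch(lock, 0);                    // release
}

int main()
{
    int *lock, *counter;
    cudaMallocManaged(&lock, sizeof(int));
    cudaMallocManaged(&counter, sizeof(int));
    *lock = 0; *counter = 0;

    warp_lock_hazard<<<1, 32>>>(lock, counter);  // one warp contending
    cudaDeviceSynchronize();                     // may never return on pre-Volta parts

    printf("counter = %d\n", *counter);
    cudaFree(lock); cudaFree(counter);
    return 0;
}
```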
Half of whom (at the brand-name level at least) shouldn't really be involved in the development of any API related to high-performance computing.
"Thread has a fairly well defined meaning. Nvidia isn't using that meaning. Nvidia is wrong."

Except that Nvidia uses the correct meaning, and (despite what others on this thread are claiming) that meaning didn't come from the marketing department. No really, Nvidia knows how to architect chips. For real.
"Furthermore I'm not sure how you can complain about the use of the term "threads" with respect to Larrabee... the definition or usage has not changed at all... it's exactly the same as it has always been"

I didn't say it's changed ... but the fiber is also a thread in the classical sense (and what NVIDIA calls threads are threads too in the classical sense, the software sense where the term came from). Intel are calling hardware threads just threads and dropping the hardware bit ... that shorthand, combined with the pre-existing use of the term in this context and the fact that a fiber made of strands is prima facie ridiculous, is just not conducive to proper understanding.
"I tend to agree with the Khronos stance on this point, which is that it's a bad idea to use terms that already have a defined meaning in another domain, particularly as the GPU domain tries to merge into the HPC domain. That's why they called them "work groups" and "work items" rather than warps and threads."

Yeah, OpenCL at least brings some sanity.
"I didn't say it's changed ... but the fiber is also a thread in the classical sense (and what NVIDIA calls threads are threads too in the classical sense, the software sense where the term came from)."

You and I have different definitions of a "thread in the classical sense". And most of the definitions that I can find online - Wikipedia in particular - tend to agree with my definition, but I'm not willing to argue the point. If anything it shows that there's already a lot of confusion surrounding the terms.
"Intel are calling hardware threads just threads and dropping the hardware bit ... that shorthand, combined with the pre-existing use of the term in this context and the fact that a fiber made of strands is prima facie ridiculous, is just not conducive to proper understanding."

I think Intel is being pretty clear about "hardware threads", in the same sense as with hyper-threading and other such technologies: the threads are real, 100% OS-controlled, preempted, forkable, etc. POSIX "threads" that have real hardware resources dedicated to them. Nowhere do I see claims that these map 1:1 with "cores" (which is yet another awesome term being thrown around to mean "SIMD lane" in the GPU space, because it allows arbitrary inflation of marketing numbers).
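To pin down that usage of "hardware thread": these are the contexts the OS itself schedules and preempts, i.e. what std::thread::hardware_concurrency() counts on a hyper-threaded CPU. A minimal host-side sketch (whatever it prints depends on the machine; nothing here is Larrabee-specific):

```
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    // Cores x hardware threads per core, as seen by the OS.
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 1;                     // the count may be unknown
    printf("OS-visible hardware threads: %u\n", hw);

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < hw; ++i)
        pool.emplace_back([i] {
            // Each of these is a real, preemptible OS thread; the kernel is
            // free to interrupt or migrate it, unlike a SIMD lane.
            printf("thread %u running\n", i);
        });
    for (auto& t : pool) t.join();
    return 0;
}
```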
"Just dumping everything and using the OpenCL terms without using shorthand for hardware threads in a way seemingly almost consciously designed to cause maximum confusion is good too."

I don't see how it's confusing at all. It very clearly describes what you're telling the runtime semantically with work items and work groups, with the also-clear implication that these things map to different execution resources on different devices. It's extremely important to not confound the new concepts with existing ones that are *not the same thing*.
Bob said: "Except that Nvidia uses the correct meaning, and (despite what others on this thread are claiming) that meaning didn't come from the marketing department. No really, Nvidia knows how to architect chips. For real."

Their meaning is inconsistent with the meaning in the CPU and particularly HPC space that long predated them. Hence the confusion, and the questioning of why they would deliberately overload the term except to confuse people and inflate numbers.
"There is no way you can define "thread" generally to exclude the claimed "NVIDIA Threads" but also include what is commonly referred to as "thread" on many other architectures."

What?? NVIDIA's definition of "threads" doesn't even meet the POSIX "definition", and it certainly doesn't agree with the majority of the Wikipedia article on threads. Conversely, it does agree precisely with a predicated SIMD lane, or more generally the SPMD model. That has been well known for a long time, and the more technical reviewers called out NVIDIA for introducing the nonsensical "SIMT" nomenclature when a perfectly valid term already existed. To quote AnandTech:
AnandTech said: "NVIDIA wanted us to push some ridiculous acronym for their SM's architecture: SIMT (single instruction multiple thread). First off, this is a confusing descriptor based on the normal understanding of instructions and threads. But more to the point, there already exists a programming model that nicely fits what NVIDIA and AMD are both actually doing in hardware: SPMD, or single program multiple data. This description is most often attached to distributed memory systems and large scale clusters, but it really is actually what is going on here."
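A rough way to see the "predicated SIMD lane" / SPMD reading is to emulate on the host what a warp effectively does with a divergent branch: both sides of the if are walked across all 32 lanes and a predicate mask decides which lanes commit results. This is a sketch of the execution model only, not of any real ISA:

```
#include <cstdio>

const int WARP = 32;

// Emulates one warp executing:  if (x[i] < 0) y[i] = -x[i]; else y[i] = 2*x[i];
void warp_emulate(const float* x, float* y)
{
    bool pred[WARP];
    for (int lane = 0; lane < WARP; ++lane)    // evaluate the condition per lane
        pred[lane] = (x[lane] < 0.0f);

    for (int lane = 0; lane < WARP; ++lane)    // "then" side, under the mask
        if (pred[lane]) y[lane] = -x[lane];

    for (int lane = 0; lane < WARP; ++lane)    // "else" side, inverse mask
        if (!pred[lane]) y[lane] = 2.0f * x[lane];
}

int main()
{
    float x[WARP], y[WARP];
    for (int i = 0; i < WARP; ++i) x[i] = (i % 2) ? -1.0f : 1.0f;
    warp_emulate(x, y);
    printf("y[0]=%f y[1]=%f\n", y[0], y[1]);   // 2.000000 and 1.000000
    return 0;
}
```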
"NVIDIA engineers might know about HPC too."

I know a lot of them and they definitely do (I have nothing but respect for them!), but that's entirely beside the point. By the same token I could just say that "Khronos might know something about standardization of terminology", which is actually much more relevant...