Just a quick thought.
We are moving further into an age where raw hardware speed increases more slowly, but a lot more parallelism is used to deliver throughput. We are seeing this now in dual-core (and soon quad-core) CPUs and GPUs with 16-24 pipelines, and soon we may have PPUs too.
Will we see game engines relying more and more on work queues to abstract how much parallelism a rig has under its hood? If you had an approach (or architecture) where jobs are initiated and placed on a work queue for one or more hardware resources to service, then developers would barely have to know whether they are running on a rig with one big CPU, or 1000 little ones, or 2 biggish CPUs plus 4 GPUs plus 2 PPUs.
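To make the idea concrete, here's a minimal sketch of that kind of job-queue abstraction. It's in Python with plain threads for brevity (a real engine would use C++ and per-core worker threads, and the `JobQueue` name and API here are just made up for illustration), but the point stands: game code only calls `submit()`, and the pool underneath sizes itself to whatever parallelism the rig happens to have.

```python
import os
import queue
import threading

class JobQueue:
    """Hypothetical sketch: callers submit jobs; they never see the worker count."""

    def __init__(self, workers=None):
        # Size the pool to whatever the hardware offers -- 1 core or 1000,
        # the submitting code is none the wiser.
        self.jobs = queue.Queue()
        count = workers or os.cpu_count() or 1
        for _ in range(count):
            threading.Thread(target=self._run, daemon=True).start()

    def submit(self, fn, *args):
        """Queue a job; returns an event to wait on and a dict holding the result."""
        done = threading.Event()
        result = {}

        def job():
            result["value"] = fn(*args)
            done.set()

        self.jobs.put(job)
        return done, result

    def _run(self):
        # Each worker just pulls the next job off the shared queue.
        while True:
            job = self.jobs.get()
            job()
            self.jobs.task_done()

# Game code submits work without knowing what's underneath.
q = JobQueue()
handle, out = q.submit(sum, range(100))
handle.wait()
print(out["value"])  # 4950
```

Swapping the thread pool for GPU dispatch or a PPU driver would happen behind `JobQueue`, leaving the `submit()` call sites untouched.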
Is this a direction the industry is considering?