Producer/Consumer with web workers and x86 architectures

K.I.L.E.R

I have been reading about multiprocessing in AMD's optimisation guide for their Family 10h CPUs, the Phenom II, as I am picking up the six-core model tomorrow. Sharing data across cores in a producer/consumer relationship is very simple: a producer has to write more data than fits in the L1 and L2 caches so that it is evicted into the shared L3 cache, where all cores can see it.

The model seems good and easy to follow. One only has to create a number of producers and consumers tuned so that the L3 cache never spills into main memory, while the producers still generate enough for the consumers to read the evictions into their own caches.
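As a back-of-the-envelope illustration in JavaScript (the sizing rule here is my own, not from AMD's guide; only the cache figures are the published Phenom II X6 ones: 64 KB L1 data and 512 KB L2 per core, 6 MB shared L3), the arithmetic works out like this:

    var KB = 1024;
    var MB = 1024 * KB;
    var L1D = 64 * KB;   // per-core L1 data cache
    var L2  = 512 * KB;  // per-core L2 cache
    var L3  = 6 * MB;    // shared L3 cache

    // A producer must write more than L1 + L2 to force eviction into L3:
    var evictionThreshold = L1D + L2;    // 576 KB per producer

    // One producer plus five consumers on a six-core part: keep the total
    // in-flight data under the L3 size so nothing spills to main memory.
    var slots = 6;
    var chunk = Math.floor(L3 / slots);  // 1 MB share per core

    console.log('evict after ~' + (evictionThreshold / KB) + ' KB; ' +
                'keep each chunk under ~' + Math.floor(chunk / KB) + ' KB');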

There is only one flaw in this model: how on earth can you track this behaviour in dynamic languages that provide threading, without seeing the machine instructions generated dynamically in memory?

Let us take JavaScript. HTML 5 allows web workers, and as a fan of local application development I find HTML 5 attractive; I have been writing a rogue-like with the technology. Web workers (WW) operate on a producer/consumer relationship, which is reasonably simple.
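A minimal sketch of the pattern, for anyone who has not used workers yet. The file name 'producer.js' and the message shape are hypothetical, and very early implementations only pass strings through postMessage, in which case the payload would need JSON.stringify/JSON.parse:

    // main.js -- the consumer: spawns a worker and handles what it produces.
    var worker = new Worker('producer.js');  // hypothetical script name
    worker.onmessage = function (event) {
        var tiles = event.data;              // data built on the worker thread
        console.log('received ' + tiles.length + ' tiles');
    };
    worker.postMessage({ cmd: 'generate', count: 1000 });

    // producer.js -- the producer: builds map data on its own thread and
    // hands it back; the message is copied to the consumer, not shared.
    self.onmessage = function (event) {
        if (event.data.cmd === 'generate') {
            var tiles = [];
            for (var i = 0; i < event.data.count; i++) {
                tiles.push({ id: i, terrain: Math.random() < 0.3 ? '#' : '.' });
            }
            self.postMessage(tiles);
        }
    };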

Now my main problem is that a lot of work is happening very quickly, which makes it difficult to follow every development that affects JavaScript (JS) performance across every browser. Knowing this, how on Earth am I supposed to develop an application that works on multiple architectures without knowing the underlying implementation details? I cannot simply 'postMessage' L1 + L2 + k amounts of data, as I do not know the overhead of data transmitted via that function.
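One pragmatic answer, since the copying cost is opaque, is to stop reasoning about it and measure it per browser instead. A rough sketch, assuming a hypothetical echo worker in 'echo.js': time a round trip for growing payload sizes and let each browser/CPU combination report its own messaging overhead:

    // echo.js -- hypothetical worker that bounces every message straight back.
    self.onmessage = function (event) {
        self.postMessage(event.data);
    };

    // Main page: time postMessage round trips for growing payloads.
    var worker = new Worker('echo.js');
    var sizes = [1000, 10000, 100000, 1000000];  // elements per payload
    var index = 0;
    var started = 0;

    function run() {
        var payload = [];
        for (var i = 0; i < sizes[index]; i++) payload.push(i);
        started = new Date().getTime();
        worker.postMessage(payload);
    }

    worker.onmessage = function (event) {
        var ms = new Date().getTime() - started;
        console.log(event.data.length + ' elements: ' + ms + ' ms round trip');
        index++;
        if (index < sizes.length) run();
    };

    run();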

What do you guys think the solution is?
 