Yes I was referring to the balance thing
The process for determining an architecture specification to achieve goal X is convoluted and relies on looking backwards as much as it does looking forwards. I've observed the forward hardware planning process for our server farms, and taking that as a basis, this is how I envisage Sony decided on Jaguar cores with 18 Compute Units.
Sony approached AMD with a bag of $$$, the budget for their CPU and GPU. To expedite the explanation let's assume they've already discounted discrete CPU/GPU solutions and have settled on AMD's GCN architecture in an APU, and that the timing of the project means they'll be using Jaguar and not Bobcat or Puma. Let's also assume that they've already decided on GDDR5 - lots of discussions with AMD have already taken place at this point.
How did they reach 18 Compute Units at Z MHz?
Sony would have had a ballpark graphics performance target for PS4, e.g. 30/60fps at 1080p, and in part this would have been derived from earlier discussions with AMD about what hardware their $$$ could actually buy, i.e. die size, expected yields, number of CPU cores and the overall GPU performance.
Working out how many CPU cores and Compute Units were needed would have been a process of reviewing the performance profiles from a lot of older and current games and the CPUs and GPUs driving them. AMD would have been reviewing DirectX/OpenGL profiling data and perhaps leveraging early experience from Mantle. Sony would have been profiling as many PlayStation 3 games as possible, looking at the number and types of API calls and measuring what the GPU was actually doing. Sony are also a PC developer (Sony Online Entertainment), so they have that experience to draw on as well. Between AMD, who are experienced in what hardware is required to run games well on DirectX and OpenGL, and Sony, who are experienced in graphics hardware use with a low-level API, they can determine pretty accurately the number of CUs needed for Sony's target, allowing for some unknowns, like the API (GNM) being incomplete.
From the VG leak and what Cerny has alluded to, this 'balance' appears to be 14 CUs for graphics. The reason to include more compute, which Cerny has been more explicit about, is that Sony think GPGPU is going to be big in a few years, and they didn't want developers to have to cut back on the compute used for rendering graphics to make room for what will, in the future, be traditional compute for other game processing:
Mark Cerny said:
"The vision is using the GPU for graphics and compute simultaneously. Our belief is that by the middle of the PlayStation 4 console lifetime, asynchronous compute is a very large and important part of games technology."
Having a performance target and determining that X number of Compute Units is sufficient/optimal for graphics is predicated on the performance of the available CPU cores, their ability to feed the GPU (and run the rest of the game) using the buses in the system and the available memory and its bandwidth, all executed efficiently. If you are not being efficient in programming the GPU, then 18 CUs will produce better results than 14 CUs unless you are constrained elsewhere, by the CPU or bandwidth for example.
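To put rough numbers on the 14 vs 18 CU comparison, here's a back-of-envelope sketch (my own illustration, not anything from Sony or AMD) of theoretical peak FP32 throughput for a GCN GPU. The per-CU figures come from GCN's design (64 ALUs per CU, a fused multiply-add counting as 2 FLOPs per cycle); the 800 MHz clock is the widely reported PS4 GPU clock and should be treated as an assumption here:

```python
# Back-of-envelope theoretical FP32 throughput for a GCN GPU.
# A GCN Compute Unit has 64 shader ALUs, and one FMA (multiply + add)
# counts as 2 FLOPs per ALU per cycle. 800 MHz is assumed for the clock.

ALUS_PER_CU = 64
FLOPS_PER_ALU_PER_CYCLE = 2  # one fused multiply-add

def peak_gflops(compute_units, clock_mhz):
    """Theoretical peak FP32 GFLOPS; real workloads achieve far less."""
    return compute_units * ALUS_PER_CU * FLOPS_PER_ALU_PER_CYCLE * clock_mhz / 1000

print(peak_gflops(14, 800))  # the 'graphics' portion: 1433.6 GFLOPS
print(peak_gflops(18, 800))  # all 18 CUs: 1843.2 GFLOPS
```

The gap between those two figures is the headroom Cerny is talking about: extra ALU capacity that graphics alone wasn't expected to saturate, left available for asynchronous compute.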