Old 05-Jun-2012, 16:51   #1201
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,482

Quote:
Originally Posted by mczak View Post
Though apparently in this form it was a big failure. Anyway, I'm not fully convinced the uop cache is that much better than the trace cache was (though it should have lower overhead, indeed); the Pentium 4's problem with the trace cache wasn't so much the trace cache itself, but simply that it relied too much on the code being in the trace cache, because the decoder was slow as molasses.
The uop cache is definitely simpler, and it doesn't have a bunch of the glass jaws the trace cache introduced. The mapping to the L1 is straightforward, its contents aren't dependent on the sequence of execution, and they aren't subject to duplication; various events that flushed the entire trace cache on the P4 do not do the same to the uop cache.

That this could be snuck in as a parallel process to the decoder block probably took some engineering work, and it does buffer the more limited 16B decoder path.

Quote:
And I'm also baffled by the low l1i associativity, it seems so obvious it's not enough. Yet AMD didn't even fix it with piledriver.
The memory pipeline is pretty fundamental to the functioning of the front end. Changing that probably means changing the front end, which is more significant than what a tweaked Bulldozer variant is going to get.
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 05-Jun-2012, 19:24   #1202
fellix
Senior Member
 
Join Date: Dec 2004
Location: Varna, Bulgaria
Posts: 3,031

I still think AMD went a bit too far with their automated design approach for Bulldozer, especially regarding the front end. If you look carefully at the die shot, the front-end block is very similar to what they have been using since K8. Now, of course, the thing is somewhat patched for dual-threaded workloads, but I can't help thinking there are too many legacy leftovers there, the "copy-pasted" L1i being just one of them.
__________________
Apple: China -- Brutal leadership done right.
Google: United States -- Somewhat democratic.
Microsoft: Russia -- Big and bloated.
Linux: EU -- Diverse and broke.
Old 05-Jun-2012, 19:59   #1203
Gubbi
Senior Member
 
Join Date: Feb 2002
Posts: 2,868

The uop cache is effectively a cache for one trace. If anything it validates the trace cache concept: massive issue width while consuming less power.

The problem with the P4 was that it was limited to decoding a single instruction per cycle when it missed the trace cache.

Intel picked the low-hanging fruit by exploiting the very predictable nature of loops, but I wouldn't be surprised to see more aggressive "uop" caches in the future with support for more traces. It won't be called a trace cache; that name is forever stigmatized.

Cheers
__________________
I'm pink, therefore I'm spam
Old 05-Jun-2012, 20:03   #1204
3dilettante

Quote:
Originally Posted by fellix View Post
I still think AMD went a bit too far with their automated design approach for Bulldozer, especially regarding the front end. If you look carefully at the die shot, the front-end block is very similar to what they have been using since K8. Now, of course, the thing is somewhat patched for dual-threaded workloads, but I can't help thinking there are too many legacy leftovers there, the "copy-pasted" L1i being just one of them.
The more automated design flow doesn't automatically lead to the decision to have a low-associativity L1 Icache.
The contents of the cache have changed, since branch prediction no longer keeps bits in the L1 as it did in K8.

The cache would have been reimplemented in BD, so it's not a simple matter of copy/paste. The designers looked at the major design parameters of the previous gen's L1, and they kept them.
Old 05-Jun-2012, 23:16   #1205
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,727

Quote:
Originally Posted by 3dilettante View Post
The memory pipeline is pretty fundamental to the functioning of the front end. Changing that probably means changing the front end, which is more significant than what a tweaked Bulldozer variant is going to get.
The L1 instruction cache might be quite linked to the whole front end, but I don't think this poses a fundamental problem for increasing cache size or associativity. Maybe they weren't able to increase associativity without increasing latency, though.
That said, are you suggesting AMD is going to stick with 2-way L1 instruction cache associativity for the next 5 years or so? Because tweaked Bulldozer is all that's on the roadmap (well, apart from the low-power designs).
FWIW, L1i associativity for Core 2 was 8, 4 on Nehalem, and now back to 8 for Sandy/Ivy Bridge. Clearly these things can be redesigned.
AMD, OTOH, stuck with 2-way 64KB L1 instruction/data caches forever (since K7 days; K6 also had two-way caches, but only 2x32KB). Only BD now has a different L1D (Bobcat is also "blessed" with a 2-way 32KB L1I, though it probably makes sense there, and its 8-way 32KB L1D is actually more than what you get with BD...)
Old 06-Jun-2012, 04:14   #1206
3dilettante

Quote:
Originally Posted by Gubbi View Post
The uop cache is effectively a cache for one trace. If anything it validates the trace cache concept: massive issue width while consuming less power.
I don't see it as having any traces. The contents of a trace cache reflect the execution path taken by the processor. If code is stored contiguously in memory as AA if BB else CC, the trace cache may contain AABB and AACC, or various combinations of trace fragments.
The uop cache is a post-decode cache that has a fixed relationship to the linearly addressed Icache, and it primarily validates a benefit for a chip running a complex ISA. To a limited extent, it gathers some low-hanging fruit for the original purpose of a trace cache--solving the problem of discontinuities in the fetch stream compromising superscalar issue.

Quote:
Originally Posted by mczak View Post
The L1 instruction cache might be quite linked to the whole front end, but I don't think this poses a fundamental problem for increasing cache size or associativity. Maybe they weren't able to increase associativity without increasing latency, though.
The memory closest to the execution pipeline has less leeway in terms of how it impacts cycle time and the stages in the pipeline that deal with fetch and prediction.
The TLB and tag check logic would be altered if the ratio of tag and index bits changes.
One possible, if unlikely, change would be to significantly change the associativity or reduce capacity so that it matches the size/associativity ratio of Sandy Bridge.
This would eliminate the aliasing problem entirely, and discard a portion of the cache fill pipeline used to invalidate synonyms.
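To make the aliasing point concrete, here's a toy check (my own sketch; the 4KB page size and the cache geometries are the commonly published figures, not anything AMD has stated about its fill pipeline). A virtually indexed cache can hold synonyms whenever one way spans more bytes than a page, because some index bits then come from the virtual page number:

```python
# Toy synonym/aliasing check for a virtually indexed, physically tagged
# (VIPT) cache. Assumes 4 KiB x86 pages; geometries are illustrative.

PAGE_SIZE = 4096

def way_bytes(capacity, ways):
    """Bytes indexed within a single way (index bits + offset bits)."""
    return capacity // ways

def can_have_synonyms(capacity, ways, page=PAGE_SIZE):
    """True if index bits spill past the page offset, so the same
    physical line can land in more than one set."""
    return way_bytes(capacity, ways) > page

# Bulldozer-style 64 KiB, 2-way L1I: 32 KiB per way -> synonyms possible.
print(can_have_synonyms(64 * 1024, 2))   # True
# Sandy Bridge-style 32 KiB, 8-way: 4 KiB per way -> no synonyms.
print(can_have_synonyms(32 * 1024, 8))   # False
```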

Quote:
That said, are you suggesting AMD is going to stick with 2-way L1 instruction cache associativity for the next 5 years or so?
AMD promised little more than increases in some buffers and minor changes like new instructions for Piledriver coupled with improved clocks at a given power level, and that's what we got.
Some reports say that more change is in store with Steamroller, more so than was promised with Piledriver.
Since Steamroller is also meant to be on a new non-SOI node, more changes could be in the air because various parts of the pipeline will need to be adjusted anyway.

Quote:
Because tweaked Bulldozer is all that's on the roadmap (well, apart from the low-power designs).
FWIW, L1i associativity for Core 2 was 8, 4 on Nehalem, and now back to 8 for Sandy/Ivy Bridge. Clearly these things can be redesigned.
Sandy Bridge is a bit more than a tweaked Nehalem.
Bulldozer to Piledriver is something like the SB to IVB transition, without the node jump.

Quote:
AMD, OTOH, stuck with 2-way 64KB L1 instruction/data caches forever (since K7 days; K6 also had two-way caches, but only 2x32KB).
The size and associativity haven't changed, but the BD L1 is physically different and it no longer serves as part of the branch predictor.
Old 06-Jun-2012, 19:30   #1207
Exophase
Senior Member
 
Join Date: Mar 2010
Location: Cleveland, OH
Posts: 1,982

Quote:
Originally Posted by fellix View Post
Well, it's only ~6KBytes in size. Not much to find there. But it's miles better than the old trace cache, for instance, with its huge redundancy overhead and complete lack of immunity to branch mispredictions. I think Intel's reasoning was/is still much more about the power-savings mantra, mostly for the mobile SKUs. Performance is hit or miss, anyway.
The cache holds 1536 uops. According to David Kanter's Sandy Bridge article (http://www.realworldtech.com/page.cf...1810191937&p=4) it is accessed in 32-byte windows of 4 uops each, meaning that each uop is 8 bytes. This is actually kind of surprising, since Prescott also used 64-bit uops, which were generally much more limited than SB's fused uops.

That would make it 12KB, but it probably takes up significantly more space than a normal 12KB 6-way set-associative instruction cache would due to metadata mapping instruction addresses between it and the L1 instruction cache.
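FWIW, the arithmetic can be checked in a couple of lines (the 1536-uop figure and the 4-uops-per-32-byte-window reading are taken from Kanter's article as cited above; nothing here is official Intel data):

```python
# Back-of-envelope sizing of the SB uop cache from the figures above.
WINDOW_BYTES = 32      # output window, per Kanter's description
UOPS_PER_WINDOW = 4
TOTAL_UOPS = 1536

uop_bytes = WINDOW_BYTES // UOPS_PER_WINDOW   # implied bytes per uop
raw_kib = TOTAL_UOPS * uop_bytes / 1024       # raw op-data capacity

print(uop_bytes)  # 8
print(raw_kib)    # 12.0
```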
Old 06-Jun-2012, 22:52   #1208
fellix

Thanks for the heads-up.
Old 07-Jun-2012, 06:05   #1209
3dilettante

Quote:
Originally Posted by Exophase View Post
The cache holds 1536 uops. According to David Kanter's Sandy Bridge article (http://www.realworldtech.com/page.cf...1810191937&p=4) it is accessed in 32 byte windows of 4 uops each, meaning that each uop is 8 bytes.
My interpretation is that the 32 byte window represents the 32 byte aligned chunks as represented in memory and the standard ICache.
Any given window can be represented by up to 18 uops that can take up to 3 lines in the uop cache.
There isn't a 1:1 correspondence between the external representation and the uop cache in terms of size or op count. I'm not sure how to arrive at 8 bytes per uop in a general case.

There is a restriction that instructions with 64-bit immediates take up two slots, which may give a granularity to the uop cache where a 32 bit immediate can fit comfortably, so perhaps each slot is between 64 and 128 bits in length.

The amplification of a 32 byte chunk of instructions would be variable.
If slots are 64 bits and each way can have up to 6 uops, that's at least 48 bytes of uop, not counting metadata. A 32 byte window mapping to a fully occupied way would be amplified by a factor of 1.5.
96 bits per uop would give a factor of 2.25.

Quote:
That would make it 12KB, but it probably takes up significantly more space than a normal 12KB 6-way set-associative instruction cache would due to metadata mapping instruction addresses between it and the L1 instruction cache.
Mapping seems like it could be derived with a pointer and enough length counters.
There are at least 48 bits for the IP of the first instruction in the window. Then there would be a theoretical max of 18 length counters per window. The max byte length for an x86 instruction is 15, which naively makes me think 4 bits per counter. This assumes there is a valid way to pad an instruction out to 15 bytes while translating to one uop. It may be unnecessary to be that naive, because an instruction that long wouldn't leave enough room in the window for 17 additional instructions.
Old 08-Jun-2012, 20:13   #1210
Exophase

Quote:
Originally Posted by 3dilettante View Post
My interpretation is that the 32 byte window represents the 32 byte aligned chunks as represented in memory and the standard ICache.
Any given window can be represented by up to 18 uops that can take up to 3 lines in the uop cache.
There isn't a 1:1 correspondence between the external representation and the uop cache in terms of size or op count. I'm not sure how to arrive at 8 bytes per uop in a general case.

There is a restriction that instructions with 64-bit immediates take up two slots, which may give a granularity to the uop cache where a 32 bit immediate can fit comfortably, so perhaps each slot is between 64 and 128 bits in length.

The amplification of a 32 byte chunk of instructions would be variable.
If slots are 64 bits and each way can have up to 6 uops, that's at least 48 bytes of uop, not counting metadata. A 32 byte window mapping to a fully occupied way would be amplified by a factor of 1.5.
96 bits per uop would give a factor of 2.25.
I'm not talking about the 32-byte x86 instruction window that the uop cache scans from; I'm talking about the interface coming out of the uop cache, which is being labelled as 32 bytes. I may have incorrectly read a description of the former as one for the latter, though, since I was just skimming for some textual reference to the label. On second read, I think he's using 32 bytes to refer to what x86 instructions it could be "representing." The 32-byte x86 windows don't correspond to the 4 uops that can come out of the cache, though, but to the full 6-uop lines, and I think you'd be hard pressed to get any 4 uops to fit 32 bytes of x86 code. But the correlation between x86 instructions and uops is irrelevant to what I'm saying; I'm referring strictly to the size of uops here (and thus the size of the uop cache). Of course I understand/agree with what you're saying. I'm strictly interested in the uop size.

64 bits might fit as the uop size. Despite these being the same size as Prescott's (generally simpler) uops, the uop fusion rules don't really add a lot of extra data per uop. If this number is incorrect, then I suspect it's not that much larger, for example 80 bits; 128 bits would really surprise me. We both agree they're fixed-width though, right?

The confusion behind fellix's original comment probably stems from Intel's claim that the uop cache performs "like a 6KB instruction cache." Going just by capacity, that'd imply that a uop in the cache is worth about 4 bytes of x86 code: the average bytes/instruction in typical programs is probably well under 4, but the average uops/instruction is also going to be a bit higher than 1, and the uop cache will have some unused parts in its lines. So it's a pretty reasonable-sounding estimate.
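As a sanity check on that reading of Intel's claim (a sketch with my own assumed average of ~4 bytes of x86 code per cached uop; not an Intel figure):

```python
# Rough model of Intel's "performs like a 6KB instruction cache" claim.
TOTAL_UOPS = 1536
X86_BYTES_PER_UOP = 4   # assumed: <4 bytes/instruction, >1 uop/instruction

# Effective amount of x86 code the uop cache can stand in for.
effective_bytes = TOTAL_UOPS * X86_BYTES_PER_UOP
print(effective_bytes // 1024)   # 6 (KiB of x86 code covered)
```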

Old 09-Jun-2012, 05:29   #1211
3dilettante

Quote:
Originally Posted by Exophase View Post
I'm not talking about the 32-byte x86 instruction window that the uop cache scans from; I'm talking about the interface coming out of the uop cache, which is being labelled as 32 bytes.
I'm trying to think through the description of the process. The term "window" is used multiple times, but I wasn't sure what exactly was being referenced in each instance.
I reread the description of the uop cache hit process a few times, and after thinking it through, the initial guess of 64 bits fits. Upon a hit, the uop cache sends the 1-3 lines to an intermediate buffer; that buffer can take 6 uops a cycle, but it has an output limit of 4 uops/32 bytes, which gives each one 64 bits. Since each way of the uop cache has 6 uops, that would mean each way contains 48 bytes worth of op data plus additional metadata. 32 sets * 8 ways * 48 bytes per way / 8 bytes per uop gives 1.5K uops.

Quote:
64 bits might fit as the uop size. Despite these being the same size as Prescott's (generally simpler) uops, the uop fusion rules don't really add a lot of extra data per uop. If this number is incorrect, then I suspect it's not that much larger, for example 80 bits; 128 bits would really surprise me. We both agree they're fixed-width though, right?
Fixed or mostly fixed since ops with 64-bit immediates apparently span more than one slot according to Intel's documentation.
Old 14-Jun-2012, 13:00   #1212
itsmydamnation
Member
 
Join Date: Apr 2007
Location: Australia
Posts: 835

Looks like Trinity has made some solid gains in quite a few areas in regards to "IPC", but then not in others.

http://www.tomshardware.com/reviews/...400k,3224.html

Much like Barcelona, this is what Bulldozer should have been.
Old 15-Jun-2012, 14:48   #1213
Commenter
Member
 
Join Date: Jan 2010
Posts: 118

How does ~10-15% better performance for Piledriver stack up to Intel's bridges made of sand and ivy, though?
Old 15-Jun-2012, 18:26   #1214
3dilettante

I will need to go back and review numbers from years back.
On a multithreaded basis, there may be more areas where on a module to core basis Piledriver is more competitive.
In terms of single thread performance, I will need to check where it is in relation to Westmere before worrying about SB and IB.
Old 15-Jun-2012, 23:44   #1215
itsmydamnation

Quote:
Originally Posted by 3dilettante View Post
I will need to go back and review numbers from years back.
On a multithreaded basis, there may be more areas where on a module to core basis Piledriver is more competitive.
In terms of single thread performance, I will need to check where it is in relation to Westmere before worrying about SB and IB.
Perf per clock, it's only just up to Llano (but still behind in quite a few areas), which is about 5% on average better than Phenom II. So nowhere near Intel, but it's a big turnaround considering it is a minor revision. And the fact that at the same TDP you have almost 1000 MHz over Llano, and turbo isn't used in that review, means a significant performance increase over Llano.

There have been statements from AMD saying Steamroller will bring their single-thread performance much closer to Intel's. I wonder what they are going to change: decode, load/store to the FPU, the number of ALUs, a trace cache?

It will be very interesting to see what Steamroller is; it looks like it's good enough for at least Sony, maybe even Microsoft. I guess we will see if Bulldozer was Yonah or Northwood.
Old 09-Jul-2012, 07:48   #1216
fehu
Member
 
Join Date: Nov 2006
Location: Somewhere over the ocean
Posts: 806

Are there any benchmarks around with the latest Win8 preview?
Old 09-Jul-2012, 20:14   #1217
Albuquerque
Red-headed step child
 
Join Date: Jun 2004
Location: Guess ;)
Posts: 3,291

Quote:
Originally Posted by fehu View Post
Are there any benchmarks around with the latest Win8 preview?
Is this question in relation to the updated scheduling that was touted in Win8 to provide an incremental performance improvement on Bulldozer? My own speculation: I doubt it's significant enough to write about.
__________________
"...twisting my words"
Quote:
Originally Posted by _xxx_ 1/25 View Post
Get some supplies <...> Within the next couple of months, you'll need it.
Quote:
Originally Posted by _xxx_ 6/9 View Post
And riots are about to begin too.
Quote:
Originally Posted by _xxx_8/5 View Post
food shortages and huge price jumps I predicted recently are becoming very real now.
Quote:
Originally Posted by _xxx_ View Post
If it turns out I was wrong, I'll admit being stupid
Old 12-Jul-2012, 11:13   #1218
hkultala
Member
 
Join Date: May 2002
Location: Herwood, Tampere, Finland
Posts: 273

Quote:
Originally Posted by Albuquerque View Post
Is this question in relation to the updated scheduling that was touted in Win8 to provide an incremental performance improvement on Bulldozer? My own speculation: I doubt it's significant enough to write about.
The most important scheduler changes were already released for Windows 7 during the winter, so I don't expect big improvements over them with W8.
Old 14-Jul-2012, 19:36   #1219
fehu

Really? I read that the new scheduler is a Win8 exclusive.
Old 15-Jul-2012, 18:55   #1220
I.S.T.
Senior Member
 
Join Date: Feb 2004
Posts: 2,569

Incorrect, it's been in Windows 7 for months now.
Old 16-Jul-2012, 20:12   #1221
Albuquerque

Just to make sure there's no question:

http://www.bit-tech.net/news/hardwar...ldozer-boost/1
Old 09-Aug-2012, 18:00   #1222
fellix

AMD Piledriver FX Vishera Engineering Sample Benchmarks

Old 10-Aug-2012, 18:03   #1223
imaxx
Member
 
Join Date: Mar 2012
Location: cracks
Posts: 128

I don't see it as that bad.
The clock mesh gave them a free +10%, and it seems they mainly improved BD's single-core IPC. The BD architecture gains more overall from single-core IPC improvements than Intel's, since you have (sort of) two cores.
So, at a 1-year distance, a +20% increase in performance is pretty good.

Also, it is very interesting that they didn't yet fix the huge front-end problem: Intel has a 32K 8-way L1I whereas AMD still has a pathetic 64K 2-way... and AMD cores are way more hungry, since they cannot cover latencies with HT (not talking about the AMD decoder, of course).

But I am curious to get Steamroller info, where they *should* fix the front-end issues.

edit: just a note on schedulers - the Win7 scheduler kills AMD's CMT since, by default, it reschedules threads without processor affinity, thrashing the L2 cache. On Intel this is not a problem since the L2 cache is just 128KB and the L3 cache can keep everything up. On AMD you get a lot of problems due to basically thrashing 2MB of cache instead of 128KB.
Also, I do not believe MS rewrote the W7 scheduler. One would be mad to make such huge changes to a critical working component 'on the fly'. So it is likely that W8 will have a better scheduler for AMD.

Old 10-Aug-2012, 18:41   #1224
Blazkowicz
Senior Member
 
Join Date: Dec 2004
Posts: 4,965

Did you mean 256K L2 for Intel? Or are you implying it's split in two halves, one for each thread? (It isn't, I'd think, but I've never thought about how two or more threads would share the L1 or L2 cache on other archs...)
Old 10-Aug-2012, 18:53   #1225
Albuquerque

Quote:
Originally Posted by imaxx View Post
edit: just a note on schedulers - the Win7 scheduler kills AMD's CMT since, by default, it reschedules threads without processor affinity, thrashing the L2 cache. On Intel this is not a problem since the L2 cache is just 128KB and the L3 cache can keep everything up. On AMD you get a lot of problems due to basically thrashing 2MB of cache instead of 128KB.
Also, I do not believe MS rewrote the W7 scheduler. One would be mad to make such huge changes to a critical working component 'on the fly'. So it is likely that W8 will have a better scheduler for AMD.
No. It was already covered just above you; there is no extra special sauce hiding in W8. If you want to see how much it doesn't matter, load up both types of processors with as many threads as they can simultaneously take -- the AMD loses by a mountain, and that's nothing that a scheduler can 'fix'.

As for thrashing their cache? Yes, but that's because they've yet to spend the R&D to get the associativity beyond 2-way on their front end, which directly contradicts how they want certain parts of these cores to be shared... This isn't a scheduling issue; it's a design issue.

Edit: Let's put this to rest, can we? PC Stats tested the exact same rig, powered by an FX-8150, on both Win7 and Win8. Here is their basic summary:
Quote:
For the most part Windows 8 saw a 1%-5% improvement with the AMD FX-8150 processor... <snip> Even with these 6-of-1, half a dozen of the other benchmark results between Windows 8 and Windows 7, on the whole the AMD FX-8150 processor was slightly faster under the Windows 8 Developer environment. Not at all what PCSTATS expected, but a welcome result none the less for AMD.
Also, keep in mind that this was before Microsoft made the changes to the W7 scheduler that corrected nearly all of the affinity questions. One or two of the tests came out pretty far ahead, but those instances were not always in Windows 8's favor either. The performance gap that is left is negligible, at best...
