Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
How can an 8800 series PC GPU make it into a next generation console?

The 8800 series is not out yet, and doesn't it take months to finalise a console and get working samples out?

    Surely they had to lock down the GPU specs months ago in order to start making prototypes and bug testing the hardware?
     
  2. lefizz

    Newcomer

    Joined:
    Nov 7, 2005
    Messages:
    138
    Likes Received:
    2
The 8 series AMD GPUs have probably been in development for around two years, so they could easily have started to adapt them for use in a console some time ago.

Remember that time to market for a GPU is measured in years, usually two; a card released today has probably had a replacement in development for at least a year by the day it hits the shelves.

That gives you plenty of time to get next-gen cards into a console released in a year's time.
     
  3. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,434
    Likes Received:
    554
    Location:
    Texas
Absolutely, I don't think people realize how long it takes to make a new chip. If anything, AMD is probably already working on the 9xxx/10xxx series.

From my experience in big semiconductor companies, there are multiple teams that work on a product. Usually a high-level architecture team comes up with the design 2-4 years before the product ships; they pass the spec/design to lower-level architecture teams that may write accurate performance models of that design. Then the RTL gets written and synthesized. Initial tape-out and then validation get done by yet another team. All of the work is done by different teams, with multiple designs concurrently in the pipeline.

I bet the 8xxx series feature set and architecture were finished and finalized last year, probably concurrently with the next-gen console designs.
     
  4. Ruskie

    Veteran

    Joined:
    Mar 7, 2010
    Messages:
    1,291
    Likes Received:
    1
Durango won't have a 1TF GPU, but if it did, I wouldn't expect the CPU to compensate for it. MS is clear on platform architecture and ease of development; Ballmer would literally chop off a couple of heads if someone came in with that proposition. I know they barely went with a multi-core CPU this gen, since even that was considered complicated.

Maybe we should start discussing something else instead of the Durango GPU, which we know nothing about. BG stated several times that "1TF+" was a vague guesstimate from a guy who is not even his source, for a GPU that is not even finished. People then went on with the "1.1-1.5" number without anything to back it up.

Based on the assumption that they went with Jaguar, just like Sony went from Steamroller to Jaguar, I would expect that developers wanted more GPU grunt. And if those shots are legit alpha kits (Iherre said developers got them in Nov '11), then the GPU won't be even close to 1 TFLOP.
     
  5. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    113
    Location:
    New Zealand
    Maybe we can point this creativity towards uncovering next gen mysteries?

Personally I vote for the expected unexpected. It'll probably be technology which is familiar but put together in interesting, unique and unexpected ways. If we consider the Wii U with its use of the IO processor and DSP, or the 360 with eDRAM, perhaps what is most interesting isn't the number of flops but the unique console-esque engineering solutions.

    How do you make something more than the sum of its parts?
     
  6. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
    But "Rangers" apparently wrote the following:

    ?
     
  7. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
I don't think there's anywhere near enough time; they spend months bug-testing the architectures before the chip is even classed as ready.

With the move to GCN, AMD had to make a fresh start from the ground up, and I seriously doubt they had the 8800 series in production at the same time as, or within the same time frame as, the conception of GCN.

I still think you're talking about a mid-to-low-range 7000 series GPU.

Even a 7770 is a good card to use; on the latest drivers it works out roughly like this:

    AMD 7770 = AMD 5850
    AMD 5850 = GTX 285 OC
    GTX 285 OC = 9800GX2
    9800GX2 = 2x 9800GTX
    9800GTX = ~8800 Ultra
    8800Ultra = 1900XT Crossfire
    1900XT Crossfire = 2x Xenos

The 360 has 1x Xenos, so as you can see it's going to be a monster upgrade even with just a low-to-mid-range GPU.
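Taking the chain above at face value, the implied multiplier over a single Xenos can be tallied up. A minimal sketch, purely chain arithmetic on the post's own equivalences, not a benchmark:

```python
# Each "=" in the chain is treated as a 1x equivalence; the two explicit
# doublings (9800GX2 = 2x 9800GTX, 1900XT Crossfire = 2x Xenos) each
# contribute a factor of 2.
import math

# steps: 7770 -> 5850 -> 285OC -> GX2 -> GTX -> Ultra -> CF -> Xenos
chain_factors = [1, 1, 1, 2, 1, 1, 2]

multiplier = math.prod(chain_factors)
print(f"7770 ~= {multiplier}x Xenos by this chain")  # 4x
```

Read literally, the chain itself lands at around 4x a single Xenos, which is why the later "6-7x" figure in the thread is the author's own looser estimate.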
     
  8. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
Although an HD 7770 would probably be a noticeable upgrade over the hardware in current consoles, how could an HD 7770 be called a "monster upgrade" (as you called it) if, at least according to the following presentation for example:

apparently not even a GTX 680 appears to be able to manage full 1080p at 30 fps there :???:?
     
  9. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,322
    Likes Received:
    1,120
A 7770 would be a "monster upgrade" on current hardware (heck, just look at the transistor count: 1.5 billion vs 300 million for the current GPU, not to mention near double clocks on top), but rather disappointing in the overall scheme of things.

    Pretty obvious concept.

I think Ruskie is trying to say it won't be closer to 1TF because it will be way more?
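The transistor-count argument above can be sketched as back-of-the-envelope arithmetic. The transistor counts are from the post; the absolute clock figures (~1000 MHz for a 7770 vs ~500 MHz for Xenos) are my assumption, standing in for the "near double clocks" remark:

```python
# Very crude proxy for raw capability: transistor ratio times clock ratio.
# Numbers from the post (1.5 billion vs ~300 million transistors); the MHz
# figures are assumed, not sourced.
def raw_capability_ratio(transistors_new, transistors_old, clock_new, clock_old):
    """Rough multiplier: more transistors x higher clocks."""
    return (transistors_new / transistors_old) * (clock_new / clock_old)

ratio = raw_capability_ratio(1.5e9, 300e6, 1000, 500)
print(f"rough raw-capability multiplier: {ratio:.0f}x")  # 10x
```

Of course this ignores architectural efficiency entirely, which is exactly the caveat raised later in the thread.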
     
  10. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
If it really takes one more year (end of 2013 or whatever) or maybe even longer, then anything below HD 7970 / GTX 680 performance/specs, or maybe even anything below HD 8970 / HD 9970 or GTX 780 / GTX 880 (or whatever they are going to be called) performance/specs, would be disappointing, wouldn't it :mrgreen::razz::grin::wink:?

    You might have a point there :wink:.
     
  11. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
    So are you saying that going from a 1900XT to a 7770 would not be a monster upgrade?

You're talking at least a 6-7x jump in performance, maybe more, plus the added benefit of DX11.

    I really think you guys are getting over excited and over generous with the hardware.

I don't think Microsoft or Sony want to take a big loss, if any, on the next set of consoles like they did in previous generations, so I think uber-high-end hardware is out of the window for that reason alone.

    People are expecting too much.
     
  12. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
A lot of hardware today is much faster than what's in current consoles, isn't it? It doesn't even take an HD 7770 to be quite a bit faster than current consoles, does it? So that alone is not necessarily anything special?

    But this thread is about "next-gen", isn't it?

    And if, at least according to that presentation mentioned above, not even a GTX 680 appears to be able to manage full 1080p at 30 fps in a demo of a "next-gen" engine, then how much "next-gen" would a HD 7770 really be?

    Especially at the end of 2013 or maybe even later when GTX 780 / GTX 880 and HD 8970 / HD 9970 (or whatever they are going to be called) might already be around?

    And considering that the demo in question does not even necessarily appear to be looking THAT impressive, anything below GTX 680 performance/specs would be quite disappointing, wouldn't it?
     
    #14712 user542745831, Sep 24, 2012
    Last edited by a moderator: Sep 24, 2012
  13. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    17,682
    Likes Received:
    1,200
    Location:
    Maastricht, The Netherlands
But if said console had the equivalent of a CPU, a physics processor, and a GPU working efficiently together, then the GPU wouldn't have to do a lot of stuff it wasn't designed for just because the latency between CPU and GPU is too big, leaving the CPU bored to death.

    In that scenario, the GPU could be more lightweight.

    Who knows? Just speculating here ...
     
  14. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
    Well, regarding CPU, see for example (especially the highlighted (bolded/underlined) part of the following quote):


    :wink:
     
  15. ERP

    ERP Moderator
    Moderator Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    Next generation will be defined by what MS and Sony put in their boxes, not by Epic or anyone else's demos.
    Next gen engines will be scaled to run on those boxes, or Epic won't be selling very many copies.

    The days where consoles get the ultra high end PC parts is over IMO, power constraints dictate that even if costs don't.

    As an aside, I could imagine a scenario where one console had a clear 2x advantage in performance, and still couldn't differentiate itself significantly from the competition.
     
  16. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
    See for example:


    :mrgreen::wink:
     
  17. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
So, do we think the GPU is going to be made of billions of "giant enemy crabs"? Though if there are billions of them, I would assume home storage may be an issue... maybe it's going to be some cloud gaming system...? :lol:
     
  18. user542745831

    Veteran

    Joined:
    Jul 11, 2008
    Messages:
    1,156
    Likes Received:
    19
  19. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
Yes, marketing fluff is a great example of what actually happens... yep yep!
     
  20. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,723
    Likes Received:
    193
    Location:
    Stateless
Well, this is the next generation thread on B3D; I'm close to thinking that an uber focus on paper FLOPS is as close to trolling as can be.
It only tells part of the story, and luckily for a company like Nvidia they are still vouched for as the leader in high-performance 3D cards; otherwise they would not have sold a damned card (or only at a big loss) from the introduction of Evergreen until their release of Kepler...
There is also the evolution of the PC market with respect to graphics cards, and the fact that they have hit the power wall. But why care; console makers apparently have to beat or match whatever the PCs are pushing on every account, or even more so on paper FLOPS alone.
Not to mention that this reborn one-sided focus seems to correlate with rumors that one system may be pushing a bit more paper FLOPS than the other.
I remember writing a post about how the different FLOPS compare across AMD architectures (VLIW5, VLIW4 and the scalar architecture); it took me time, but it was lost time, it seems. Roughly, for a (gross) comparison between those architectures you would have to multiply (in that order) by 3.8/5, 3.8/4 and 1.
I also posted links to hardware.fr reviews because they have consistently (they've been doing it for a long time) published results for theoretical measurements (geometry, texturing, fillrate, fillrate with blending). Those measurements show that paper FLOPS are not the most relevant metric for comparing GPUs (think of Nvidia again...), and that's if you would want to compare GPUs on a single metric at all, which you would not. Having read their reviews for a long time, the most relevant single figure is fillrate with blending, as it has held up across many competing generations of architectures from both manufacturers.
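The normalisation described above can be sketched like this; the factors 3.8/5, 3.8/4 and 1 are from the post, while the example TFLOPS figures are hypothetical:

```python
# Gross paper-FLOPS normalisation across AMD architectures: multiply
# VLIW5, VLIW4 and scalar (GCN) paper FLOPS by 3.8/5, 3.8/4 and 1
# respectively to get roughly comparable numbers.
FLOPS_FACTOR = {
    "vliw5": 3.8 / 5,   # e.g. Evergreen-era parts
    "vliw4": 3.8 / 4,   # e.g. Cayman
    "scalar": 1.0,      # GCN
}

def effective_tflops(paper_tflops, arch):
    """Scale a paper TFLOPS figure by the (gross) per-architecture factor."""
    return paper_tflops * FLOPS_FACTOR[arch]

# A hypothetical 2.7 TFLOPS VLIW5 part vs a 2.0 TFLOPS GCN part:
print(round(effective_tflops(2.7, "vliw5"), 2))   # 2.05
print(round(effective_tflops(2.0, "scalar"), 2))  # 2.0
```

Under this rough conversion, a VLIW5 card with a bigger paper number can land close to, or below, a GCN card with a smaller one, which is the post's point about paper FLOPS telling only part of the story.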

Link ignored, I guess; anything that doesn't read in TFLOPS is irrelevant. To me it's a sad premise of the shit blizzard to hit the web in 2013, to paraphrase "Trailer Park Boys".
Or, in a classier way, Mark Twain:
    --------
To Acert93: I think there might be a significant difference between Xenos and RSX, not in theoretical numbers but in sustained fillrate and fillrate with blending. RSX does indeed have strong(er) points: more raw pixel-shading power, more texturing power, as well as support for some shadow filtering.
     
    #14720 liolio, Sep 24, 2012
    Last edited by a moderator: Sep 24, 2012