Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I'm not sure about that. Look at what DICE managed with Frostbite 2 on the PS3: with enough market share at stake, some developers are willing to go through the pain, and I believe DICE actually said as much in one of their papers.
    I think the issue is more one of competitive risk, the risk that an early showing underperforms, which, given the money involved, I agree is a massive hurdle.
    Indeed, especially as we're talking about big money. But it's a case where market dynamics hold back progress. I mean, sebbbi, Andrew Lauritzen and I guess plenty of others are toying with software rendering now, possibly out of intellectual curiosity for the former; the same applies to the latter, and he's a researcher anyway, so :)
    It seems some pretty effective languages are available now; at this point I actually wonder whether the issue is more that there is no proper hardware. There is also no market if no console manufacturer is interested in jumping forward. In the software rendering thread people seem to agree that it will be 5 or 6 years before things could change, and it's only a "could", as market dynamics and the choices made by the big actors in the field will set the tide.
    Without at least one console, there is no market for such a device.

    The thing is, the investment to make that happen would be massive. From what I gather from the discussion in the aforementioned thread, you need sane single-threaded performance, quite possibly 4 hardware threads, solid throughput from the SIMD units, quite a lot of cache, and, icing on the cake, a lot of on-chip bandwidth to cope with the requirements of the various stages of 3D rendering. Quite an extensive list of requirements, one that calls for an engineering marvel.
    Well, I can't dispute your POV; I stated that based on Keldor's comments on the matter (same thread as above).
    That is an interesting thought. I would bet that Intel is working on a Larrabee replacement; I wonder what they will come up with.

    Speaking of consoles and what is possible now, I wonder, as some have suggested (same thread again), whether "well rounded" throughput CPU cores (by which I mean cores that don't give up on anything but peak SIMD performance à la Larrabee), backed by a tiny IGP, could have been doable.
    Especially looking at what DICE did on the PS3, it might have lessened the pressure on the CPU.

    Honestly I won't go further, as I can't contribute anything sensible to the aforementioned topic, and you and others have had a really interesting discussion on the matter. For some reason I think that ultimately CPUs are superior, and that where you do have dedicated units (as with graphics cards now, or the video engines in GPUs and Intel processors, or sound cards, or whatever accelerators you find in, for example, a PowerEN), those devices should bring great bang for the buck (both power- and area-efficient).
    What I don't like is that graphics workloads are getting more complicated and GPUs are trying to deal with more general-purpose tasks, while in the meantime CPUs (whether widely available or not) have also improved their throughput and still have quite some headroom. To me it doesn't look "efficient": you have two types of resources (on which you spend quite some silicon, both burning power, etc.) that conceptually fight for the same workloads, when it should be easier to avoid workloads fighting for the same resources.
    From a software POV it looks like a constant headache to make both things work together: they have different strengths, load balancing should prove hard, you have to optimize code for two different architectures (part of that can be hidden, but it is still there, on the shoulders of the driver teams), etc.
    To me it looks like quite a dreadful situation at this point; it is not the same as, say, questioning the validity of having video processing units.
    I think the bulk of the computation should be moved to CPU cores (possibly cores still to be designed).

    At the same time, I don't really agree with Nick: I don't see the future (any time soon, at least) consisting of plenty of massive cores (like Haswell and its successors). Though to me that doesn't conflict with the idea of having the bulk of the computation done on CPU cores.
    I have something of a reverse position: I would more readily question how many of those cores are needed in the personal realm. Looking at what a 360 achieves with a pretty slow CPU, and at the tasks the average user runs, I would say that even if customers need many CPU cores, they don't need many "big" cores. For example, looking at how Flash is accelerated by the GPU, it seems like quite an effort on the software side; if they can get that working on a GPU, the result would be even greater on many "well rounded" CPU cores.
    I do agree with you when you answered Nick that we don't need 16+ (I think the number was 24) Haswell-class cores, though I'm not sure that rules out the need for more CPU cores, nor that GPUs should disappear completely any time soon; but they could focus on the things they are massively faster at.

    Edit: for example, looking at that post really makes me wonder to what extent one should invest silicon in the GPU (not that I would discard it altogether).
     
    #17881 liolio, Jan 9, 2013
    Last edited by a moderator: Jan 9, 2013
  2. Aeoniss

    Regular

    Joined:
    Mar 23, 2007
    Messages:
    557
    Likes Received:
    0
    Location:
    Nebraska
    So... from what I'm gathering here, in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than the one from Xbox to 360?

    Also, why all this love from MS for APUs? Why not just stick a regular discrete chip from AMD in there? I mean, apart from the power benefits, the performance has to suck by comparison..
     
  3. Proelite

    Veteran Subscriber

    Joined:
    Jul 3, 2006
    Messages:
    1,620
    Likes Received:
    1,107
    Location:
    Redmond
    Because there is a limit to how good a traditional GPU can be in a console due to TDP constraints. You won't get GTX 680 performance by squeezing more and more CUs into a console. You make a trade-off, sacrificing raw compute power for dedicated graphics power, ending up with a GPU that is less flexible but capable of making games look GTX 680+ level good.

    Just my two cents.
     
  4. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0

    APU performance is low because the GPU inside is a low-power part, not because it is an APU.

    I guess it can be an add-on.
     
  5. MarkoIt

    Regular

    Joined:
    Mar 1, 2007
    Messages:
    392
    Likes Received:
    0
    Greater, but with much more time in between. As we stand now, we have:
    ~x4 CPU-wise
    ~x16 RAM-wise
    ~x8 GPU-wise

    over the Xbox 360. That doesn't take into account the overall efficiency gains and features added over the last 8 years. So I guess that in the end we will get our ~x10 increment, even if the speculated specs are a long way behind a high-end PC (which, by the way, consumes 3-4x more).
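    As a purely illustrative aside (not the poster's stated method), one crude way to fold those three multipliers into a single number is a geometric mean; the ~x10 figure then comes from rounding that up for eight years of efficiency and feature gains:

```python
# Hypothetical sketch: combine per-subsystem speedups into one rough
# "generational leap" figure via a geometric mean. The subsystem
# multipliers are the ones quoted in the post above.
from math import prod

speedups = {"CPU": 4, "RAM": 16, "GPU": 8}

geo_mean = prod(speedups.values()) ** (1 / len(speedups))
print(f"combined speedup: ~x{geo_mean:.0f}")  # ~x8 before efficiency gains
```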
     
  6. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    Nobody knows; we're going off rumors and speculation, which is what this thread is about.

    And MS had a good experience with combining parts in their Slim: they reduced costs a lot with that model and solved a lot of issues with heat dissipation etc., so going that way with their next console is a reasonable assumption.

    An APU setup does not automatically mean it has to be weak or low-powered; that has just been the case with mainstream off-the-shelf APUs so far because of cost concerns. We have never encountered a custom-built APU.
     
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    8 metres of suspension travel, huh? That'll soak up them bumps! However, I'd rather have shorter travel but more response from the suspension. 400 mm of travel with a complete motion in 250 ms is way better for handling potholes than long, slow forks.
     
  8. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    It's cheaper from a lot of perspectives. You only have to produce, tape-out and yield one chip. It's likely you'll get more APUs per wafer than you'd get CPU + GPU per wafer. Your motherboard design simplifies. Your cooling solution simplifies. Your memory hierarchy simplifies. Your need to redesign for die shrinks goes from 2 chips to 1. You don't need to convert to a monolithic chip like they did with the 360.
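    The one-chip cost argument can be put into toy numbers with a simple Poisson yield model (yield = e^(-D·A)). This is a minimal sketch under made-up assumptions: the die areas, defect density, and 300 mm wafer are all illustrative, and edge losses are ignored:

```python
# Toy Poisson yield model: one big APU die vs a separate CPU and GPU.
# All numbers are hypothetical, for illustration only.
from math import exp, pi

WAFER_DIAMETER_MM = 300
DEFECTS_PER_MM2 = 0.001  # assumed defect density

def good_dies(area_mm2: float) -> float:
    """Approximate good dies per wafer: gross dies times Poisson yield."""
    wafer_area = pi * (WAFER_DIAMETER_MM / 2) ** 2
    return (wafer_area / area_mm2) * exp(-DEFECTS_PER_MM2 * area_mm2)

# One 250 mm^2 APU vs a 100 mm^2 CPU plus a 180 mm^2 GPU (made-up sizes).
apu_sets_per_wafer = good_dies(250)
# The split design needs both a CPU wafer and a GPU wafer, so per wafer
# started, complete chip sets are limited by the scarcer die over 2 wafers.
split_sets_per_wafer = min(good_dies(100), good_dies(180)) / 2

print(f"APU:     ~{apu_sets_per_wafer:.0f} chip sets per wafer")
print(f"CPU+GPU: ~{split_sets_per_wafer:.0f} chip sets per wafer")
```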

    I'd prefer the suspension built right into the tire. Then you can make the whole tire the shock absorber rather than relying on a few cm^3 to do the job. Plus it's fewer parts, so it's simpler in a way.
     
  9. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
  10. Hornet

    Newcomer

    Joined:
    Nov 28, 2009
    Messages:
    120
    Likes Received:
    0
    Location:
    Italy
    Doesn't Cape Verde XT also have 10 CUs @ 1 GHz with an 80 W TDP*? I guess 12 CUs @ 800 MHz would make more sense from a power consumption point of view, though. Cape Verde is 123 mm^2, which, paired with ~50 mm^2 for the CPU, would leave a decent amount of space for SRAM in a 250 mm^2 die. By the way, if these specifications are accurate, the "secret sauce" had better be good or I will consider myself disappointed.

    * http://en.wikipedia.org/wiki/Southern_Islands_(GPU_family)#Chipset_table
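    The area budget in the post above is simple arithmetic; note the 250 mm^2 total and ~50 mm^2 CPU figure are the poster's estimates, not confirmed specs:

```python
# Back-of-envelope die area budget using the figures from the post above.
TOTAL_DIE_MM2 = 250  # speculated total die size
GPU_MM2 = 123        # Cape Verde die size
CPU_MM2 = 50         # rough estimate for the CPU cluster

sram_budget = TOTAL_DIE_MM2 - GPU_MM2 - CPU_MM2
print(f"left for SRAM and glue logic: ~{sram_budget} mm^2")  # ~77 mm^2
```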
     
  11. JasonLD

    Regular

    Joined:
    Apr 3, 2004
    Messages:
    463
    Likes Received:
    105
  12. Nesh

    Nesh Double Agent
    Legend

    Joined:
    Oct 2, 2005
    Messages:
    14,000
    Likes Received:
    3,720
    I don't think that has anything to do with hopes of Sega becoming a big player again.
    If that's a possibility, I mean... sure, why not? It's not that big of a deal.
    Sony and Namco shared technology between consoles and arcade cabinets too. Nothing special.

    But you know, it's quite funny that in the link with the Orbis logo he posted as possible confirmation, the guy uses a spoiler tag and says "Probably some iOS game", but I doubt he saw it :lol:
     
  13. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Forgive me if this was discussed before. I wonder why we haven't heard rumors of a next box console with a Pitcairn of the 7970M class, since it has offered, since April last year, ~2.2 TFLOPS at only 75 W. It could be an excellent option if customized, even though 1 to 1.5 TFLOPS APUs are very efficient thanks to HSA, ESRAM, the memory controller, etc. I believe those APUs are very efficient but perhaps unable* to reach the 2.2 TFLOPS level, and Pitcairn may also have been improved since, either for even better clocks (850 MHz to 1 GHz) or to run even cooler than 75 W, counting mature improvements in the 28 nm process.


    * My 2 cents here..
     
  14. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    It's not claiming a new home console, but an arcade board based on PS4, which we know happens (PS hardware in arcades). It'll be worth seeing if the dates are true and we learn anything. If so, we can look to the arcade for more info on Orbis.
     
  15. Helmore

    Regular

    Joined:
    Apr 5, 2010
    Messages:
    466
    Likes Received:
    0
    That 80 W is for the entire card; I think the chip itself will draw a bit less, probably between 50 and 70 W. Just think of the power taken by the graphics memory and the power converters (the VRMs).

    @Heinrich4 - The reason we're not talking about the 7970M is mainly that it gets such good results through binning, something that's not possible with console chips.
     
  16. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    On top of that, 7970M was a 100W mobile card (this includes board and memory too).
     
  17. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    194
    Likes Received:
    71
    Location:
    England
    Forgive my lack of technical knowledge, but is there anything to stop AMD taking a desktop 7870, reducing the clocks to 7970M levels, adding a sprinkling of Sony customisation, and ta-da! 2+ TFLOPS?

    Too simple?
     
  18. N2O

    N2O
    Regular

    Joined:
    Apr 17, 2010
    Messages:
    489
    Likes Received:
    191
    (First of all, this is all rumors.)
    This is a continuation of the 384-bit rumor from the #1 AMD China guy; he posted a new response a few hours ago, all about the GPU part.
    He said there are 2 indicators of the Durango GPU. One is the 384-bit bus; the other he can't reveal, because if he did, everyone would know which GPU Durango is based on.
    The other indicator is the TDP.
     
  19. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535

    Die size and TDP could both hinder that plan. A 7870 binned as a 7970M draws less power than a normal 7870; they shave off around 30 W by reducing clocks and binning.
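    As a rough sketch of why binning plus downclocking saves disproportionate power: dynamic power scales roughly as V^2 * f, so a modest clock cut compounds with the lower voltage a good bin tolerates. The clocks below match the real parts, but the voltages are illustrative guesses:

```python
# Hypothetical first-order model: dynamic power is proportional to V^2 * f.
# Voltages here are made-up examples; only the clocks match real parts.
def dynamic_power_ratio(f1: float, v1: float, f2: float, v2: float) -> float:
    """Dynamic power at (f2, v2) relative to (f1, v1)."""
    return (v2 / v1) ** 2 * (f2 / f1)

# e.g. HD 7870 at 1000 MHz / 1.20 V vs a 7970M-style bin at 850 MHz / 1.05 V:
ratio = dynamic_power_ratio(1000, 1.20, 850, 1.05)
print(f"dynamic power falls to ~{ratio:.0%} of the original")
```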
     
  20. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    There you have it, Durango clearly must be using GK110. Makes perfect sense.
     