That doesn't excuse rudeness. If you want to talk about the topic, that's fine; if not, please step out of the thread or use private messages.
We propose a new FPGA memory architecture called Connected RAM (CoRAM) to serve as a portable bridge between the distributed computation kernels and the external memory interfaces. In addition to improving performance and efficiency, the CoRAM architecture provides a virtualized memory environment as seen by the hardware kernels to simplify development and to improve an application’s portability and scalability.
The CoRAM architecture assumes that reconfigurable logic resources will exist either as stand-alone FPGAs on a multiprocessor memory bus or integrated as fabric into a single-chip heterogeneous multicore. Regardless of the configuration, it is assumed that memory interfaces for loading from and storing to a linear address space will exist at the boundaries of the reconfigurable logic (referred to as edge memory in this paper). These implementation-specific edge memory interfaces could be realized as dedicated memory/bus controllers or even coherent cache interfaces...
Much like current FPGA memory architectures, CoRAMs preserve the desirable characteristics of conventional fabric-embedded SRAM [16]—they present a simple, wire-level SRAM interface to the user logic with local address spaces and deterministic access times (see Figure 2b), are spatially distributed, and provide high aggregate on-chip bandwidth. They can be further composed and configured with flexible aspect ratios. CoRAMs, however, deviate drastically from conventional embedded SRAMs in the sense that the data contents of individual CoRAMs are actively managed by finite state machines called “control threads”.
Control threads form a distributed collection of logical, asynchronous finite state machines for mediating data transfers between CoRAMs and the edge memory interface. Each CoRAM is managed by at most a single control thread, although a control thread could manage multiple CoRAMs. Under the CoRAM architectural paradigm, user logic relies solely on control threads to access external main memory over the course of computation. Control threads and user logic are peer entities that interact over predefined, two-way asynchronous channels.
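To make the control-thread idea concrete, here is a minimal software sketch of the pattern, written under my own assumptions: edge memory and a CoRAM are modelled as plain arrays, and the helper names (edge_read, chan_put) are placeholders invented for illustration, not the paper's actual control actions. The point is only that the control thread, not the user logic, issues the transfers from the linear edge-memory address space into the CoRAM, then signals the user logic over a channel.

```c
/* Sketch of a CoRAM-style control thread as plain C. The "runtime"
 * below is invented for illustration: edge memory and the CoRAM are
 * arrays, and the user logic is a function that sums the buffer. On a
 * real fabric these would be the DRAM interface, an embedded SRAM with
 * a wire-level port into the user logic, and an asynchronous channel. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define BLOCK 4                      /* words moved per transfer        */
#define TOTAL 16                     /* words in the edge-memory buffer */

static uint32_t edge_mem[TOTAL];     /* stand-in for external memory    */
static uint32_t coram[BLOCK];        /* stand-in for one CoRAM          */

/* Placeholder control action: copy a block from edge memory to the CoRAM. */
static void edge_read(size_t edge_off) {
    memcpy(coram, &edge_mem[edge_off], BLOCK * sizeof(uint32_t));
}

/* Placeholder channel to the user logic; here the "user logic" just sums
 * the CoRAM contents through its (simulated) local SRAM port. */
static uint32_t user_sum = 0;
static void chan_put(uint32_t block_id) {
    for (int i = 0; i < BLOCK; i++)
        user_sum += coram[i];
    printf("block %u ready, running sum = %u\n", block_id, user_sum);
}

int main(void) {
    for (uint32_t i = 0; i < TOTAL; i++) edge_mem[i] = i;

    /* The control thread: a simple FSM that walks edge memory block by
     * block, filling the CoRAM and notifying the user logic each time. */
    for (size_t done = 0; done < TOTAL; done += BLOCK) {
        edge_read(done);
        chan_put((uint32_t)(done / BLOCK));
    }
    return 0;
}
```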
The most closely related work to CoRAM is LEAP [2], which shares the objective of providing a standard abstraction. LEAP abstracts away the details of memory management by exporting a set of timing-insensitive, request-response interfaces to local client address spaces. Underlying details such as multi-level caching and data movement are hidden from the user. The CoRAM architecture differs from LEAP by allowing explicit user control over the lower-level details of data movement between global memory interfaces and the on-die embedded SRAMs; the CoRAM architecture could itself be used to support the data movement operations required in a LEAP abstraction.
he says "There's nothing that will magically make a GPU magically perform 50% more calculations a second.."
That's not a console GPU, though, and I don't know why a console GPU can't have a high clock. The GPU on the SoC in my Galaxy Nexus went from 1.2 GHz to 1.9 GHz: +58%.
Overclocking the CUs is easier because you don't have to overclock all the logic, the SoC and the board, only the math units. Or you can get the same result by overclocking by 30% and using more cache on the CUs (check) and some low-latency memory (check).
An 8-core 1.6 GHz Jaguar CPU with a 12-CU GPU running at 800 MHz, 32 MB of eSRAM and 8 GB of DDR3 RAM could easily become an 8-core 1.8 GHz Jaguar CPU with a 17- or 18-CU GPU, 64 MB of eSRAM and faster DDR3, if that is what the chips originally were, with hardware disabled due to yields.
I'd be pretty amazed if Microsoft had commissioned an 18-CU chip with 64 MB of eSRAM and then planned to disable 6 CUs and half the eSRAM for redundancy!
50 percent more flops (while using GPUs from the same vendor) according to the leaks is significantly more powerful, not only "slightly". Nothing to really worry about for me, but that "slightly less powerful" is now turned into "the end of the world" by MS fanboys.
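For what it's worth, the 50% figure is just CU-count arithmetic. A rough check, assuming GCN-style CUs (64 ALUs, 2 FLOPs per ALU per clock) and the rumored 800 MHz clock for both parts; these are leaked figures, not confirmed specs:

```c
/* Back-of-envelope theoretical FLOPS from the rumored CU counts.
 * Assumes GCN-style CUs (64 ALUs, 2 FLOPs per ALU per clock); the
 * clocks and CU counts are the rumored figures, not confirmed specs. */
#include <stdio.h>

int main(void) {
    const double alus_per_cu = 64.0, flops_per_clock = 2.0;
    const double clk_ghz = 0.8;                  /* 800 MHz rumored */

    double durango = 12 * alus_per_cu * flops_per_clock * clk_ghz;  /* GFLOPS */
    double orbis   = 18 * alus_per_cu * flops_per_clock * clk_ghz;  /* GFLOPS */

    printf("12 CUs @ 800 MHz: %.0f GFLOPS\n", durango);   /* ~1229 */
    printf("18 CUs @ 800 MHz: %.0f GFLOPS\n", orbis);     /* ~1843 */
    printf("ratio: %.2fx (i.e. +%.0f%%)\n",
           orbis / durango, (orbis / durango - 1) * 100); /* 1.50x, +50% */
    return 0;
}
```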
That phrasing (secret sauce, wizard jizz) was invented by ppl with an agenda in order to mock the idea that hardware accelerators existed and were designed to improve how adequately Durango could perform in real world applications. They were clearly wrong, as there really does seem to be extra kit that sounds like it could help Durango punch above its weight. How much so is open for speculation and debate.
It is completely reasonable and fair for onlookers to cry foul when everyone else wants to try drawing comparisons between stuff like flops and bandwidth on an apples to apples basis. It sounds like the Durango setup works to reduce the number of operations you'd need to do in order to get the same end result in the first place and the setup of the DME's and eSRAM sound like they do a lot to mix up the bandwidth comparisons.
I don't see anyone acting like you describe here. It's the opposite in fact. I see tons of ppl all over the internet asserting that the PS4 has a gigantic advantage in terms of power across the board pretty much. These ppl look at GPU specs only, ignore the accelerators in Durango (i.e. dismiss them completely out of hand as 'lol! secret sauce durrrrr'), ignore bottlenecks in the PS4 design (ex: you can't feed 32 ROPS with 176GB/s), ignore dev comments on the subject saying the two are MUCH closer than presumed, and then tell us how Kinect ruined Durango's GPU somehow. Ppl pushing ideological pseudo-dogma and/or memes as arguments while cherry picking information and dismissing reasonable tech when it is easy to mock with a magic phrase like that don't deserve to be taken seriously.
I think you may be confusing ppl openly mocking the concept of 'secret sauce' all over the internet with ppl actually expecting something radically new to show up. The eSRAM, DME's, display planes and the possibility of being designed to leverage virtual assets in a fundamental way is the 'secret sauce' in the sense that those items work together to make highly efficient use of the architecture as a whole to help it punch above its weight.
Yup. The fact ppl are eager to say PS4 is dramatically more powerful based on specs they got from a rumor telling them devs say both are roughly on par should tell you something about how these folks aim to cherry pick the information they allow to influence their worldviews here. Every actual insider and/or dev that I have seen to date commenting with vague hints has suggested the two are about the same.
I think the leaks have it slightly more powerful; the GPU is not the only component. We'll perhaps see slightly better image quality with Orbis / slightly more aggressive dynamic resolution on Durango.

50 percent more flops (while using GPUs from the same vendor) according to the leaks is significantly more powerful, not only "slightly". So which is true: the leaks, or that Orbis is only slightly more powerful?
'Overclocking' is not an option, otherwise nVidia would ship parts at 1.5 GHz instead of 1 GHz. 'Overclocking' means clocking a part above what it's designed to cope with, which adds enormous thermal stress to the component and will burn it out; power, and therefore heat, rises much faster than linearly with clock speed, so a 50% overclock will generate insanely more heat. There's also no collection of transistors arranged in a trapezoid or with memory knick-knacking or bandwidth de-co-pollutinating that can add 50% performance. People who have worked closely on GPU architecture for two decades and looked for every opportunity to get better performance haven't found anything that can achieve that. I'm not sure even Unified Shaders achieved that level of performance improvement.

If he's not the Pope or an AMD/Nvidia chief, he can be wrong like all the others.
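Whoever turns out to be right, the heat claim itself is easy to sanity-check against the usual first-order CMOS dynamic-power relation, P proportional to C·V²·f. A rough sketch, assuming a 50% clock bump also needs roughly a 15% voltage bump; that voltage figure is an illustrative assumption, not a known number for any actual console part:

```c
/* First-order CMOS dynamic power: P is proportional to C * V^2 * f.
 * The voltage figure below is an illustrative assumption (a big clock
 * bump on the same silicon usually needs more voltage); it is not a
 * known number for any actual console part. */
#include <stdio.h>

int main(void) {
    const double base_f = 1.0, base_v = 1.0;     /* normalized clock, volts */
    const double oc_f   = 1.5;                   /* +50% clock              */
    const double oc_v   = 1.15;                  /* assumed +15% voltage    */

    /* Relative dynamic power versus the stock part (capacitance cancels). */
    double rel_power = (oc_f / base_f) * (oc_v / base_v) * (oc_v / base_v);

    printf("relative dynamic power at +50%% clock, +15%% voltage: %.2fx\n",
           rel_power);   /* ~1.98x: roughly double the heat to remove */
    return 0;
}
```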
MS are not doing this. The same way Sony needs to avoid $599 this gen, MS needs to avoid 'unreliable'.
Well, that's how the Internet works; look at the early days of this gen.

But I think there is a difference this time around: I would say that Sony 'fans' are happy and are not in 'war' mode.

On the other hand, the noise surrounding Durango, whether it's disappointment at the specs or, going the other way, minimizing the difference between the systems (though based on possibly incomplete or inaccurate information, but what else do we have?), is coming from MSFT 'fans'.
I fully intend to pick up the new Xbox on day 1, pretty much, and have no intention of ever getting a PS4, so I certainly have no agenda against the Xbox, but I still disagree with what you're saying here.
If nothing else it helps bandwidth and thus the amount of data available to a scene. Also, having 2 display planes for the application instead of just 1 is likewise helpful for the end result of the rendering looking nice, even if we ignore what could potentially be done in extending the HUD plane to include some actual foreground objects, etc.

I don't believe it's been firmly established at all that the eSRAM has any significant benefit for graphics rendering. The developer comments on here have ranged from "very little if any" to "some but probably not a huge amount" (paraphrasing of course). It sounds like it will be beneficial in some measure for compute workloads though, which may also translate to gaming performance in the console world.

I still think the primary driver behind the use of eSRAM is to allow the console to contain 8 GB of main RAM cheaply, and thus the DMEs are there to minimize as much as possible the disadvantages of having your graphics memory split into two separate pools, as opposed to the more efficient unified pool of the PS4.
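If that is the DMEs' role, the standard way to hide a split pool is the kind of double-buffered tile staging sketched below. This is a generic software analogy under my own assumptions; the names (dme_copy_in, esram, ddr3) are invented for illustration and say nothing about how Durango's actual move engines work:

```c
/* Generic double-buffered tile staging through a small fast scratchpad.
 * "ddr3" and "esram" are just arrays here; the point is the pattern a
 * DMA/move engine enables (fill one tile while the other is in use),
 * not a description of any real console's hardware. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define TILE      64                 /* pixels per tile (toy size) */
#define NUM_TILES 8

static uint32_t ddr3[TILE * NUM_TILES];   /* big, slower pool              */
static uint32_t esram[2][TILE];           /* two tile buffers in fast pool */

/* Stand-in for a move-engine copy; a real engine would run this
 * asynchronously while the GPU works on the other buffer. */
static void dme_copy_in(int buf, int tile) {
    memcpy(esram[buf], &ddr3[tile * TILE], sizeof(esram[buf]));
}

static uint64_t process_tile(const uint32_t *px) {   /* pretend GPU work */
    uint64_t sum = 0;
    for (int i = 0; i < TILE; i++) sum += px[i];
    return sum;
}

int main(void) {
    for (int i = 0; i < TILE * NUM_TILES; i++) ddr3[i] = (uint32_t)i;

    uint64_t total = 0;
    dme_copy_in(0, 0);                         /* prefetch first tile */
    for (int t = 0; t < NUM_TILES; t++) {
        int cur = t & 1;
        if (t + 1 < NUM_TILES)
            dme_copy_in(cur ^ 1, t + 1);       /* stage next tile     */
        total += process_tile(esram[cur]);     /* work on current one */
    }
    printf("checksum: %llu\n", (unsigned long long)total);
    return 0;
}
```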
That's why 360 failed. expletive is (rightly) saying a GPU overclocked 50% will generate far more heat than it can cope with and fail, unless it had the most extreme and expensive cooling system, which'd cost more than just using a larger, faster GPU.

To be fair, that was due to lead free solder, no?
I was just throwing out numbers, but I remember SemiAccurate was talking about the MS console chip being huge and them having problems producing it last fall. So there can be redundancy there, but who knows how much.
I do, and a lot of people here are actually supporting the secret sauce theory, and every time one version of it is shut down another comes out of nowhere.
I know precisely what you mean here. This is where the whole blitter argument was born, and it was quickly embraced in many other places on the internet, because if it comes from Beyond3D it must be true. I can tell you this because I post on other forums, and many people take posts from here as gospel.
You say that Orbis can't feed 32 ROPs at 176 GB/s, but what about feeding 24 or 26? Sure, it can't feed them all, but that doesn't mean it can't feed more than 16, which most people think is a perfect match for Durango's 12 CUs. Have you looked at it from that point of view?
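The "can't feed 32 ROPs" point is just fill-rate arithmetic, and it depends entirely on the per-pixel assumption. A rough sketch, assuming an 800 MHz GPU clock and blended 32-bit color (4-byte read plus 4-byte write per pixel) while ignoring Z traffic, caches and compression; the 176 GB/s figure is the rumored one:

```c
/* Rough ROP feeding arithmetic: bytes needed per clock if every ROP
 * blends a 32-bit pixel (4-byte read + 4-byte write), ignoring Z
 * traffic, caches and compression -- a deliberate simplification. */
#include <stdio.h>

int main(void) {
    const double clk_ghz = 0.8;            /* assumed 800 MHz GPU clock */
    const double bytes_per_pixel = 8.0;    /* 4B read + 4B write blend  */
    const double available = 176.0;        /* GB/s, rumored GDDR5 peak  */

    int rop_counts[] = { 16, 24, 26, 32 };
    for (int i = 0; i < 4; i++) {
        int rops = rop_counts[i];
        double demand = rops * clk_ghz * bytes_per_pixel;  /* GB/s */
        printf("%2d ROPs need ~%5.1f GB/s -> %s %.0f GB/s\n",
               rops, demand, demand <= available ? "fits in" : "exceeds",
               available);
    }
    return 0;
}
```

Under those assumptions 16, 24 and 26 ROPs fit within 176 GB/s while 32 do not, which is roughly the point being argued; change the per-pixel cost and the break-even ROP count moves with it.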
...what about the whole 68 GB/s + 104 GB/s vs Orbis' more straightforward 176 GB/s that has endlessly been passed off as equal, and in some cases even better?

I don't see it being 'endlessly passed off as true'. I also don't see ppl diving into discussion about how the eSRAM, DME's and RAM in Durango work as a whole either, unfortunately. I see far more dismissals of those parts entirely than I do discussion on their arrangement into a coherent design. That's part of why I came here recently.
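On the 68 + 104 comparison itself, a toy model shows why the simple sum is slippery: the two pools only add up when traffic happens to split across them in the right proportion. The figures below are the rumored ones, and the model ignores everything except peak rates:

```c
/* Toy model of the 68 + 104 GB/s vs 176 GB/s comparison (rumored
 * figures). If a fraction f of the traffic is served from the eSRAM
 * pool and the rest from DDR3, and both pools stream in parallel, the
 * achievable rate is limited by whichever pool finishes last. The sum
 * of the peaks is only reached at one particular split. */
#include <stdio.h>

int main(void) {
    const double ddr3_bw = 68.0, esram_bw = 104.0, gddr5_bw = 176.0;

    for (double f = 0.0; f <= 1.0001; f += 0.25) {
        /* Time to move 1 "unit" of data with fraction f hitting eSRAM. */
        double t_esram = f / esram_bw;
        double t_ddr3  = (1.0 - f) / ddr3_bw;
        double t       = (t_esram > t_ddr3) ? t_esram : t_ddr3;
        printf("eSRAM share %.0f%%: effective %.0f GB/s (Orbis pool: %.0f)\n",
               f * 100.0, 1.0 / t, gddr5_bw);
    }
    return 0;
}
```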
Most developers will always be polite even if one piece of hardware is at a disadvantage. I'm not saying this is the case, but that's how it is: most developers will refuse to answer that and tell you both are about on par.
To be fair, that was due to lead free solder, no? I don't think actual engineers would need to worry about that happening again unless they are incredibly incompetent in the first place. That's not to say it is realistic to think they'd do this and want to add more cooling to the setup though.
...and a 10% meager OC ought to do it.

If a 10% overclock is easy to pull off, how come RSX got reduced from 550 MHz to 500? Why was Xenos 500 MHz and not bumped up to 550? Overclocking, as I've said elsewhere if not here, is literally clocking a part beyond its design spec, hence the name. This generates heat. You either have to cool it more, or risk failure. Some chips will just happen to accommodate clock increases better than others, which is where PC enthusiasts can push their clocks as high as possible and get different results. You need to deal with the added heat or your chip will burn out, and if you clock too far beyond spec you get too many parts that just can't reach it, so you have to chuck out more components and costs increase sharply.
There's no such thing as 'just overclocking' a consumer device. You need to hit a reliable power point and stick to it. If techie consumers want to overclock their device and post on the internet "2 GHz Durango! Lolz" with a YouTube video of their console immersed in refrigerated oil, that's their personal risk and not one customer support needs to worry about, unlike 100% of consoles edged beyond their reliable limits.
Actually I think that we do need to start being truthful and call a spade a spade. 360s (and early PS3s, to a lesser extent) died in droves early this generation because we had consoles designed to push the limits of TDP, die size, manufacturability and thermals, yet without an even remotely adequate cooling solution that was fit for purpose.

That's not true. PS3's cooling solution was superb, well implemented (maybe not the thermal pad), and certainly Sony's engineers aren't stupid enough to fail to go through their thermal management maths and cope with the heat. PS3's fan had several speeds; if overheating had caused the issues, the fan would have spent time on its fastest, very loud setting before the console went belly up and the chips burnt out. You should also have seen overheating errors like corrupted visuals before the final death, similar to overclocked, burnt-out PCs. That wasn't the way with YLOD: the PS3 just wouldn't switch on, caused by fatigue in the solder points, proven by temporary heat-gun fixes that wouldn't do jack if the component had fried. We also have independent repair companies telling us it was the solder.