Sony delays PS3 to 2006, concentrates on PSP !?

chaphack said:
WOAH! what's with the trolling comments? first off, have you played SH3? I have the *ahem* leaked copy and trust me the textures are not that GORGEOUS. you can see shimmering and graininess. The game is not even full 3D, it has a limited view and runs at 30fps 480i. Great way to push those limited ps2 textures.


(disclaimer: I'M SORRY GUYS, I JUST COULDN'T RESIST)


now everyone look at how chap can make himself look like a total IDIOT... not like we needed any proof really...

sh3 not being full 3D...

having a *leaked copy*... er...i have the demo and i'm sorry to say, it's the most gorgeous game i've ever played.

it runs at 30fps... OH MY GOD HOW COULD THAT BEEEEEEEEEEE!!!! :rolleyes:

at least it's a stable 30FPS... cough*halo*cough...

it runs at FULL FRAME BUFFER 480i. i will let u know how it looks in 480p once i get the full copy and play it through my blaze VGA adapter...

right, now, whatever u say, i will not reply. enough with making me look like a troll, when i'm merely trying (unsuccessfully) to get things into your head.
see ya
 
i hate to say this but i have played the full leaked copy. if you do a netsearch you will know where to find it. :LOL:

sh3 not being full 3D...
the camera is NOT free roaming when you are playing and it IS view limited.

when i'm merely trying (unsuccessfully) to get things into your head.
see ya
i wonder who is getting what into whose head. :LOL:
oh, i think you dodged a few of my replies and missed another of my posts on the other page.

as i said, see you. :oops:
 
london-boy said:
it runs at FULL FRAME BUFFER 480i. i will let u know how it looks in 480p once i get the full copy and play it through my blaze VGA adapter...

I just have to say, I don't understand all your DODGED! comments earlier; you said "Most PS2 games have [Progressive Scan] enabled by default", and Chap countered with the fact that most PS2 games do not have ProScan enabled by default. See also YOUR OWN BLAZE VGA ADAPTER COMMENT - that thing hacks 480p support in, it doesn't enable anything 'by default'.
 
chaphack said:
i hate to say this but i have played the full leaked copy. if you do a netsearch you will know where to find it. :LOL:

chap = modz0red PS2, taking the fight against Sony by eating into game revenueueu!~ :oops: :oops:
 
chaphack said:
I have the *ahem* leaked copy
Confessing ownership of an illegal possession. Would one of the moderators be so kind as to check Chap's last IP and report him to the local authorities.
 
Old but interesting ChipCenter article...

Sony's Emotion Engine
by David Gilbert

Introduction

Last month, we looked at The Impact of Sony's PlayStation2 Emotion Engine processor. Now we will examine the RISC chip in more detail, exploring its array of specialized functional units. Let's take a look at the chip behind the popular game console.

Functional Units

The key to the Emotion Engine's great power is its numerous specialized functional units contained within the chip. The chip was designed by Toshiba and is licensed by Sony for use in its PlayStation2 game consoles. The CPU itself is of the MIPS RISC architecture, and has a 128-bit data path with a clock speed of approximately 300 MHz. The integer unit is 64-bit 2-way superscalar, and there are thirty-two 128-bit General-Purpose Registers (GPRs). The instruction cache is 16KB, and the data cache is 8KB, with an additional 16KB of scratchpad memory close by for low-latency access. The memory management unit is also robust, able to handle 48 double entries in the Translation Look-Aside Buffer (TLB) and 64 entries in the Branch Target Address Cache (BTAC). There are 10 channels available to the DMA engine, and the main memory is a dual-channel Direct RDRAM, with 32MB of capacity and 3.2 GB/second bandwidth.
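
As a rough sanity check of that 3.2 GB/second figure, the arithmetic below assumes two 16-bit Direct RDRAM channels running at an effective 800 MHz transfer rate; those per-channel numbers are illustrative assumptions rather than something stated above.

/* Back-of-the-envelope check of the quoted 3.2 GB/s main-memory bandwidth.
   Assumes two Direct RDRAM channels, each 16 bits (2 bytes) wide, at an
   effective 800 MHz transfer rate; illustrative figures, not from the
   article itself. */
#include <stdio.h>

int main(void)
{
    const double channels       = 2.0;
    const double bytes_per_xfer = 2.0;    /* 16-bit channel width    */
    const double xfers_per_sec  = 800e6;  /* effective transfer rate */

    double bw = channels * bytes_per_xfer * xfers_per_sec;
    printf("peak bandwidth: %.1f GB/s\n", bw / 1e9);   /* prints ~3.2 */
    return 0;
}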

Other "weapons" in the Emotion Engine's "arsenal" include several coprocessors and vector units. Coprocessor 1, which is essentially the processor's Floating-Point Unit (FPU), contains a floating-point Multiply-Accumulate (MAC) unit and a dedicated floating-point divider. Coprocessor 2 is considered to be Vector Unit 0 (VU0), which contains four additional floating-point MACs and an additional floating-point divider. Vector Unit 1 (VU1) contains five more floating-point MACs and two floating-point dividers. The image processing unit is an MPEG2 decoder capable of handling 150 million pixels per second.

Overall, the array of hardware that this chip encompasses is impressive, considering that everything is contained within a 540-pin PBGA package with 4 layers of metal. The core and I/O voltage is kept to 1.8 V, thanks to 0.18 µm design, and power consumption is kept to about 15 W by a low transistor count (approximately 10.5 million). The total peak performance boils down to 6.2 GFLOPS of processing power, or 66 million polygons per second video output.
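
One plausible way to account for that 6.2 GFLOPS figure is sketched below: count each of the ten MAC units listed above as two floating-point operations per cycle at 300 MHz, and let the four dividers contribute the small remainder. The divider throughput used here is an assumption, so treat this as a sketch rather than an official accounting.

/* Rough decomposition of the 6.2 GFLOPS peak quoted above.
   MAC counts come from the article (1 in the FPU, 4 in VU0, 5 in VU1);
   counting each MAC as 2 FLOPs/cycle at 300 MHz is the usual convention,
   and the divider contribution (assuming ~7-cycle divides) is a guess. */
#include <stdio.h>

int main(void)
{
    const double clock_hz   = 300e6;
    const int    macs       = 1 + 4 + 5;   /* FPU + VU0 + VU1        */
    const int    dividers   = 1 + 1 + 2;   /* FPU + VU0 + VU1        */
    const double div_cycles = 7.0;         /* assumed divide latency */

    double mac_flops = macs * 2.0 * clock_hz;            /* 6.0 GFLOPS   */
    double div_flops = dividers * clock_hz / div_cycles; /* ~0.17 GFLOPS */

    printf("MACs: %.2f GFLOPS, dividers: %.2f GFLOPS, total: %.2f GFLOPS\n",
           mac_flops / 1e9, div_flops / 1e9, (mac_flops + div_flops) / 1e9);
    return 0;
}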

The Nuts and Bolts of Video Game Output

You'll notice that the vast majority of the chip's functional units are devoted to graphics-intensive calculations, and this is for a reason. In the world of video games, the most important use of available computing power is for geometry calculations that occur along the way toward the final output. These can include transforms, lighting, vector resolution and translations, and so on. Also high on the list of processor consumption in video game applications are the Artificial Intelligence (AI) and physics simulations that must be performed in order to create a "realistic" experience for the gamer. All of these types of calculations must go on in addition to the ordinary overhead of running and executing the program itself.
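
To make "geometry calculations" concrete: the workhorse operation is a 4x4 matrix applied to a 4-component vertex, as in the plain-C sketch below. This is generic illustration code, not Emotion Engine code, but it is exactly the multiply-accumulate pattern the vector units are built to chew through millions of times per frame.

/* The canonical geometry workload: transform a vertex by a 4x4 matrix.
   Plain, portable C for illustration only. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[4][4]; } Mat4;   /* row-major */

static Vec4 transform(const Mat4 *m, Vec4 v)
{
    Vec4 r;
    r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
    r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
    r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
    r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    return r;   /* 16 multiplies + 12 adds: pure MAC work */
}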

In a certain sense, the Emotion Engine can be looked at as a DSP, which is often architected with many specialized functional units and a relatively low clock speed. The Emotion Engine's coprocessors and vector units create the raw building blocks that are sent to the system's graphics accelerator. Sound processing is also offloaded to another portion of the system as part of the 3D simulation.

What Makes It All So Effective?

You'll also notice that nearly every subsystem inside the Emotion Engine SoC possesses its own scratchpad memory or cache. This low-latency memory is placed close by each processing unit in order to maintain throughput efficiency, since stalling any of the chip's many functional units would cause a dramatic delay in data flow (a floating-point MAC unit is a terrible thing to waste).

But how is this high-speed low-latency RAM connected to the rest of the system? How are excessive wait states avoided? Well, the scratch-pad memory has its own address space, so that load/store operations can be carried out at the same time as other memory accesses. This is one major reason for a 10-channel DMA engine. There is a lot of traffic on those data buses, and the ability to start and finish more than one thing at a time is imperative to smooth out system performance. Without it, dropping from scratchpad to cache to main memory would tie up the buses, cause a lot of contention, and probably result in "jerky" or slow video output.
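
A sketch of the kind of pattern this enables: the CPU works on one half of the scratchpad while a DMA channel streams the next batch of data into the other half. The function names below (dma_start, dma_wait, process) are hypothetical placeholders, not a real PS2 API; the point is only the overlap.

/* Conceptual double-buffering between main memory and scratchpad.
   dma_start(), dma_wait() and process() are hypothetical placeholders,
   not a real PS2 API; the point is that the separate scratchpad address
   space and the multi-channel DMA engine let the copy and the compute
   overlap instead of stalling each other. */
enum { BATCH = 4 * 1024 };        /* half of the 16KB scratchpad */

void dma_start(char *dst, const char *src, int bytes);   /* hypothetical */
void dma_wait(int buffer);                                /* hypothetical */
void process(char *data, int bytes);                      /* hypothetical */

void stream_batches(const char *src, char *spad, int nbatches)
{
    char *buf[2] = { spad, spad + BATCH };

    if (nbatches <= 0) return;
    dma_start(buf[0], src, BATCH);                 /* prime first buffer   */
    for (int i = 0; i < nbatches; i++) {
        dma_wait(i & 1);                           /* wait for buffer i    */
        if (i + 1 < nbatches)                      /* kick off next copy   */
            dma_start(buf[(i + 1) & 1], src + (long)(i + 1) * BATCH, BATCH);
        process(buf[i & 1], BATCH);                /* compute overlaps DMA */
    }
}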

Did You Say 128-Bit?

Well, to be more specific, there is a dedicated 128-bit bus from the CPU to Coprocessor 1, and the CPU has 128-bit registers. There is also a 128-bit bus connecting VU1 with the graphics unit, and the load/store unit is 128-bit. The 64-bit integer ALU, which is two-way superscalar, can be used in a special SIMD mode that is capable of processing up to 128 bits of data at once, which is also handy for graphics-intensive work. This SIMD capability is accessed via some special instruction extensions to the MIPS architecture that enable the CPU with the ability to work on multiple operands and to "pack" data (much like an Intel MMX processor would, to use a CISC analogy).
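
To illustrate what "packing" buys you, one 128-bit operation can stand in for eight separate 16-bit additions. The portable-C stand-in below just spells that out with a loop; on the Emotion Engine the same work would be done by a single multimedia instruction.

/* What a single 128-bit "packed add" accomplishes: eight 16-bit additions
   in one go. Portable C stand-in for illustration only. */
#include <stdint.h>

typedef struct { int16_t lane[8]; } packed128;   /* 8 x 16 bits = 128 bits */

static packed128 packed_add(packed128 a, packed128 b)
{
    packed128 r;
    for (int i = 0; i < 8; i++)      /* conceptually happens in parallel */
        r.lane[i] = (int16_t)(a.lane[i] + b.lane[i]);
    return r;
}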

To further increase the power of the vector units, Toshiba designed them with VLIW-like properties, which enable them to carry multiple 32-bit or 64-bit instructions, depending on several factors including the programming. The combination of VLIW micro-architecture and SIMD instruction capability makes the Emotion Engine very good at what it does.

Conclusions

Compared to some other architectures such as the PowerPC RISC or the Intel x86 CISC, there was not a lot of sophistication that went into the branch prediction hardware. This trade-off is easily justified, however, since the Emotion Engine has a relatively short 6-stage pipeline. At 300 MHz, this would not result in too much of a penalty in the case of an incorrect branch being taken, so the need for a "big" branch prediction mechanism is just not there.
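
A quick illustration of why the short pipeline keeps that penalty small (the assumption that a mispredict flushes most of the pipeline is ours, for illustration only):

/* Why a short pipeline tolerates simple branch prediction: the worst a
   mispredicted branch can cost is roughly a pipeline's worth of cycles.
   Assumed worst case: flush all but the first stage. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 300e6;
    const int    stages   = 6;
    double penalty_cycles = stages - 1;

    printf("worst-case mispredict ~ %.0f cycles ~ %.1f ns\n",
           penalty_cycles, penalty_cycles / clock_hz * 1e9);
    return 0;
}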

There has been a lot of investment put into non-video game applications for the Emotion Engine, and government authorities that handle international commerce have expressed concerns about the Emotion Engine's potential as a "dual-use technology." This means there is a possibility that the raw processing power of these chips could be exploited for underhanded military purposes, or worse, possibly furthering some kind of terrorism. To put these statements into perspective, the Emotion Engine's graphics processing power is greater than that of some high-end workstations, and certainly more capable than any off-the-shelf PC for that particular application.
 
Another one....

What's Next In Parallel Computing?
by David Gilbert

Introduction

Hundreds of millions of dollars are going into supercomputer research, and the efforts to conquer persistent problems with increasing the throughput of parallel machines are being redoubled. Cache-Coherent Non-Uniform Memory Access has been the most popular topology for machines with many processors and large amounts of storage, but what lies beyond CC-NUMA? How can the throughput of each processor be increased even as the overall processing power of the machine grows?

The Real Problem

The paradigm of a modern electronic computer—there is a processor element connected to a storage element connected to an Input/Output (I/O) element—has not changed in more than 25 years. The tricky part of that equation (and about all that's changed in those 25 years) is the speed of those components, and the proportion of faster storage to slower storage. Processor speeds have increased phenomenally, and the processing element of a computer is no longer the bottleneck to throughput. I/O is, in many cases, ultimately limited by human beings, and that cannot be changed. The concentration of effort, then, must be on the storage element, and the proportion of fast storage to slow storage, as well as the pathways between them.

Anybody Got Any Ideas?

Since the first part of 2000, IBM Research has been working on an idea they call "Cellular Architecture." It can be viewed as the next step for supercomputers, beyond CC-NUMA, and it may be an answer to the "real problem" with parallel computers.

One interesting point about Cellular Architecture is that its memory access scheme throws efficiency out the window in favor of brute force. Since there is no central controller directing memory accesses, all processors will search their local memory. This means a lot of wasted time for a lot of processors, but the end result is that the data are found and accessed in the quickest possible time.

This may seem a step back, but when put into perspective, it makes sense. Each "cell" in the Cellular Architecture is composed of a single chip, and therefore a supercomputer of this architecture can have a number of processing elements that is orders of magnitude higher than today's CC-NUMA machines. For example, a currently available supercomputer based on existing architecture may have 500 processors…but a supercomputer of the upcoming Cellular Architecture may have 50,000 processors—or more.
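
One way to picture that brute-force lookup is the sketch below: the request is effectively broadcast, every cell scans its own local memory at the same time, and whichever cell holds the data answers. All of the names here are hypothetical, purely to illustrate the idea.

/* Purely illustrative picture of the "brute force" lookup described above:
   every cell searches its own local memory, and whichever cell holds the
   data answers. All names are hypothetical. */
typedef struct {
    const int *local_mem;   /* this cell's on-chip memory */
    int        nwords;
} Cell;

/* Returns the index of the cell holding `key`, or -1 if none does.
   In hardware every cell would scan concurrently; the outer loop here is
   just a sequential stand-in for that broadcast. */
int cellular_lookup(const Cell *cells, int ncells, int key)
{
    for (int c = 0; c < ncells; c++)
        for (int i = 0; i < cells[c].nwords; i++)
            if (cells[c].local_mem[i] == key)
                return c;        /* found in cell c's local memory */
    return -1;
}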

The high number of processing elements available also brings up another interesting possibility—the ability to introduce fault tolerance to the system. The ability to route instructions and data within such a system brings about the possibility of using this feature to prevent lost instructions and data in the event of failures. Some may argue that this approach would be necessary to make it work, but regardless, it would be a nice feature to have.

When?

One of IBM's prototype machines built around the Cellular Architecture is expected to be completed sometime in 2004. There will be gradual steps advancing toward the completion of the Cellular Architecture, of course, using off-the-shelf parts where possible. To give an indication of the commitment that IBM is putting into this endeavor, their scientists have developed new chip-design tools for the project (which could soon be applied to commercial ASICs), and they are planning to sink $100 million into the construction of just one prototype machine.

Conclusions

When Cellular Architecture is successfully implemented, we will see a dramatic increase in throughput as we will finally be able to take maximum advantage of processor advances made in the last decade or so. The processor's bottlenecked potential that has lain "dormant" for all these years will be unleashed abruptly as memory bandwidth and I/O speed barriers are conquered at last. This could be one of the most important advances in computer technology in the last 30 years! I, for one, am anxious to see the results.

............is it the same cell (PS3)??
 
chap:

Listen, and listen well. I and many others are tired of your stupid arguments. You bash and bash for no reason, yet fail miserably at comprehending what others have tried to teach you multiple times in this thread alone. When will it end? I will use this post to address all the points you keep on bashing. With that, I hope to put an end to your endless flaming.

PS2 and textures vs. Sonic Adventure 2

SA2's textures are nice, agreed. The game also runs at quite an impressive speed, so that's another plus point. You're basically arguing that there is no game on PS2 that delivers textures as good at the same framerate, right? I beg to differ: Jak & Daxter is one of the games that has stunning texturing all over the place. While some stages don't seem to be that impressive, most indoor sceneries have incredible detail in them. Marconelly! and others covered this a while back, but it's a given that Jak & Daxter makes use of multi-layered textures. To my knowledge, detail textures were also mentioned. On top of all this, the game runs at a constant 60 fps framerate with no flickering and no apparent aliasing. Since this is just about technical capabilities, the knowledge that this game saves full-frame buffers is enough for the argument, as using a Blaze VGA adapter would enable true 480p output.

On the other hand, why do we need to post an equivalent to meet your SA2 demands? While you want to see textures, I for one am not too impressed by the geometry that game is pushing. Does it matter though? It's obviously beautiful the way it is, so who cares about the missing geometry? Just like that game, I take Metal Gear Solid 2 as my next example. It's a beautiful, stunning game that does not feature vibrant textures. Instead, the colour scheme is pretty constant throughout the game. Put that down to art direction or limiting hardware - or both - but who cares? MGS2 is still among the most beautiful games out on any platform, and it looks and plays stunningly the way it is. Why go on and bitch about textures? It looks beautiful as it is, so who are you to say that the texturing could be better?

Xbox/GameCube *destroy* PS2 in that regard

Among all your arguments, I find this the most absurd one of all. Let me explain why. Here you are chap, downright bitching about PS2's "insufficient" texturing capabilities - then you go on to say that *at least* GameCube and Xbox put PS2 games to shame in terms of texturing, geometry and IQ. I quote you:

chaphack said:
Now when you bring Xbox and GC in, with games like Halo2 Fable RS, that destroy PS2 in both geometry and textures and IQ. You see, PS2, even for its time, does NOT overcome DC completely, unlike Xbox and GC over PS2. Did i shatter your beliefs?

chaphack said:
1)Because PS2, using CLUT of whatever does not do textures deservingly of high praise. The lack of nice filtering made things worse. Of course certain games do textures worthy of PS2 praise, but not where they should be, 18months after the DC. btw Halo2 >>> Halo1

Let me get this straight: you bitch about PS2 for not meeting SA2's texturing and IQ, yet at the same time say that Xbox and GC completely overcome PS2. That's actually quite funny, as the games I have played on Xbox thus far failed miserably in that regard. Taking the exact same approach as you, chap, I could go on and list how Xbox fails to eliminate aliasing and flickering (i.e. Wreckless, Halo). I could also go on and talk about the blurry textures in Halo (indoor levels) and the miserable sub-par 30fps framerate in most graphically stunning games. I'll at least admit that Halo was a launch game - when I look at the video trailer of Halo 2 though, all I see are more impressive effects, more shiny characters on screen and the same fucked up framerate all over again. Can't judge aliasing from the videos, but I'm sure this title won't eliminate it either. All the flaws PS2 had, all over again. Admittedly, it is a bit better, but come on, for a console that was hyped as being 3 times more powerful with superior geometry and what not - I'm left pretty disappointed here, using your logic. Let's also not forget that Xbox had over a year after PS2 before it launched. So, just as you think that PS2 did not deliver in the 18 months it had after DC, I fail to see how GameCube or Microsoft delivered. You asked for a game with SA2 textures and IQ - I give you Jak and Daxter, a worthy example that has so much revolutionary about it. Give me a game with as much going on as ZOE2 on any other platform...

I guess what it boils down to is that in certain aspects DC can still keep up with PS2 (textures) and in some regards, PS2 can still keep up with GameCube and Xbox (geometry/particles). While being more powerful, they all have their flaws...

Blaze VGA adapter

Tagrineth said:
I just have to say, I don't understand all your DODGED! comments earlier; you said "Most PS2 games have [Progressive Scan] enabled by default", and Chap countered with the fact that most PS2 games do not have ProScan enabled by default. See also YOUR OWN BLAZE VGA ADAPTER COMMENT - that thing hacks 480p support in, it doesn't enable anything 'by default'.

What he meant by that is that most games on PS2 save full-frame buffers, which is all that is needed to produce a 480p signal. Obviously, library support was missing at first, and on top of that the target TV sets are still interlaced, which adds to why most games were originally launched without 480p enabled.

In turn though, if I buy a DC and want to experience the full 480p quality on a monitor, I would need to buy a VGA adapter as well. In addition, on Xbox, I would need to buy some sort of adapter to experience DTS-quality games because there is no optical output at the back. So, what's the big deal about buying a Blaze VGA adapter on PS2? Hacking 480p or not - Chap's argument is about technical comparison, so it should be irrelevant how the signal is achieved as long as it's a true progressive-out signal. End of story.
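
For anyone wanting numbers on what a full-height frame buffer actually means, here's a quick illustrative calculation of my own, assuming a 32-bit colour buffer out of the GS's 4MB of embedded VRAM:

/* Illustrative only: the VRAM cost of a full-height (480p-ready) colour
   buffer versus a field-rendered one, assuming 32 bits per pixel.
   Nothing here was posted in the thread; it's just arithmetic. */
#include <stdio.h>

int main(void)
{
    const int width = 640, bpp = 4;          /* bytes per pixel         */
    int full_frame = width * 448 * bpp;      /* full-height buffer      */
    int field      = width * 224 * bpp;      /* interlaced field buffer */

    printf("full frame: %.2f MB, field: %.2f MB (of 4 MB GS VRAM)\n",
           full_frame / 1048576.0, field / 1048576.0);
    return 0;
}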
 
Another one... really long. This one is from IBM... sorry, I don't have the link as I saved these pages long ago!!!

It will have an impact on the future of computing and gaming btw.....

The end of the road for Moore's Law?

By Eric J. Lerner

As the steady pace of improvements in integrated circuits begins to slow, IBM researchers are exploring ways to keep computer performance advancing.

--------------------------------------------------------------------------------

The computer industry is facing a period of fundamental change. For more than two decades, computer designers have relied on a remarkable manifestation of progress known as Moore's Law: the reliable tendency for the number of transistors on a chip to double every 18 to 24 months. The doubling occurs in part because the transistors can be made smaller. As they shrink, the transistors and hence the chips themselves become faster, which translates into faster computers. But industry experts agree that by around 2005 -- possibly sooner -- the trend defined by Moore's Law will start to slow. What is behind that slowdown and what will happen when the main engine driving improvements in computer performance is no longer available are crucial questions for our information society.

It is true, of course, that similar predictions in the past have proved overly pessimistic. In general, such forecasts were based on the idea that there were practical limits to continued progress. But it has long been known that, at some point, theoretical or fundamental physical limits would retard and ultimately put an end to further miniaturization. "Even if they can be made smaller," notes Randy Isaac, vice president for systems, technology and science research at IBM's Thomas J. Watson Research Center, "at some point, smaller transistors may not perform faster; in fact, their performance could even be worse, if they worked at all." The remarkable progress achieved by the semiconductor industry has now brought these fundamental limits into clear view. While the search continues for novel structures and materials to extend conventional technology as long as possible, other techniques may play a greater role in continuing the improvements in computer performance.

Present-day logic and memory chips are based primarily on CMOS (complementary metal-oxide semiconductor) transistor technology. So-called scaling laws, first derived by IBM Fellow Robert Dennard and colleagues in 1972, provide guidelines for reducing the dimensions and voltage levels of an existing CMOS transistor in a coordinated way to produce a smaller, faster one. The three key scaling factors are the thickness of the insulator between the gate and the underlying silicon; the channel length (which is related to the minimum feature size); and the power-supply voltage, which, when applied to the gate, turns the transistor on.
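
As a rough illustration of how those coordinated scaling rules play out in the idealized constant-field case, where every dimension and the supply voltage shrink by the same factor k, consider the sketch below; the starting values are illustrative, not taken from the article.

/* Classic constant-field scaling in one pass: shrink dimensions and
   voltage by the same factor k. Idealized textbook behaviour with
   illustrative starting values, shown only to make the coordinated
   scaling concrete. */
#include <stdio.h>

int main(void)
{
    double k = 1.4;                                 /* one process generation, roughly */
    double t_ox = 2.5, length = 130.0, vdd = 1.5;   /* nm, nm, volts (illustrative)    */

    printf("oxide:   %.2f -> %.2f nm\n", t_ox,   t_ox / k);
    printf("channel: %.1f -> %.1f nm\n", length, length / k);
    printf("Vdd:     %.2f -> %.2f V\n",  vdd,    vdd / k);
    printf("gate delay scales by ~1/%.1f; power density stays ~constant\n", k);
    return 0;
}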

The insulator, which is composed of silicon dioxide, is 2 to 3 nanometers (billionths of a meter) thick in present-day chips. Scaling requires that it become thinner, but it cannot be much less than about 1.5 nanometers thick. If the silicon dioxide layer is made any thinner, electrons can "tunnel" through it, creating currents that prevent the device from working properly. The limit on insulator thickness in turn limits the other dimensions of the transistor, as well as how fast it can switch. According to the latest CMOS technology "roadmap," published in November 1999 by the Semiconductor Industry Association, device designers would like to begin using a 1.5 nanometer insulator as early as 2001, although it may not be reached until 2004.

The channel length is defined as the distance between the source and drain, the two points between which a current flows during the operation of a transistor. The shorter the distance, the less time the transistor takes to switch, but various kinds of so-called short-channel effects suggest that the channel length cannot be much shorter than 25 nanometers, a size that could be attained by 2010 at the current rate of scaling.

The power-supply voltage level is perhaps an even more critical value, points out Tak Ning, an IBM Fellow and a leading researcher in CMOS technology. "The electronic nature of silicon requires a minimum level of about 1 volt. Appreciably less than that, and the transistor cannot produce enough current to switch on and off properly. At present, the voltage level is between 1.2 and 1.5 volts, which leaves room for about one more round of reduction, possibly by 2004."

These limits mean that future transistors will not be able to be made smaller and faster according to the time-tested scaling laws. Although modifications to the materials and structure of the transistor might keep progress going for some years -- as might cooling transistors to, say, -40 degrees C -- such tricks could be exhausted by as early as 2010 if progress continues at the current pace. When that day arrives, performance gains at the transistor could slow to a crawl.

What will the industry do then? Can computer performance continue to improve rapidly even without the help of faster transistors? Will entirely new technologies supplant silicon transistors? No one knows the answers. But many IBM researchers and developers are working to extend the reign of Moore's Law while preparing for the day when it no longer holds.

MOORE BY OTHER MEANS

One of the key measures of chip performance is the clock rate, the heartbeat of a chip. Although it is related to transistor speed, it is not wholly dependent on it. "More efficient circuit design might keep clock rates advancing into the 2010s," says Isaac. "In the past, the pace of change for transistors was so rapid we didn't have time to figure out how to use the devices optimally. Once we do, we can improve clock rates even when transistors don't get faster."

Nevertheless, because actual performance of tasks will continue to improve even after clock rates plateau -- perhaps around 5 to 10 gigahertz -- clock rate could lose relevance as a yardstick of computer speed. "We may," says Ning, "have to substitute other types of measures, such as overall system performance" -- that is, the rate at which a computer performs operations. "Performance will always drive the computer industry, but we'll be achieving that performance in different ways."

One likely strategy in the coming decades is to concentrate on improving performance at higher levels of the system than the individual transistors. "We're far from optimizing how the different parts of a computer -- the processor, the memory and input/output system -- function together," comments Bijan Davari, IBM Fellow and vice president of semiconductor research and development. "And as progress at the device level slows down, a lot of the slack can be picked up by system integration."

In today's computers, logic functions and memory (except for a small amount of high-speed cache) are on separate chips, and those chips are sometimes on different boards. Communication across such distances creates delays and bottlenecks, as processes wait for data to arrive. Already, computer makers are working to move separate functions onto the same chip as the microprocessor. IBM chips that combine DRAM (dynamic random-access memory) and high-performance microprocessors will soon hit the market. Other functions will follow suit, perhaps until the entire computer is on a single chip or -- as design requirements dictate -- on several chips within the same package. "These system integration steps," says Davari, "might ultimately increase computer performance by as much as five times, even if device performance remains the same." Such advances would allow a system-level improvement to continue well into the 2010s.

Another way to boost system performance without increasing clock rate is through parallel processing. Today's supercomputers, like the IBM RS/6000® SP®, already achieve their high speeds (thousands of times greater than a single processor) by linking together dozens, hundreds or even thousands of microprocessors. With dropping processor prices, the "multiprocessor could become practical even for the PC market," says Ning.

There will certainly be obstacles to multiprocessing. For one thing, the fastest processors are power-hungry, a problem that the addition of cooling systems will compound. While today's microprocessors may burn only about 20 watts, tomorrow's faster processors may burn 100. "The optimal solution," suggests Ning, "might be to use slower chips but more of them to get the most computer power per watt."
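
Ning's "slower chips, but more of them" point falls out of the usual dynamic-power approximation P ~ C * V^2 * f: lowering the frequency usually lets the voltage drop too, so power falls faster than performance does. The numbers below are made up purely to illustrate the effect.

/* Why "slower chips but more of them" can win on performance per watt.
   Uses the standard dynamic-power approximation P ~ C * V^2 * f with
   made-up relative numbers. */
#include <stdio.h>

static double power(double v, double f) { return v * v * f; }  /* C folded in */

int main(void)
{
    /* One fast chip vs. two chips at 60% frequency and 80% voltage. */
    double p_fast = power(1.0, 1.0);             /* relative units      */
    double p_slow = 2.0 * power(0.8, 0.6);

    double perf_fast = 1.0;                      /* ~ proportional to f */
    double perf_slow = 2.0 * 0.6;

    printf("one fast chip:  perf %.2f, power %.2f, perf/W %.2f\n",
           perf_fast, p_fast, perf_fast / p_fast);
    printf("two slow chips: perf %.2f, power %.2f, perf/W %.2f\n",
           perf_slow, p_slow, perf_slow / p_slow);
    return 0;
}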

These approaches to improved performance underlie IBM's recent announcement of a five-year plan to build a computer, known as Blue Gene, that will be capable of performing 1 quadrillion floating-point operations per second (see "Cover story"). Among the keys to achieving Blue Gene's performance objectives are system integration and design innovations, in particular the use of streamlined chip architecture and on-chip DRAM.

BEYOND SILICON

At some point, however, perhaps around 2020, parallelism and other approaches to boosting computer power will themselves no longer be able to improve performance. When that ultimate limit of silicon technology is reached, one can only wonder, What lies beyond? That is not an idle question, since 20 years is a relatively short time for ideas to move from an exploratory research stage to widespread application. If there is to be a successor to silicon, chances are that it has at least been glimpsed.

Opinion on a successor is divided. Some researchers reject the possibility that an alternative to CMOS technology will emerge even in the long run. "There is no replacement for CMOS as we know it," declares Ning. "The basic building blocks for future computers will be digital CMOS." But other scientists, including Dennard, say they believe something new will probably come along, as it always has in the history of technology.

If a new technology does arise, it will have to overcome formidable obstacles, emphasizes Thomas Theis, director of physical sciences at IBM's Thomas J. Watson Research Center. "A replacement for silicon CMOS will be possible only once silicon technology truly starts to run out of steam," he says. "Anything that replaces it will have to be dramatically better in nearly every respect." That's because of the enormous investment the industry has already made in the infrastructure of silicon. "If history is any guide," Theis says, "a successor to CMOS will probably be developed to meet the needs of some niche market that doesn't yet exist, with customers that do not yet know they need such a thing."

Although there is no clear follow-on to CMOS, research efforts in the industry have taken on "a new sense of urgency," according to John E. Kelly III, general manager of IBM's Microelectronics Division. "With 15- to 20-year lead times between laboratory work and production, we have to get serious about new ideas," he asserts. Indeed, the Semiconductor Industry Association is sponsoring research initiatives to extend CMOS technology and to go beyond it.

For its part, IBM is pursuing at least three main lines of long-term research aimed at producing a successor technology. The first approach, closest to existing circuits, is based on novel materials and devices. One such device, called a Mott transition field-effect transistor, or MTFET, has been proposed by Dennis Newns at Watson and is being explored by a small team at the lab. It is based not on silicon but on compounds called perovskites. These materials have a wide range of unusual properties, such as high-temperature superconductivity and ferroelectricity, which contribute to their usefulness in novel device applications. The Mott transition, first studied by the Nobel physicist Sir Nevill Mott in the 1960s, is the special way in which some materials, such as perovskites, transform from insulators to metallic conductors as their electron density -- the number of electrons per atom -- is raised. That increase can be brought about in a variety of ways, and Newns has come up with a new one.

His idea is to use a certain kind of perovskite as the channel in an FET. In the normal, "off" state, the channel of an FET is nonconducting. Newns proposes to make the channel conducting -- that is, to make it undergo a Mott transition -- by applying a gate voltage, which is exactly how a typical silicon FET operates. If Newns's ideas prove correct, and if still another kind of perovskite is used as a gate insulator, it may be possible to create perovskite-based transistors that could be both substantially smaller and faster than ordinary CMOS transistors.

So far, though, only single examples of the MTFET have been produced. Many challenges must be met before integrated circuits based on this technology could be built at all, let alone surpass silicon chips in performance. Yet the preliminary results have been encouraging, and the underlying physical arguments in favor of pursuing such a program are sound.

LONGER-TERM THINKING

A second, more exotic approach being pursued at IBM is molecular electronics -- the use of individual molecules to act as switches in a computer. The idea originated with a 1974 proposal by Ari Aviram of Watson and Mark Ratner of Northwestern University to use a certain kind of molecule to pass electric current primarily in one direction, a function performed in electronics by a diode. A team at the University of Alabama succeeded in doing so in 1997. Today, perhaps the best-researched approach to molecular electronics is carbon nanotubes -- cylinders of hexagonally arranged carbon atoms 1 to 2 nanometers wide. Nanotubes can be semiconductors or metals depending on their detailed atomic structure.

In 1998, Phaedon Avouris and his colleagues at Watson demonstrated that a field-effect transistor could be built out of a 1.4-nanometer-wide nanotube bridging a gap between two gold electrodes over a silicon substrate. Among other advantages of nanotubes, Avouris points out, "they can carry huge current densities -- a billion amperes per square centimeter -- without breaking down as normal metals do." That makes them ideal materials for interconnections as well as transistors.

But, like MTFETs, nanotubes must clear many hurdles before they can be used for practical devices. "For one thing," says Avouris, "we don't yet know how to make nanotubes with identical structures. And for commercial applications, we need a different way of assembling the devices from the way we have used in the laboratory." The method Avouris is exploring: self-assembly. His group is also participating in a DARPA-funded effort to use self-assembly to fabricate molecular logic gates based on molecules even smaller than nanotubes.

Joe Jasinski, manager of nanoelectronics and optical science at Watson, explains the rationale behind the pursuit of self-assembly. "If you want to build just one patio, you can take the time to lay bricks one by one," he says. "But if you want to build a trillion identical patios, you need to get the bricks to somehow do the job themselves, to self-assemble into patios. We don't know how to get bricks to self-assemble, because there are no natural forces between bricks that are strong enough compared to the size and weight of the bricks to have an effect. The situation is different for nanometer scale objects." Certain molecules, for example, can self-assemble into quite complex arrays, governed just by the physical forces acting between them. In a recent step forward, IBM scientists Cherie Kagan, David Mitzi and Christos Dimitrakopoulos induced a mixture of organic and inorganic compounds to crystallize from a solution and self-assemble into alternating organic and inorganic layers that can be patterned to form the channels of thin-film FETs.

A group led by Jim Gimzewski at IBM's Zurich Research Laboratory is attempting another approach to molecular computation, one based on machines instead of electronics. "We are attempting to follow physicist Richard Feynman's program of working from the bottom up, using chemistry to design molecules that can do calculations," he says. Such molecules would act as nanomachines. Gimzewski and his colleagues are working on a nanomachine in which electrons would be transported from one electrode to another through molecular wires connected to a tiny molecular ring containing a still smaller molecular rotor. By applying an appropriate voltage, electrons could be directed through the wires and ring to the rotor, which would then rotate before releasing the electron to another electrode, thereby acting as a kind of switch. Although the work is still at a very preliminary stage -- the molecular rotor system exists, but a fully functional device has not yet been built -- Gimzewski has high expectations. "We see this as a potential technology for general-purpose computers," he says.

To link up tiny molecular nanomachines, the Zurich team is developing ways to use atomic-force-microscope (AFM) technology to create extremely thin wires. An AFM senses the atomic contours of a surface with a tiny silicon tip attached to a delicate cantilever, which is kept at a constant distance from the surface when scanned over it. This same cantilever can be used as a stencil, allowing extremely fine lines to be directly deposited through arrays of tiny apertures cut into the cantilever. While a single AFM cantilever might work slowly, other AFM-based projects at Zurich have shown that large arrays of cantilevers can operate in parallel, enabling rapid processing.

Another long-term prospect is quantum computing, which would take advantage of new types of calculations that exploit the esoteric effects of quantum mechanics. It has been shown that such effects could speed up the solutions to some problems -- for example, factoring large numbers and searching an unordered database -- by many orders of magnitude. Quantum computers could also be used to simulate complex quantum-mechanical systems and provide insight into the workings of quantum mechanics itself. In the past few years, simple models of such computers have been shown to operate in the laboratory (see IBM Research, Number 4, 1998, "A quantum leap for computing").

Still more radical concepts may come along. "After all," Davari points out, "we know that brains don't operate on the same principles as a digital computer, and yet they do a fantastic job. Even a mosquito brain, which is much smaller than a pinhead, can control navigation, vision, smell, mating, feeding and other functions." While the principles on which the human brain operates are far from understood, it's possible that, as research gradually reveals the underlying mechanisms, fundamentally new ideas for computing could emerge.

So where are computers going after Moore's Law? For probably another decade past 2010, the industry will be trying to push performance through system integration, and better exploitation of parallel processing and other architectural innovations. Beyond that, either the industry will reach a period of maturity and slow growth in performance, or revolutionary new ideas will move out of the laboratory and into production.
 
Moore's law talks about DRAM. Things are different for other ICs.

Right now in the consumer space, cramming transistors onto a die for processing units is likely on par with, or below, concerns like pin counts, power consumption, heat dissipation, and getting a reasonable operating frequency. Chipsets are hitting 1k pins, which makes them VERY expensive well before transistor count becomes the problem. Same thing with the Opteron: you can throw a huge cache on it, but it's the near-1k pin count which is really hurting.
 
Phil said:
What he meant by that is that most games on PS2 save full-frame buffers, which is all that is needed to produce a 480p signal. Obviously, library support was missing at first, and on top of that the target TV sets are still interlaced, which adds to why most games were originally launched without 480p enabled.

In turn though, if I buy a DC and want to experience the full 480p quality on a monitor, I would need to buy a VGA adapter as well. In addition, on Xbox, I would need to buy some sort of adapter to experience DTS-quality games because there is no optical output at the back. So, what's the big deal about buying a Blaze VGA adapter on PS2? Hacking 480p or not - Chap's argument is about technical comparison, so it should be irrelevant how the signal is achieved as long as it's a true progressive-out signal. End of story.

Dreamcast's VGA adapter was also made, produced, and marketed by SEGA. If Sony made their own PS2 VGA adapter akin to Blaze, it wouldn't be as much of an issue... And besides, the statement in question was london-boy saying Most PS2 games have [proscan] enabled by DEFAULT... which is false.
 
Tagrineth said:
Dreamcast's VGA adapter was also made, produced, and marketed by SEGA. If Sony made their own PS2 VGA adapter akin to Blaze, it wouldn't be as much of an issue... And besides, the statement in question was london-boy saying Most PS2 games have [proscan] enabled by DEFAULT... which is false.

What do you care who made the adapter, and what relevance does it have to this argument? None. Chap has been continuously bringing up this 480p crap because he believes he can prove PS2's technical inferiority against DC like this. A vast amount of games on PS2 save full-frame buffers by default (I believe this is what London-boy was trying to bring across), which completely nullifies that argument. The Blaze VGA adapter is just the tip of the iceberg, in that it proves a vast amount of games do in fact save those full-frame buffers.

Also, if you take the general consumer's approach here: who cares who makes the VGA adapter? As a DC owner you need an adapter to experience VGA on a monitor - it's no different on PS2 or Xbox or GameCube. Also, Sony does sell an appropriate VGA cable that ships with the PS2 Linux kit. Naturally, it can only be used with games where the developer has enabled 480p support.
 
Phil said:
What do you care who made the adapter, and what relevance does it have to this argument? None. Chap has been continuously bringing up this 480p crap because he believes he can prove PS2's technical inferiority against DC like this. A vast amount of games on PS2 save full-frame buffers by default (I believe this is what London-boy was trying to bring across), which completely nullifies that argument. The Blaze VGA adapter is just the tip of the iceberg, in that it proves a vast amount of games do in fact save those full-frame buffers.

That's all good and all, but the blaze VGA adapter is still a third-party device which hacks support for 480p on full/full games.

Also, if you take the general consumer's approach here: who cares who makes the VGA adapter? As a DC owner you need an adapter to experience VGA on a monitor - it's no different on PS2 or Xbox or GameCube. Also, Sony does sell an appropriate VGA cable that ships with the PS2 Linux kit. Naturally, it can only be used with games where the developer has enabled 480p support.

Who cares who makes the VGA adapter? Retailers. They're more likely to adopt a first-party adapter, unless they get some kind of deal with the mfr. It is true that DC doesn't support 480p at all, without the VGA adapter, but then again DC's VGA adapter is also much more accessible (or was at the time) than Blaze. The difference being that GCN and Xbox have cables you can get straight from the mfr. which support 480p... but that's beyond the scope of this mini-discussion.

And... Sony's VGA cable can only be used with games that support 480p natively - no hacks, as you said. Blaze VGA is still a hack device that forces 480p, and doesn't even do a perfect job most of the time (even in some games that do support 480p!).

All I'm saying is that calling Chap on evasion was false, because he had a point - very few PS2 games support 480p correctly... and its 'hack' VGA adapter which allows 480p doesn't work anywhere near as well as DC's 'hack' VGA adapter does. Whether or not this means DC is a better system is completely irrelevant to me, though. PS2 gets the nod IMO because it does technically support 480p natively, which DC doesn't... and PS2's CPU is like the 100-ton wrecking ball to DC's CPU jackhammer...
 
tagrineth said:
All I'm saying is that calling Chap on evasion was false, because he had a point - very few PS2 games support 480p correctly... and its 'hack' VGA adapter which allows 480p doesn't work anywhere near as well as DC's 'hack' VGA adapter does. Whether or not this means DC is a better system is completely irrelevant to me, though. PS2 gets the nod IMO because it does technically support 480p natively, which DC doesn't... and PS2's CPU is like the 100-ton wrecking ball to DC's CPU jackhammer...

No he did not. He tried to change the argument by arguing that the Blaze VGA is "cheating", when in fact it's not even relevant considering the actual topic we're debating here. It is not relevant how the VGA adapter gets its progressive-out signal - what is relevant to our little debate is the knowledge that games save full-frame buffers and could theoretically display a native progressive-out picture if the developer had chosen so.
 
:cracks fingers: :LOL:

phil

1)why must you sound like i am some kid who failed to see your arguments? i have, and i have replied. what's so stupid about that? why don't you think of yourself that way? i hate people who try to sound high and holy. In fact, i think you are one of the few who still live on the mantra that PS2 is high and mighty, it blows DC away and it still hangs with XB! blah! Hypocrisy and hype at its best. :LOL:

2)JD => did you try to use the right analog stick and pan the camera around while standing on top of a hut in the village? Try that and watch a)the fps stutter b)the image break in half, like RRV. Multitexturing only adds more layers, it does not improve the texture quality. Remember we are on texture quality, the textures that are part of how PS2 is supposed to blow away DC.

3)MGS2, who am i to say the textures could be better? Because it IS. Let's see whether next-gen MGS4 will still use those blurry textures, i mean why change, since blurry textures rule! Just add more polys and effects, it'll be a beauty for next gen! Kojima once said of PS2: "Sony promises us high quality graphics, oh well, if only we had twice the amount of ram, we could have realized Sony's promise with our mgs2 engine", or something like that.

4)Using the Blaze VGA adaptor will NOT enable TRUE 480p on games not capable of 480p output. period. It is just a fricking hack to play on your monitor.

5)The SA2 textures challenge has been thrown to the PS2 folks who think PS2 completely blew away the DC. With TnL yes, but IQ and textures no. Is that so hard to accept? Is that clear enough for you?

6)I stand by my points that XB/GC will do games that PS2 folks can only dream of. The first-gen Xbox games have shown much better lighting, IQ and textures over PS2's best, along with strong geometry. Watch the next step from Xbox. You just have to wait for Bungie's direct-feed trailer of H2 and of course play the game. Or you can always wait for KIN!!!! HALO KINLER!!! :LOL: If you are still not impressed i have nothing to say, but at least i am sure the other nice ps2 folks like pana will be.

7)To me, Fable and H2 sure look 3X better than what PS2 can throw out. At least i am sure MGS2 is not 25X better than whatever DC games there are. :LOL:

8)ZOE2 = OTOGI.

9)480p is NOT a ps2 standard, early libraries or not. I ask of you, what do you think of MGS3, KIN, Megaman7 GTA4, True Crimes, TMNT, SSX3, GT4, Castlevania, Ice Nine etc. Random future games, tell me what you think of true 480p support for them? Go on make a statement and we shall see where that goes in the future. On the other hand, Standard = i am 100% certain H2, Fable, Doom3, HL2, Sudeki, MotoGP2, DOAOnline, Conker etc WILL do 480p, at the least.

10)Blaze PS2 VGA a)is not supported by Sony b)forces those non 480p output games to output hacked pscan. End of story.


PS2 must have instilled some sort of lower standards in its folks over the years! Blurry IQ is just as sharp! Grainier textures are actually quite tasty! Line-doubled pscan looks great! Yup, they sure do, i believe. :LOL:

But whatever, you ps2 folks just carry on holding on to your untapped hardware and your holier-than-thou name calling. I am off now. End of story. :oops:
 
chap:

1)why must you sound like i am some kid who failed to see your arguments? i have, and i have replied. what's so stupid about that? why don't you think of yourself that way? i hate people who try to sound high and holy. In fact, i think you are one of the few who still live on the mantra that PS2 is high and mighty, it blows DC away and it still hangs with XB! blah! Hypocrisy and hype at its best.

Because yet again, you fail to comprehend what I actually wrote down. You counter a few of my sentences and believe you actually made a point. Well, think again. The point I am trying to bring across is not that texturing is better on PS2 (highly subjective), but to point out your double standard of bashing PS2 for weak texturing compared to DC while raving on about Xbox's superiority in all these areas. If we want to compare things correctly, you should take two games on either platform and compare them under the same circumstances.

The point is, you are right, in some areas PS2 is not superior to DC. At the very same time though, one has to be blind not to see that in some areas Xbox is not superior to PS2. So why do you constantly bash Sony for it, yet praise Microsoft at the same time?

And your points about the Blaze VGA adapter are moot btw: in regard to our technical comparison, it is not important how you get the VGA signal as long as it's a true, crisp 480p signal. London-boy as well as others have confirmed this and they actually use it.

And btw; Jak & Daxter has no slowdowns or tearing. I don't know what you're smoking. I'm sure anyone who has played the game can confirm this. Dream on.
 
ChryZ said:
chaphack said:
I have the *ahem* leaked copy
Confessing ownership of an illegal possession. Would one of the moderators be so kind as to check Chap's last IP and report him to the local authorities.
This was not a joke, will someone remove this criminal already.
 
ok, last reply! :p

phil

1) Like i said, let's wait for Sonic Heroes.


2) Like i said, i am sick of the double standards shown by PS2 folks. And i praise the Xbox because it delivers on its graphical promises. Xbox games, afaiac, do show the 3X improvement over PS2. Play Halo2 or play KIN, hmm hmm hmm. Another good comparison ya? FPS vs FPS, first party vs first party, killer game vs killer game.


3) Like i said, Blaze VGA hacked 480p is NOT moot. And londonboy, as biased as he is, confirms nothing apart from the fact that native true-480p-capable PS2 games like Tekken/BO2/GGXX look superb at 480p (as i have always said), while the majority that don't have native 480p are not as good. Thread for reference.

london-boy's words: all it does is *switching PRO-scan ON* which is great for some games but not so good for others (the majority really)... the ones that do work (usually the ones that support pro-scan in either NTSC or PAL versions) look absolutely superb,

Even good old phil yourself, admits that The Getaway, another native 480p capable game, the only game you tested, looks good in 480p. Now now, dontcha hit that edit function. ;)

Phil's words: The only game I currently own is the Getaway.... wow, the IQ is superb.. too superb actually.. it looks more like a PC game now (too clean).
Another PS2 gamer discovering the joy of true 480p display. True 480p > hacked 480p >= 480i, there is no question about it. ;)


4) Like i said JD has some slowdowns and image breakups. I believe Marc can attest for that. I am not dreaming. Start another JD thread ya? ;)



END OF ARGUMENT. KTHBYE. Backpedal all you want now. :oops:
 
For Miss Taggy

All I'm saying is that calling Chap on evasion was false...I just have to say, I don't understand all your DODGED! comments earlier

As you can see, I am certain i am NOT the one dodging replies in this topic. :p
I would like to thank you for your support nonetheless! Thanks. :oops:
 
chap:

1.) Xbox does deliver? In your opinion maybe. The lack of a proper, stable 60 fps framerate in most graphically impressive games disgusts me, compared to the best games on PS2 that do indeed run at that speed. Also, geometry/particles are no better... :rolleyes:

2.) You didn't read what I posted, did you? Key word is own - just because I only own one game doesn't mean I haven't seen others since then. Besides, in that very same post, I went on to say how p-out is nothing spectacular considering I can run it on a 100 Hz 32" TV. This though has no relevance to this argument. :rolleyes:
 