Humanoid robot learns how to run

_xxx_ said:
The other way around: the human brain is not capable of calculating really fast. There would be no point in creating a replica of a brain for the tasks usually done by a computer.

I have never read anything on the subject, although the interest is there, but I think the brain, to a certain extent, works a LOT with maths; it's just very abstract to us. Geometry especially: I can sometimes see that my brain develops "patterns" with numbers and geometry to understand things that have nothing to do with maths or geometry themselves.
Logic and maths seem to be connected very tightly, at least in my experience.
 
There are two reasons why mankind won't "see" a true AI in its lifetime:
a) we're going to blow ourselves up before that;
b) when we build an AI that's intelligent enough, it will blow us up.
In the end, "artificial intelligence is no match for natural stupidity."
 
DOGMA1138 said:
There are two reasons why mankind won't "see" a true AI in its lifetime:
a) we're going to blow ourselves up before that;
b) when we build an AI that's intelligent enough, it will blow us up.
In the end, "artificial intelligence is no match for natural stupidity."
_________________
in case you were wondering...
yes, I am a mentally unstable idiot, so feel free to disregard my post(s)

Ok then! :D

No, really: as long as we remember to put a line for "Love Donuts" in the AI code, robots will only ever kill in situations where they are refused donuts. Which they can't eat anyway, but that's not the issue here.
 
DOGMA1138 said:
There are two reasons why mankind won't "see" a true AI in its lifetime:
a) we're going to blow ourselves up before that;
b) when we build an AI that's intelligent enough, it will blow us up.
In the end, "artificial intelligence is no match for natural stupidity."

c) after it blows us up, it'll blow itself up

EDIT:
so many blowjobs...
 
Based on old estimates from the '70s/'80s, which assumed neurons were only capable of one calculation per burst, the brain in a vegetative state was assumed to have the equivalent processing power of approximately 100 TeraFLOPS (roughly 100 trillion floating-point calculations per second). This figure comes from factoring in the brain's roughly 100 billion neurons, each with over 1,000 connections to other neurons, with each connection capable of performing about 200 calculations per second.
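For what it's worth, here is a back-of-envelope version of that arithmetic in Python, using only the rough figures quoted above; treat the result as order-of-magnitude at best:

```python
# Back-of-envelope estimate from the figures quoted above.
# These are ballpark numbers, not measurements; note that multiplied
# straight out they come to well over the ~100 TeraFLOPS figure, so
# the published estimate presumably counts things more conservatively.
neurons = 100e9                 # ~100 billion neurons
connections_per_neuron = 1_000  # ~1,000 connections per neuron
calcs_per_connection = 200      # ~200 calculations per second per connection

ops_per_second = neurons * connections_per_neuron * calcs_per_connection
print(f"~{ops_per_second / 1e12:,.0f} trillion calculations per second")
```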

As a comparison: http://www.cnn.com/2004/TECH/biztech/11/10/supercomputerrace.ap/

The system was clocked at 70.72 trillion calculations per second, almost double the performance of the reigning leader, Japan's Earth Simulator, which can sustain 35.86 trillion calculations a second.

Again, the estimate of 100 TeraFLOPS doesn't factor in recent brain research (sorry, I read it in a science journal months ago and don't have a link) which found that each neuron is capable of multiple calculations per burst, similar to a superscalar unit (I'm not a hardware engineer, so I might be wrong on the type of hardware I'm thinking of; could someone correct me if I'm mistaken?). This means that, per burst, the neuron can dynamically allocate resources depending on the type of calculations that need to be done. As a simple example, it might be able to do two additions or subtractions per burst but only one multiplication (as of this year I do not believe scientists have determined the neuron's exact capabilities).

This estimate also does not factor in the brain's special capabilities for quantum-mechanical calculations (again, sorry; I can't provide a link). This capability could be compared to a modern CPU's multimedia extensions like MMX, SSE, etc. These extensions allow a CPU to calculate certain mathematical functions much faster than normal. The brain is the same, except it's doing quantum-mechanics calculations.
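To illustrate the general idea behind those extensions (this is only a loose analogy, nothing brain-specific): a NumPy sketch comparing element-by-element arithmetic with a vectorized call that can use SIMD hardware underneath.

```python
import time
import numpy as np

x = np.random.rand(10_000_000)

# Multiply one element at a time in plain Python (no SIMD help).
t0 = time.perf_counter()
slow = [v * 2.0 for v in x]
t1 = time.perf_counter()

# Vectorized NumPy call: the whole array goes through optimized,
# SIMD-capable native code in one shot.
fast = x * 2.0
t2 = time.perf_counter()

print(f"element-by-element: {t1 - t0:.2f} s, vectorized: {t2 - t1:.2f} s")
```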

The difference between a CPU and the human brain could be compared to the difference between CPUs and graphics processors (GPUs). A GPU is built for a dedicated task, and while modern GPUs are now capable of calculating general functions like a CPU, they are often far slower at them; of course, it's the reverse for the mathematical functions the GPU was designed for. For those of you still in college or high school, that is why you can't crunch through your calculus homework like crazy... :D

Finally, this estimate doesn't factor in the possibility of compression, encryption, the fact that the brain is designed to be massively parallel in its computations, and a myriad of other possibilities (a quantum computer?). So all in all, I think it's reasonably possible the human brain is capable of well in excess of 1,000 TeraFLOPS.

EDIT: Just realized that CNN link no longer works.
 
Maybe the term "FLOPS" isn't exactly right, as those are floating-point calculations, and I'm not sure the brain's "calculating power" can be quantified with floating-point maths.

But yeah, we have a hell of a good CPU.
 
The brain is a neural network with lots of analog connections. Maybe these analog connections/operations can be simulated with FLOPS.
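A minimal sketch of what "simulating analog connections with FLOPS" could look like: a single artificial neuron whose analog input and connection strengths are just floating-point numbers (the values below are made up purely for illustration).

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: analog signal strengths are represented as
    floats, combined with floating-point multiply-adds, then squashed by
    a smooth (sigmoid) activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical example values, purely for illustration.
print(neuron(inputs=[0.5, 0.9, 0.1], weights=[0.8, -0.4, 1.2], bias=0.1))
```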

I agree that we first need very advanced computational, communications, storage, and sensory technology before we can make any serious inroads into neural nets. This is why I believe that by 2030 we will have the basic hardware but not the intelligent software to do it.

Also, some modifications could be included, with some preprocessing and primitive, autonomous, and peripheral functions defined/performed by traditional computing. Maybe some safeguards should not be "written" in the neural programming, but in traditional programming.

I am starting to read this online book that is singularity-aware: http://www.edge.org/documents/ThirdCulture/d-Contents.html
The Third Culture

Beyond the Scientific Revolution

by John Brockman
 
Very interesting chapter:

http://www.edge.org/documents/ThirdCulture/zh-Ch.23.html

My view of what it's going to take to make a thinking machine has changed in recent years. When we started out, I naively believed that each of the pieces of intelligence could be engineered. I still believe that would be possible in principle, but it would take three hundred years to do it. There are so many different aspects to making an intelligent machine that if we used normal engineering methods the complexity would overwhelm us. That presents a great practical difficulty for me; I want to get this project done in my lifetime.

The other thing I've learned is how hard it is to get lots of people to work together on a project and manage the complexity. In some senses, a big connection machine is the most complicated machine humans have ever built. A connection machine has a few hundred billion active parts, all of which are working together, and the way they interact isn't really understood, even by its designers. The only way to design an object of this much complexity is to break it into parts. We decide it's going to have this box and that box and that box, and we send a group of people off to do each of those, and they have to agree on the interfaces before they go off and design their boxes.

...

There's another approach besides this strict engineering approach which can produce something of that complexity, and that's the evolutionary approach. We humans were produced by a process that wasn't engineering. We now have computers fast enough to simulate the process of evolution within the computer. So we may be able to set up situations in which we can cause intelligent programs to evolve within the computer.

...

It's haughty of us to think we're the end product of evolution. All of us are a part of producing whatever is coming next. We're at an exciting time. We're close to the singularity. Go back to that litany of chemistry leading to single-celled organisms, leading to intelligence. The first step took a billion years, the next step took a hundred million, and so on. We're at a stage where things change on the order of decades, and it seems to be speeding up. Technology has the autocatalytic effect of fast computers, which let us design better and faster computers faster. We're heading toward something which is going to happen very soon — in our lifetimes — and which is fundamentally different from anything that's happened in human history before.

People have stopped thinking about the future, because they realize that the future will be so different. The future their grandchildren are going to live in will be so different that the normal methods of planning for it just don't work anymore. When I was a kid, people used to talk about what would happen in the year 2000. Now, at the end of the century, people are still talking about what's going to happen in the year 2000. The future has been shrinking by one year per year, ever since I was born. If I try to extrapolate the trends, to look at where technology's going sometime early in the next century, there comes a point where something incomprehensible will happen. Maybe it's the creation of intelligent machines. Maybe it's telecommunications merging us into a global organism. If you try to talk about it, it sounds mystical, but I'm making a very practical statement here. I think something's happening now — and will continue to happen over the next few decades — which is incomprehensible to us, and I find that both frightening and exciting.
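As a toy illustration of the evolutionary approach described in that excerpt, here is a minimal genetic algorithm in Python. It only evolves bit strings toward an arbitrary target; evolving actual intelligent programs would need far richer representations.

```python
import random

TARGET = [1] * 20                      # arbitrary goal: all ones
POP_SIZE, GENERATIONS, MUTATION = 50, 100, 0.02

def fitness(genome):
    # Count how many positions already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]

    # Crossover + mutation to refill the population.
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        child = a[:cut] + b[cut:]
        child = [1 - g if random.random() < MUTATION else g for g in child]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{len(TARGET)}")
```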
 
pascal said:
The brain is a neural network with lots of analog connections. Maybe these analog connections/operations can be simulated with FLOPS.

I believe emulating the brain with computer hardware is the wrong way of going about things. Just for starters, a computer will probably never be able to compete with brains on a neuron-by-neuron basis; there are just too many of the little buggers, with too many cross-connections.

The brain evolved out of a specific need, using the tools available to nature. There's no point in trying to copy it using dead matter... it'll probably just be slow, clumsy, and extremely expensive.
 
The brain is a product of evolution. Most things biological resulting from evolution are NOT optimized, but are usually more generic in design and function. I'm sure we will be able to build computers thousands of times faster than any brain, given time.
 
_xxx_ said:
We do already. Try calculating a Doom3 scene 60 times a second with your brain ;)

I have a problem with your comment: although we might not be able to calculate D3 at 60 fps, we can still do a lot more than any computer, even one with 1,000 or even 1,000,000 times our computational power.

Also, at 60 Hz my eyes hurt looking at the monitor, which kind of does mean we can process 60 fps. 75-85 Hz is good, though.

US
 
Alejux said:
Most things biological resulting from evolution are NOT optimized, but are usually more generic in design and function.

Then you'd better hurry up and inform biomimetics researchers of this fact, for based on the journals I've been reading, they seem to be under the impression that many biological functions are optimized to the extent physics allows (even beyond our technological prowess, I might add). :p

Anyways, bring on the "wetware"...
 
Gump said:
Alejux said:
Most things biological resulting from evolution are NOT optimized, but are usually more generic in design and function.

Then you'd better hurry up and inform biomimetics researchers of this fact, for based on the journals I've been reading, they seem to be under the impression that many biological functions are optimized to the extent physics allows (even beyond our technological prowess, I might add). :p

Anyways, bring on the "wetware"...

Yes, but they're biological. Because of that, they suffer many limitations.

Again, I'll say: lifeforms are not optimized. They have to be highly adaptable for survival and evolution. If you had to travel 100 miles on a road, would you rather do it on a mustang horse or a Ford Mustang?
I'm sure the horse is an infinitely more complex machine, but I'm also sure the car will get from one point to the other much faster. Is a telescope more complex than a human eye? No way! But you don't see faraway galaxies with your naked eye.

If we ever build a virtual brain, it will be optimized for its functions, using things that would NEVER have been available in normal biological evolution. Be it silicon, photonics, quantum computing, or whatever... I'm sure we will get there soon enough.
 
Ah, sorry, I misunderstood you. Basically you're saying that biological functions are not optimized for the particular applications WE desire. In any case, a while back researchers managed to build a small neuron-based computer that could calculate generalized mathematical functions that were input to it. Through further genetic engineering, it might be possible to "grow" our CPUs of the future. Now, whether or not it will be worth it from a performance standpoint by the time this is feasible is another matter.
 
Related to the thread's topic: http://www.kawada.co.jp/global/ams/hrp_2.html

Looks like a manga robot :)
[Image: hrp2_ph01.jpg]
 
If you try to talk about it, it sounds mystical, but I'm making a very practical statement here. I think something's happening now — and will continue to happen over the next few decades — which is incomprehensible to us, and I find that both frightening and exciting.

Now this is a very good quote, and this is inevitable.

As it was in the past, so it will be in the future: a big change, and an unexpected one. Whether it turns out for good or eventually doesn't, I guess we'd better enjoy the good times while they last. ;)
 