How does a computer allocate memory in hardware? (pointers, stack, heap)

How does a computer allocate memory in hardware? (pointers, stack, heap)

How does it move data to and from the registers and main memory?

How does it know when one instruction ends and another begins?
 
How does a computer allocate memory in hardware? (pointers, stack, heap)
Stack and heap are software concepts. Computers deal with physical and (if available) virtual address spaces. I'll try to outline the basics. Some things here are simplified, and the real machinery is in general much more complex.

Data (code is data too) is kept in some ROM or RAM. This memory is attached to a bus (a communication thingie) which lets other devices attached to it (e.g. the CPU) ask for a certain range of data or set data in some range. You can think of physical memory as a bunch of pigeon holes indexed from 0 to however many there are. The CPU sends commands over the bus like "give me whatever is in pigeon hole 16" and then uses the data received. Or it can send the command "store zero in pigeon hole 5" and get that done. If a physical address space is all you've got, this is roughly what happens when you read/write data.
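To make the pigeon-hole picture concrete, here's a minimal sketch, assuming a toy 64-byte memory; bus_read/bus_write are made-up names standing in for bus transactions, not any real API:

```c
/* A minimal sketch of the "pigeon hole" model: physical memory as a flat
 * array of bytes, with bus_read/bus_write standing in for bus
 * transactions. All names here are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define MEM_SIZE 64            /* a toy 64-byte "physical memory" */
static uint8_t memory[MEM_SIZE];

/* "give me whatever is in pigeon hole addr" */
static uint8_t bus_read(uint32_t addr) { return memory[addr]; }

/* "store value in pigeon hole addr" */
static void bus_write(uint32_t addr, uint8_t value) { memory[addr] = value; }

int main(void) {
    bus_write(5, 0);                            /* zero pigeon hole 5 */
    printf("%u\n", (unsigned)bus_read(16));     /* read pigeon hole 16 */
    return 0;
}
```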

In many cases however applications deal with a virtual address space. This lets you address pigeon holes from 0 up to however many your address bits allow. So e.g. a 32-bit address space would let you get/set values in up to 4 gigaholes. ;) But these pigeon holes are virtual; they don't correspond to any real place in physical memory unless someone governs how VA is associated with PA. Translation between PA and VA is managed by the OS but aided by a hardware piece called the Memory Management Unit (MMU). The MMU is a hardware unit that handles lookup tables (LUTs): you say to the MMU "I'm about to use virtual address 1000" and the MMU responds "this is mapped to physical address 10".

In a simple scenario you've got one set of VAs. Every time you malloc memory in your program, something looks into the PA space available and associates a range of PAs with a range of VAs (builds an entry in the LUT handled by the MMU). Somehow, someone has to manage data structures that describe the ranges already used and the ranges that are free, both physical and virtual. This could be a piece of HW or a piece of SW. This LUT usually has multiple levels of indirection called page directories, page tables and/or something else entirely.

In reality the MMU most likely sits between your CPU and the bus, so whenever the CPU sends a request to the bus, addresses are automatically translated from VA to PA based on the attached LUT. The LUT itself is just a bunch of data (potentially in physical memory) that the MMU references (over the bus, obviously) to do the translation. Translation is usually done page by page. In other words: the address space is divided into fixed-size pages (4 KB, 64 KB, others), so the index into the page is the same for both VA and PA. Put differently: the lower bits are the same regardless of address space, and only the higher bits of the VA are translated by the MMU into the PA space.
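Here's a minimal sketch of that split, assuming 4 KB pages and a single-level table; a real MMU walks the multi-level directories/tables mentioned above, and the mapping below is invented purely for illustration:

```c
/* Sketch of page-based VA->PA translation with 4 KB pages. The
 * single-level page_table is a stand-in for the multi-level
 * directories/tables a real MMU walks; the mapping is invented. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                    /* 4 KB pages: 2^12 bytes */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16                    /* toy virtual address space */

/* page_table[virtual page] = physical page */
static uint32_t page_table[NUM_PAGES] = { [1] = 7 }; /* VA page 1 -> PA page 7 */

static uint32_t translate(uint32_t va) {
    uint32_t vpage  = va >> PAGE_SHIFT;     /* high bits: looked up in the LUT */
    uint32_t offset = va & (PAGE_SIZE - 1); /* low bits: pass through unchanged */
    return (page_table[vpage] << PAGE_SHIFT) | offset;
}

int main(void) {
    /* VA 0x1234 is page 1, offset 0x234 -> PA page 7, same offset */
    printf("0x%x\n", (unsigned)translate(0x1234)); /* prints 0x7234 */
    return 0;
}
```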

In multi-process environments you'll have multiple processes dealing with their own VA spaces. What happens when processes are switched back and forth is that the pointer to the LUT the MMU is using changes (so it performs a different translation for different processes).

Free is pretty simple once you wrap your head around this: it just frees the PA and VA ranges (removes the entry from the LUT). Pointers in your C code are values representing VAs. When you read from a given address, the MMU does the translation and provides data from physical memory. Stack and heap are just separate ranges of memory your application is using: the stack is a potentially fixed range that you use more or less of, in a linear fashion, based on how deeply you call functions. The heap is everything else.
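As a quick illustration (plain standard C, nothing assumed beyond that): to the MMU both of the variables below are just virtual addresses, they merely come from different ranges:

```c
/* Both variables below are just virtual addresses to the MMU; they
 * simply come from different ranges of the process's address space. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int local = 42;                    /* stack: lives in this call frame */
    int *heap = malloc(sizeof *heap);  /* heap: allocator hands out a VA range */
    if (heap == NULL)
        return 1;
    *heap = 42;
    printf("stack VA: %p, heap VA: %p\n", (void *)&local, (void *)heap);
    free(heap);                        /* returns the range to the allocator */
    return 0;
}
```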

How does it move data to and from the registers and main memory?

When the bus responds with data from a given address, the CPU has to store that somewhere. It does so in its register bank.

How does it know when one instruction ends and another begins?

Instructions are data: the CPU reads this data in a sequential fashion (unless you branch in your code) and interprets it as commands, e.g. 0 would mean move data between registers, 1 would mean add the values from two registers, 2 would mean multiply, and so on.
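A toy fetch-decode-execute loop in that spirit; the fixed 3-byte encoding and the opcode numbers are invented for illustration, but the principle holds: the ISA defines the encoding, so the decoder always knows how long the current instruction is and where the next one begins:

```c
/* Toy fetch-decode-execute loop matching the opcode idea above. The
 * fixed 3-byte (opcode, dst, src) encoding is invented for illustration;
 * real ISAs define their own encodings, which is how the decoder knows
 * where each instruction ends. */
#include <stdint.h>
#include <stdio.h>

enum { OP_MOV = 0, OP_ADD = 1, OP_MUL = 2, OP_HALT = 3 };

int main(void) {
    uint8_t program[] = {              /* instructions are just data */
        OP_ADD, 0, 1,                  /* r0 += r1 */
        OP_MUL, 0, 0,                  /* r0 *= r0 */
        OP_HALT, 0, 0,
    };
    uint32_t regs[4] = { 2, 3, 0, 0 };
    uint32_t pc = 0;                   /* program counter */

    for (;;) {
        uint8_t op  = program[pc];     /* fetch */
        uint8_t dst = program[pc + 1];
        uint8_t src = program[pc + 2];
        pc += 3;                       /* fixed-length instructions, so the
                                          next one starts 3 bytes later */
        if      (op == OP_MOV) regs[dst]  = regs[src];
        else if (op == OP_ADD) regs[dst] += regs[src];
        else if (op == OP_MUL) regs[dst] *= regs[src];
        else break;                    /* OP_HALT */
    }
    printf("r0 = %u\n", (unsigned)regs[0]); /* (2+3)*(2+3) = 25 */
    return 0;
}
```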

In general: this is a very, very broad topic that won't be explained on the board in detail. Get a book on computer architecture and read it.
 
How does a computer's CPU move instructions onto and off the stack and registers?

How does it move one instruction and not something else?

How are things like pointers, references and constants handled in hardware?

What is the point of interrupts, dumps and unused instructions?

Can you move data from one location to another without an MMU or an instruction set?

Why is out-of-order processing faster than in-order processing?
 
There seems to be a very significant amount of basic CS education needed. The terminology looks like it's getting mangled to a point that it's going to be difficult to know how remedial the answers would need to be. Any thoughts on getting a book or maybe looking into an intro course?

1) The most common case is not doing this.
2) If it isn't moving that instruction, it's moving something else? This is basically saying that they'd probably toss a CPU that moved something it wasn't signalled to move. CPUs are complex systems designed to take a set of inputs and react in a way specific to what those inputs were. Either it does what was intended, or the thing is broken.
3) A good portion of this is determined at a higher level, and these items are CS concepts. There are a lot of places in computation where there are pointers, references and constants, so the context matters.
4) The world is unpredictable, not everything can wait, things are imperfect, things can be lost, and sometimes plans don't work out.
5) There are CPUs without an MMU, but I can't think of how one can conceptually have a full CPU without some predetermined set of signals that can be used to tell it to perform some kind of transition. A limited unit could be made to twiddle bits, but the very use of terms like "data" and "location" assumes an organizational scheme and purpose that demands more than that.
6) Do we all know enough about what it means to "process", or perhaps more appropriately "execute", before worrying if it's in-order or not?
 
Oh my, so many questions. You need to take computer science/programming classes, because that will answer your questions more thoroughly and allow you to build a better mental image than just answers to a few questions like these.

I do not know of any such courses available online, but maybe someone else does.
You may try to search it in your favorite engine too, but I'm not even sure which words to search for ^^
 
Dude, you don't understand the very basic concepts. Buy the books mentioned below (the top two at least), read them, come back with questions. It's impossible to answer questions that are simply wrong. How can a question be wrong? How would you answer "is there a way to eat a cat using an ear" or "how do I glue a pair of cardboard boxes to my dog so he can fly under water"?

William Stallings - Computer Organization and Architecture
Andrew Tanenbaum - Operating Systems: Design and Implementation
John Hennessy - Computer Architecture: A Quantitative Approach
 
To a CPU, the world is a simple place. It has a few boxes to store things, called registers, a large selector switch to select addresses and a pneumatic tube mail receptacle.

At the start of the day, it will set the selector switch to zero and press the "receive mail" button. After a while, a message will drop in the receptacle.

This message is encoded, so it takes its code book and looks up what this code means. It says: "get what is in 1000 and put it in box 0".

As it doesn't want to forget which coded instructions have been processed and which one is next, it writes the number 0 in the book marked "PC". This is the Program Counter, or something.

After that, it dials 1000 on the big switch and presses the "receive mail" button. After a while a new message drops in the receptacle, it takes this and puts it in the box labeled: "0".

Satisfied about a job well done, it looks in the PC book, and sees that the next instruction will be on address 1. It sets the switch and presses the button.

The next code turns out to be the instruction: "increase box 0 with 1". So, it takes the message out of box 0 and sees that it contains the number 41. It scratches out that number and writes 42 in its place. And puts it back in box 0.

The next instruction tells it to get what is in address 1001 and put it in box 1. Easily done.

But the instruction after that is a bit more complex: it says: "store what is in box 0 on the address that is in box 1". So, the CPU takes the message out of box 1, and sees that it contains the number 2000. It dials 2000 on the big switch, takes the message out of box 0, puts it into the receptacle, and presses the "send mail" button. Phew!

So what we did was:

move what is inside 1000 to box 0
increase what is inside box 0
move what is inside 1001 to box 1
store what is inside box 0 at the address that is inside box 1

That last bit is called indexing. It means that you don't specify the address directly, but that you say: the address is in that box.

And that's actually all the CPU has to know: where it can find the address. You can actually specify really complex things, like: "add what is inside the address inside box 5 plus what is inside box 3 to the address that is inside the address that is in box 2". Phew!
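For the curious, here's the four-step mail program as a C sketch: a plain array stands in for the mail system and a small box array for the registers (all names invented). The last store uses the value in box 1 as the address, which is the indexing described above:

```c
/* The four-step mail program above as a sketch: the 'memory' array plays
 * the mail system, 'box' plays the registers. All values are from the
 * story above; nothing here is a real machine encoding. */
#include <stdint.h>
#include <stdio.h>

static uint32_t memory[4096];
static uint32_t box[2];           /* "box 0" and "box 1" */

int main(void) {
    memory[1000] = 41;            /* the first message */
    memory[1001] = 2000;          /* the address for the indexed store */

    box[0] = memory[1000];        /* move what is inside 1000 to box 0 */
    box[0] += 1;                  /* increase what is inside box 0 */
    box[1] = memory[1001];        /* move what is inside 1001 to box 1 */
    memory[box[1]] = box[0];      /* store box 0 at the address in box 1 */

    printf("%u\n", (unsigned)memory[2000]); /* prints 42 */
    return 0;
}
```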


It is the job of the operating system to specify such an address if you want to store stuff. Simply because your program isn't the only thing that uses those addresses.

And an address can be many things: it can be a memory location, but also a key on your keyboard, or a pixel on the screen. It's all the same to the CPU.


What you might have noticed is that the CPU can only do one thing at a time, and has to wait a lot for the mail system to transport the messages. That's why we call those things "execution units" nowadays and put more than one in a CPU.

Actually, we don't call them CPUs anymore, but cores, and we put multiple of those in a single CPU! All to make sure that at least one of them is doing something at any time.

But still, that puts an even larger strain on the mail system, as now it has to service all those execution units on all those cores, instead of only the one.

So we figured out that it would be much better if they all worked on their own part of the problem. So: not doing everything step by step, but in batches. And we call that out-of-order execution.


The star of the show in all this is not the simple CPU, but rather the mail system. Or as we call them: the Memory Manager and the cache units.

The Memory Manager does as you would expect: it manages what goes where and who does what. And a cache unit is like your own warehouse: they store the things you might need, just in case. So you don't have to wait for it to come all the way from some remote address.

To do that, the cache units don't order single addresses. They order blocks. Like, if the CPU wants what is in address 1000, they order the contents of address 1000 and all consecutive addresses up to, say, 1010. So if the CPU wants to know what is in 1001, or 1002, etc., they already have it stored.
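A tiny sketch of the block idea, assuming 64-byte cache lines (a common size today, though the exact size varies by CPU): every address maps to the line-sized block that contains it, so its neighbours come along for free:

```c
/* With 64-byte lines (a common size, assumed here), every address maps
 * to the line-sized block containing it, so fetching one address drags
 * its neighbours into the cache as well. */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u  /* bytes per cache line; varies by CPU */

int main(void) {
    uint32_t addr = 1000;
    uint32_t line_base = addr / LINE_SIZE * LINE_SIZE;  /* start of the block */
    printf("address %u lives in line [%u, %u)\n",
           (unsigned)addr, (unsigned)line_base,
           (unsigned)(line_base + LINE_SIZE));          /* [960, 1024) */
    return 0;
}
```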

To speed things up even more, more and larger cache units are added.


That's how it works.
 
Dude, you don't understand the very basic concepts. Buy the books mentioned below (the top two at least), read them, come back with questions. It's impossible to answer questions that are simply wrong. How can a question be wrong? How would you answer "is there a way to eat a cat using an ear" or "how do I glue a pair of cardboard boxes to my dog so he can fly under water"?

William Stallings - Computer Organization and Architecture
Andrew Tanenbaum - Operating Systems: Design and Implementation
John Hennessy - Computer Architecture: A Quantitative Approach

I know a fair amount about the subject, I just need to know more about it.
I read some things on ICs and software. I just wanted a better understanding of how the instruction decoder, registers, interrupts, and other RTL-level components work.

Like, how does a binary computer compute vectors, and why is SIMD or fused MAC important in a GPU?
 
Jon Stokes, formerly of Ars Technica, wrote a book titled "Inside the Machine" that's probably better for someone not in college than a traditional computer architecture textbook.
 
I know a fair amount about the subject, I just need to know more about it.
No you don't, unless you compare yourself to a grocery store clerk. But that's not the correct POV. Your questions make no sense to folks who understand the subject. This does not come from hubris but from the sheer fact that you simply don't know the subject enough and your questions expose this fact.

I read some things on ICs and software. I just wanted a better understanding of how the instruction decoder, registers, interrupts, and other RTL-level components work.
You don't have the necessary background knowledge to do that. Instead of being combative, take our word for it and look into the resources provided above. Hannibal's book "Inside the Machine" is a good start, as mentioned above. It's a super fun read and will give you some of the much-needed basics. I assure you that your questions will be much more lucid once you do that.

Like, how does a binary computer compute vectors, and why is SIMD or fused MAC important in a GPU?
You know tidbits and buzzwords. Trust the people, there's a lot more groundwork you need to do, really.
 
No you don't, unless you compare yourself to a grocery store clerk. But that's not the correct POV. Your questions make no sense to folks who understand the subject. This does not come from hubris but from the sheer fact that you simply don't know the subject enough and your questions expose this fact.

Why are you criticizing me for asking questions? Why does it matter to you why I am asking them? It does not pertain to you.

You don't have the necessary background knowledge to do that. Instead of being combative, take our word for it and look into the resources provided above. Hannibal's book "Inside the Machine" is a good start, as mentioned above. It's a super fun read and will give you some of the much-needed basics. I assure you that your questions will be much more lucid once you do that.

And you know this how? Your replies keep getting more insulting... and more silly.

You know tidbits and buzzwords. Trust the people, there's a lot more groundwork you need to do, really.

Your opinion is noted.

Please stop with the personal attacks. All I am doing is asking questions for a better understanding of the subject.

You don't have to be insulting. We are all adults here.

Can a series of flip-flops read bit values from both the left and right ends?
Can you have a specialized logic block in a much larger proprietary design without violating a rights restriction of the IP?
What is the advantage of having a truly open-source IP (like RISC-V) vs a licensed IP (like the ARM A57)?
 
Sigh.

You're asking questions that do not have answers, because you're asking broken questions. That's what Dominik is trying to tell you, and he isn't doing it in a combative way. The naivete of your questions betrays your lack of understanding of the core subject -- that's what he's getting at.

Let's put it a different way: the "Dunning Kruger" effect, which states:
The Dunning–Kruger effect is a cognitive bias manifesting in two principal ways: unskilled individuals tend to suffer from illusory superiority, mistakenly rating their ability much higher than is accurate, while highly skilled individuals tend to rate their ability lower than is accurate. In unskilled individuals, this bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude. Skilled individuals tend to underestimate their relative competence, erroneously assuming that tasks which are easy for them are also easy for others.[1]

The bolded section is mine, and exactly outlines the challenge you have. You are so completely far out in left field with these questions that you can't realize how much you don't know about the subject.

This isn't an attack, it's a statement of pointed truth. If you take it as an attack, you have by definition proven your bias in the subject matter.
 
Please stop with the personal attacks. All I am doing is asking questions for a better understanding of the subject.

You don't have to be insulting. We are all adults here.

Can a series of flip-flops read bit values from both the left and right ends?

He wasn't attacking or insulting you; he tried to help you, and then after your second set of questions he assessed your understanding of things, which is usually necessary to help someone. His assessment was that you didn't know the basics properly and that your questions were non- or semi-sensical. To be honest, I thought the same thing, or that you were trolling/clowning with your questions.

For example, a flip-flop has no left or right end, and if that is what you think, you're focusing too much on the picture and not on the functionality as defined by the specific implementation (SR, JK, and if I'm not mistaken, D flip-flops). Anyway, my suggestion is that you review what you claim to know, because unless there is a language issue, your questions do indeed point to you not really understanding the material.
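To illustrate the point about wiring rather than "ends", here's a sketch of a series of D flip-flops modeled as a shift register in C (a software simplification; a real flip-flop is an edge-triggered circuit, not a function call). Data moves in one direction only because of how the stages are wired together; a bidirectional shift register simply adds extra wiring and a direction control:

```c
/* Sketch: a "series of flip-flops" modeled as a shift register of D
 * flip-flops. This is a software simplification (a real flip-flop is an
 * edge-triggered circuit, not a function call). Data moves in one fixed
 * direction because of the wiring between stages, not because any single
 * flip-flop has a left or right end. */
#include <stdint.h>
#include <stdio.h>

#define STAGES 4

/* One rising clock edge: every D flip-flop latches its input at once. */
static void clock_tick(uint8_t q[STAGES], uint8_t serial_in) {
    for (int i = STAGES - 1; i > 0; i--)
        q[i] = q[i - 1];          /* stage i samples stage i-1's output */
    q[0] = serial_in;             /* the first stage samples the input pin */
}

int main(void) {
    uint8_t q[STAGES] = { 0 };
    uint8_t bits[] = { 1, 0, 1, 1 };
    for (int t = 0; t < 4; t++) {
        clock_tick(q, bits[t]);
        printf("t=%d: %d%d%d%d\n", t, q[0], q[1], q[2], q[3]);
    }
    return 0;
}
```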
 
Sigh.

You're asking questions that do not have answers, because you're asking broken questions. That's what Dominik is trying to tell you, and he isn't doing it in a combative way. The naivete of your questions betrays your lack of understanding of the core subject -- that's what he's getting at.

Let's put it a different way: the "Dunning Kruger" effect, which states:


The bolded section is mine, and exactly outlines the challenge you have. You are so completely far out in left field with these questions that you can't realize how much you don't know about the subject.

This isn't an attack, it's a statement of pointed truth. If you take it as an attack, you have by definition proven your bias in the subject matter.

Type what you want, random person on the internet. If intellectually bullying strangers on the internet gives you some form of validation, then go right ahead. You can attack me personally all you want if you like. Calling me inept and arrogant (the Dunning-Kruger effect is a mild form of arrogance.)
I am not an IC fab engineer, nor do I play one on TV, so me being ignorant on the subject matter should be no surprise to you. Hey, if I knew more than you (or even a third as much), why would I be asking you so many inept questions?

Back on topic.

Why doesn't Intel just ditch the old fossil x86 architecture and license ARMv8 like AMD does?
 
Why doesn't Intel just ditch the old fossil x86 architecture and license ARMv8 like AMD does?
Because the old fossil is their own. They produce some of the fastest processors on earth with it. It gives them extremely valuable lock-in on various industries/segments with its no-compromise backwards compatibility. Bay Trail has proven x86 can run with ARM pretty well in power scaling too.

Intel isn't a stranger to making ARM processors though.
 
...(the Dunning-Kruger effect is a mild form of arrogance.)
Sigh.

No, it isn't. But your continually defensive posture will only ever let you see anything we give you as "an attack", so I'm simply not going to respond to any more of those questions. I suspect that anyone else who might have attempted to help you with those questions will similarly stop helping, because you fail to acknowledge your current abilities.

To your question on ARMv8 licensure -- I must agree with Swaaye. At this moment Intel has no specific need to ditch x86. It would be akin to Microsoft ditching Windows, really. Both have an amazing installed base, both legacy and future, both own the namesake, and both have thousands upon thousands of highly skilled engineers to carry their wares into the future.

They both also have competition, and so both are diversifying their portfolios to meet those challengers on uncomfortable turf. Intel and Microsoft both are having a difficult time in the mobile and super-low-margin spaces, but they both also have the technical talent to (arguably) make a competitive product in those spaces.
 
Sigh.

No, it isn't. But your continually defensive posture will only ever let you see anything we give you as "an attack", so I'm simply not going to respond to any more of those questions. I suspect that anyone else who might have attempted to help you with those questions will similarly stop helping, because you fail to acknowledge your current abilities.

And my ability is important to you why? Look, if you want to criticize me on the limitations of my "current abilities", then PM me and we'll discuss it somewhere else. We're all adults here.

To your question on ARMv8 licensure -- I must agree with Swaaye. At this moment Intel has no specific need to ditch x86. It would be akin to Microsoft ditching Windows, really. Both have an amazing installed base, both legacy and future, both own the namesake, and both have thousands upon thousands of highly skilled engineers to carry their wares into the future.

I get the legacy argument. But x86 is a large, power-hungry chip. The current trend is smaller, cooler and cheaper.

They both also have competition, and so both are diversifying their portfolios to meet those challengers on uncomfortable turf. Intel and Microsoft both are having a difficult time in the mobile and super-low-margin spaces, but they both also have the technical talent to (arguably) make a competitive product in those spaces.

I am just a random person on the internet posting on a message board with other random people on the internet. Relax. It's not like I am asking questions about the US tax code.

Yeah, Intel can dump a massive budget into anything and make it competitive. Why not make a multicore ARMv8 with a legacy Intel x86 core?

Microsoft has real competition with Google's Android OS. The Chromebooks are an encroachment into Microsoft's backyard.
 
Last edited: