Does anyone actually know how the ones and zeros of binary code are translated into the things we see on the screen?
I get binary numbers, but how does a computer work out that you mean "make this bit blue and this bit black" or whatever? If 0 equalled white and 1 equalled black, how does it know you didn't just mean one white dot, one black dot, one white dot, one black...
What separates one binary chunk of code from another, like full stops or something?
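To make the question concrete, here's a toy sketch of what confuses me (a made-up example, not how any real system necessarily does it): the exact same bytes seem to mean completely different things depending on how a program decides to chunk and interpret them.

```python
# Four raw bytes -- just ones and zeros underneath.
data = bytes([72, 105, 255, 0])

# Interpretation 1: treat each byte as an ASCII character code.
as_text = data[:2].decode("ascii")
print(as_text)  # prints "Hi"

# Interpretation 2: treat each byte as a grayscale pixel brightness (0-255).
as_pixels = list(data)
print(as_pixels)  # prints [72, 105, 255, 0]
```

So the bits themselves don't carry any "this is text" or "this is a pixel" marker, which is exactly why I don't get where the separation comes from.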
I can't find a book that'll tell me.
It's driving me nuts not knowing.