User:Vasiln/Goblin Logic 3
Optimization: Opcodes revisited
Right now, we have seven operation codes to spare: 0000, 1010, 1011, 1100, 1101, 1110, and 1111-- plus 15 spare arguments for XOR and 8 spare arguments for bit shift or bit rotate. Way more than we need. At the same time, we don't have nearly enough memory. We could do a couple of things: we could add instructions, so that, for instance, it wouldn't take two instructions to add two numbers. We could also switch to a 3-bit instruction and a 5-bit address.
Or, we could do both. Let's redefine our operations:
- 000xxxxx Write from xxxxx to register a
- 001xxxxx Write from xxxxx to register b
- 010xxxxx Write from register a to xxxxx
- 011xxxxx Write from register b to xxxxx
- 100xxxxx Register operations, receives xxxxx as specifying argument
- 101xxyyy Bit shifting register operations, receives xx as specifying argument, yyy as bit value to shift
- 110xxxxx Jump if not b to address xxxxx
- 111xxxxx Reserved (Halt?)
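To make the field layout concrete, here's a minimal Python sketch of decoding this 8-bit format (the function and table names are mine, not part of the design):

```python
# Sketch of the 3-bit-opcode / 5-bit-argument instruction format above.
OPS = {
    0b000: "write xxxxx -> register a",
    0b001: "write xxxxx -> register b",
    0b010: "write register a -> xxxxx",
    0b011: "write register b -> xxxxx",
    0b100: "register operation (xxxxx selects which)",
    0b101: "bit-shift operation (xx selects which, yyy is shift amount)",
    0b110: "jump if not b to address xxxxx",
    0b111: "reserved",
}

def decode(instruction: int):
    """Split an 8-bit instruction into its opcode and argument fields."""
    opcode = (instruction >> 5) & 0b111      # top 3 bits
    arg = instruction & 0b11111              # bottom 5 bits
    if opcode == 0b101:                      # 101xxyyy: split the argument again
        return opcode, (arg >> 3) & 0b11, arg & 0b111
    return opcode, arg

# Example: 110 00100 = jump if not b to address 00100
print(decode(0b11000100))  # (6, 4)
```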
Two of these need specifying.
100xxxxx Register operations
- 10000000 XOR a and b -> a
- 10000001 Add: XOR a and b -> a, write 9-bit carry word to buffer, XOR a and carry word -> a, write 8-bit carry word to buffer, with carry(8) = output of first XOR(8) OR output of second XOR(8)
- 10000010 Add with carry: Add a and 0000000x where x is equal to carry(8), add a and b, carry(8)pre-b OR carry(8)post-b -> carry(8)
- 10000011 XOR a with 11111111
- 10000100 Add 1 to a (XOR a and 00000001, XOR a and carry word)
- 10000101 Compare a to b, output to b
- 10000110 Subtract a from b -> a (XOR a with 11111111, XOR a with 00000001, XOR a with carry, XOR a with b, XOR a with carry, invert carry(8))
- 10000111 Subtract a from b with carry -> a (subtract carry(8) from b, subtract a from b)
- 10001000-10011111 Reserved (24 choices!)
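The Add operation above is the classic XOR-plus-carry-feedback loop. A Python sketch, purely to check the logic (the function name is mine; the real thing is doors and goblins):

```python
def add8(a: int, b: int):
    """Add two 8-bit values the way the Add opcode describes: XOR for the
    sum, with the carry word fed back in until no carries remain.
    Returns (sum, carry_out), where carry_out is the 9th bit, carry(8)."""
    carry_out = 0
    while b:
        carry_word = (a & b) << 1        # bits that would generate a carry
        a = (a ^ b) & 0xFF               # sum without carry
        carry_out |= carry_word >> 8     # any bit pushed past bit 7 is carry(8)
        b = carry_word & 0xFF
    return a, carry_out
```

For example, `add8(200, 100)` returns `(44, 1)`: the 8-bit result wraps, and carry(8) records the overflow for add-with-carry to pick up.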
101xxyyy Bit shifting register operations
- 10100yyy Arithmetic right shift 0-7 units (y units)
- 10101yyy Arithmetic left shift 0-7 units (y units)
- 10110yyy Right rotate 0-7 units (y units)
- 10111yyy Reserved
Note that there's no reason to send a y of 000. I suppose 10100000, 10101000, and 10110000 are also reserved values. We'll try to save 10111yyy for an operation that requires a 3-bit argument. Maybe like "write true" or "write false" or "bitwise XOR."
So I just bit the bullet. We made a carry byte; we established a carry bit. We also built two constants into our system: 1 and -1. We've succumbed to 5-bit addressing, but we still have plenty of reserved instructions. We can add, subtract, multiply, or divide, with a smaller instruction count. Finally, we're ready to design the kind of memory to be used by our registers:
########
<h^hh^h<
########
Lol. All of our functionality is built into our operations circuits or our buffer. Our register memory doesn't need to do anything special.
A note on the notion of "Turing complete"
To be Turing complete, a computer has to have four things: a NAND, a NOR, a jump if, and infinite memory.
The idea is that a computer with all of those things can run any program that any computer imaginable can run. Those are our minimum specifications.
You might notice that we lack one of those criteria. That's okay: so does every other computer that exists. Typically, when people talk about being "Turing complete," they ignore the infinite memory part.
So if all we need are 3 operations, why do we have the rest? If you've been reading the programs so far, you might imagine that it's for reasons of speed and memory space. I could write an XOR with a NAND and NOR. I could write a bit shift with an XOR and a jump if-- it would just be long and slow. I could redesign my look-ahead unit in software. I could store carry bits in main memory.
With only 32 bytes, operating under circumstances that are less than ideal-- I need all of the speed and memory space I can get.
Optimization: Memory Addressing Revisited
We spent a lot of time optimizing our adder, and ended up with a parallel add/look-ahead that was very fast-- less than half the time of our parallel ripple, and far faster than a full adder. What I forgot to mention is that there was a lot of stupid delay built into the add just by virtue of requiring two memory accesses (two XOR operations). We've been assuming that a single goblin will address all of our bits in a particular bit location, but that takes a lot of time: tripping the hatch takes 100 ticks just to get him started, and with 32 bits to address, we're probably looking at something like 30 moves to make it to our addressable bit-- so we've got around 400 ticks just to read memory/buffer, followed by 100 ticks of latency for our memory.
We can cut this in half by addressing our bits in parallel.
######### #### ### ## ######d ## # #### ^### # # # #hhhhh#hX# # null path, if not addressed # # #### # # # ########d ## # # ddddddd^### # write true to memory ## # ########hX# # # ## ########### # #h#d ########d ## ##^# ddddddd^### # reset position write true to buffer #h## ########hX# # #X## ########### # #h## ########d ## #^## #######^### # memory true ##hd# ddddddd#hX# # write true to buffer #dh## ########### # ##^## ddddddd#d ## memory false write false to buffer #h## #######^#### #X###########hX# ### ####
This is memory bit 11111(0): that means it is addressed by our memory address being 11111, and it reads/writes buffer bit location 0. For memory at address 01111, we would see the first door of each path replaced with a hatch, and the first hatch of our null path replaced by a door.
We've replaced our begin hatch with a begin door. When it gets tripped, the goblin begins operations. That lowers latency, so long as we can guarantee that the reset path doesn't become available before that door closes. He returns r/w t/f or null via the top NAND, then returns to reset. Longest time to completion: 12/4, 176 ticks, followed by another 100 ticks from memory latency. Time to reset isn't important-- it can happen simultaneously with other operations.
You might notice two problems. Before, we were using a goblin to tell us if memory r/w was complete. Then, he had eight doors to walk through. Now, to do the same thing, he'd need 256 doors to walk through. The sheer distance would negate any speed benefit of independently addressed memory. How can we work around that?
The other problem is that this memory is 19 tiles wide. We can't fit 8 of these in a 3 square embark. Let's do some rearranging.
### ###d## Set=True ##^hh^## False True ######h#d##h# Set=False ## d#hX####hX# # ##^###hd^#### Write True to Memory # # ### ###d ## # # h## ###### # # # h # ###hX# # # # h # hh^### # Write False to Memory # # h # ###d # # ##h # ###### # #h#d ## ###hX# # ##^##d# dd^### # Write True to Buffer #h#h#d# ###d # #X# #d# ###### # # #d#d# ###hX# # # #^##d#dh^### # Write False to Buffer ##h#h######d ## #### ### #########
I've consolidated the addressing doors-- in fact, I have room for one more addressing door (which we just might use). I've rearranged the r/w functions to optimize for writes to buffer instead of writes to memory, because the former are going to occur with more frequency than the latter, but it would be a good idea for me to perform reverse optimization on one or two words at the end of the memory space-- the 31st and 32nd bytes. Our memory's been curved around a little bit (at the top of the diagram) but it works the same.
I could make it smaller, and thus faster, by using z-levels, but I don't want to do that. There is a lot to be said for keeping your design understandable at a glance. By the time I'm done, even I won't understand what I'm saying or what I've done.
One memory cell needs 2 goblins, 42 doors/hatches, and 63 mechanisms, if I counted right. The entire memory space will thus require 64 goblins, 1344 doors/hatches, and 2016 mechanisms.
Our memory cell is large-- 16 wide, 21 tall. Sides are just walls. 8 of these, side by side, would take up 121 tiles. That means that to keep a nice looking computer, we need a 3-wide embark. In that space, I could keep 8 and expand my memory cell by 2 tiles, which would give me a whole mess of addressing bits. A 2 square tall embark would give us enough room for 4 of these. We might just need that-- each byte needs to be separated from the other by 2 dead z-levels, remember (could get around that with offsets, but that requires visualizing everything at once). So we could fit our entire memory space inside 24 z-levels of one map.
I'd rather keep it subterranean. I don't like the idea of a roc flying in and killing my memory. I guess I could get away with bridged pitting squares instead.
Let's consider build order too. We want any linked hatches or doors to be nearby in the build queue-- so that we can find them with a minimum of keystrokes. In our case, we can't build address doors until we've built an address buffer, buffer T/F doors until we've built a buffer, the begin door of our reset until we've built a cycle manager, or continue doors until we've built a r/w manager-- but everything else in this circuit just links to itself.
What about the problem of knowing when r/w is complete? By separating our continues into r/w and null, we guarantee that each bit location will only return a single r/w signal (and 31 null signals). We can still tell, with just eight doors. But each door has to be linked to 32 different r/w completion points.
Optimization: Memory Latency
So far, we've only been linking our operations to the true part of memory. We've been using FALSE and NOT TRUE interchangeably. But they're not quite interchangeable-- not really. Consider the following two circuits:
###      #####
###d#### # h #
Xh^hh^hX r###c
####d### # d #
###      #####
Let's link the leftmost plate of our memory (TRUE) to the door, and the rightmost plate of our memory to the hatch. Starting state of memory is TRUE. The door is open, and the hatch is closed.
What happens when we open the northern memory door at tick 0, allowing our goblin to move to false? At tick 5, he leaves his TRUE pressure plate. At tick 33, he reaches the FALSE pressure plate. The hatch opens at 33. The door doesn't close, however, until 105.
It happens in reverse, the other way. When we set him to true at tick 0, the hatch closes at tick 105, and the door opens at tick 33.
Our memory has two different latencies and a refractory period. The latency to switch to TRUE is the time it takes a goblin to move from FALSE to TRUE-- about 33 ticks. The latency to switch to FALSE is the time it takes a goblin to move off of TRUE plus 100 ticks-- about 105 ticks. The refractory period is the time after a write to memory during which we cannot write to memory again-- usually, 110 ticks (time for the door to close from a delay 10 goblin running a trip plate).
Design that takes advantage of this disconnect between FALSE and NOT TRUE gets really byzantine: you need to combine ORs with NANDs in the same design, make twice as many arms, etc. Logic was never designed to differentiate FALSE from NOT TRUE. You can't use it to optimize for both true and false, but if you expect one value more frequently than the other, you can optimize for that; if one state is timing dependent while the other is not, you can optimize for that.
One example of how we're optimizing for this lies in our memory timer-- the goblin that determines when all memory R/W has completed. We could give him eight hatches instead of eight doors, each of which started open (probably from his position on reset) and was tripped by a goblin completing a r/w. But that would introduce additional latency to our r/w: he wouldn't make it through the path until 100 ticks after r/w completion.
Cycle Manager
Every cycle of our computer follows a certain flowchart. Let's spell it out.
1) Copy memory at pointer to buffer
2) Copy buffer to instruction buffer
3) Increment pointer
4) Evaluate instruction
5) Go to 1
That's all. The complicated part of it is "evaluate instruction," which is sometimes going to involve writing to various places, sometimes reading from various places, sometimes running operations to copy to buffer followed by operations to copy from buffer to register. But our basic cycle manager is very simple:
############### ## h## # ############4##### #d#hh1h^h23h^h#h#hX# ##r#############^### ##h#h#############h# #X###d ## ###################
r is reset. Each number corresponds to a pressure plate beginning that process. Each ^ corresponds to a pressure plate where we wait for completion of that operation. Notice that 2 and 3 can happen at the same time (we don't really need a separate pressure plate.)
I've added two other things. One is a lever-operated door before reset. With this, we can permit operation or halt the system (very important if we accidentally program an endless loop!). I didn't design it very well, because it doesn't limit the goblin to a single tile, but it works. The other is a "back door" to the "execute instruction" part of the loop. This is important, because we need to get our values into memory to program the computer. We're going to do that by manually altering the instruction register and register a to write our program where we want it. We use the back door by opening it and closing off, via hatch, normal progress along our cycle. Again, the design was lazy and needs improvement.
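The five-step cycle can be sketched as a toy interpreter. This is an illustration under my own simplifications (loads take the 5-bit argument as an immediate value, and only a handful of opcodes are wired up), not the full machine:

```python
def run(memory, pointer=0, a=0, b=0):
    """Toy interpreter of the flowchart above for a few opcodes:
    000 load a, 001 load b, 100 00000 XOR a^b -> a,
    110 jump-if-not-b, 111 halt (returns a)."""
    while True:
        instruction = memory[pointer]       # 1-2) fetch into the buffers
        pointer = (pointer + 1) % 32        # 3) increment the 5-bit pointer
        op, arg = instruction >> 5, instruction & 0b11111
        if op == 0b000:                     # 4) evaluate instruction
            a = arg
        elif op == 0b001:
            b = arg
        elif op == 0b100 and arg == 0:
            a = a ^ b
        elif op == 0b110 and b == 0:
            pointer = arg                   # jump if not b
        elif op == 0b111:
            return a                        # halt
        # 5) go to 1
```

For instance, the four-instruction program "load a with 5, load b with 3, XOR them, halt" (`[0b00000101, 0b00100011, 0b10000000, 0b11100000]`) returns 6.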
Special Memory: Getting stuff done
So far, our computer doesn't do anything. I mean, it can compute anything we throw at it, but it doesn't kill goblins or irrigate our farms or anything like that. What we need are special bytes: output and input.
Output
Output bytes look like any other kind of byte (yeah, I'm going to stop showing you that same picture of a memory bit now.) The only difference is that the pressure plate in the memory is linked to more than just r/w functions-- it's linked to existing devices. For instance, a bridge could raise when the goblin moves into TRUE position and lower when the goblin moves off of TRUE position.
You can see that a single output byte actually can have eight different outputs. How do we distinguish between outputs? If output(0) opens our floodgates and output(1) raises our bridges, how do we do one without doing the other?
The answer doesn't lie in mechanics or design, but in programming.
Load 11111111 into b
Arithmetic right shift b 7 times (b is now 00000001)
Arithmetic left shift b 1 time (b is now 00000010)
Load output into a
XOR a and b (if a(1) is 1 then a(1) becomes 0; if a(1) is 0 then a(1) becomes 1)
Write a to output
We've just toggled the status of our second bit of output memory.
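In conventional terms, the toggle program builds a one-hot mask by shifting, then XORs it in. A quick Python check (names are mine):

```python
def toggle_bit(output: int, n: int) -> int:
    """What the program above does: build a one-hot mask by shifting,
    then XOR it in to flip output bit n without touching the others."""
    b = 0b11111111
    b = b >> 7           # b is now 00000001
    b = (b << n) & 0xFF  # b is now a mask with only bit n set
    return output ^ b
```

So `toggle_bit(0b00000000, 1)` gives `0b00000010`, and toggling again restores the original value.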
What if we don't want to toggle it, but just lower it? First, we need to know the state of output.
read output into b
left shift b 6 times
right shift b 7 times
jump if not b (else)
execute toggle
Or raise it?
...
right shift b 7 times
load 00000001 into a
subtract a from b
jump if not b
I told you-- we were going to do a lot more bit shifts than add-with-carries.
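Both bit-reading snippets use the same shift trick, which generalizes: left shift to discard everything above the bit you want, right shift to discard everything below it. Sketched in Python (my own function name):

```python
def read_bit(output: int, n: int) -> int:
    """Isolate bit n of an 8-bit value with two shifts, the way the
    snippets above do for bit 1 (left shift 6, right shift 7)."""
    b = (output << (7 - n)) & 0xFF  # left shift 7-n times: drop high bits
    return b >> 7                   # right shift 7 times: drop low bits
```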
Decimal Output
By the time we're done with this computer, thinking in binary (base 2) will come to us as naturally as thinking in decimal (base 10). So we probably don't really need to output in decimal. But of course we want people to be impressed with our computer, and they're never as impressed by "00001000/00000010=00000100" as they are by seeing "8/2=4."
Decimal output is actually quite the problem. We'll have a goblin with ten possible positions (0-9) for each digit, which means a ten-armed path. But how many inputs do we need? We'll start with the ones:
Goblin is at 0 if output(0) is 0 and output(1) is 0 and output(2) is 0 and output(3) is 0, OR if output(0) is 0 and output(1) is 1 and output(2) is 0 and output(3) is 1. Right? Not quite. What if output(4) is 1? Then the goblin would be at zero when the value was 16 or 26! We have to examine ALL of our output bits to determine what our decimal ones value is. What if we want a high level of precision-- using 2 bytes to store a single number that we then want outputted in decimal? We have to examine all 16 bits! So we have 10 arms, each of which divides with OR statements, each of which evaluates 16 operands. Even then, it's not scalable.
There's a better way to do it: do the hard work in software. Consider the following program:
Given variables: BOP(0-7) (binary output), DOP100, DOP10, DOP1 (decimal output 100s, 10s, 1s), Count; constants: 1, 10, 100, 0

begin 100s loop:
If 100 > BOP, write 0 to DOP100 and jump to end of loop
Increment count
Subtract 100 from BOP and write the result to BOP
Return to begin 100s loop
write count to DOP100
write 0 to count
begin 10s loop:
If 10 > BOP, write 0 to DOP10 and jump to end of loop
Increment count
Subtract 10 from BOP and write the result to BOP
Return to begin 10s loop
write count to DOP10
write 0 to count
begin 1s loop:
If 1 > BOP, write 0 to DOP1 and jump to end of loop
Increment count
Subtract 1 from BOP and write the result to BOP
Return to begin 1s loop
write count to DOP1
I didn't write it in our language-- I wrote it in functions that we know how to use already. You can see what it does-- it writes separate decimal values to three different bytes. The nice thing is that it's extensible to any size of decimal number. If we have a 2-byte number, we also need a DOP1000 and a DOP10000.
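The three loops are ordinary repeated subtraction. A Python rendering of the same algorithm, assuming an 8-bit BOP (the function name is mine):

```python
def to_decimal_digits(bop: int):
    """The 100s/10s/1s loops above: repeated subtraction, counting how
    many times each power of ten fits. Returns (DOP100, DOP10, DOP1)."""
    digits = []
    for divisor in (100, 10, 1):
        count = 0
        while bop >= divisor:   # "if divisor > BOP, jump to end of loop"
            bop -= divisor      # subtract and write the result back to BOP
            count += 1          # increment count
        digits.append(count)    # write count to the DOP byte
    return tuple(digits)
```

The maximum single-byte value, 255, comes out as (2, 5, 5); extending to a 2-byte number just means adding 1000 and 10000 to the divisor list.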
It's kind of a waste of our memory to store the decimal output (for which we really need no more than 4 bits) in an entire byte. Remember how we wrote separate instructions and data to a single byte before? We could do that again!
left shift DOP10 4 times
left shift DOP1 4 times
right shift DOP1 4 times
XOR DOP10 and DOP1, write to DOP1
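Those four shifts, checked in Python (with my own function name), pack the tens digit into the high nibble and the ones digit into the low nibble:

```python
def pack_digits(dop10: int, dop1: int) -> int:
    """The four shifts above: tens digit in the high nibble, ones digit
    in the low nibble, combined with XOR into a single byte."""
    dop10 = (dop10 << 4) & 0xFF       # left shift DOP10 4 times
    dop1 = ((dop1 << 4) & 0xFF) >> 4  # left then right: clear the high nibble
    return dop10 ^ dop1               # XOR combines the two nibbles
```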
What we're doing is storing decimal values in hexadecimal (base 16) format-- two decimal digits packed per byte, essentially binary-coded decimal. Now, we need two digits per output byte, and we still have ten paths, but the work is much, much simpler. Here's decimal output for DOP1(0-3):
If DOP1(0)=0 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=0 then output 0
If DOP1(0)=1 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=0 then output 1
If DOP1(0)=0 AND DOP1(1)=1 AND DOP1(2)=0 AND DOP1(3)=0 then output 2
...
If DOP1(0)=0 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=1 then output 8
If DOP1(0)=1 AND DOP1(3)=1 then output 9
Notice that output 9 is a special case. It just clamps the value. If it's higher than 9 (can go up to 15!) then we just output 9. That's because we need to send this circuit decimal output from software. Our program above should never output a value higher than 9 to any digit value.
Input
Input's harder. At first, you wouldn't think so. Just link a lever to write to true, right? And another to write to false?
The problem is in simultaneous r/w. You have to prevent a goblin from writing to an input bit at the same time that you're writing to it from a lever. It doesn't help much to make the input read only (by the computer)-- you still have the problem of reading while writing is occurring, which can lead to two paths for our poor r/w goblin (simultaneous read true and read false).
What you have to do is prevent the goblin from reading (or writing) while any writing is going on. That means that every input device (levers, but pressure plates are better) has to prevent a r/w goblin from reading it until 100 ticks after writing to the input.
Remember how I said that there was actually one extra space for an addressing door in our memory? I wasn't trying to suggest that we build 64 bytes of memory-- that would be a monster (as if 32 isn't.) What I was talking about was that we could use this to prevent writes to our memory immediately following a write from input. The easiest, simplest way to do that is end our line of addressing doors with a hatch, and link anything that writes to input to that hatch. That way, there won't be a path until 100 ticks after a write occurs. (You still have to make sure that you don't write to it from input so fast that it changes while the goblin's in the pathway! More on that later.)
Clock bits
A perfect example of this is a clock. Clocks are great input-- for instance, if we want a doomsday program to constantly check the clock until a certain time is reached, then open the magma floodgates, we need clock-input. You never want to write to a clock. But clocks change all the time.
 ###
###<#
##h#h#
##^#^##
#hd#dh#
##^#^##
 #h#h##
  #<###
   ###
Here we have an incrementing memory cell. It's the perfect design for a clock. Somewhere, we have a repeater toggling this memory, and any incrementation ripples through the next bits. If clock(0) toggles every 200 ticks-- well, if clock(0) toggles every 200 ticks, then by the time we've run a single instruction, clock(0) won't be the same anymore. But if clock(0) toggles every 1200 ticks (that is, daily), we want to be able to grab a correct value-- not yesterday's value, not tomorrow's value, not some random value. I mean, what if clock(0) has toggled when we read it, but it has created a carry that hasn't yet percolated through clock(7) when we read it? It'll look like we went back in time.
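The time-travel hazard is easy to demonstrate. This Python sketch (my own modeling, with bits[0] standing in for clock(0), least significant first) steps a ripple increment one bit at a time and yields every intermediate state a too-eager read could see:

```python
def ripple_increment(bits):
    """Increment a ripple counter in place, one bit-toggle at a time,
    yielding the state after each step -- the intermediate states a
    read could see while a carry is still percolating."""
    for i in range(len(bits)):
        bits[i] ^= 1          # toggle this bit
        yield list(bits)
        if bits[i] == 1:      # no carry was generated; the ripple stops
            return
```

Incrementing 127 (01111111) passes through 126, 124, 120, 112, 96, 64, and 0 before landing on 128: every intermediate read is *lower* than the value before the increment.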
Maybe we can link that hatch in our addressing circuit to the same thing that toggles this memory?
No. We can't. And it's a perfect demonstration of why we have our basic rules (remember them?) This makes it so that we have a goblin with spaces to move on, but with no path. What if the addressing goblin is right on the hatch when the clock rolls over? He drops, and the program stops, the whole computer stops, waiting for memory read to complete. (Maybe he kills some of our dwarves too. But our computer!)
In the case of the clock, what we want to do is block access to the entire circuit whenever a clock updates. That includes clock(7), which might be incrementing much later (let's be conservative, and say 1400 ticks later). We're going to institute a guard for the begin door, which is normally triggered whenever it's time to read or write to memory.
####
rddc
####
Leftmost door is linked to the thing that normally says that writes are allowed to memory. Rightmost door is linked to our actual write to clock. And continuation leads to an iterated delay circuit. (This breaks our rules, but again, since the doors will never close until continuation is reached, only open, it's okay.)
### ###
rhc rhc
### ###
Continuation of clock guard trips the hatch; 100 ticks later, the 1st delay goblin runs to continuation of delay1, which trips the hatch on delay2; 100 ticks later, the delay2 goblin runs over our second continuation, which opens the door to clock(0). We'll need to institute further iterated delays to read clock(2-7). Note that we only institute iterated delays on later clock reads-- if we delay 1600 ticks to read clock(0) it will have changed again already! We can do this, because we have separate access doors to read each of the seven bits.
This is bad. It means reading the clock is slow. First, we have to wait for the clock to change; then, we have to wait for all possible changes to percolate. (We could build a look-ahead unit for our clock to make this faster, but let's not go there.) Can this be optimized? Probably. Should it be optimized? A dwarf has only so much energy to devote to problems. (In other words, I don't yet know what the ideal solution is.)
This is also not easily generalizable-- that is, it's specific to clock bits, which we know will change sometime. But we can't just sit around waiting to see if you're going to throw a lever. You might never! We don't know if you're ever going to throw that lever or not. Do we?
We do-- so long as we can institute sufficient delay in your lever throw, and execute reads/writes first before executing your lever throw. What we do here is wait for our computer to complete a r/w cycle, then execute your lever throw, in the time between possible r/w. (There'll be plenty of time. Believe me.) This means instituting a similar delay, with a similar manager, to writes to input from lever (or, preferably, pressure plate, because we want everything to reset automatically), rather than to reads from input by computer. The mechanisms are the same.
The Doomsday Program
set time_to_die = clock
set time_to_die = time_to_die + delay
write clock to original_clock
if time_to_die < clock then:
begin early loop:
read clock
if clock < original_clock go to late loop
go to begin early loop
begin late loop:
read clock
if time_to_die < clock then XOR output with 10000000
go to begin late loop
Why are there two checks to see if the clock has passed ragnarok date? Because the clock, and our values, can roll over. What if our delay is 01000000 and our clock is at 11000000? Time to die will be at 00000000, and since that's less than clock at program start, we'll have a premature armageddon. Instead, we have to wait for our clock to roll over, which we verify by seeing that clock is at a lower value than it was at the beginning of evaluation. (What does output(7) do? Use your imagination. I prefer to think that it releases a troll into main memory.)
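The rollover logic can be checked in ordinary code. A Python sketch of the same two-phase wait, with my own names and an explicit list of clock readings standing in for repeated reads:

```python
def doomsday_fired(clock_values, start_clock, delay):
    """Walk a sequence of 8-bit clock reads and report the reading at
    which the charge fires, mirroring the program above: if time_to_die
    wrapped below the starting clock, first wait for the clock itself
    to roll over. Returns None if it never fires."""
    time_to_die = (start_clock + delay) & 0xFF
    rolled_over = time_to_die >= start_clock  # no wrap: skip the early loop
    previous = start_clock
    for clock in clock_values:
        if clock < previous:                  # clock went down: it rolled over
            rolled_over = True
        if rolled_over and time_to_die < clock:
            return clock                      # XOR output with 10000000 here
        previous = clock
    return None
```

With start_clock 11000000 (192) and delay 01000000 (64), time_to_die wraps to 0; the guard keeps the charge from firing at the very first read and waits for the rollover instead.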
The Clock
You might notice that I never even said what's driving our clock. That's because it doesn't matter. What I'm going to use is something like the following:
### ### #X# #X# #h##########h# ##^d h^## #h########d# # # # # # # # # # # # # # # # # # # # # # # # # #d########h# ##^h d^## #h##########h# #X# #X# ### ###
It won't run perfectly, because goblins move probabilistically, but over the long term, that will average out. The goblin running it will also slow as time passes and his attributes rust, but eventually that will bottom out as well. So we won't know that this clock is moving predictably until a long time has passed-- after that, we see how many iterations it covers in a year (think of how useful our computer will be for that task!) and then do the math, and after that, we know what rate our clock is moving at.
Decimal Input
Every once in a while, you might want decimal input. It's hard to imagine why-- every possible means of inputting in DF is limited to binary-- but you might want to do it nonetheless, perhaps by a series of ten levers, each of which inputs a base 10 number from 0-9.
This is a simple task-- at least it is, now that we've built a tool set. We have 10 bits of input (distributed through at least 2 bytes), each of which refers to a single digit; all of our input both writes true to the appropriate bit and writes false to the inappropriate bits. All we need to do is multiply the digits with an exponent, then add them to zero. Consider a calculator, where you enter a sequence of bits from 1s to whatevers-- the first number you enter will be the leftmost (most significant), but how big it is depends on how many digits you enter. Here's what a program reading input delivered in that style would look like:
If digit(0) and digit(1) jump to end
If digit(0) then set BIP=00000000
If digit(1) then set BIP=00000001
.....
Set digit=00000011 (make sure our computer knows no further input has been generated)
Begin exponentiation:
If digit(1) XOR digit(0), else jump to this instruction
Multiply BIP*10, output to BIP
If digit(1) add 1 to BIP
If digit(2) add 2 to BIP
.....
Set digit=00000011
Go to begin exponentiation
What this does is wait for you to input a digit-- every time you do so, it considers that digit as an extra base 10 digit to an existing binary number. All this program would actually do is let you input a number-- you'd need another input (like an add key, for instance) to get the program to do anything else. This program would also overflow very easily-- anything greater than 255 would overflow. You might want to use a multi-word variable for this program.
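Stripped of the lever plumbing, the accumulation the program performs is just "multiply by ten and add the new digit," wrapping at one byte. In Python (my own naming):

```python
def enter_digits(digits):
    """What the input program accumulates: each new base-10 keypress
    multiplies the running binary value by ten and adds the digit.
    Overflows past 255 exactly as the single-byte version would."""
    bip = 0
    for d in digits:
        bip = (bip * 10 + d) & 0xFF  # multiply BIP by 10, add the digit
    return bip
```

Entering 4 then 2 yields 42; entering 2, 5, 6 wraps to 0, which is the overflow the text warns about.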
Input/Output
You might wonder how you can build a bridge that you can operate with a lever, but that still plays nice with your computer-- so that you can open it or close it, but your computer can too. This is input/output. Instead of linking your lever directly to the bridge, you link it to an input bit. Then, you treat that input bit as an output bit, and link it to your bridge. Now your computer can operate the bridge, you can operate the bridge, and your computer can even tell if you pulled the lever while it wasn't looking.
The problem with this is that not only do you have to prevent the computer from reading (or, now, writing to) the bit while you're writing to it-- the computer also has to prevent you from writing to the bit until after it's done writing to it. We can't fall back on read-only memory to get around the problem like we did with the clock. The computer can't wait around to see if you're going to throw the lever, because you might be waiting around to see if the computer's going to throw the lever. How are we going to get around this?
We're going to get around it because it's not a real problem. The computer isn't ever sure that you're going to provide input on a subject, but you're sure that the computer is going to provide input on some subject. That means that there is a specific time that you can provide input: after the computer has read memory to buffer, but before the computer has started to execute any instructions (some of which may include writes to i/o.)
What this means is that we need to institute a delay following memory read-- sufficient delay for input to be written. This can be managed by our cycle manager. All we need is 200 ticks for our computer to r/w, followed by 200 ticks for your input to r/w. Only eight hours, between the two!
Let's do it a little differently. If we can institute that delay only when writing to i/o memory, we can chop memory access times in half. That means specifying which bits are i/o, and instituting a delay on "write from register a" or "write from register b" operations dependent on what byte we're writing to. Let's designate byte 29 as i/o. (30 and 31 are designated variables, because they're faster to write to than to read from.)
Of course, that's kind of limiting. We might want to use i/o on more than 8 devices. Have you considered that most programs want to work on all bridges, or all floodgates, or all doors? We need more than one byte of i/o, don't we?
Feedback: Memory multiplexing
We do. But there's one more really cool thing we could do with i/o. It doesn't even involve creating any latency.
We can make a memory cell that outputs to computer functioning.
We've already demonstrated how to limit access to memory cells without changing the addressing bits. That same function could be used to allow access to one bit of memory (while denying access to another). What if we designed an output bit-- that outputted to which memory bits were addressed? Perhaps via that space that we have for another door or hatch in our latest memory design? Perhaps by more manager circuits instead? (Use the managers.)
Yes, we can do this. It's called multiplexing.
We could have 256 banks of memory, each 31 bytes large (well, maybe we could do it on a 16x16 embark) and switch between them with a multiplexer. All it is is an output byte that controls the action of doors allowing memory r/w functions.
Why 31 and not 32? Well, because as soon as we multiplex, we no longer have access to any of the memory that we multiplexed out of. Don't we need to be able to multiplex back to it? So let's keep our multiplexer as shared memory, between all 256 banks. That way, all we have to do is write to our multiplexer to return us to the previous bank of memory.
Why do we even have 5 bits of addressing if we can do this? There's a serious disadvantage to multiplexing: we can't easily carry variables between multiplexed addresses. For instance, when we move from bank 00000000 to 00000001, we can only drag one piece of data with us (we have two registers, and one of them has to contain the value that we write to the multiplexer to change memory spaces.) If we try to read the value in 11111111, we'll grab the data in bank 00000001 instead of in bank 00000000, which aren't going to be the same thing.
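As a data-structure sketch (my own class, not a circuit): 256 banks with byte 31 shared, so that writing to the multiplexer byte switches which bank every other address resolves to.

```python
class Multiplexed:
    """Banked memory where address 31 is the shared multiplexer byte:
    writing to it switches the active bank; every other address reads
    and writes through to the currently active bank."""
    MUX = 31

    def __init__(self, banks=256, size=32):
        self.banks = [[0] * size for _ in range(banks)]
        self.active = 0

    def read(self, address):
        if address == self.MUX:
            return self.active               # shared between all banks
        return self.banks[self.active][address]

    def write(self, address, value):
        if address == self.MUX:
            self.active = value & 0xFF       # switch banks
        else:
            self.banks[self.active][address] = value & 0xFF
```

Writing 7 to address 0, switching to bank 1, and reading address 0 gives 0-- the data stayed behind in bank 0, which is exactly the variable-passing problem described above.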
Since 256 banks of memory would take up a huge space (but oh god, think of what we could do with 8kb, who could ever need more than that?), we're not going to do that. Instead, we're only going to multiplex one byte: our output byte.
By multiplexing, we can specify which one of 256 output bytes to write to. We can have 256 banks, each consisting of 8 devices, that we can toggle, turn on, turn off-- whatever. All we have to do is write to our multiplexer to specify to which output byte we want to write. No, we don't have to use all 256!
Prefer input bytes? I can't imagine why, but multiplex those instead! Hell, we could even have a split multiplex. By multiplexing a few data cells as well, we could maintain a large bank of variables to pass between memory spaces, while still keeping sufficient multiplexed instruction space.
And yes-- you can multiplex your multiplexers. 256 banks of 256 banks of 30 bytes (2 megabytes, we've reached the Macintosh).... You could multiplex your operations codes. You could multiplex your registers. You can use your computer to change how your computer behaves. That's what a Turing machine is all about.