==Optimization: Opcodes revisited==
 
Right now, we have seven operations codes to spare: 0000, 1010, 1011, 1100, 1101, 1110, and 1111; and 15 arguments for XOR; and 8 arguments for bit shift or bit rotate.  Way more than we need.  At the same time, we don't have nearly enough memory.  We could do a couple of things: we could add instructions, so that for instance it wouldn't take two instructions to add two numbers.  We could also switch it up to a 3-bit instruction and a 5-bit address.
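
Just to make that trade-off concrete, here's a quick Python sketch of the two splits being weighed, assuming nothing beyond plain bit slicing (the example word is arbitrary):

 def decode_4_4(word):
     return word >> 4, word & 0b00001111   # 4-bit opcode, 4-bit address (16 addresses)
 def decode_3_5(word):
     return word >> 5, word & 0b00011111   # 3-bit opcode, 5-bit address (32 addresses)
 word = 0b10010110
 print(decode_4_4(word))   # (9, 6)
 print(decode_3_5(word))   # (4, 22)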
 
 
Two of these need specifying.
 
=====100xxxxx Register operations=====
 
  *10000000 XOR a and b -> a
  *10000001 Add: XOR a and b -> a, write 9-bit carry word to buffer, XOR a and carry word -> a, write 8-bit carry word to buffer, with carry(8)=output of first XOR(8) OR output of second XOR(8) (see the sketch after this list)
  *10001000-10011111 Reserved (24 choices!)
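
The add opcode above leans on the schoolbook identity that a sum is an XOR plus shifted-in carries. Here's a minimal Python sketch of that identity, assuming plain 8-bit values; it checks the arithmetic only, not the buffer layout described in the list:

 def add8(a, b):
     carry_out = 0
     while b:
         carry = (a & b) << 1        # wherever both inputs have a 1, a carry moves one place left
         a = a ^ b                   # XOR adds each column while ignoring carries
         carry_out |= carry >> 8     # bit 8 of a carry word is the overflow
         b = carry & 0xFF            # feed the remaining carries back in
     return a & 0xFF, carry_out      # 8-bit sum plus the carry-out flag
 print(add8(0b11001000, 0b01100100)) # (44, 1): 200 + 100 wraps past 255
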
=====101xxyyy Bit shifting register operations=====
  *10100yyy Arithmetic right shift 0-7 units (y units)
  *10101yyy Arithmetic left shift 0-7 units (y units)
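
For reference, here's how the two shift directions behave on an 8-bit register, sketched in Python. The table above says "arithmetic", which only changes anything for right shifts (the sign bit gets copied back in instead of a zero); the function names are mine:

 def shift_left(value, n):
     return (value << n) & 0xFF            # arithmetic and logical left shifts agree
 def arithmetic_shift_right(value, n):
     sign = value & 0x80                   # bit 7 is the sign bit in two's complement
     for _ in range(n):
         value = (value >> 1) | sign       # shift right, then re-plant the sign bit
     return value
 print(bin(arithmetic_shift_right(0b10010000, 2)))   # 0b11100100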
  
 
===A note on the notion of "Turing complete"===
To be Turing complete, a computer has to have four things: a NAND, a NOR (strictly, either one of those alone will do), a jump if, and infinite memory.
  
 
==Optimization: Memory Addressing Revisited==
We spent a lot of time optimizing our adder, and ended up with a parallel add/look-ahead that was very fast-- less than half the time of our parallel ripple, and far faster than a full adder.  What I forgot to mention is that there was a lot of stupid delay built into the add just by virtue of requiring two memory accesses (two XOR operations).  We've been assuming that a single goblin will address all of our bits in a particular bit location, but that takes a lot of time: tripping the hatch takes 100 ticks just to get him started, and with 32 bits to address, we're probably looking at something like 30 moves to make it to our addressable bit-- so we've got around 400 ticks just to read memory/buffer, followed by 100 ticks of latency for our memory.
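
Spelled out with the numbers from that paragraph (the ~10 ticks per move is implied by those numbers, not something measured):

 hatch_trip = 100                        # ticks just to get the goblin started
 walk = 30 * 10                          # ~30 moves to reach the addressed bit
 memory_read = hatch_trip + walk         # ~400 ticks per access
 add_with_two_accesses = 2 * (memory_read + 100)   # two XORs, each followed by ~100 ticks of latency
 print(memory_read, add_with_two_accesses)         # 400 1000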
  
 
==Optimization: Memory Latency==
So far, we've only been linking our operations to the true part of memory.  We've been using FALSE and NOT TRUE interchangeably.  But they're not quite interchangeable-- not really.  Consider the following two circuits:
  
 
==Cycle Manager==
Every cycle of our computer follows a certain flowchart.  Let's spell it out.
  
 
==Special Memory: Getting stuff done==
So far, our computer doesn't do anything.  I mean, it can compute anything we throw at it, but it doesn't kill goblins or irrigate our farms or anything like that.  What we need are special bytes: output and input.
 
===Output===
Output bytes look like any other kind of byte (yeah, I'm going to stop showing you that same picture of a memory bit now.)  The only difference is that the pressure plate in the memory is linked to more than just r/w functions-- it's linked to existing devices.  For instance, a bridge could raise when the goblin moves into TRUE position and lower when the goblin moves off of TRUE position.
 
I told you-- we were going to do a lot more bit shifts than add-with-carries.
====Decimal Output====
 
By the time we're done with this computer, thinking in binary (base 2) will come to us as naturally as thinking in decimal (base 10).  So we probably don't really need to output in decimal.  But of course we want people to be impressed with our computer, and they're never as impressed by "00001000/00000010=00000100" as they are by seeing "8/2=4."
 
 
Decimal output is actually quite the problem.  We'll have a goblin with ten possible positions (0-9) for each digit, which means a ten-armed path.  But how many inputs do we need?  We'll start with the ones:
 
 
Goblin is at 0 if output(0) is 0 and output(1) is 0 and output(2) is 0 and output(3) is 0, OR if output(0) is 0 and output(1) is 1 and output(2) is 0 and output(3) is 1.  Right?  Not quite.  What if output(4) is 1?  Then the goblin would be at zero when the value was 16 or 26!  We have to examine ALL of our output bits to determine what our decimal ones value is.  What if we want a high level of precision-- using 2 bytes to store a single number that we then want outputted in decimal?  We have to examine all 16 bits!  So we have 10 arms, each of which divides with OR statements, each of which evaluates 16 operands.  Even then, it's not scalable.
 
 
There's a better way to do it: do the hard work in software.  Consider the following program:
 
 
  Given Variables: BOP(0-7), DOP100, DOP10, DOP1, Count; Constants: 1, 10, 100, 0        variables binary output, decimal output 100s, etc
  begin 100s loop:
        If 100>BOP write 0 to DOP100 and jump to end of loop
        Increment count
        Subtract 100 from BOP and write the result to BOP
        Return to begin 100s loop
  write count to DOP100
  write 0 to count
  begin 10s loop:
        If 10>BOP write 0 to DOP10 and jump to end of loop
        Increment count
        Subtract 10 from BOP and write the result to BOP
        Return to begin 10s loop
  write count to DOP10
  write 0 to count
  begin 1s loop:
        If 1>BOP write 0 to DOP1 and jump to end of loop
        Increment count
        Subtract 1 from BOP and write the result to BOP
        Return to begin 1s loop
  write count to DOP1
 
 
I didn't write it in our language-- I wrote it in functions that we know how to use already.  You can see what it does-- it writes separate decimal values to three different bytes.  The nice thing is that it's extensible to any size of decimal number.  If we have a 2-byte number, we also need a DOP1000 and a DOP10000.
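
Here's the same repeated-subtraction idea as a quick Python sketch, purely to check the logic; BOP and the DOP names mirror the listing above, everything else is just Python:

 def to_decimal_digits(bop):
     digits = {}
     for name, divisor in (("DOP100", 100), ("DOP10", 10), ("DOP1", 1)):
         count = 0
         while bop >= divisor:     # the "If divisor > BOP ... jump to end of loop" test, inverted
             bop -= divisor        # subtract and keep counting
             count += 1
         digits[name] = count      # "write count to DOPxxx"
     return digits
 print(to_decimal_digits(0b11111111))   # {'DOP100': 2, 'DOP10': 5, 'DOP1': 5}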
 
 
It's kind of a waste of our memory to store the decimal output (for which we really need no more than 4 bits) in an entire byte.  Remember how we wrote separate instructions and data to a single byte before?  We could do that again!
 
 
 left shift DOP10 4 times
 left shift DOP1 4 times
 right shift DOP1 4 times
 XOR DOP10 and DOP1, write to DOP1
 
 
What we're doing is storing decimal values in hexadecimal (base 16) format.  Now, we need two digits per output byte, and we still have ten paths, but the work is much, much simpler.  Here's decimal output for DOP1(0-3):
 
 
 If DOP1(0)=0 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=0 then output 0
 If DOP1(0)=1 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=0 then output 1
 If DOP1(0)=0 AND DOP1(1)=1 AND DOP1(2)=0 AND DOP1(3)=0 then output 2
 ...
 If DOP1(0)=0 AND DOP1(1)=0 AND DOP1(2)=0 AND DOP1(3)=1 then output 8
 If DOP1(0)=1 AND DOP1(3)=1 then output 9
 
 
Notice that output 9 is a special case.  It just clamps the value.  If it's higher than 9 (can go up to 15!) then we just output 9.  That's because we need to send this circuit decimal output from software.  Our program above should never output a value higher than 9 to any digit value.
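
In Python terms, the shift-and-XOR packing above (and the clamp the circuit applies) looks like this; the helper function is hypothetical, only the DOP names come from the listing:

 def pack_digits(dop10, dop1):
     dop10 = (dop10 << 4) & 0xFF         # left shift 4: tens digit into the high nibble
     dop1 = ((dop1 << 4) & 0xFF) >> 4    # left then right shift 4: clears DOP1's high nibble
     return dop10 ^ dop1                 # XOR merges the two nibbles
 packed = pack_digits(4, 9)
 print(bin(packed))                      # 0b1001001 -- high nibble 4, low nibble 9
 print(min(packed & 0x0F, 9))            # the decode circuit clamps anything above 9 down to 9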
 
 
===Input===
 
 
Input's harder.  At first, you wouldn't think so.  Just link a lever to write to true, right?  And another to write to false?
  
 
====Clock bits====
A perfect example of this is a clock.  Clocks are great input-- for instance, if we want a doomsday program to constantly check the clock until a certain time is reached, then open the magma floodgates, we need clock-input.  You never want to write to a clock.  But clocks change all the time.
     ###
   ###<#
   ##h#h#
 ##^#^##
 #hd#dh#
 ##^#^##
 #h#h##
 #<###
 ###
  
 
Here we have an incrementing memory cell.  It's the perfect design for a clock.  Somewhere, we have a repeater toggling this memory, and any incrementation ripples through the next bits.  If clock(0) toggles every 200 ticks-- well, if clock(0) toggles every 200 ticks, then by the time we've run a single instruction, clock(0) won't be the same anymore.  But if clock(0) toggles every 1200 ticks (that is, daily), we want to be able to grab a correct value-- not yesterday's value, not tomorrow's value, not some random value.  I mean, what if clock(0) has toggled when we read it, but it has created a carry that hasn't yet percolated through clock(7) when we read it?  It'll look like we went back in time.
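
A tiny Python illustration of that hazard-- read the byte while a carry is still rippling up from bit 0, and the snapshot can look like the past (nothing here is goblin-specific):

 clock = 0b01111111                   # 127, about to increment to 128
 mid_ripple = clock
 for bit in range(4):                 # suppose we read after only four bits have flipped
     mid_ripple ^= (1 << bit)
 print(clock, mid_ripple, clock + 1)  # 127 112 128 -- the mid-ripple read is "in the past"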
 
In the case of the clock, what we want to do is block access to the entire circuit whenever a clock updates.  That includes clock(7), which might be incrementing much later (let's be conservative, and say 1400 ticks later).  We're going to institute a guard for the begin door, which is normally triggered whenever it's time to read or write to memory.
 ####
 rddc
 ####
  
 
Leftmost door is linked to the thing that normally says that writes are allowed to memory.  Rightmost door is linked to our actual write to clock.  And continuation leads to an iterated delay circuit.  (This breaks our rules, but again, since the doors will never close until continuation is reached, only open, it's okay.)
 ### ###
 rhc rhc
 ### ###
  
Continuation of clock guard trips the hatch; 100 ticks later, the 1st delay goblin runs to continuation of delay1, which trips the hatch on delay2; 100 ticks later, the delay2 goblin runs over our second continuation, which opens the door to clock(0).  We'll need to institute further iterated delays to read clock(2-7).  Note that we only institute iterated delays on later clock reads-- if we delay 1600 ticks to read clock(0) it will have changed again already!  We can do this, because we have separate access doors to read each of the seven bits.
 
This is bad.  It means reading the clock is slow.  First, we have to wait for the clock to change; then, we have to wait for all possible changes to percolate.  (We could build a look-ahead unit for our clock to make this faster, but let's not go there.)  Can this be optimized?  Probably.  Should it be optimized?  A dwarf has only so much energy to devote to problems.  (In other words, I don't yet know what the ideal solution is.)
 
execute your lever throw, in the time between possible r/w.  (There'll be plenty of time.  Believe me.)  This means instituting a similar delay, with a similar manager, to writes to input from lever (or, preferably, pressure plate, because we want everything to reset automatically), rather than to reads from input by computer.  The mechanisms are the same.
=====The Doomsday Program=====
 
 set time_to_die = clock
 set time_to_die = time_to_die + delay
 write clock to original_clock
 if time_to_die < clock then:
   begin early loop:
   read clock
   if clock < original_clock go to late loop
   go to begin early loop
 begin late loop:
   read clock
   if time_to_die < clock then XOR output with 10000000
   go to begin late loop
 
  
Why are there two checks to see if the clock has passed the ragnarok date?  Because the clock, and our values, can roll over.  What if our delay is 01000000 and our clock is at 11000000?  time_to_die will be at 00000000, and since that's less than clock at program start, we'll have a premature armageddon.  Instead, we have to wait for our clock to roll over, which we verify by seeing that clock is at a lower value than it was at the beginning of evaluation.  (What does output(7) do?  Use your imagination.  I prefer to think that it releases a troll into main memory.)
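
Here's the rollover logic restated in Python, with an 8-bit clock that wraps at 256; the names mirror the pseudocode above, but this is just a sanity check of the reasoning, not the circuit:

 def doomsday_fired(clock_readings, delay):
     original_clock = clock_readings[0]
     time_to_die = (original_clock + delay) & 0xFF   # 8-bit add, so it can wrap
     waiting_for_rollover = time_to_die < original_clock
     for clock in clock_readings:
         if waiting_for_rollover:
             if clock < original_clock:              # the clock itself has now rolled over
                 waiting_for_rollover = False
             else:
                 continue                            # early loop: keep waiting
         if time_to_die < clock:                     # late loop: time's up
             return True
     return False
 print(doomsday_fired([0b11000000, 0b11000001], 0b01000000))                          # False: still waiting
 print(doomsday_fired([0b11000000, 0b11111111, 0b00000001, 0b00000010], 0b01000000))  # True: fired after rollover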
 
 
=====The Clock=====
 
You might notice that I never even said what's driving our clock.  That's because it doesn't matter.  What I'm going to use is something like the following:
 
 
 
 ###        ###
 #X#        #X#
 #h##########h#
 ##^d      h^##
   #h########d#
   # #      # #
   # #      # #
   # #      # #
   # #      # #
   # #      # #
   # #      # #
   #d########h#
 ##^h      d^##
 #h##########h#
 #X#        #X#
 ###        ###
 
 
 
It won't run perfectly, because goblins move probabilistically, but over the long term, that will average out.  The goblin running it will also slow as time passes and his attributes rust, but eventually that will bottom out as well.  So we won't know that this clock is moving predictably until a long time has passed-- after that, we see how many iterations it covers in a year (think of how useful our computer will be for that task!) and then do the math, and after that, we know what rate our clock is moving at.
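
The "do the math" step, assuming the standard calendar of 1200 ticks per day and 12 months of 28 days (the iteration count below is a made-up example):

 TICKS_PER_YEAR = 1200 * 28 * 12           # 403,200 ticks in a year
 iterations_seen = 2016                    # whatever the computer counted over one year
 print(TICKS_PER_YEAR / iterations_seen)   # 200.0 ticks per clock(0) toggle in this example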
 
 
 
====Decimal Input====
 
Every once in a while, you might want decimal input.  It's hard to imagine why-- every possible means of inputting in DF is limited to binary-- but you might want to do it nonetheless, perhaps by a series of ten levers, each of which inputs a base 10 number from 0-9.
 
 
 
This is a simple task-- at least it is, now that we've built a tool set.  We have 10 bits of input (distributed through at least 2 bytes), each of which refers to a single digit; all of our input both writes true to the appropriate bit and writes false to the inappropriate bits.  All we need to do is multiply the digits with an exponent, then add them to zero.  Consider a calculator, where you enter a sequence of digits from 1s to whatevers-- the first number you enter will be the leftmost (most significant), but how big it is depends on how many digits you enter.  Here's what a program reading input delivered in that style would look like:
 
 
 
 If digit(0) and digit(1) jump to end
 If digit(0) then set BIP=00000000
 If digit(1) then set BIP=00000001
 .....
 Set digit=00000011 (make sure our computer knows no further input has been generated)
 Begin exponentiation:
   If digit(1) XOR digit(0) else jump to this instruction
   Multiply BIP*10, output to BIP
   If digit(1) add 1 to BIP
   If digit(2) add 2 to BIP
   .....
   Set digit=00000011
   Go to begin exponentiation
 
  
What this does is wait for you to input a digit-- every time you do so, it considers that digit as an extra base 10 digit to an existing binary number.  All this program would actually do is let you input a number-- you'd need another input (like an add key, for instance) to get the program to do anything else.  This program would also overflow very easily-- anything greater than 255 would overflow.  You might want to use a multi-word variable for this program.
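
The calculator-style entry boils down to "multiply what you have by ten and add the new digit"; here's a Python sketch of just that, with the 8-bit wrap-around the paragraph warns about (the names are mine, not the program's):

 def enter_digits(digits):
     bip = 0
     for d in digits:
         bip = (bip * 10 + d) & 0xFF    # 8-bit register: anything past 255 wraps
     return bip
 print(enter_digits([2, 5, 4]))   # 254 -- still fits in a byte
 print(enter_digits([2, 5, 6]))   # 0   -- overflowed, as warned above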
 
 
===Input/Output===
 
 
You might wonder how you can build a bridge that you can operate with a lever, but that still plays nice with your computer-- so that you can open it or close it, but your computer can too.  This is input/output.  Instead of linking your lever directly to the bridge, you link it to an input bit.  Then, you treat that input bit as an output bit, and link it to your bridge.  Now your computer can operate the bridge, you can operate the bridge, and your computer can even tell if you pulled the lever while it wasn't looking.
  
 
===Feedback: Memory multiplexing===
We do.  But there's one more really cool thing we could do with i/o.  It doesn't even involve creating any latency.
  
 
Prefer input bytes?  I can't imagine why, but multiplex those instead!  Hell, we could even have a split multiplex.  By multiplexing a few data cells as well, we could maintain a large bank of variables to pass between memory spaces, while still keeping sufficient multiplexed instruction space.
 
And yes-- you can multiplex your multiplexers.  256 banks of 256 banks of 30 bytes (2 megabytes, we've reached the Macintosh)....  You could multiplex your operations codes.  You could multiplex your registers.  You can use your computer to change how your computer behaves.  That's what a Turing machine is all about.
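
One way to picture that multiplexing, sketched in Python as plain bank switching: which 30 bytes you see depends on a select byte that the running program can itself write.  This is only an illustration of the idea, not the wiki's circuit:

 class MultiplexedMemory:
     def __init__(self, banks=256, bank_size=30):
         self.banks = [[0] * bank_size for _ in range(banks)]
         self.select = 0                  # an output byte the program can write to
     def read(self, addr):
         return self.banks[self.select][addr]
     def write(self, addr, value):
         self.banks[self.select][addr] = value & 0xFF
 mem = MultiplexedMemory()
 mem.write(0, 42)       # bank 0, address 0
 mem.select = 7         # the program flips the bank-select output
 print(mem.read(0))     # 0 -- same address, different bank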