Casual Observation
I wonder why this blog peaks in April and November, but is dead when school is out… Hmmm ( :
4.1. Give the name of the algorithm that results from each of the following special cases:
a. Local beam search with k = 1:
A local search is the opposite of a systematic path exploration from an initial state: it operates on a (single) current node in the state space, evaluating and modifying one or more current states. When we disregard path cost, we can opt for a local search to reach the solution state. Local search algorithms were inspired by methods from statistical physics (simulated annealing) and evolutionary biology (genetic algorithms).
Local searches are not omniscient: the world is effectively partially observable, and the algorithm keeps track only of the current state. It moves to neighboring nodes from its current node, disregarding previous nodes and their states. This leads to a light memory footprint and reasonable performance in large or even infinite (continuous) state spaces where more “nostalgic”, systematic algorithms might fall short. This makes them good for optimization problems.
The aim of local search is to find the global minimum (the nadir, lowest point, or lowest valley) if elevation corresponds to cost, and the global maximum (the zenith, highest point, or highest peak) if elevation corresponds to an objective function. While the former is apparent, the latter means that we are looking for the point at which the objective function performs at peak capacity. A local beam search deviates from this minimalistic simplicity: instead of remembering no states, it keeps track of k states, initially generated at random. If any of those k states is the goal, the algorithm simply halts; otherwise, it selects the k best successors from the complete list of successors and repeats.
A local beam search is a modified beam search, the latter being a path-cost algorithm. The difference between a local beam search and a random-restart search is that a local beam search keeps a running tally of useful information derived from past states/actions and passes it between parallel search threads, while in a random-restart search each thread runs independently of the others.
Thus, if k = 1, we are keeping track of one state, which is not as helpful as keeping track of more than one state: the more states you keep track of, the more information you have to work with, and the less blind and exhaustive the search is. It is then a plain local search as opposed to a local beam search, since a local search operates on the current (singular) node: great for optimization, but not so good for efficiency or smart, adaptive performance.
Therefore, when k = 1 for a local beam search, it simply becomes a hill-climbing algorithm.
b. Local beam search with one initial state and no limit on the number of states retained:
With one initial state and no limit on the states retained, we effectively have k = ∞, which is useful in continuous (“real-world”) environments. Primitive algorithms (first-choice hill-climbing and simulated annealing notwithstanding) are ill-suited to producing meaningful, intelligent results there. This is where a local beam search with an infinite k might be useful; it is similar to thinking of Darwinian evolution of a whole population as opposed to a single organism’s sexual or asexual reproduction. In this infinite-dimensional space it helps to discretize the neighborhood of each state, chopping the problem into smaller chunks: while k is effectively ∞, we can achieve this directly by setting smaller goals to meet. If we do not want to discretize the process, we can proceed via hill-climbing or simulated annealing.
Hill-climbing (steepest-ascent) is a loop that continually moves in the direction of increasing value (uphill) and terminates only when it reaches a peak where no neighbor has a higher value. However, hill-climbing does not maintain a history or a search tree; as Russell & Norvig put it in Artificial Intelligence: A Modern Approach, 3rd Edition, p. 122, “This resembles trying to find the top of Mount Everest in a thick fog while suffering from amnesia.” There is obviously the issue of local maxima, peaks higher than their current neighbors that are nonetheless not the global maximum. There are also ridges, which are difficult to navigate, as well as plateaux, where no uphill exit exists. These shortcomings can be overcome by (i) stochastic hill-climbing, randomly going uphill, (ii) first-choice hill-climbing, where successors are generated randomly until a better option is arrived upon and chosen, and (iii) random-restart hill-climbing, using randomly generated initial states until a solution is arrived upon. The success of a hill-climbing algorithm greatly depends on the landscape: the fewer the snags (local maxima and plateaux), the better the success rate.
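Hill-climbing is easy to sketch in a few lines. Here is a minimal Python rendition (the function names and the toy landscape are mine, not from the textbook): repeatedly take the best neighbor, and stop when nothing is uphill.

```python
def hill_climb(state, neighbors, value, max_steps=10_000):
    """Steepest-ascent hill climbing: move to the best neighbor,
    stopping when no neighbor improves on the current state."""
    for _ in range(max_steps):
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state  # a peak -- possibly only a local maximum
        state = best
    return state

# Toy landscape: f(x) = -(x - 7)^2 has a single peak at x = 7.
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, value))  # climbs 0 -> 1 -> ... -> 7
```

On a landscape with several peaks the same code happily parks on a local maximum, which is exactly the snag the stochastic, first-choice, and random-restart variants above try to dodge.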
Simulated annealing borrows its name from the metallurgical process of tempering or hardening metals (or glass) by heating them to a high temperature and then cooling them gradually, attaining a low-energy crystalline state. As opposed to heading upwards to climb a hill, it seeks the lowest minimum, the one that minimizes cost. It picks a random move; if the move advances the cause, it is always accepted. If it is not a sure-fire improvement, it is accepted with a probability less than 1, i.e. treated as not a definite good move. This probability (with 0 being a horrible move and 1 being the best move) decreases exponentially with how bad the move is, so a move viewed as much worse than another will see its probability shrink exponentially faster than a mildly undesirable one. Bad moves also matter less at the beginning of the search, and this laxness decreases as we proceed further into the search. Simulated annealing was used extensively to solve VLSI (very-large-scale integration) layout problems in the 1980s.
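Again as a hedged sketch (the schedule and constants here are my own choices, not from the book): the only change from hill climbing is accepting a bad move with probability e^(ΔE/T), where T cools geometrically.

```python
import math, random

def simulated_annealing(state, neighbor, value, t0=10.0, cooling=0.995, t_min=1e-3):
    """Accept any improving move; accept a worsening move with probability
    exp(dE / T), which shrinks exponentially the worse the move is and
    the cooler T gets."""
    t = t0
    while t > t_min:
        nxt = neighbor(state)
        d = value(nxt) - value(state)
        if d > 0 or random.random() < math.exp(d / t):
            state = nxt
        t *= cooling  # geometric cooling schedule
    return state

random.seed(0)  # deterministic demo run
value = lambda x: -(x - 7) ** 2
neighbor = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(0, neighbor, value)
print(result)  # ends at (or right next to) the global optimum x = 7
```

By the time T is tiny, exp(dE/T) for a bad move is effectively zero, so the late search only accepts improvements: the laxness decreasing over time, exactly as described above.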
However, both cannot handle continuous (infinite) state and action spaces directly, due to their infinite branching factors.
c. Simulated annealing with T = 0 at all times (and omitting the termination test):
As described above, simulated annealing is hill climbing with a random step, or walk, that helps prevent terminating at local maxima (false positives). T refers to the temperature: at the start of the search we are lax and can take the ‘hit’ of a bad call, so we are more likely to accept moves that have a less-than-optimal probability. As we go deeper into the search, the options for where to go and what to do narrow, so we are less relaxed and are careful to pick “better”, or rather safer, routes and choices. This can be seen in most sporting events: at the beginning, teams or athletes can test the waters, per the Doctor’s 122nd Rule, “(poking) it with a stick.” That is when an error can still be recovered from; a similar or even less costly error in the final throes of a search has fewer resources left to recover with. Setting T to 0 at all times means we are being very careful and very pessimistic, only choosing plays with a very high probability of success. The result is that we have SUBLIMATED the temperature from a high initial starting point straight to a hard zero, which is not what simulated annealing should be about: simulated annealing should GRADUALLY “cool” or lower T, slowly enough to find the global optimum. Dropping T to zero does not yield an optimal result but an algorithm that is very safe, takes no risks, and does no gradual optimization: with T = 0 the acceptance probability e^(ΔE/T) of any worsening move is 0, so only improving moves are ever taken, and the algorithm behaves like first-choice hill climbing, greedily accepting the first randomly generated improvement.
d. Simulated annealing with T = ∞ at all times:
If T is ∞, however, the acceptance probability e^(ΔE/T) is 1 for every move, good or bad: we get maximum exploration with minimal regard for optimality. We can take as long as we need to slowly shake the egg tray until the ball is kicked out of all but the deepest valley we can find, but at infinite temperature the shaking never eases off. We test options all across the board instead of picking a lane and sticking to it, and we do not utilize our known historical moves at all: it is a pure random walk through the state space.
e. Genetic algorithm with population size N = 1:
In genetic algorithms, which are essentially stochastic beam searches, successor (subsequent) states are generated by combining 2 parent states rather than modifying a single state. This is similar to sexual reproduction: grabbing a mom and a dad state to get a daughter state. When the population size N = 1, selection is trivial: the lone individual can only mate with itself, so crossover reproduces the same state again, and only random mutation ever changes anything. The result is effectively a random walk driven by mutation alone.
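A tiny illustration (the helper below is hypothetical, not from the text) of why N = 1 degenerates: crossing a state over with itself just returns the state, so nothing but mutation can move the search.

```python
import random

def crossover(a, b):
    """Splice two parent strings at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

parent = "11010"
child = crossover(parent, parent)
print(child == parent)  # True: with N = 1, the lone parent mates with itself
```

Whatever the cut point, `a[:cut] + a[cut:]` is just `a` again, so the population never changes through crossover alone.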
BAKING PI — University of Cambridge Tutorials
While not as verbose as a “real” article, I will paste the actual source code with verbose comments for significance, or non-redundant lines of code. There are also pastebins in the event that syntax is not maintained by cross-pasting from a text editor into WordPress:
OK03:
Functions & Reusability
A function provides a reusable, cookie-cutter, tried-and-tested template for an action or set of actions. It is a set of code that we can call to perform tasks we might routinely need multiple times. An example is loops in most programming languages: we know we need to do something either conditionally or exhaustively, often multiple times in the same program. Why not externalize this action, package it in a pretty little package with a functional name, and simply call that package as opposed to rewriting what an “if” or a loop does each time?
Here is a pastebin of the code with C syntax formatting:
Here is the same code, for the click-wary:
// gpio.s
//
// Modified/ Created by n91bu1n15 on 11/27/13 from Cambridge “Baking Pi” Labs.
//Simple Function Number 1:
.globl GetGpioAddress // the .globl directive tells the assembler to make this label accessible to all files
GetGpioAddress:
ldr r0,=0x20200000
mov pc,lr //moves the contents of lr into pc.
//lr holds the return address that we branch back to when the function is finished.
//It should contain the same address after the function completes.
//pc is a special register that contains the address of the next-in-line instruction:
//it represents the program counter.
/*This return is achieved by a simple branch: explicitly setting pc to the address held in lr makes the next instruction the caller's return address.*/
/**********************************************************************************************/
//Bigger Function Number 2:
.globl SetGpioFunction
SetGpioFunction:
cmp r0,#53 //compares register 0 with “53”, signifying the 54 GPIO pins.
//CMP (COMPARE) : direct comparison of 2 values.
//If r0 > 53, cmpls does not run, but movhi does.
//If r0 <= 53, cmpls runs…
cmpls r1,#7 //CMPLS (COMPARE IF LOWER OR SAME):
//only runs if the previous comparison found r0 lower than or the same as 53.
//If r1 > 7, movhi runs.
//If r1 <= 7, movhi does not run and we know that both conditions have been satisfied.
movhi pc,lr //MOVHI (MOVE IF HIGHER): runs if either value was too high, returning and ending the function.
//CALLING THE METHOD:
push {lr} //pushing lr places it at the top of the stack: Last In, First Out.
//This means that it is the first thing that will be popped later,
//effectively restoring that return address… exactly what is needed.
mov r2,r0 //GetGpioAddress will overwrite r0 with the GPIO base address,
//but it leaves r2 untouched; this is why we move the pin number in r0
//to r2, so it is saved and not overwritten.
bl GetGpioAddress //herein lies the actual functionality: bl branches to the function after updating the
//return address, i.e. lr. bl sets lr to the address of the next
//instruction, then branches to the label "GetGpioAddress" that we defined at the beginning.
/**********************************************************************************************/
//WHAT IS WHERE?
//After the function ends, or returns:
// r0 contains the GPIO ADDRESS
// r1 contains the FUNCTION CODE
// r2 contains the GPIO PIN NUMBER
//However, in which actual block of 10 GPIO functions is this GPIO PIN (r2) in?
functionLoop$:
cmp r2,#9 //compares our current pin number against 9: if > 9,
//subtract 10 from it & add 4 to the GPIO Controller Pin (this latter action
//means its in the next GPIO Pin block, not the current one).
//It repeats this until it gets a number that is <= 9.
subhi r2,#10 //SUBHI (SUBTRACT IF HIGHER): once the loop ends, r2 holds a value between 0 and 9. This is the modulo
//or remainder of dividing the pin number by 10.
addhi r0,#4 //r0 now contains the address, within the GPIO controller, of our current pin’s function.
bhi functionLoop$
//this code is the same as doing:
// GPIO Controller Address + 4 x (GPIO Pin Number / 10).
/**********************************************************************************************/
//Tucking the sheets
add r2, r2,lsl #1 //this is multiplication by 3: in the binary world, a logical shift left by 1
//doubles a number, e.g. bin 0001 (dec 1) becomes bin 0010 (dec 2), and then we add
//the original r2 to that result: r2 + 2*r2 = 3*r2.
//The beauty is that this is 2 ops in one: first an lsl and then an add to that result.
lsl r1,r2 //now we shift r1 by the value of r2; we want to set the bits that correspond to our pin number,
//3 bits per pin.
str r1,[r0] //STORE: the computed function value in r1 is stored at the address held in r0
pop {pc} //POP: this returns the value that was at the top of the stack, viz. lr
/**********************************************************************************************/
/*
Another function: SET GPIO PIN
*/
/**********************************************************************************************/
//This function turns the GPIO pin, whose address is stored in r0, off, if it receives a value of 0 in r1, and if not 0, turns the pin on.
.globl SetGpio //makes this function accessible from elsewhere
SetGpio:
pinNum .req r0 //req is how we create and use register aliases, and r0, for example can be called GPIO_PIN or cyborg, or whatever.
pinVal .req r1 //in this case, we are renaming r0 and r1, as pinNum and pinVal, respectively.
cmp pinNum,#53 // we already did this: we are comparing r0, aka pinNum now, with 53
movhi pc,lr //if the pin number is higher than 53 (not a valid pin), we immediately return
push {lr} //we push lr again into the top of the stack so we can preserve this location to return to
mov r2,pinNum //we also copy the pin number to r2
.unreq pinNum //’unreq’ does the opposite of ’req’, destroying the association previously created: pinNum no longer refers to r0.
pinNum .req r2 //we then associate r2 with the pinNum
bl GetGpioAddress //here, we call the “GetGpioAddress” function we created and labelled .globl, for this very reason.
gpioAddr .req r0 //creating an alias called ‘gpioAddr’ for r0
pinBank .req r3
lsr pinBank,pinNum,#5 //LOGICAL SHIFT RIGHT: division by a power of two; shifting right by #5
//discards the low 5 bits, which is the same as dividing by 32, giving the pin's bank.
lsl pinBank,#2 //we now multiply this by 4 by shifting by 2, or a binary (100)
add gpioAddr,pinBank //now, gpioAddr contains either 0x20200000 if the pin is in block 0 – 31, or 0x20200004 if it is in block 32 – 53. If we add 28
//base 10, we get the pin on address and if we add 40 base 10, we get the address for the pin off.
.unreq pinBank
and pinNum,#31 //an “and” operation yields a 1 in each binary digit where both inputs had a 1, and 0 everywhere else.
//and-ing 101001 & 110011 would be 100001.
//Given inputs of pinNum and 31 base10 = binary 11111, the result can only have 1s in the last 5 places, i.e. 0 <= result <= 31:
//there are only 1s where there were 1s in pinNum’s last 5 places, and this equals the remainder of the division by 32.
setBit .req r3 //for correct location of bit set for turning the GPIO pin on/ off, we use the modulo of the division.
mov setBit,#1
lsl setBit,pinNum
.unreq pinNum
teq pinVal,#0 //TEST EQUAL: if pinVal is zero, we turn the pin off. teq is a basic similarity comparison with no regard to which is larger or smaller.
.unreq pinVal
streq setBit,[gpioAddr,#40] //STORE IF EQUAL: so, if they are equal, we would store setBit at 40 away from the Gpio address,
strne setBit,[gpioAddr,#28] //STORE IF NOT EQUAL: otherwise, we store the setBit 28 away.
.unreq setBit
.unreq gpioAddr
pop {pc} //popping the program counter sets the value stored when the link register, lr, was pushed earlier.
//***********************************complete**********************************
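The register gymnastics in SetGpioFunction boil down to a little arithmetic, which can be sanity-checked in a few lines of Python (purely an illustration, no hardware is touched; the constants come from the listing above):

```python
GPIO_BASE = 0x20200000  # the address loaded by GetGpioAddress

def gpio_function_register(pin):
    """Each function-select register covers 10 pins and registers sit 4 bytes
    apart, mirroring the subhi/addhi loop: base + 4 * (pin / 10)."""
    return GPIO_BASE + 4 * (pin // 10)

def gpio_function_shift(pin):
    """3 bits per pin within a register, mirroring the multiply-by-3 trick
    (add r2, r2,lsl #1) followed by lsl r1,r2."""
    return 3 * (pin % 10)

print(hex(gpio_function_register(16)))  # 0x20200004: pin 16 lives in the second block
print(gpio_function_shift(16))          # 18: bits 18-20 select pin 16's function
```

Same answer the assembly's subtract-and-add loop grinds out, just without the conditional suffixes.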
The live video of the blinking LED is the exact same as for the previous lesson, OK02. The only difference was the execution of the code: more gracefully and efficiently by using functions.
The next step in getting things going in Baking Pi will be getting a simple DC motor. After purchasing a simple 1.5 – 3.0 volt DC motor (speed/ load: 8300 RPM max, speed/ no load: 11,600 RPM, Current Load: 0.98A) from RadioShack, I built on the knowledge I got from simply lighting up an LED via the Pi/ breadboard, to get this motor to simply spin. After successfully completing this, the next step will be to programmatically send inputs that can be the signals to move the motor(s), and with some troubleshooting, can help birth a very rudimentary AI agent. But, first things first…..
In addition, I needed to purchase a motor control chip that reverses the direction of current through the motor, and by doing this, effectively reverses the direction the motor turns… popularly known by the super-jargon, “reversing”. I purchased several L293D “Stepper Motor Driver” ICs (integrated circuits), really dual H-bridge motor drivers: 600mA output current/ channel, 1.2A/ Channel (non-repetitive) peak output current, logical 0 input up to 1.5V, 16 pins, 7V Vi Input signal and Ven Enable voltage, and 4W total power dissipation. These cockroach-looking guys are the motor driver chips:
The L293D can control two motors independently: most of the pins on the right-hand side of the chip are for controlling a second motor. Unfortunately, the Pi only has one PWM output, which also doubles as the audio port. What is PWM?
To control power, Pulse Width Modulation, or PWM, is used, which in turn controls how fast the motor spins. If there is no pulse, the motor will not spin; a short pulse will turn it slowly; and if the pulse stays high, the motor keeps turning at full speed. If the pulse is on half the time, the motor receives half the power it would receive if the pulse stayed high the whole time.
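As a back-of-the-envelope illustration (my own toy function, not part of any motor library): the average voltage the motor sees is just the supply scaled by the duty cycle.

```python
def average_voltage(supply_v, duty_cycle):
    """With PWM the motor sees the supply only while the pulse is high,
    so the time-averaged voltage is supply * duty_cycle (0.0 to 1.0)."""
    return supply_v * duty_cycle

print(average_voltage(3.0, 0.0))  # 0.0 V: no pulse, the motor does not spin
print(average_voltage(3.0, 0.5))  # 1.5 V: on half the time, half the power
print(average_voltage(3.0, 1.0))  # 3.0 V: pulse stays high, full speed
```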
Since I have a HYNIX chip (little black chip in the center of the Pi that says “Samsung”), I was worried that the Occidentalis v 0.2 rom would not work. [More details after the jump…]
Here is the pin-out for the L293D:
It has 2 +V pins (8 (+Vmotor) & 16) that provide power for the motors, and for the chip’s logic, respectively. The 5V pin of the Pi will be connected to pin 16 on the L293D, and pin 8 of the L293D will be connected to a power source.
The L293D is useful for a project such as this because the Pi cannot drive a motor by itself, and trying to do so might damage the Pi. Secondly, the L293D controls direction (forwards & backwards) as well as speed, via 2 control pins: exactly what we want.
Switching to a softer side, we shall recruit the awesome Python GPIO (General Purpose Input Output) library to control pins 4 and 17, especially. Before actually using the GPIO pins, it would be advisable to understand more about them. A GPIO pin is a connector to which external hardware can be attached. There are several types of GPIO pins:
(a) True GPIO pins that can, e.g. turn LEDs on and off,
(b) I2C interface pins that allow controlling hardware modules with just 2 pins (SDA and SCL),
(c) SPI interface pins for SPI devices (kinda similar to I2C, but a different standard), typically the pins labelled MOSI (GPIO10), MISO (GPIO9) and SCLK (GPIO11), and,
(d) Serial Rx and Tx pins for communication with serial peripherals.
There are also some pins for PWM (power control) and PPM (Pulse Position Modulation, for controlling servo motors). The GND (Ground) and 3.3V pins have been used in a previous module, as has the GPIO 0 pin.
The video shows the process of creating a Python file from within an SSH’d terminal, editing and saving it on the Raspberry Pi using nano, and running it with superuser privileges (sudo python).
And now for something different: the actual motor spinning and reversing direction:
The bootable image is on the SD card, but what now? Well, we can remove it from the computer and plug it into the Pi. Until you set up network connectivity and SSH, you need some way to view the Pi’s OS: you can use an HDMI cable or an old-school RCA/ phono (yellow) cable depending on your set-up. I was lucky to have 2 monitors already, and had got the Adafruit start-up care-package that came with an HDMI cable. I also had purchased a wireless-N adapter ( http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=wireless+N+adapter) that let me connect to my router without, you guessed it, Ethernet wiring. Lastly, I had a Logitech wireless USB keyboard/mouse (http://www.amazon.com/Logitech-Wireless-Keyboard-Built-In-Multi-Touch/dp/B005DKZTMG/ref=sr_1_2?ie=UTF8&qid=1385821240&sr=8-2&keywords=logitech+wireless+keyboard). All mentioned peripheral accessories were plug-and-play on the Pi on both Raspbian Wheezy and Occidentalis.
sudo nano /etc/network/interfaces
Nano allows editing of the file opened, and we need superuser privileges. If you choose to broadcast your SSID, type the following once the file loads in the in-terminal editor:
auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
auto wlan0
iface wlan0 inet dhcp
wpa-ssid "ssid"
wpa-psk "password"
Replace the entries in the quotes with the correct values on your end. If you choose to not broadcast your SSID, type this in lieu of the SSID guys:
auto lo
iface lo inet loopback
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-scan-ssid 1
wpa-ap-scan 1
wpa-key-mgmt WPA-PSK
wpa-proto RSN WPA
wpa-pairwise CCMP TKIP
wpa-group CCMP TKIP
wpa-ssid "My Secret SSID"
wpa-psk "My SSID PSK"
iface default inet dhcp
BAKING PI — University of Cambridge Tutorials
OK01:
Here is a pastebin of the code with C syntax formatting: http://pastebin.com/zFZgiSm8
Here is the same code, for the click-wary:
OK02:
Here is a pastebin of the code with C syntax formatting: http://pastebin.com/19ZdzdRa
Here is the same code, for the click-wary:
There are quite a few parts even in the simplest DC motors. Before delving into the specifics of DC motors, it would be good practice to examine how a motor works.
A simple motor has 6 parts:
(i) ROTOR or ARMATURE: is an electromagnet. It is typically thin metal plates stacked together, with thin copper wire coiled around each pole,
(ii) AXLE: holds the armature and the commutator,
(iii) FIELD MAGNET: a permanent magnet that helps save power in smaller motors unlike using an electromagnet,
(iv) COMMUTATOR: a split-ring device used to reverse the current at that point. Electric current is supplied externally through it,
(v) BRUSHES: the electrical contacts to the rotating ring (Contemporary motors use spring-loaded carbon contacts, but the name stuck from copper brush contacts),
(vi) DC POWER SUPPLY: a DC (direct current) motor works by exploiting the effects of passing a current (within a coil) through a magnetic field.
Without any adjustments, passing a current through a coil in a magnetic field would result in the coil spinning a half-turn due to the repulsion of like poles; but the other end would then present an unlike pole, causing attraction and ending the spinning. Reversing the current each half revolution ensures that the torque keeps acting the same way, keeping the coil turning in the same direction. This could be achieved by flipping the power supply, and therefore the flow of electrons, every half-turn; that is impractical, so a solution had to be formulated.
This is why the commutator is a split ring: instead of having to flip the power source to make the electrons flow the opposite way, the gaps in the ring achieve this effect. The contacts of the commutator are attached to the axle of the electromagnet and spin with the magnet. Each half-turn, as the brushes pass the gaps, the current through the coil reverses, so each half loop is carried on into a complete loop.
( http://hyperphysics.phy-astr.gsu.edu/Hbase/magnetic/motdc.html#c4 )
F-YEAH.
The mission of WordPress.com is to democratize publishing. We’re inspired every day by the ways creators use our platform to bring their voices to the world. Unfortunately, we also see many cases of censorship aimed at WordPress.com authors and users.
One area where we’ve seen a number of problems is the censoring of criticism through abuse of copyright law. Two recent cases of abuse really caught our attention and made us think that we needed to take action to fight back on behalf of our users and everyone who believes in the internet’s promise for free expression.
A common form of censorship by copyright stems from improper use of legal creations called DMCA takedown notices. The DMCA stands for the “Digital Millennium Copyright Act,” which is a US federal law that created a system for protecting copyrights online. The DMCA system works pretty well, but has a…
I purchased (from Amazon/ Adafruit –> ) a FC-26P ribbon cable, an Adafruit Pi Cobbler breakout board, a BB400 solderless plug-in breadboard, a set of Male-to-Female jumper wires, a set of Female-to-Female jumper wires, (and from RadioShack –> ) resistors [ 220-Ohm 1/4-Watt, 330-Ohm 1/4-Watt, 1k-Ohm 1/4-Watt, 10-Ohm 1/4-Watt, 5.6-Ohm 1/2-Watt], 1 4-pin green LED [960 mlm intensity, 20mA FW current, 3.5V typical 4V max supply], 2 2-pin red LEDs [80mcd intensity, 20mA current, 1.9V typical 2.4V max supply] and a 1.0µF metallized-film capacitor [20mm spacing, tolerance 10%, 250WVDC max].
_______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______………..
Ingredients:
Lights, Jumpers & the Resistance –
Jumper wires are typically 22 gauge solid-core wires with insulation pre-stripped and readied for insertion in or out.
Resistors resist the flow of electrical current, thereby controlling WHERE and HOW FAST it flows. [Thought experiment: If controlling water, using thin pipes would let less water through but would flow through faster than a similar amount of water in a thicker pipe, albeit the latter would let a larger quantity of water through. If you had a fire hydrant, you want high resistance, i.e. highly pressurized water: if you have a water fountain, you might prefer less resistance, and less pressure. If you had the pressure from a fire hydrant in a water fountain, this would lead to a very unpleasant drinking experience, and water fountain-level pressure in a fire hydrant would lead to happier arsonists.]
An LED is a light-emitting diode. A diode lets current flow only one way, from positive to negative. An LED has a quick distinguishing feature: the positive leg/ lead is visibly longer than the negative lead. If the leads are clipped, there are 2 other ways to tell: (1) the bigger metal piece inside a transparent LED is usually the cathode, i.e. the negative side (in a device that consumes power, the anode is positive, but in a device that produces power, the anode is negative), or (2) test using a power source.
Resistance is measured in ohms, written Ω. The bigger the Ω, the greater the resistance. Measuring resistance can be done either by color-coordinated stripes on the resistor’s body (resistor color code), or more accurately using a multimeter. A resistor is the same from either pin/ end; there is no forward or backward direction.
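Reading the color code is mechanical enough to script. Here is a throwaway Python decoder for the common two-digit-plus-multiplier bands (the function and its name are mine, purely illustrative):

```python
COLORS = "black brown red orange yellow green blue violet grey white".split()

def resistance(band1, band2, multiplier):
    """Two significant-digit bands followed by a power-of-ten multiplier band."""
    digit = COLORS.index  # black=0, brown=1, ..., white=9
    return (10 * digit(band1) + digit(band2)) * 10 ** digit(multiplier)

print(resistance("red", "red", "brown"))        # 220 ohm
print(resistance("orange", "orange", "brown"))  # 330 ohm
print(resistance("brown", "black", "red"))      # 1000 ohm, i.e. 1k
```

Conveniently, those three outputs match the 220-Ohm, 330-Ohm, and 1k-Ohm resistors on the shopping list above.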
Ribbon Cable & Pi –
Breaking out my breakout board –
The breakout board is an extended pin-out from the Pi, to the breadboard ( or any other device for that matter). It replicates the existing pin-out on the Pi, communicating through the ribbon cable.
Lastly, an upside breadboard –
A breadboard (pictured below) is a device used for prototyping circuits. It allows easy and fast access of components: as fast as you can pull out a thin wire from a slot. One of its advantages is that it does not need soldering, which makes most n00b errors, mulligans. A breadboard is basically a plastic brick/ block that has a lot of holes, typically spaced 0.1″ apart. Into these holes doth one plug in leads/ wires, that are then held in place by metal contacts. There are many columns, in my case, numbered long-side from 1 – 30, and short side from a – j: 30 holes by 10, for a total of 300. In addition, there are +ve and -ve holes on either side of these 300, each 2 holes by 5 per chunk, by 5 chunks per side, i.e. 50 per side and 100 total +/- holes. The +/- is the power and ground rail respectively.
It is important to understand how these rows and columns work. If you put power into, say, the right-hand power rail, the whole rail becomes live, from the red + at one end straight down to the other end, but only on that side. The same applies for the negative (ground) rail, and the same applies for the rails on the other side. Within the body, the lettered/ numbered holes (not in the power rails) work the opposite way, i.e. in columns, and only on their side of the center ridge. Plugging a jumper wire into c-28 would communicate with a-28, b-28, d-28, and e-28. It wouldn’t communicate with f-28, because of the ridge’s separation. You don’t have to match power/ground hole-by-hole: simply powering/ grounding one spot on a rail effectively does that for all the holes along that rail. To reiterate, from + to + or – to – is a rail running the board’s length, and from a-25 to e-25 is a connected column.
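That connectivity rule can be captured in a few lines (a hypothetical helper, just to restate the rule): two body holes share a node when they share a column number and sit on the same side of the ridge.

```python
def connected(hole_a, hole_b):
    """True when two body holes (e.g. 'c28') are electrically joined:
    same column number, same side of the center ridge (a-e vs f-j)."""
    side = lambda row: "abcde" if row in "abcde" else "fghij"
    return hole_a[1:] == hole_b[1:] and side(hole_a[0]) == side(hole_b[0])

print(connected("c28", "a28"))  # True: same column, same side of the ridge
print(connected("c28", "f28"))  # False: the center ridge separates them
print(connected("c28", "c29"))  # False: different columns
```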
Notice the bridge in the center: this literally and electrically separates the two sides of a breadboard. When a side is live, i.e. there is current flowing through it, the live “holes” will light up, but that’s another story we haven’t gotten to yet. A breakout board’s two rows of pins should be attached on opposite sides of this bridge: if not, it will lead to an over-baked Pi.
_______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______……….._______………..
Armed to the gills, I set out to complete several of the online tutorials from Adafruit/ LadyAda.net & Cambridge.
Step 1: Plug in the breakout board.
The two rows of pins MUST be on opposite sides of the center ridge, and firmly seated after aligning the pins with matching holes. The Pi should not be plugged in at this point.
Step 2: Juice!
For power, we shall connect the ‘3V3’ pin, which carries, shockingly, 3.3V. A jumper runs from the hole in the board that the 3V3 pin sits in over to the power rail. The same is done for GND, the ground pins. When we connect the Pi in upcoming steps, current will flow through these connections, completing the circuit and powering the breadboard.
Step 3: L-E-D to G-P-I-O
Pin Out Diagram:
The next step is wiring the LED into the board. There are 2 considerations: connecting the positive lead (the longer one) to a GPIO pin (I used GPIO pin 0 – SDA) and connecting the negative lead of the LED to ground through a resistor. This is easier than it sounds: we run a jumper wire from the column the GPIO pin is in to any free column, then connect the positive lead of the LED to that same free column. The negative lead of the LED goes into a different free column, and one end of a resistor goes into that same column. The other end of the resistor is connected to the ground rail, completing the circuit.
Once you connect the Pi to a power source, the Pi’s red PWR LED should turn on as well as the LED on your breadboard.
Yay. Success!
3.2 Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall.
a) We have to make assumptions about where the center of the maze is, since the distances from our current position to the exit, or to any of the 4 walls, can differ, and since walls can force us to double back instead of getting closer to our destination.
*Working shall be shown for all exercises.
Exercise 3.1
As opposed to adding and subtracting only binary and decimal numbers, there are other numbering systems that are popular in the computer world. An example is the base-8, or octal, numbering system. The following table shows pairs of octal numbers.
+++++++++++++++++++++
The octal numeral system, aka oct, is formed from binary numerals by grouping consecutive binary digits into groups of 3, starting from the right. Therefore, in base 10, 99 would be
(9 x 10^1) + (9 x 10^0); in binary, it would be 1100011, or 64 + 32 + 2 + 1 = 99; and for oct, we group the consecutive binary digits, from right to left, in threes. This gives 001,100,011 (the two leading 0’s are added for grouping purposes). Reading each group as an octal digit, this equates to 1,4,3 or oct 143.
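A small Python sketch (my own illustration, not part of the exercise) of the group-in-threes rule:

```python
def to_octal_via_groups(n):
    """Convert n to octal by grouping its binary digits in threes, right to left."""
    bits = format(n, 'b')
    bits = bits.zfill((len(bits) + 2) // 3 * 3)   # pad with leading 0's for grouping
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return groups, ''.join(str(int(g, 2)) for g in groups)

to_octal_via_groups(99)   # (['001', '100', '011'], '143')
```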
*http://www.electronics.dit.ie/staff/tscarff/signed_numbers/signed_numbers.htm
Signed/ Unsigned:
An 8-bit number system can create 2^8, or 256 integers, viz. 0 – 255.
In signed number systems, the first half, 0-127, represents positive numbers, while the other half, 128-255, is used to represent negative numbers. This is easily conceptualized as a circle, with 0 at the North Pole and 127/128 bordering the South Pole. 255, adjacent to 0, would be -1, while 128 would actually represent -128. In unsigned number systems, 255 would simply be (+) 255.
In binary, all the signed numbers that have a ‘0’ in the m.s.b. (Most Significant Bit) represent positive numbers, while ‘1’ in the msb represents negative numbers. The two’s Complement of a number is the negative equivalent of the positive number, shown below.
0000 0010 –(one’s complement: flip every bit)–> 1111 1101 –(add ‘1’ or 0000 0001)–> 1111 1110, which is 254 unsigned, or -2 signed.
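The flip-then-add-1 recipe, as an 8-bit Python sketch of my own:

```python
def twos_complement(value, bits=8):
    """Flip every bit (the one's complement), then add 1."""
    mask = (1 << bits) - 1
    ones = value ^ mask          # 0000 0010 -> 1111 1101
    return (ones + 1) & mask     # 1111 1101 -> 1111 1110

twos_complement(2)   # 254, i.e. 0b11111110: -2 signed
```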
In 16-bit systems, the scope shifts from a mere 256 to 65536 values: 0 – 32767 (positive signed) and 32768 – 65535 (negative signed), or the full 0 – 65535 range if unsigned.
The sign extension in 16 bit numbers is similar to 8 bit numbers, with a 0 or 1 in the msb signifying positive or negative numbers, respectively.
Binary Addition: ( http://www.allaboutcircuits.com/vol_4/chpt_2/2.html )
For the sum bit, same bits result in 0, different bits result in 1 (plus a carry where needed).
0 + 0 = 0
0 + 1 = 1
1 + 1 = 0, but more accurately, 0 carry 1. Ergo, this actually equals 10, or binary 2 (i.e. 0010).
1 + 1 + 1 = 11, i.e. 1 carry 1. (It is worth noting that in binary, 0001 + 0001 + 0001 would result in 0011, which is 3!)
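These one-bit rules are exactly XOR (same bits give 0, different bits give 1) and AND (the carry); a Python sketch of my own that adds using only those two rules:

```python
def add_binary(a, b):
    """Ripple addition using only the rules above: sum bit = XOR, carry = AND."""
    while b:
        carry = (a & b) << 1   # 1 + 1 produces a carry into the next column
        a = a ^ b              # same bits -> 0, different bits -> 1
        b = carry
    return a

add_binary(0b0001, 0b0001)                 # 0b0010, i.e. 2
add_binary(add_binary(0b0001, 0b0001), 1)  # 0b0011, i.e. 3
```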
*******************************************************************************************************************************
3.1.1. What is the sum of A and B if they represent unsigned 12-bit octal numbers? The results should be written in octal. Show your work.
Part (a) unsigned:
Read as decimal, 3174 + 522 = 3696 (duh) — but these digit strings are octal.
To convert to octal, we can either perform straightforward column addition, carrying at 8 the way we carry at 10 in decimal, or convert the numbers to binary and then group the binary digits in threes (exactly 4 groups in our case, since we are asked to use 12 bits).
3174
0522 +
——
3716. ( < —– WHAT????! )
<a href="http://www.threadbombing.com/details.php?image_id=3025"><img src="http://www.threadbombing.com/data/media/31/hahawaitwhat-cd4.gif" border="0" alt="OMG" /></a>
Since we are using base-8 and not base-10, the “odometer” rolls over at 8, not 10! This has significant implications: any column sum less than 8 behaves just like ordinary integer math, but anything that sums to 8 or more needs to be handled with care.
This is because the maximum value 3 bits can hold is 7, i.e.
1) 001 or 1
2) 010 or 2
3) 011 or 3
4) 100 or 4
5) 101 or 5
6) 110 or 6, and
7) 111 or 7.
If we were to attempt to represent 8, we would have overflow, since we don’t physically have any space or representation of the 4th bit.
Therefore,
adding oct 3 + oct 3 = oct 6
but adding oct 3 + oct 5 = oct 0 (carry 1)
and adding oct 3 + oct 7 = oct 2 carry 1
It is easier to visualize this in binary:
oct 3 or 011
oct 7 or 111 +
(1)010 , which is 2, carry 1, since we only have 3 columns.
Similarly, oct 4 or 100
oct 7 or 111 +
(1)011, which is oct 3, carry 1.
To be as error-free as possible, I will convert the octal representation to binary and perform the math in binary. With time, octal (and other bases’) arithmetic becomes just as easy in digit form as base-10, or decimal. For the first example, I shall do the equivalent of kindergarten math with beans or buttons.
oct 3174 = binary 011,001,111,100
oct 0522 = binary 000,101,010,010
011 001 111 100
+ 000 101 010 010
011 111 001 110, which is oct 3,7,1,6.
Since it is unsigned, we don’t care about any leading values or sign-extensions.
>>The answer is oct 3716 (binary 011,111,001,110).
Part (b) unsigned:
Read as decimal, 4165 + 1654 = 5819 — but again, the operands are octal, not decimal.
=> decimal 4165 would be binary 1000001000101 ( https://www.wolframalpha.com/input/?i=convert+4165+to+binary ) — 13 bits — grouped as 1,000,001,000,101, which is oct 1,0,1,0,5 ( https://www.wolframalpha.com/input/?i=convert+4165++to+octal ); since this is 12-bit math, we would have no choice but to drop the leading 1, leaving oct 000,001,000,101 or oct 0105. A clear sign the decimal reading is wrong.
=> decimal 1654 would be binary 011001110110 ( https://www.wolframalpha.com/input/?i=convert+1654++to+binary ), grouped 011,001,110,110, which is 3,1,6,6, or oct 3166. So instead we treat the digit strings as the octal numbers they are:
oct 4165 = 100,001,110,101
oct 1654 = 001,110,101,100 +
———————-
110,000,100,001; which is oct 6,0,4,1.
To verify this, one can then perform straightforward column addition, but carrying at 8, as we do at 10 in decimal:
4165
+ 1654
6041.
>>The answer is oct 6041 (binary 110,000,100,001).
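A quick cross-check of both parts using Python’s built-in base conversion (my own sanity check, not part of the assigned working):

```python
# Part (a): oct 3174 + oct 0522
a = int('3174', 8) + int('0522', 8)
print(oct(a))          # 0o3716

# Part (b): oct 4165 + oct 1654
b = int('4165', 8) + int('1654', 8)
print(oct(b), bin(b))  # 0o6041 0b110000100001
```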
By The Way:
If, for example, we have a number to represent, but only so many bits to represent it in, do we lose any value, i.e. precision? Yes, there is loss of precision when representing an n-bit number in m bits, where m < n. This is known as sign contraction ( http://www.plantation-productions.com/Webster/www.artofasm.com/Windows/HTML/DataRepresentationa5.html ), and the broader mathematics of approximating real numbers with rationals is diophantine approximation ( http://www.encyclopediaofmath.org/index.php/Diophantine_approximations ). In our case, our 12-bit constraint is to blame. This is similar to the audio loss we get when we compress large files into compact (smaller) ones, and to the lossless compression schemes that have been born as we come to understand compression algorithms. Of note is the pigeonhole principle ( https://en.wikipedia.org/wiki/Pigeonhole_principle ), originating from Dirichlet’s drawer/box principle, named after Peter Gustav Lejeune Dirichlet, who is also credited with advancements in analytic and algebraic number theory, and with the modern definition of a function.
The pigeonhole principle: if you have ten pigeons and nine pigeonholes, then to house all the pigeons, at least one pigeonhole must hold more than one pigeon. Similarly, if you have 3 socks in only 2 different colors, at least 2 socks must be the same color. In discrete terms such as socks and pigeons there is little room for compounding error, but as the numbers grow, the error margin grows at a more pronounced rate. In systems where lack of precision is fatal, such loss is unacceptable.
***********************************************************************************************************
3.1.2. What is the sum of A and B if they represent signed 12-bit octal numbers stored in sign-magnitude format. The result should be written in octal. Show your work.
Binary Math: Small Details:
If we are subtracting a larger subtrahend from a smaller minuend — or effectively, adding a large negative number to a smaller positive number — it is easier to break this into a simple decimal problem, e.g. (-8) + 3. This equates to (+3) – 8, which results in -5. We traverse the number line from right to left, starting at +3: move 3 steps to 0, then another 5 steps to -5. Simple, right?
The equivalent of precise decimal subtraction, or addition of a larger negative number to a smaller positive number, is the Two’s Complement system ( http://students.byu.edu/~cs124ta/references/readings/two%27s%20complement.html ) ( https://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html ), a special case of the Radix Complement system ( http://phoenix.goucher.edu/~kelliher/f2003/cs220/oct13.html ). To form it, invert every bit of the number (the one’s complement), then add 1; the result is the value that, added to the original n-bit number, yields 2^n. The subsequent logical question is: why not just use One’s Complement? Negative Zero is not just a positively awesome band,
but refers to signed zero, or zero with an associated sign, which one’s complement (and sign-magnitude) systems produce: an all-ones pattern is -0, distinct from the all-zeros +0. Signed zeros are provided for in Clause 3 (“Formats”) of I.E.E.E. 754-2008, which replaced 754-1985 and merged the old standard with I.E.E.E. 854, the radix-independent floating-point arithmetic standard. To avoid the ambiguity of two zeros, and to get a single unified adder, we take the extra step of adding 1 after inverting — the two’s complement. (In general, the diminished radix complement in base r turns each digit d into (r – 1) – d; adding 1 yields the full radix complement.)
For example, suppose we want to represent -30. We begin with what we know: 30 is 0001 1110. The inverse — flip the bits — is 1110 0001; adding 1 gives 1110 0010. This last number, 1110 0010, is 226 unsigned. On the number wheel mod 256, 256 – 226 = 30, so this bit pattern represents negative 30!
This ability makes binary subtraction a lot easier, since we can use a unified rationale, logic and even physical circuitry for BOTH the addition and subtraction operations, instead of handling the two separately. To correctly, and accurately, perform A – B, we take the 2’s complement of B by flipping all its bits and adding 1, and then add that number to A. If the result comes out negative (sign bit set) and we want its magnitude, we restore order by undoing what we did: flip the bits and add 1 again. To perform binary subtraction, or addition of a negative number, we need the concept of a bit that carries the sign, so we know whether the result is positive or negative.
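A minimal sketch of that unified add/subtract idea in Python, assuming an 8-bit register width (names are mine):

```python
BITS = 8
MOD = 1 << BITS

def subtract(a, b):
    """a - b computed as a + (two's complement of b); one adder serves both operations."""
    b_comp = ((b ^ (MOD - 1)) + 1) % MOD   # flip the bits, add 1
    return (a + b_comp) % MOD              # dropping the carry-out is the "- 2^n"

subtract(3, 8)   # 251, which is -5 in signed 8-bit: the (-8) + 3 example above
```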
One’s complement, or the diminished radix complement, is (for a fraction) the number that, when added to the number in question, results in 1. A simple example: 0.2 is the 1’s complement of 0.8, since it is the number we can add to 0.8 to make 1.0. To take it to the next level, 2.0, we add 1 more: the 2’s complement of 0.8 would be 1.2, since 0.8 + 1.2 = 2.0.
2’s Complement can be viewed as B′, or B prime, where B′ = 2^n – B, and B is the number being complemented.
B’ = 2^n – B.
If we are performing A – B, then
A – B = A + B’ – 2^n.
We can derive this from the first equation by moving the 2^n to the B’ side, thereby resulting in
B’ – 2^n = – B.
We should get a carry-out, which tells us the result’s sign: for unsigned operands, a carry-out of 1 means A ≥ B and the result is non-negative.
Also, once we assign the left-most bit, or msb, to be the sign bit, it changes what a pattern means: unsigned 4 and signed +4 are both 0000 0100, but –4 in sign-magnitude is 1000 0100, instead of 1000 0100 representing unsigned 132.
On to the actual problem!
******************************************** ***************************************** ******************************************** *****************************************
a) In sign-magnitude format, we suddenly care about the msb of each value. The msb is no longer used for counting and operations, but as the sign, which, to reiterate, is negative if a binary ‘1’ and positive if a binary ‘0’ (http://www.cs.uaf.edu/~cs301/notes/Chapter4/node3.html).
We are no longer dealing with plain unsigned quantities; in the sign-magnitude binary world, these numbers, and the results of summation, take on signs.
oct 3174 = binary 011,001,111,100
oct 0522 = binary 101,010,010
Since leading zeros are placeholders, 0522 is 000,101,010,010, which is positive, as the msb is 0.
011,001,111,100 < —– positive
000,101,010,010 + < —– positive
———————— ————————
011,111,001,110 < —– positive.
The left most bit addition (0 + 0) is 0, with no carry forward. (If there was a carry, we would have overflow as we only have 12 bits.) This keeps the result positive.
This result is binary 011,111,001,110, or oct 3716.
Cross-check:
oct 3174
+ oct 0522
oct 3716
What is this “underflow” business?
Underflow is one of 5 I.E.E.E. Floating-Point Arithmetic exceptions ( http://docs.oracle.com/cd/E19957-01/806-3568/ncg_math.html ); the others are overflow, division by zero, invalid operation and inexact. Underflow occurs when the result of an arithmetic operation is so small that it cannot be stored in its intended destination format without experiencing an unacceptable rounding error.
There are different underflow thresholds, i.e. the point beyond which underflow occurs. For single-precision numbers, the smallest normal number is 1.17549435e-38, and the largest subnormal number is 1.17549421e-38. The smallest normal number is a tiny number, approaching zero, that can be seen as the threshold for no-precision-loss arithmetic. The positive subnormal numbers lie between this smallest normal number and zero, and arithmetic near them may produce a subnormal result. Subnormal numbers provide gradual precision for small-number math, even though they carry fewer bits of precision than normal numbers. When a result has a magnitude smaller than the smallest positive normal number, and we choose to represent it rather than defaulting to zero, this ambition leads to what is known as gradual underflow: mathematically sound, but hard to represent. After all, such a number is less than 1.17549435 x 10^-38, or 0.000000000000000000000000000000000000011754944 ( https://www.wolframalpha.com/input/?i=1.17549435e-38+ ). The lack of precision resulting from the rounding generates an underflow warning; a subnormal result does not always raise an underflow warning, but does raise a subnormal warning, which is not the same thing ( http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzdecfloatmodel.htm ).
Underflow/overflow is not always a bad thing: when two numbers within a factor of 2 of each other are subtracted, the result is exact, because thanks to subnormal numbers un-trapped underflow cannot introduce error. Underflow is a really interesting topic, with more details available at both the OracleDocs site and the IBM Information Center, both linked in the paragraph above.
Overflow ( http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Data/underflow.html ) refers to an operation’s result being larger than the largest number a number system can represent. Say we are only allowed to use the one’s column in decimal addition. Every addition of numbers less than 5 would be easy and error-free, since the largest sum would be 8 (i.e. 4 + 4). However, if we allowed 5, i.e. addition of numbers less than or equal to 5, we might have some issues. 5 + 4 = 9 would be fine, but with 5 + 5 we would be at odds over how to deal with the 1 that belongs in the ten’s column. Where would this 1 go, since we are only allowed the one’s column? If we ignore it, we are claiming that 5 + 5 = 0, and if we include it, we have violated the rules of the universe we have chosen to be a part of. We have overflow. Underflow, on the other hand, would result from subtraction in this one’s-column-only universe. As long as we stipulate that the minuend, the number we are subtracting from, is greater than or equal to the subtrahend, then, say, 4 – 3 or 8 – 8 has no issue existing peacefully in this world. When we enter the realm of minuends smaller than the subtrahend, “we have a problem, city in Texas”. What is 4 – 9? This is, obviously, (borrow 1) –> 14 – 9 = 5 with a debt, or -5. But where did the 1 we borrowed come from? The ten’s column, which does not exist in our one’s-only universe. Therefore, we run into issues when we ask our number systems to represent what they do not know how to represent. We are asking dolphins to peel bananas, or monkeys to ask for crackers.
To prevent overflow and underflow in this exercise, which has a 12-bit constraint, we use the 12th bit, the most significant bit, as our universe’s representation of the sign. As usual, we denote a ‘0’ as positive (a quick mnemonic: the ‘0’ looks like the ‘o’ in positive) and a ‘1’ as negative.
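To make the 12-bit overflow story concrete, here is a small Python helper of my own (`add12` is not from the text) that masks results to 12 bits and flags a lost carry:

```python
MASK = 0o7777            # twelve bits, all ones

def add12(a, b):
    """Unsigned 12-bit addition; a carry out of bit 11 means overflow."""
    total = a + b
    return total & MASK, total > MASK

add12(0o3174, 0o0522)    # (0o3716, False) -- fits in 12 bits
add12(0o7000, 0o7000)    # (0o6000, True)  -- the carry is lost: overflow
```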
b) We still consider the msb.
Binary subtraction is somewhat straightforward and similar to integer/ decimal subtraction.
Therefore,
(i) 0 – 0 = 0
(ii) 1 – 1 = 0
(iii) 1 – 0 = 1.
The fourth rule is a bit more complicated, stating 0 – 1 = 1. This is because when we borrow from the 2nd column, we have 10 (binary 2); 2 – 1 = 1, therefore:
(iv) 0 – 1 = 1.
On this premise, octal 4,1,6,5 becomes [4] 0,1,6,5. What exactly is the [4]?
This 4 signifies a negative value, since it has a leading 1 ( i.e. 100), as does 5 (101), 6 (110) and 7 (111). If it was a 3 or less, it would have a msb of 0 [3 (011), 2 (010), 1 (001) or 0 (000)] and would be positive.
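A tiny Python sketch of this rule of thumb (the helper name is mine): a leading octal digit of 4 or more means its top bit — the sign bit — is 1.

```python
def is_negative_octal(digits):
    """In sign-magnitude, a leading octal digit of 4, 5, 6 or 7 has msb 1: negative."""
    return int(digits[0], 8) >= 4

is_negative_octal('4165')   # True  (4 = binary 100)
is_negative_octal('3174')   # False (3 = binary 011)
```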
To understand this, let us take the Eight’s Complement, the base-8 analogue of the Two’s Complement. Flipping all the bits of the binary encoding turns each octal digit d into 7 – d (the seven’s, or diminished radix, complement); adding 1 then yields the true eight’s (radix) complement. In the cross-check further below I flip the bits and add 7 instead; since the complement is applied twice, the extra offset cancels and the same answer falls out. The steps: (a) take the complement of the larger number, 1654, (b) add the smaller magnitude, 0165, to it, (c) take the complement of the result — that is the answer. First, though, the direct borrow-by-8 subtraction:
oct 1654
oct -0165 < —- this is (4)165, but the 4 stands for a negative since in binary, it is 100, and msb is 1.
oct 1467.
Breaking down the math:
The right-most column: 4 – 5 –> borrow ‘1’, i.e. 8, since this is base 8, not base 10. (If it were base 10, borrowing ‘1’ would give 10 + 4 = 14, and 14 – 5 = 9 — child’s play.) Since this is base 8, borrowing ‘1’ effectively adds 8 to the “requesting” digit: 8 + 4 = 12; 12 – 5 = 7.
2nd right-most column: Since we borrowed 1, we have 4 (not 5) – 6. We again borrow 8: 8 + 4 = 12, and 12 – 6 = 6.
3rd right-most column: We borrowed from this column, so 6 becomes 5; 5 – 1 = 4.
4th right-most column: 1 – 0 = 1.
This is how we arrive at 1,4,6,7.
Cross-Check:
(1) 00,001,110,101 < —– negative
(0) 01,110,101,100 + < —– positive
translates to
01,110,101,100
– 00,001,110,101
001 100 110 111, which is oct 1,4,6,7.
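Verifying the column-by-column borrow work with Python’s integer conversion (my own check):

```python
minuend = int('1654', 8)     # 940 decimal
subtrahend = int('0165', 8)  # 117 decimal
print(oct(minuend - subtrahend))   # 0o1467
```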
A Second Cross-Check Using Eight’s Complement:
–> Larger number = 1654 or 001,110,101,100
(a) 8’s Complement (flip all bits) –> 110,001,010,011
(a) Add 7 or 0111 –> 000,000,000,111 +
______________
Subtractand 110,001,011,010
Subtractor is smaller number, i.e. 0165
(b) Add (a) to Subtractor –> 000,001,110,101 +
_______________
110,011,001,111
(c) Flip the bits –> 001,100,110,000
(c) Add 7 –> 111 +
________________
oct 001,100,110,111.
This is oct 001,100,110,111 or oct 1,4,6,7.
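Because the flip-then-add-7 complement is applied twice, its constant offset cancels and a – b falls out; a short Python sketch of that double-complement trick (function name mine):

```python
N = 1 << 12                       # the 12-bit universe

def comp_flip_add7(x):
    """The variant used above: flip every bit, then add 7."""
    return ((x ^ (N - 1)) + 7) % N

a, b = int('1654', 8), int('0165', 8)
result = comp_flip_add7(comp_flip_add7(a) + b)
print(oct(result))                # 0o1467 -- the offsets cancel when applied twice
```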
This third method is helpful since it doesn’t need to track the sign bit per se: by complementing and adding, it reaches the same, correct answer as explicitly tracking the sign bit does.
3.1.6. What makes base 8 (octal) an attractive numbering system for representing values in computers?
Octal is easy to read, from the human programmer’s point of view ( http://thestarman.pcministry.com/asm/hexawhat.html ). A phone number or social security number illustrates this principle, as well as another important one: why group things in 4’s, i.e. HEX (more on this after the break). Back to S.S.N.s and phone numbers, e.g. 123-456-7890. Whether a phone number (maybe VoIP, given the area code) or a S.S.N. (issued in the state of NY) ( http://www.stevemorse.org/ssn/ssn.html ), remembering this as two groups of 3 and a last group of 4 is the typical ‘American’ lingua franca: how many people give a phone number as 5 55 5555555?
The same binary number, e.g. 1010110000111111, which is decimal 44095, can be represented by grouping it in 3’s for octal (1,010,110,000,111,111) or in 4’s for hex (1010,1100,0011,1111), i.e. oct 1,2,6,0,7,7 or hex 10,12,3,15 (aka AC3F). It is easier to remember octal 126077, and easier still, hex AC3F.
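Python’s built-in conversions confirm the groupings (a quick check of my own):

```python
n = 0b1010110000111111
print(n)         # 44095
print(oct(n))    # 0o126077 -- binary digits read off in threes
print(hex(n))    # 0xac3f   -- binary digits read off in fours
```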
fin -__-
HWK_II
>>>> /*Footnotes, labelled using * (asterisks) will henceforth be used and will point to the foot of the particular subsection. Therefore, a footnote will be marked and the linking annotation will be at the end of the subsection, e.g. in 2.1.1, there is an asterisk after SPEC, and the annotation is at the bottom of the 2.1.1 answer, before 2.1.2. */<<<<
Exercise 2.1
The following problems explore translating from C to MIPS. Assume that the variables f,g,h, and i are given and could be considered 32-bit integers as declared in a C program.
2.1.1. For the C statements above, what is the corresponding MIPS assembly code? Use a minimal number of MIPS assembly instructions.
Atomic exchange, or atomic swap, is a primitive for building synchronization operations: it interchanges a value in a register with a value in memory. Synchronization is needed to avoid a data race, where multiple memory accesses to one location, from different threads, can drastically alter a program’s results. This is known as the critical section problem, which is like an intersection with traffic lights that only one car may occupy at a time. If multiple cars were in the intersection at once, there would be an accident; if the light never changed, some cars would never move forward. The problem is solved by performing reads and writes as efficiently and cost-effectively as possible.
MIPS originally stood for Microprocessor without Interlocked Pipeline Stages. It is a Register-to-Register RISC (reduced instruction set computer) instruction set architecture created in 1981; it began as a 32-bit architecture and was later extended to 64 bits. It is bi-endian, supporting both Little and Big Endian operation: endianness is a byte-ordering nomenclature, where little endian places the least significant byte at the lowest address, while big endian places the most significant byte at the lowest address. A handy way of visualizing this is a number line where values are placed with varying “size” with respect to 0. MIPS instructions are all 32 bits.
Endianness originated from Jonathan Swift’s Gulliver’s Travels, where the Lilliputians cracked their boiled eggs at the smaller end, while the residents of Blefuscu preferred the big end.
The atomic exchange is not resource cost-effective, needing both a read operation and a store operation. Instead, instruction pairs are used and executed as if they were a single atomic operation. In MIPS, these are the load linked instruction and the store conditional instruction. If the memory location specified by the load linked instruction is changed before the store conditional occurs, the store conditional fails. A store conditional returns 1 if successful and 0 if not. This makes MIPS resource cost-effective.
To subtract variable h from variable g and store the result in variable f, we can either attempt to subtract h from g, or add a negative h to a positive g. For example, 2-1 is the same as 2+ (-1).
Adhering to the 3rd hardware design principle [1) Simplicity Favors Regularity, 2) Smaller Is Faster, 3) Make The Common Case Fast, and 4) Good Design Demands Good Compromises], the principle of immediate operands is introduced: constants are embedded directly in the instruction rather than loaded from memory each time. More than half of MIPS arithmetic instructions have a constant as an operand, per the SPEC* CPU2006 benchmarks. Since immediates avoid a separate memory fetch, they greatly decrease instruction traffic, increasing efficiency and overall performance. The workhorse here is the add immediate instruction, addi, which adds a constant operand in a single instruction.
Another commonly used constant is the constant zero operand, labelled $zero. This is commonly used with the move operation, which simply adds using the constant zero operand.
a) After instantiating g as $t0, h as $t1, and f as $s0, the following MIPS code gets the job done:
subu $s0,$t0,$t1
Bare bones baby step:
Below is the .asm file that can be assembled and executed in MARS MIPS Simulator.
<script src="http://pastebin.com/embed_js.php?i=Nh6W04LC"></script>
>>>>>>>>>>>>>>>
________________________________________________________________________________________________________________________________________
For part b), I expanded the rudimentary MIPS program, which had hard-coded values, into a better solution with input/output capability, as one would hope for in contemporary computing.
Template Pre-Computation:
(I placed two identical pastebins, but with different formatting, since the embed was acting wonky in different browsers and OSes.)
<script src="http://pastebin.com/embed_js.php?i=eM78pym5"></script>
I track the progress of my MIPS code from the rudimentary hard coded stage to a more evolved version that takes input and returns a correct value.
Finished Code:
Using the rudimentary code from (a), I proceeded to model (b) independently, and with IO functionality.
b) In addition to the variables instantiated above, a few other variables were added, viz.
temp as $t2, a placeholder for the operation in the parentheses, and k as $s1, for the constant, which in this case is 5 (subtracted, so effectively -5).
<script src="http://pastebin.com/embed_js.php?i=jKXJ0HRQ"></script>
sub $t2,$t1,$s1
add $s0,$t0,$t2
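For readers without MARS handy, the two instructions map to plain arithmetic; a hypothetical Python rendering with sample register values of my choosing (g=10, h=8, k=5):

```python
# Registers as plain variables: $t0=g, $t1=h, $s1=k (the constant 5), $s0=f.
g, h, k = 10, 8, 5

t2 = h - k        # sub $t2, $t1, $s1  -> the parenthesized (h - 5)
f = g + t2        # add $s0, $t0, $t2  -> f = g + (h - 5)

print(f)          # 13
```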
*CPU2006 is SPEC’s next-generation, industry-standardized, CPU-intensive benchmark suite, stressing a system’s processor, memory subsystem and compiler [ http://www.spec.org/cpu2006/ ].
2.1.2. For the C statements above, how many MIPS assembly instructions are needed to perform the C statement?
a) subu $s0,$t0,$t1 —>
To perform this “simple” action, there is quite a lot going on in the background.
While the main action is performed only by #7, the previous 6 steps are needed to load the registers and set the stage for the subtraction.
lui = load upper immediate
lw = load word
#2 adds the hex value, 0, to the contents of register $1; this is the meaning of the values contained in the parentheses, here and in other instances as well.
As mentioned earlier, different registers have different arbitrary meanings. This is highlighted in the table below.
Quantum Key Distribution
This paper will focus mainly on public key infrastructure, the concept of the one-time pad that resolves PKI woes, and a quantum key distribution solution to the shortcomings of the one time pad.
Security is a prevalent part of modern existence in the “real” world. It goes without saying that anything of worth to another person or entity needs to be monitored and protected to ensure its safety and well-being. As the world advances toward a future in which the digital is sewn into the very fabric of our “analog, flesh-and-bones” society, we face challenges in protecting our digital data, which in turn constitutes our digital personas’ behavior. There are trends toward the ‘Internet of things’ (where vehicles, buildings, and appliances communicate intelligently with each other and with other devices), wearable/ingestible/skin-embeddable technology, neural networks, and big data processing to reap the benefits of harvesting all this data. As our data and digital personas grow in size and complexity, that data becomes valuable to many entities: debt collectors, marketers, data miners/brokers, the government, and other prying eyes. How does the individual, business or nation-state protect their most valuable secrets, as well as a simple shopping list?
Digital security should address these concerns and always err on the side of caution. This project will look into a feasible way of securing data on the move, i.e. data in transit. Current standards fall short with respect to sending data in the clear after the initial authentication. Data at rest is arguably easier to secure, as there are no “moving pieces” to contend with, so long as good encryption and access exclusivity are practiced. Rather than simply encrypting the handshaking process, we need to understand that an eavesdropper, Eve, will persist both before and after Alice and Bob, or Points A and B, meet and authenticate each other. There is also the issue of a suitable compromise between data transit speed, convenience, and security.
Simon Singh’s The Code Book is a timeline of cryptography, from the Arabs, the Egyptians and Julius Caesar, to modern techniques such as Diffie-Hellman exchanges, and on to the future: quantum cryptography. The Security Now podcast has aired live on Wednesdays, and as a downloadable podcast, since 2006. It features Leo Laporte of the TWiT Network interviewing Steve Gibson of Gibson Research Corporation, who coined the term “spyware”, about the week’s digital security events. The article that delves into PKI and long-term sustainability was the launching point of this paper and will serve as its backbone; the author used it in the initial draft of this proposal paper.
The author currently works in the Information Protection department of a Fortune 500 company, and is an avid reader of security-related books, news, publications, and trends. This prior knowledge, as well as a keen interest in the field, should bolster the effort of researching this extremely important aspect of contemporary digital existence. Primary research and knowledge gathering shall be done using published material on cryptography, as well as peer-reviewed papers from online databases. The author also plans on reaching out to the authors of papers that sit behind “pay-walls”, a ubiquitous hindrance to scholarly paper access.
This project will be partitioned into 3 phases: two weeks to gather baseline data on PKI, one-time pads and quantum key distribution (QKD), in which pros, cons and current limitations would be researched and documented; two weeks to delve into quantum keys and quantum computing challenges, breakthroughs and future features; and the last two weeks to tie it all together into a timeline from the one time pad of old, to quantum key distribution models of tomorrow, and a conclusion about the potential and reality of QKD. Milestone touch-points with the instructor will be scheduled to chart progress and correct missteps in a timely fashion.
(Preliminary Bibliography)