Martin Erwig
Once Upon an Algorithm
Everything under Control
When Groundhog Day starts to repeat itself, Phil Connors is anything but in control of his life. On the contrary, he is under the control of the Groundhog Day loop and is painfully aware of this fact. Being under the control of a loop here means that he lives inside the body of a loop that controls when the repetition ends and when he can escape.
A loop controls how often its body is executed, but the effect of the loop is obtained solely by the steps of its body. In other words, a loop itself does not have any effect directly but rather indirectly through the repetition of the steps of its body. Since it generally matters how often an algorithmic step is executed, a loop exerts its influence through the number of times it executes its body. Since a loop controls the effect of its body (through the termination condition), it is called a control structure. A loop is a control structure for repeatedly executing a group of algorithmic steps. The other two major control structures are sequential composition and the conditional.
Sequential composition connects two (groups of) steps into an ordered sequence of steps. I have previously used the word and for this purpose, but to indicate that both steps are to be executed in sequence, not in parallel, it might be better to use a keyword such as andThen or followedBy. However, for simplicity and succinctness I adopt the notation used in most programming languages, namely, connecting two steps by a semicolon. This notation is similar to how we write down lists of items. Moreover, it is short and doesn’t distract from the actual steps of the algorithm. For example, get up; have breakfast means to first get up and then have breakfast. The order of the steps matters, of course, and for some people have breakfast; get up is a welcome change of routine on Sundays. The general form of sequential composition is as follows:
step; step
Here step is a nonterminal that can be substituted by any simple or compound step. In particular, step can be another sequential composition of other steps. Thus, if you want to squeeze in a shower between getting up and having breakfast, you can use sequential composition twice by expanding the first step to get up; take a shower and the second one to have breakfast, which altogether produces get up; take a shower; have breakfast.4
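Sequential composition has a direct counterpart in most programming languages. As a minimal sketch in Python, where statements separated by semicolons (or written on successive lines) are executed strictly in order, the morning-routine steps are stand-ins that merely record what was done:

```python
# Sequential composition: the steps run in the order they are written.
morning = []
morning.append("get up"); morning.append("take a shower"); morning.append("have breakfast")
```

Swapping the first and last statements would swap the recorded order, just as have breakfast; get up describes a different routine.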
As with the example of folding a piece of paper twice, “fold and fold,” which we now write as fold; fold, the steps connected by ; need not be different. Executing a loop repeat fold until paper fits (or repeat fold three times) yields the same computation as the sequence fold; fold; fold, which illustrates that loops are a tool to describe sequences of actions. The important contribution of a loop is that it produces arbitrarily long sequences of steps while having to mention the actions to be repeated only once.
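The equivalence between a loop and the sequence it unrolls into can be checked in a few lines of Python. The fold step is a made-up stand-in that halves a sheet's height; with a sheet of height 8 and an envelope of height 1, the repeat loop performs exactly three folds:

```python
def fold(height):
    # Stand-in for one folding step: the sheet's height is halved.
    return height / 2

def fold_until_fits(height, envelope):
    # repeat fold until paper fits: the body runs before the condition is checked.
    while True:
        height = fold(height)       # the loop's body
        if height <= envelope:      # the termination condition
            return height

# fold; fold; fold, written out as an explicit sequence:
unrolled = fold(fold(fold(8)))
```

Both fold_until_fits(8, 1) and the unrolled sequence yield the same result, but the loop mentions the fold step only once.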
The conditional selects one of two (groups of) steps for execution depending on a condition. Like a loop, it uses a condition to make the decision. The general form of a conditional is as follows:
if condition then step else step
Whenever Punxsutawney Phil is asked to foretell the weather, he essentially executes the following weather-reporting algorithm:
if sunny then announce six more weeks of winter else announce early spring
The conditional is a control structure that allows algorithms to make choices and select between alternative steps. The preceding conditional is part of a yearly loop for Punxsutawney Phil and a daily loop for Phil Connors. This illustrates that control structures can be combined in arbitrary ways, that is, conditionals can be part of loops or sequential compositions, loops can occur in the alternatives of a conditional or as part of a sequence of steps, and so on.
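Transcribed into Python, the weather-reporting conditional looks like this, with sunny simply a Boolean input:

```python
def weather_report(sunny):
    # if sunny then announce six more weeks of winter else announce early spring
    if sunny:
        return "six more weeks of winter"
    else:
        return "early spring"
```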
The basic steps of an algorithm are like the moves in a game (for example, passing the ball or taking a shot on a goal in soccer, or attacking a piece or castling in chess). Then the control structures define strategies in those games, for example, repeat pass until in front of goal (or if you are Lionel Messi, repeat dribble until in front of goal), that is, control structures compose basic moves into bigger plays.
As shown in chapter 8, there are many different notations to describe music. Similarly, there are different notations for algorithms. Each programming language is an example of a specific notation for algorithms, and while languages can differ significantly in which specific control structures they offer, most provide some form of loop, conditional, and composition.5 A notation that nicely illustrates the difference between control structures is a flowchart. A flowchart depicts an algorithm as boxes connected by arrows. Basic actions are displayed inside the boxes, and decisions are enclosed in diamond shapes. The arrows indicate how the computation progresses. For sequential composition this means to follow single arrows from box to box, but for conditionals and loops, whose conditions have two outgoing arrows, which arrow to follow depends on the condition. Some examples of the flowchart notation are shown in figure 10.1.
Figure 10.1 Flowchart notation for control structures. Left: A sequence of steps Phil Connors takes every morning. Middle: A conditional showing the decision that Punxsutawney Phil faces on each Groundhog Day. Right: A loop expressing the life of Phil Connors during the movie Groundhog Day. The “no” arrow and the arrow to the condition form a cycle through the two nodes.
It is striking how similar the notations for conditionals and loops are. Both consist of a condition with two possible continuations. The only crucial difference is that the “no” path from the condition in the loop leads to a step that continues back to the condition. The cycle thus formed provides a nice visual explanation of the name loop for this control structure.
Flowcharts are a visual language. In contrast to a textual language that presents an algorithm as a linear sequence of words and symbols, a visual language presents symbols in two- (or three-)dimensional space, connected by spatial relationships. Flowcharts use arrows to express the “execute next” relationship between actions, which are represented as boxes. A flowchart looks similar to a transportation network. The junctions are places where actions take place, and the connections lead from one action to another. Think of moving through an amusement park. The park can be regarded as an algorithm for having fun. Different people will visit individual attractions in a different order and a different number of times, depending on their predispositions and experiences with the attractions. Or envision the aisles of a supermarket that connect different sections and shelves. The supermarket can be viewed as an algorithm for different shopping experiences.
Flowcharts were quite popular in the 1970s, and they are sometimes still employed for the documentation of software, but they are rarely used these days as a programming notation. One reason is that the notation does not scale well. Even moderate-sized flowcharts are difficult to read—the profusion of arrows has been called spaghetti code. Moreover, the similarity between the notation for conditional and loop, while quite helpful to illustrate their relationship, makes it hard to identify and distinguish between control structures in flowcharts.
The control structures presented here are those used by algorithms for single computers. Modern microprocessors come with multiple cores that can execute operations in parallel. People, too, can compute in parallel, especially when they are in a team. To make use of parallelism, an algorithm needs control structures dedicated to this purpose. For example, we could write walk ‖ chew gum to instruct somebody to walk and chew gum at the same time. This is different from walk; chew gum, which means first walk and then chew gum.
Parallel composition is useful when two results that do not depend on each other are needed by another computation. For example, Sherlock Holmes and Watson often split their investigative work in solving a crime. On the other hand, one cannot get up and take a shower at the same time. Those two actions must be executed strictly in sequence (and the order matters).
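Python has no ‖ operator, but its standard library's threads can sketch what walk ‖ chew gum means: both actions are started, and the composition finishes only when both have finished. The actions themselves are stand-ins that just record that they happened:

```python
import threading

done = []  # records which actions have happened

def walk():
    done.append("walked")

def chew_gum():
    done.append("chewed gum")

# walk ‖ chew gum: start both actions, then wait for both to finish.
walking = threading.Thread(target=walk)
chewing = threading.Thread(target=chew_gum)
walking.start(); chewing.start()   # neither action waits for the other
walking.join(); chewing.join()     # the composition ends when both are done
```

Afterward both actions have happened, but unlike walk; chew gum, the order in which they finished is not fixed.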
Related to parallel computing is distributed computing, in which computation happens through communication among interacting agents. For example, when Phil Connors and his team create a report about the groundhog’s prognostication, they need to synchronize the operation of the camera with Phil Connors’s talking. An algorithm to describe this coordination requires its own control structures, in particular, for sending and receiving messages.
In general, any domain-specific language, that is, a language for a specialized application area, may have its own control structures. For example, music notation contains control structures for repetition and jumps, and languages for recipes contain constructs for choice that allow for the creation of variety in recipes. Control structures are the glue to connect primitive operations into larger algorithms for describing meaningful computations.
A Loop Is a Loop Is a Loop
In his attempt to escape the Groundhog Day loop, Phil Connors is effectively trying to discover its termination condition. This is a rather unusual way to deal with algorithms in general, or loops in particular. Typically, we express an algorithm and execute it to achieve a desired computation. In contrast, Phil Connors is part of a computation whose describing algorithm he doesn’t know. In his search for an action that will turn the termination condition true, he is trying to reverse-engineer the algorithm.
Loops and their termination play an important role in computation. Loops (and recursion) are arguably the most important control structures, since they get computation off the ground. Without loops we can only describe computations with a fixed number of steps, which is limiting and misses out on the most interesting computations.
Given the importance of loops, it comes as no surprise that there are different ways to describe loops. The loop schema we have used so far, repeat step until condition, is called a repeat loop. It has the property that the loop body will be executed at least once, no matter what the termination condition is. On the other hand, a while loop executes the body only if its condition is true, and thus it may not execute it at all. A while loop has the following form:
while condition do step
Even though both loops are controlled by a condition, the role of the condition is different in each loop. Whereas the condition controls the exit of a repeat loop, it controls the (re-)entry of a while loop. In other words, if the condition is true, the repeat loop ends, whereas the while loop continues; if the condition is false, the repeat loop continues, whereas the while loop ends.6 This difference is also highlighted by the flowchart notation for the two loops, shown in figure 10.2.
Despite their apparently different behaviors, one can express a repeat loop using a while loop, and vice versa. For this, one has to negate the condition, that is, transform the condition so that it is true in exactly those cases when the original condition is false. For example, the termination condition for the Groundhog Day repeat loop, “being a good person,” becomes the entry condition “not being a good person” or “being a bad person” for the corresponding while loop. Moreover, one must be careful to ensure the same number of iterations. For example, the repeat loop lets Phil Connors experience Groundhog Day at least once, no matter what, whereas the while loop does that only if he is a bad person. Since this is the case in the story, both loops behave the same.
Figure 10.2 Flowchart notation illustrating the different behavior of repeat and while loops. Left: Flowchart of a repeat loop. Right: Flowchart of a while loop.
The language representation lets us express the equivalence between the loops more formally through a simple equation:
repeat step until condition = step; while not condition do step
Again, the initial step before the while loop is required because its body might not be executed at all, unlike the body of the repeat loop, which is executed at least once. The difference between the two loops sometimes really matters. Consider, for example, Hansel and Gretel’s algorithm. Expressed as a repeat loop, it reads as follows:
repeat find pebble until at home
The problem with this algorithm is that if Hansel and Gretel executed this algorithm when they were already at home, the loop would never terminate, since they could not find a pebble. Humans would never do such a silly thing and instead abort the loop, but a strict adherence to the algorithm would lead to a nonterminating computation.
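The difference shows up directly when both loop forms are sketched in Python. Being at home is modeled, purely for illustration, as a distance of zero, and each pebble found brings the children one step closer:

```python
def pebbles_repeat(distance):
    # repeat find pebble until at home: the body always runs at least once.
    pebbles = 0
    while True:
        pebbles += 1        # body: find the next pebble (and walk to it)
        distance -= 1
        if distance <= 0:   # at home?
            return pebbles

def pebbles_while(distance):
    # while not at home do find pebble: the body may not run at all.
    pebbles = 0
    while distance > 0:     # not at home?
        pebbles += 1
        distance -= 1
    return pebbles
```

Started away from home, both versions find the same number of pebbles. Started at home, the repeat loop insists on finding a pebble; in this simplified model it merely miscounts, but in the story, where no pebble can be found, the loop would not terminate at all. The while loop correctly does nothing.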
Yet another description of loops can be achieved using recursion. I explain recursion in detail in chapter 12, but the basic idea is easy to grasp. (In fact, recursion was described in connection with divide-and-conquer algorithms in chapter 6). For a recursive description of an algorithm, we first need to assign it a name, then use that name in its own definition. The Groundhog Day loop can thus be described as follows:
GroundhogDay = experience the day; if good person? then do nothing else GroundhogDay
This definition effectively emulates a repeat loop: after living through the day, the conditional checks the termination condition. If it is true, the computation simply terminates by doing nothing. Otherwise, the algorithm is executed again. The recursive execution of the algorithm is like a jump to the beginning of the sequence and triggers the reexecution of the loop.
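The recursive description can be transcribed almost literally into Python. Experiencing the day and the good person? test are stand-ins: a day increments a counter, and goodness means having lived through enough repeated days:

```python
def groundhog_day(days_lived, days_needed):
    days_lived += 1                     # experience the day
    if days_lived >= days_needed:       # good person?
        return days_lived               # then do nothing: the loop ends
    else:
        return groundhog_day(days_lived, days_needed)  # else GroundhogDay
```

Like the repeat loop it emulates, the day is experienced at least once, even if days_needed is zero.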
All the different loop descriptions encountered so far (repeat, while, and recursion) have in common that their termination is controlled by a condition that is reevaluated before or after each execution of the body. The termination of the loop requires the body to have an effect that ultimately makes the termination condition true (or the entry condition false, in the case of a while loop). This means that it is not known in advance how many iterations a loop will go through; it is not even clear that any such loop will terminate at all. This uncertainty is actually an important part of the Groundhog Day loop that Phil Connors experiences.
For some computations described by loops, however, it is clear how many times the loop should be executed. For example, if the task is to compute the squares of the first ten natural numbers, it is clear that this computation can be achieved by a loop that repeats the squaring operation exactly ten times. Or, recall the algorithm to fold a piece of paper to fit into an envelope, which is described by a loop that is executed exactly twice. For cases like these, we employ for loops, which have the following general form:7
for number times do step
Using this schema, the paper-folding loop would be expressed as for 2 times do fold. The advantage of a for loop is that it is absolutely clear even before it is executed how many iterations will be performed. This is not the case for the other loops because one finds out only when executing the loop. This is an immensely important difference, since the for loop is guaranteed to terminate, while the other loops may run forever (see chapter 11).
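The for loop maps directly onto Python's for statement. With a stand-in fold step that halves the sheet's height, for 2 times do fold reads:

```python
def fold_n_times(height, n):
    # for n times do fold: the iteration count is fixed before the loop starts.
    for _ in range(n):
        height = height / 2   # body: fold (the sheet's height is halved)
    return height
```

A sheet of height 4 ends up at height 1 after for 2 times do fold, and the loop is guaranteed to stop after exactly two iterations.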
Closely related is the question of the runtime of loops. It is clear that a loop that is executed, say, 100 times takes at least 100 steps. In other words, a loop is linear in the number of its iterations, and this is the case for every kind of loop. In addition to the number of iterations we also have to consider the runtime of the loop’s body. The runtime of a loop is the number of its iterations times the runtime of the body. For example, selection sort is a loop whose body consists of finding a minimum in a list. The loop is linear in the size of the list, and the body takes, on average, time proportional to half of the list length. Therefore, the runtime of selection sort is quadratic in the size of the list.
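Selection sort makes this runtime argument concrete. In the sketch below, the outer loop runs once per list position, and its body scans the unsorted remainder for a minimum, so the total number of comparisons grows quadratically with the list's length:

```python
def selection_sort(items):
    items = list(items)   # sort a copy, leaving the input unchanged
    for i in range(len(items)):
        # Body: find the position of the minimum in the unsorted remainder.
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # Swap the minimum into place; everything up to i is now sorted.
        items[i], items[smallest] = items[smallest], items[i]
    return items
```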
Given that a for loop seems to behave so much more predictably than all the other kinds of loops, why not use for loops exclusively? The reason is that for loops are less expressive than while loops and repeat loops (and recursion), that is, there are problems that can be solved using a while loop or a repeat loop that can’t be solved by a for loop. The Groundhog Day loop is an example where it is not known at the beginning (at least, not to Phil Connors) how many iterations it will take. It is easy to see that any for loop can be represented by a while (or repeat) loop by explicitly maintaining the for loop counter, but the reverse is not true because one cannot immediately see how many iterations a while loop or a repeat loop needs before it terminates.
Predictability has its price. While uncertainty about the duration or outcome of adventures is welcome, we would rather know in advance how long a computation takes before we use it and rely on it—in particular, if it might run forever.
Stop at Nothing
After you’ve folded your letters and put them into envelopes, it is time to welcome your new colleague who recently moved into an office one floor above you. Your walk to her office is the result of executing an algorithm that repeats taking steps until you have reached her office. But what if you don’t know the exact location of her office? In that case, you have to walk around the floor and try to find her name on the office door. But what if her start date was pushed back and she hasn’t even moved in yet? In that case, your simple algorithm that takes steps until the target office is reached won’t terminate; rather, it will continue forever.
Of course, you wouldn’t actually do that. You would execute a different algorithm, which, after searching the whole floor (maybe repeatedly, just to be sure), would abandon the search and lead back to your own office. This algorithm has a more sophisticated termination condition. It would not require only finding the office but would also allow for termination after a certain time.
Loops that do not terminate seem like a silly idea. Although some computations are the result of executing a loop without a termination condition (for instance, a web service or even a simple timer or counter), a computation to produce one fixed result would only use loops that are intended to terminate because a nonterminating loop would prevent the result from being constructed.
In general, algorithms are expected to terminate because otherwise they would not be an effective method for solving problems. It would therefore be helpful if we had a way of telling whether an algorithm terminates when executed. Since the only reason for an algorithm not to terminate is that one of its loops doesn’t terminate,1 judging the termination of an algorithm boils down to determining whether the loops employed by it actually terminate. The different algorithms for folding paper all do in fact terminate. This is clear for the for loop fold three times, since it explicitly mentions the number of iterations of the loop. The repeat loop fold until paper fits also terminates, but this may not immediately be obvious. Even if we know that each folding step reduces the size of the paper, we also have to assume that folding happens along different axes when needed. In that case, the size of the folded paper must eventually become smaller than that of the envelope, since it halves with every folding step. While it is not clear how many iterations the loop will go through and thus how long the algorithm will take, it is clear that the algorithm terminates eventually.
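The termination argument for the folding loop can be watched in action. With the sheet and envelope reduced to their sizes, each iteration halves the sheet, so the loop's condition must eventually fail, even though the number of iterations isn't known before the loop runs:

```python
def folds_needed(paper, envelope):
    # fold until paper fits, counting the iterations as we go
    folds = 0
    while paper > envelope:   # not yet small enough to fit
        paper = paper / 2     # each fold halves the sheet's size
        folds += 1
    return folds
```

folds_needed(100, 20) takes three iterations and folds_needed(100, 1) takes seven; the count varies with the inputs, but the loop always terminates because the sheet's size strictly decreases toward zero.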