This page summarizes the goals and important ideas from each week of Computer Science 100, including lists of important terms and ideas from the text, The Most Complex Machine. Information is available for

- Week 1 (Chapter 1)
- Week 2 (parts of Chapter 2)
- Week 3 (parts of Chapter 3)
- Week 4 (Chapter 4)
- Week 5 (Section 5.2, and material from Lab)
- Week 6 (Chapter 6)
- Week 7 (Chapter 7)
- Week 8 (Chapter 9)
- Week 9 (Chapter 11)
- Week 10 (Chapter 12)

Your primary goal in reading the first chapter should be to understand "structured complexity", the idea that complex systems can be built up level-by-level from simple components. I claim that complexity, and this approach to dealing with complexity, are central ideas in computer science.

It is important to understand what "level-by-level" means here. A person can handle only so much complexity in one gulp; this seems to be a basic fact about the way our brains work. But we do have this neat ability to "chunk" together a manageable portion of complexity and then to deal with it as a unit. That unit can then be used as a building block in the construction of another, higher level of complexity. The step from one level to the next is easy; the step from the bottom level to the top level can be breathtaking. It is only through multiple levels of chunking that real complexity is achieved.

Three main examples of structured complexity are introduced in Chapter 1: Complex data structures are built level-by-level starting from single bits; complex circuits, including entire computers, are built level-by-level starting from transistors, which are simple switches; and complex programs are built level-by-level from simple machine-language instructions.

Important terms and ideas from Chapter 1:

- Mechanical manipulation of symbols
- Fetch-and-execute cycle
- Bits
- Program counter
- Binary numbers
- Machine language
- ASCII code
- High-level language
- Pixels
- Loops and decisions
- Transistor
- Jump instructions
- AND, OR and NOT gates
- Subroutines
- Main memory
- Structured complexity
- CPU (Central Processing Unit)
- Chunking

In Chapter 2, I have asked you to read only: from the beginning of the chapter through Subsection 2.2.2, the Chapter Summary, and Questions 1 through 4 (along with their answers in the back of the book). You should also be familiar, in a general way, from lecture and lab, with some of the material in the rest of the chapter.

The goal of this material is simply to understand how circuits can be built that compute. A circuit has inputs and outputs. If a circuit does not have any feedback loops, then it associates some output to each possible combination of inputs. One could make a table showing the output for each set of inputs. This table describes the computation performed by the circuit. In this chapter, you learn how to go in the other direction: Given any input/output table describing a desired computation, it is possible to build a circuit to perform that computation. An intermediary in this process is Boolean algebra, since there is a neat relationship between Boolean algebra and circuits without feedback loops.
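To make the table-to-circuit direction concrete, here is a small Python sketch (not from the text) of the sum-of-products idea: OR together one AND-term for each row of the table whose output is 1. The particular table used, XOR, is my own choice for illustration.

```python
from itertools import product

# A desired computation, given as an input/output table.
# This particular table is the XOR function (a hypothetical example).
table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def build_circuit(table):
    """Build a function out of AND (&), OR (|), and NOT (1 - bit) alone:
    OR together one AND-term for each input row whose output is 1."""
    one_rows = [inputs for inputs, output in table.items() if output == 1]
    def circuit(*bits):
        result = 0
        for row in one_rows:
            term = 1
            for bit, wanted in zip(bits, row):
                term &= bit if wanted else (1 - bit)  # NOT the 0-inputs
            result |= term
        return result
    return circuit

# The constructed circuit reproduces the table exactly.
xor = build_circuit(table)
for inputs in product([0, 1], repeat=2):
    assert xor(*inputs) == table[inputs]
```

The same construction works for any number of inputs, which is why *every* input/output table has a corresponding feedback-free circuit.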

In practice, in designing large circuits, we also make use of "structured complexity" by designing components to perform small computations and combining them to perform the complex computations that computers do. We can build a circuit for adding 16-bit binary numbers, for example. And we can build an ALU that can do several different arithmetic and logical operations.
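A sketch of that level-by-level construction in Python (my own illustration, not the book's circuit diagrams): a one-bit full adder, then sixteen of them chained together.

```python
def full_adder(a, b, carry_in):
    # One-bit adder built from simple logic operations.
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_16bit(x, y):
    # Chain 16 full adders, least significant bit first,
    # passing each carry along to the next adder.
    carry, result = 0, 0
    for i in range(16):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # a final carry out of the top bit is discarded

print(add_16bit(1234, 4321))  # 5555
```

The full adder is a "chunk" built from gates; the 16-bit adder is a chunk built from full adders; an ALU is a chunk built from adders and similar components.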

A small but important side issue in this chapter is what to do with circuits that DO have feedback. These are used primarily to build memory circuits. In a memory circuit, the output doesn't just depend on the current input: The output can depend on what happened in the past.
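Here is a rough Python sketch (not from the text) of such a feedback circuit: two cross-coupled NOR gates forming a latch. Because each gate's input includes the other gate's output, the circuit "remembers" which control wire was turned on last.

```python
def sr_latch(set_wire, reset_wire, q=0, qbar=1):
    """Two cross-coupled NOR gates; q feeds back into one gate and
    qbar into the other. Iterate the loop until the outputs settle."""
    for _ in range(4):
        new_q = int(not (reset_wire or qbar))
        new_qbar = int(not (set_wire or q))
        if (new_q, new_qbar) == (q, qbar):
            break  # the feedback loop has reached a stable state
        q, qbar = new_q, new_qbar
    return q, qbar

q, qbar = sr_latch(1, 0)           # turn the SET wire on: store a 1
q, qbar = sr_latch(0, 0, q, qbar)  # both wires off: the 1 is remembered
print(q)  # 1
```

With both input wires off, the output depends entirely on the past, which is exactly what makes the circuit usable as one bit of memory.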

Important ideas in Chapter 2:

- Rules for AND, OR, and NOT gates
- The correspondences among true/false, 1/0, and ON/OFF
- Boolean algebra
- Input/Output tables
- Adding binary numbers
- The basic idea and operation of an ALU
- The basic idea and operation of a multibit memory

In Chapter 3, I have asked you to read the beginning of the chapter, Section 3.1 (omitting the rather technical second half of Subsection 3.1.1), Section 3.3, the Chapter Summary, and Questions 5, 6, and 7. From the lab, you should also be familiar with the basic idea of assembly language and with some of the most basic assembly language instructions.

The goal is to understand how a circuit can be constructed that can automatically and mechanically perform computations under the control of a program that is stored in memory. Chapter 2 showed how circuits can compute and how they can "remember". The question now is, how can a computer execute a program? This is done by the "fetch-and-execute" cycle. Each cycle represents the execution of one machine language instruction. It takes the CPU several steps to perform each fetch-and-execute cycle. Here are the key ideas:

- The steps of the fetch-and-execute cycle are extremely simple, involving things like copying a number from one place to another and adding one to a number.
- Everything is done simply by turning control wires on and off.
- The contents of three registers (COUNT, IR, and AC) completely determine which control wires need to be turned on at any given time.
- It is possible to build a Control Circuit to turn the control wires on and off. This is just a big logic circuit. Its inputs are connected to the COUNT, IR, and AC registers; its outputs are connected to the control wires.
- The actual event that makes things happen is the ticking of the Clock. When the clock ticks, the value in the COUNT register is incremented by 1, which in turn makes the Control Circuit turn on the control wires that perform the next step in the fetch-and-execute cycle.

It is important for you to understand that this is a **mechanical**
process made up of simple steps. Nevertheless, it is the basis for
running programs of any degree of complexity.
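The whole cycle can be sketched as a simple loop. This is a toy machine of my own invention, loosely inspired by xComputer but not its real instruction set; in the actual CPU, each of these steps is carried out by turning control wires on and off.

```python
# Memory holds both instructions (op, addr) and data.
memory = [
    ("LOD", 4),   # 0: copy memory[4] into the AC
    ("ADD", 5),   # 1: add memory[5] to the AC
    ("STO", 6),   # 2: copy the AC into memory[6]
    ("HLT", 0),   # 3: stop
    7, 35, 0,     # 4-6: data
]
pc, ac = 0, 0         # program counter and accumulator registers

while True:
    ir = memory[pc]   # FETCH: copy the instruction into the IR
    pc += 1           # add one to the program counter
    op, addr = ir     # EXECUTE: decode and perform the instruction
    if op == "LOD":
        ac = memory[addr]
    elif op == "ADD":
        ac += memory[addr]
    elif op == "STO":
        memory[addr] = ac
    elif op == "HLT":
        break

print(memory[6])  # 42
```

Note that nothing here is clever: fetching, incrementing the counter, and executing are each just copying or adding numbers, which is the point of the chapter.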

Important terms and ideas in Chapter 3:

- Main memory
- Address of a location in memory
- Clock
- Register
- Fetch-and-execute cycle implemented as a sequence of simple steps
- Control wires
- Control circuit
- Basic idea of assembly language
- Basic idea of the operation of xComputer

The goal of Chapter 4 is to provide some basic understanding of what might be called the "nature of computation". What can we say, in general terms, about what it means to "compute" and what can be done by computing?

In a sense, this question can be answered by looking at what any computer does. It turns out that it doesn't depend on which computer you look at. A fundamental and surprising result in computer science is that, ignoring limitations of memory and running time, all computers are equivalent in the problems they can solve. This property of a computer--that any computer can do anything that any other computer can do--is called "computational universality." This can be rigorously proved. The "Church-Turing Thesis" is the claim that even if someone comes up with some "new and improved" type of computation, it will still be something that computers could have done.

The second section of Chapter 4 introduces Turing Machines to show just how simple a machine can be and still do computation. You'll also find another example of constructing a complex system, in this case a Turing Machine, from simpler components. Look for the analogy between building Turing Machines and writing programs.
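A Turing Machine is simple enough that its entire method of operation fits in a few lines of Python. The simulator and the little bit-flipping machine below are my own illustrations, not examples from the text.

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). The machine halts when
    no rule applies to the current (state, symbol) pair."""
    tape = dict(enumerate(tape))      # the tape as a sparse table of cells
    for _ in range(max_steps):
        symbol = tape.get(pos, " ")   # unwritten cells hold a blank
        if (state, symbol) not in rules:
            break                     # halt
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# A tiny example machine: move right, flipping 0s and 1s,
# and halt at the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_turing_machine(flip, "1011"))  # 0100
```

More complicated machines are built the way programs are: small rule sets are designed, tested, and combined into larger ones.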

The last section shows that there are questions about computation that cannot be answered, at least not by computation. The Halting Problem asks for a program that can analyze any program and tell, in advance, whether or not that program will ever halt when it is run. We can prove that this problem is unsolvable, that is, that no such program can ever be written.

Important terms and Ideas from Chapter 4:

- Computational universality
- Simulation, including: translation, interpretation, and low-level simulation
- The Church-Turing Thesis
- Turing Machines, including: the tape, the rules, and the method of operation
- Computable functions
- Universal Turing Machine
- Unsolvable problems
- The halting problem

A test this week limited the amount of material that could be covered. The reading from Chapter 5 is just one section, which deals with how real computers work. Real computer systems have more than just a CPU and main memory. They include many devices, such as disk drives, keyboards, monitors, printers, scanners, sound cards, and network connections. The purpose of Section 5.2 is to explain something about how such systems can be organized and how they work. The important ideas that you should be familiar with include:

- Operating systems, including: device drivers, user interface, subroutines
- Interrupts and interrupt handlers
- Buses for communication among devices

The lab this week was unusual in that it introduced a lot of material that was not previously covered in class. The lab was an introduction to a "high-level programming language", xTurtle. The idea of the lab was that having seen similar ideas in other contexts, you should be able to absorb the basics of the xTurtle language. The ideas that you should take away from the lab include:

- built-in xTurtle subroutines (forward, turn, PenUp, ...)
- variables
- assignment statements
- input/output
- loop statements
- if statements

These ideas are also covered, along with some more theoretical stuff, in Sections 6.1 and 6.2 of the text.

Chapter 6 is the first of three chapters on programming. (We will only cover the first two of these in this course, but I would urge you to read Chapter 8 some time, maybe even after the course is over.) Chapter 6 introduces a made-up high-level programming language called "xTurtle" and uses it to illustrate some of the fundamental ideas in computer programming. Besides the specifics of xTurtle that were covered in the previous week's lab, the important ideas in this chapter include:

- software engineering
- syntax and semantics
- subroutine call statements
- parameters
- flow of control in a program
- BNF (Backus-Naur Form)
- processes and states
- preconditions and postconditions
- programming

The syntax of a language is the set of rules that determines what strings of symbols are legally part of that language; the syntax of a program or statement refers to its "structure," that is, to the way this one particular string of symbols conforms to the grammar of the language. Syntax is mechanical, and can be checked by a computer. BNF is a formal method for describing the syntax of a language. A BNF "template" is a picture or recipe for constructing legal strings of symbols.
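One way to see a BNF rule as a recipe for constructing legal strings is to follow the recipe mechanically. The toy grammar below, written in a BNF-like style, is my own example (loosely echoing xTurtle statements, not taken from the text):

```python
import random

# A toy BNF-style grammar: each syntactic category (in angle brackets)
# lists its possible expansions; anything else is a literal symbol.
grammar = {
    "<statement>": [["forward", "(", "<number>", ")"],
                    ["turn", "(", "<number>", ")"]],
    "<number>": [["1"], ["2"], ["90"]],
}

def generate(symbol):
    """Follow the grammar as a recipe: expand categories until
    only literal symbols remain, then join them into a string."""
    if symbol not in grammar:
        return symbol                      # a literal symbol
    expansion = random.choice(grammar[symbol])
    return "".join(generate(s) for s in expansion)

print(generate("<statement>"))  # e.g. forward(90) or turn(2)
```

Every string this procedure can produce is syntactically legal, which is exactly what the grammar defines.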

The semantics of a language is the set of rules that determines the meaning of syntactically correct strings of symbols in that language; the semantics of a program or a statement in a programming language can be thought of as the effect that that program or statement will have when it is executed. As we know from the unsolvability of the Halting Problem, the only completely general way to determine the meaning of a program is to run it. To deal with semantics in a somewhat formal way, we can use states and processes. The meaning of a statement in a program can be thought of as the change that it causes in the state.
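For example, if we model a state as a table of variable values (a sketch of the idea, not the text's notation):

```python
# The state of a running program: a table giving each variable's value.
state = {"x": 3, "y": 0}

# The meaning of the assignment statement "y := x + 1" is the change
# it causes in the state:
state["y"] = state["x"] + 1

print(state)  # {'x': 3, 'y': 4}
```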

Preconditions and postconditions are especially important. They are related to the idea that programs should be designed, not simply thrown together. Programming is difficult, but there are ways of thinking about and approaching programming problems that can make them more manageable. Preconditions and postconditions are one of the tools of software engineering.
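In code, preconditions and postconditions can even be checked while the program runs. A small sketch (my own example; the `hypotenuse` routine is hypothetical, not from the text):

```python
import math

def hypotenuse(a, b):
    # Precondition: a and b are the nonnegative lengths of the two legs.
    assert a >= 0 and b >= 0, "precondition violated"
    c = math.sqrt(a * a + b * b)
    # Postcondition: c is the hypotenuse, so c**2 equals a**2 + b**2.
    assert abs(c * c - (a * a + b * b)) < 1e-9, "postcondition violated"
    return c

print(hypotenuse(3, 4))  # 5.0
```

The precondition says what the subroutine may assume; the postcondition says what it promises in return. Designing a program is largely a matter of making such promises fit together.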

The reading for this week is Chapter 7, not including Section 7.4. The first section covers the intricacies of defining and using new subroutines in the xTurtle language. The second section concentrates more on the design of programs and on the software life cycle. The third introduces recursion and recursive subroutines in xTurtle. Important ideas include:

- subroutine declarations
- dummy parameters and actual parameters
- local variables and global variables
- black boxes
- interface and implementation
- modules and modularity
- the software life cycle, including specification, analysis and design, coding, testing, and maintenance
- top-down design and bottom-up design
- recursion and recursive subroutines

The ideas of black boxes, the interface/implementation distinction, and modularity are closely connected. You should understand why they are important in all phases of the software life cycle.

This chapter has three sections. The first discusses very briefly some of the major applications of desktop (or "personal") computers. Most people are familiar with at least some of these. The intent here is not to cover the applications in detail but to survey what is available and to give some idea of what is involved in implementing the applications on a computer. The applications that are surveyed include:

- word processing and desktop publishing
- painting and drawing
- spreadsheets
- databases
- networks and communication

The second section of the chapter deals with applications that require too much computing power to be done comfortably on desktop computers (although with their ever-increasing power, that might not be true for long). Two major classes of application are discussed:

- computer modeling, such as weather forecasting
- optimization, such as linear programming

In computer modeling, a set of equations that represent some system are programmed into the computer. A "system" here might mean a physical system such as the atmosphere or a galaxy full of stars. It could also mean an economic system, an ecosystem, etc. By applying these equations to data representing the current situation, the computer can make predictions about what will happen in the system in the future. The quality of the prediction can depend on the accuracy of the model, the quality of the initial data, and the amount of computer processing power that is available to apply to the task. One example of computer modeling is weather forecasting.
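A toy model (my own example, far simpler than any real forecast): a population governed by one equation, stepped forward in small increments of time.

```python
# A tiny computer model: a population governed by the equation
#   change = rate * population * time_step
# applied over and over to predict the future from current data.
population, rate, dt = 1000.0, 0.05, 0.01

for _ in range(int(10 / dt)):          # step the model 10 years ahead
    population += rate * population * dt

# The prediction approaches the exact answer 1000 * e**0.5 (about 1649);
# smaller time steps make the model more accurate but cost more computing.
print(population)
```

Real models work the same way in principle, with millions of equations and data points instead of one, which is why they need so much processing power.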

Optimization means finding the best option among a range of possible options. Usually, the range of possible options is limited by some constraints, such as the amount of raw materials available. In this case, optimization would mean determining how to allocate the resources to maximize, for example, profit. One type of optimization is linear programming, which is an example of resource allocation in which the constraints are given by linear functions.
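A brute-force sketch of such a resource-allocation problem (the products, prices, and material amounts are invented for illustration; real linear programming uses far more efficient methods than trying every option):

```python
# Each unit of product A earns $3 and uses 2 units of raw material;
# each unit of product B earns $5 and uses 4 units. Only 20 units of
# material are available. Try every plan and keep the most profitable.
best = max(
    (3 * a + 5 * b, a, b)
    for a in range(11) for b in range(6)
    if 2 * a + 4 * b <= 20      # the resource constraint
)
profit, a, b = best
print(profit, a, b)  # 30 10 0
```

Note that both the profit function and the constraint are linear functions of a and b, which is what makes this a linear programming problem.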

The final section of the chapter is a complete departure from the first two parts. It covers the analysis of algorithms, which is the theoretical study of practical limits on computing. Much of the section is devoted to the question of measuring exactly how fast or efficient an algorithm is. The main ideas include:

- algorithms
- O(n^2) vs. O(n*log(n))
- sorting algorithms
- the P=NP problem
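To make the O(n^2)-versus-O(n*log(n)) distinction concrete, here is a count of the comparisons made by two sorting algorithms on the same input (my own illustration):

```python
import random

def slow_sort_comparisons(items):
    """A simple O(n^2) sort; returns how many comparisons it made."""
    items, count = list(items), 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            count += 1
            if items[j] < items[i]:
                items[i], items[j] = items[j], items[i]
    return count

def merge_sort(items):
    """An O(n*log(n)) sort; returns (sorted_list, comparison_count)."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, nl = merge_sort(items[:mid])
    right, nr = merge_sort(items[mid:])
    merged, count, i, j = [], nl + nr, 0, 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:], count

data = list(range(1000))
random.shuffle(data)
print(slow_sort_comparisons(data))  # always 1000*999/2 = 499500
print(merge_sort(data)[1])          # on the order of 1000*log2(1000) ≈ 10000
```

For 1000 items the gap is already a factor of about fifty, and it widens rapidly as n grows.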

Most of the computer graphics you see are created in a two step process: "modeling" followed by "rendering". In geometric modeling, an abstract wire-frame representation of a scene made up of polygons is constructed. The objects in the scene can then be assigned attributes such as color, reflectivity, and surface texture; this gives a more complete model of the scene. Rendering is the process of producing, pixel-by-pixel, an actual two-dimensional image of the scene. The major ideas in the chapter include:

- geometric modeling
- wire-frame models
- transformations: scaling, rotation, translation
- animation
- rendering
- hidden line removal
- lighting, shadows, etc.
- ray tracing
- scene description language
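The three basic transformations are just arithmetic on coordinates. A two-dimensional sketch (my own example; real graphics systems work with three-dimensional points and usually express these operations as matrices):

```python
import math

# The three basic transformations, applied to one point of a model.
def scale(p, sx, sy):
    return (p[0] * sx, p[1] * sy)

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

def rotate(p, angle):
    # Rotate about the origin by angle (in radians).
    x, y = p
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

# A unit square as a list of corner points: a tiny wire-frame "model".
square = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Scale it up, rotate a quarter turn, then move it into place.
placed = [translate(rotate(scale(p, 2, 2), math.pi / 2), 5, 5)
          for p in square]
print(placed)  # corners of a 2-by-2 square centered near (4, 6)
```

Animation works by applying slightly different transformations for each frame, so even a rotating scene is, underneath, nothing but repeated arithmetic.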

In the tenth and final week of the course, we come to a philosophical question: Can machines think? This question was raised in 1950 by Alan Turing in a paper in which he proposed a test that could be used to judge whether computers are intelligent. (This is now called the "Turing test".) For a while, it was supposed that computers could be made to think, but there were from the beginning philosophical objections, and the debate continues today.

Besides the philosophical questions, we can talk about the many practical applications of AI, such as Expert Systems and Neural Nets. Even if true general intelligence is never achieved, the field has made valuable contributions.

The major ideas in Chapter 12 include:

- The Turing Test
- The Physical Symbol System Hypothesis
- Heuristics
- Microworlds
- Expert Systems
- Knowledge Representation
- Arguments against AI: limits of logic, and limitations of rule-following
- The Chinese Room argument
- The problem of consciousness
- Neural Nets
- Artificial Life
- The Genetic Algorithm
- Emergence