This page summarizes the goals and important ideas from each week of Computer Science 100, including lists of important terms and ideas from the text, The Most Complex Machine.
The reading assignment this week is Chapter 1 of the text. This introductory chapter is primarily about the major theme of the course: structured complexity. It discusses three different areas in which structured complexity can be observed: (1) the representation of data in a computer, (2) the construction of that most complicated of electronic circuits known as a computer, and (3) the programs that computers execute in order to perform specific tasks. This chapter introduces most of the ideas that will occupy us for the rest of the course.
The lab for this week (Lab #0) is an introduction to the Macintosh computer and to some aspects of the programs you will use in the remaining labs in this course. There is also an "addendum" to the lab which will introduce you to the Internet.
There will be a quiz next Wednesday, April 3. For quizzes, you are responsible for all material from the preceding week, including readings, lecture, and lab. A lab report on Lab #0, including the addendum, is also due next Wednesday. I do not ordinarily accept lab reports late.
Here is a list of especially important terms from Chapter 1 (you should not be surprised to see some of these on the quiz):
- mechanical manipulation of symbols
- binary numbers
- ASCII code
- data structures
- main memory
- central processing unit (CPU)
- AND, OR, and NOT gates
- programs
- fetch-and-execute cycle
- machine language
- high-level language
- loops, decisions, and subroutines
- structured complexity
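Two of these terms, binary numbers and the ASCII code, can be illustrated with a short sketch in Python (not a language used in this course; shown purely for illustration):

```python
# Illustrative sketch: the character 'A' is stored in the computer
# as the number 65, whose binary representation is 1000001.
code = ord('A')       # the ASCII code of the character 'A'
bits = bin(code)[2:]  # the same number written as a string of 0s and 1s
print(code, bits)     # 65 1000001
```

The point is only that a character, like everything else in the computer, is ultimately represented as a binary number.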
Advice about "reading": When I say "read", I really mean "read, study, ponder, and react to". (This is what any professor means, I think, and it is especially true for technical readings.) Reading of this kind takes time, and it probably cannot be done in one sitting.
You should read Chapter 2. Subsections 2.2.3, 2.2.4, and 2.3.1 are optional. The goal of this material is simply to give you some understanding of how circuits can be built that compute. A circuit has inputs and outputs. If a circuit does not have any feedback loops, then it associates some output to each possible combination of inputs. One could make a table showing the output for each set of inputs. This table describes the computation performed by the circuit. In this chapter, you learn how to go in the other direction: Given any input/output table describing a desired computation, it is possible to build a circuit to perform that computation. An intermediary in this process is Boolean algebra, since there is a neat relationship between Boolean algebra and circuits without feedback loops.
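The idea of an input/output table can be sketched in Python (illustrative only; the text works with circuit diagrams and Boolean algebra, and the particular circuit here is an arbitrary example):

```python
from itertools import product

def circuit(a, b, c):
    # An arbitrary feedback-free circuit: (a AND b) OR (NOT c).
    return (a and b) or (not c)

# The input/output table: one row for each possible combination of inputs.
for a, b, c in product([False, True], repeat=3):
    print(int(a), int(b), int(c), '->', int(circuit(a, b, c)))
```

Going in the other direction, as the chapter describes, means starting from such a table and producing a Boolean expression (and hence a circuit) that computes it.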
In practice, in designing large circuits, we also make use of "structured complexity" by designing components to perform small computations and combining them to build more and more complex circuits. We can build a circuit for adding 16-bit binary numbers, for example. And we can build an ALU that can do several different arithmetic and logical operations.
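This bottom-up construction can be sketched in Python (illustration only, not actual hardware): a one-bit "full adder" built from AND, OR, and XOR operations is chained, sixteen copies in a row, into a 16-bit adder.

```python
def full_adder(a, b, carry_in):
    # One-bit adder built from simple logic operations.
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add16(x, y):
    # A 16-bit ripple-carry adder: sixteen full adders chained together,
    # each passing its carry to the next.
    carry = 0
    result = 0
    for i in range(16):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result  # any final carry out of the top bit is discarded

print(add16(1000, 2345))  # 3345
```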
A small but important side issue in this chapter is what to do with circuits that DO have feedback. These are used primarily to build memory circuits. In a memory circuit, the output doesn't just depend on the current input: The output can depend on what happened in the past.
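The dependence of a memory circuit's output on the past can be sketched in Python (illustrative only) by simulating a simple set/reset latch, a standard one-bit memory element:

```python
class Latch:
    """A simulated one-bit memory element (a set/reset latch)."""
    def __init__(self):
        self.value = 0
    def set(self):      # the 'set' input turns the stored bit on
        self.value = 1
    def reset(self):    # the 'reset' input turns it off
        self.value = 0
    def output(self):
        # The output depends on what happened in the past,
        # not just on the current inputs.
        return self.value

bit = Latch()
bit.set()
print(bit.output())  # 1: the latch "remembers" the set signal
```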
Important ideas in Chapter 2:
- Rules for AND, OR, and NOT gates
- Boolean algebra and its relation to circuits
- Input/Output tables
- Adding binary numbers
- Circuits that add binary numbers
- The time it takes for a circuit to compute
- Arithmetic-Logic Unit (ALU)
- The basic idea of a multibit memory
Our goal this week is to understand how a circuit can be constructed that can automatically and mechanically perform computations under the control of a program that is stored in memory. Chapter 2 showed how circuits can compute and how they can "remember". The question now is, how can a computer execute a program? This is done by the "fetch-and-execute" cycle. Each cycle represents the execution of one machine language instruction. It takes the CPU several steps to perform each cycle: fetch the next instruction from memory, increment the program counter so that it points to the following instruction, decode the instruction, and execute it.
It is important for you to understand that this is a mechanical process made up of simple steps. Nevertheless, it is the basis for running programs of any degree of complexity.
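The mechanical nature of the cycle can be sketched in Python (an illustrative toy machine with a made-up three-instruction language, much simpler than the xComputer's real instruction set):

```python
# A toy memory: addresses 0-3 hold a program, addresses 100-102 hold data.
memory = {0: ('LOAD', 100), 1: ('ADD', 101), 2: ('STORE', 102), 3: ('HALT', 0),
          100: 7, 101: 5, 102: 0}

pc = 0   # program counter: address of the next instruction
ac = 0   # accumulator: holds the value being worked on
while True:
    instruction = memory[pc]   # FETCH the instruction from memory
    pc += 1                    # increment the program counter
    op, addr = instruction     # DECODE the instruction...
    if op == 'LOAD':           # ...and EXECUTE it:
        ac = memory[addr]
    elif op == 'ADD':
        ac += memory[addr]
    elif op == 'STORE':
        memory[addr] = ac
    elif op == 'HALT':
        break

print(memory[102])  # 12: the program computed 7 + 5
```

Each trip around the loop is one fetch-and-execute cycle; nothing in it requires any "understanding" of what the program as a whole is doing.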
The reading for this week is parts of Chapter 3. You are not responsible for all the details in this chapter. Some of this material is repeated in the lab for this week (Lab #4), and you should pay attention to what is covered in the lab. Here is a section-by-section guide to the reading in Chapter 3:
Important terms and ideas in Chapter 3:
- Registers (including PC, AC, ADDR, IR, and COUNT)
- Clock
- Control Circuit
- Control wires
- Steps in the fetch-and-execute cycle
- Black Box
- Interface vs. implementation
- How main memory functions (as a black box)
- Address of a location in memory
- Assembly Language
- Basic assembly language instructions
There is a test this week on Friday. The test covers material from Chapters 1, 2, and 3 as well as from the labs that you did in the first three weeks. It does not cover new material from Chapter 4 that we cover on Monday and Wednesday this week.
The reading for this week is the introductory material in Chapter 4 and Section 4.1. We will not cover Turing machines, which means that you don't have to read Section 4.2. I will cover some of the material from Section 4.3 in class, but without mentioning Turing machines. So, you should pay attention to what I do in class and just skim Section 4.3 for the material that you can follow. Among the exercises in Chapter 4, you should look at only Exercise 1 and Exercise 6.
Because the basic operations in computation are so simple, even a relatively simple machine like our model "xComputer" can be programmed to imitate the operation of any other computer. (We say that any computer can "simulate" any other computer, at least if it is given enough time and memory.) Thus, if we ignore limitations of time and memory, any computer is equivalent to any other computer in terms of the problems it can solve. This is called "computational universality".
Once we know that all computers can solve the same problems, it is natural to ask whether there are any natural problems that cannot be solved by any computer, even given unlimited time and memory. In fact, there are such problems. One of the most famous is the Halting Problem. The "computational unsolvability" of the Halting Problem shows that there are certain fundamental limits to what can be done by computers.
The Halting Problem can be stated as: "Write a program H that can scan any other program P and input data D and determine, correctly and in advance, whether or not P will ever halt when it is run with input D." If I had such a program H, it would save me from running programs that will run forever without ever producing an answer. This would be nice, but unfortunately it can be shown that no such program H exists.
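The standard argument can be sketched in Python syntax (illustrative pseudocode only; `halts` stands for the hypothetical program H, which, as just stated, cannot actually be written):

```python
# Suppose, for the sake of contradiction, that H exists:
def halts(program, data):
    """Hypothetical: returns True if program(data) eventually halts."""
    ...

# Then we could write this troublemaker:
def paradox(program):
    if halts(program, program):  # would program(program) halt?
        while True:              # if H says yes, run forever
            pass
    else:
        return                   # if H says no, halt immediately

# Now consider paradox(paradox): it halts if and only if H says it
# does not halt -- a contradiction, so no program H can exist.
```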
Informally, the unsolvability of the Halting Problem and similar problems means that the only general method for finding out what a program will do is to run it; there is no general way to predict in advance what an arbitrary program will do. (Of course, in many individual cases, you can predict what the program will do. But there is no general method for making such predictions.) Thus, computation is, in some fundamental way, surprising.
Here are some important terms and ideas from Chapter 4:
- Computational Universality
- Simulation
- Translators and Interpreters
- Low-level simulation
- The Church-Turing Thesis
- The Halting Problem
The model computer constructed in Chapters 2 and 3 is unrealistic because it is simpler than any real computer. The theoretical "computationally universal" computers in Chapter 4 are unrealistic because, to be truly computationally universal, a computer must have an infinite amount of memory. In Chapter 5 we turn, finally, to a discussion of real computers.
Real computers are computer systems in which a large number of devices communicate with each other and with the user. Often, these devices communicate by connecting to busses that carry data, address, and control information. One type of control signal is an interrupt, which causes the CPU to interrupt whatever it is doing and execute special code--called an interrupt handler--to respond to the interrupt signal. The software necessary to operate a real computer system is called the OS (operating system).
The discussion of real computers in Chapter 5 also includes sections on the history of computers and on the social impact of computer technology. There are a lot of ideas here. Sections 5.1 and 5.3 are fairly straightforward. Section 5.2 is more technical, but not so much as Chapters 2 and 3. So for the most part you should find this chapter readable and reasonably easy to follow. Here is a list of the major ideas in this chapter:
- Charles Babbage, Ada Lovelace, and the Analytical Engine
- Electronic, general-purpose, stored-program computers
- Eckert and Mauchly and the ENIAC
- John von Neumann and the Von Neumann Machine
- Relays, vacuum tubes, and transistors
- Integrated circuits and microprocessors
- Batch processing
- Time-sharing
- Command-line interface
- GUI (Graphical User Interface)
- Interrupts, interrupt handlers, and I/O
- Busses
- Operating System
- Device drivers
- Start-up program
- Applications programs
- API (application programming interface)
- Information Age
- De-skilling
- Computerized databases
Chapter 6 is the first of three chapters on programming. It introduces a made-up high-level programming language called "xTurtle" and uses it to illustrate some of the fundamental ideas in computer programming. The important ideas in this chapter include:
- Turtle graphics
- Syntax
- Semantics
- Variables
- Declaring a variable
- Assignment Statements
- Built-in subroutines
- Subroutine call statements
- Parameters
- I/O
- Flow of control in a program
- LOOP statements
- IF statements
- Processes and states
- Preconditions and postconditions
The syntax of a language is the set of rules that determines what strings of symbols are legally part of that language; the syntax of a program or statement refers to its "structure," that is, to the way this one particular string of symbols conforms to the grammar of the language. Syntax is mechanical, and can be checked by a computer.
The semantics of a language is the set of rules that determines the meaning of syntactically correct strings of symbols in that language; the semantics of a program or a statement in a programming language can be thought of as the effect that that program or statement will have when it is executed. As we know from the unsolvability of the Halting Problem, the only completely general way to determine the meaning of a program is to run it. To deal with semantics in a somewhat formal way, we can use states and processes. The meaning of a statement in a program can be thought of as the change that it causes in the state.
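This state-based view of semantics can be sketched in Python (illustrative only; the text's examples are in xTurtle): represent the state of a running program as a mapping from variable names to values, and the meaning of a statement as a function that transforms one state into another.

```python
# The state of a running program: each variable name mapped to its value.
state = {'x': 3, 'y': 0}

def assign_y_twice_x(state):
    """The 'meaning' of the statement  y := 2 * x  as a state transformer."""
    new_state = dict(state)            # copy the old state...
    new_state['y'] = 2 * new_state['x']  # ...and change only y
    return new_state

print(assign_y_twice_x(state))  # {'x': 3, 'y': 6}
```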
Preconditions and postconditions are especially important. They are related to the idea that programs should be designed, not simply thrown together. Preconditions and postconditions can be used to check that different parts of a program fit together correctly. Programming is difficult, but there are ways of thinking about and approaching programming problems that can make them more manageable. Preconditions and postconditions are among the tools of software engineering.
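As an illustrative sketch (in Python, not a language used in the course), a subroutine can state its precondition (what must be true before it is called) and its postcondition (what it guarantees afterwards) as explicit checks:

```python
import math

def sqrt_int(n):
    # Precondition: n must be a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition violated"
    root = math.isqrt(n)
    # Postcondition: root is the largest integer whose square is <= n.
    assert root * root <= n < (root + 1) ** 2, "postcondition violated"
    return root

print(sqrt_int(10))  # 3
```

A caller only needs to establish the precondition and may then rely on the postcondition; this is exactly how the two conditions let separately written parts of a program fit together.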
The reading for this week is Chapter 7, Sections 1 through 3. Section 4 is optional; there will be no questions on it on tests or quizzes.
In Chapter 6, the built-in subroutines of xTurtle were introduced. These are true black boxes that you can use without any understanding of their implementation. However, as you know, being able to build new black boxes is the real key to creating complex systems. In the case of xTurtle programs, the programmer can create new subroutines using SUB and END SUB.
The second section looks more carefully at the process of developing complex programs. Large, complex programs have a life cycle, referred to as the "software life cycle." This cycle includes specification and analysis of the problem, design of a solution, translation of that solution into code written in some programming language, testing of the program, and then maintenance of that program for years afterwards as bugs are found and new requirements are imposed. Large programs are made up of modules--just another name for black boxes in this context. More formally, a module is a relatively self-contained component of a system that interfaces with other parts of the system in simple, clearly defined, easy-to-understand ways.
The third section of the chapter looks at one of the nicest applications of subroutines: recursion. A subroutine is called recursive if it calls itself, either directly or indirectly through another subroutine. Recursion turns out to be very powerful and important, but for now we just investigate its use in drawing pretty pictures in xTurtle.
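A recursive drawing subroutine can be sketched in Python (illustrative only; an xTurtle version would actually draw on the screen, while this sketch just records the segments it would draw):

```python
def spiral(length, depth, path=None):
    """In the spirit of a recursive xTurtle drawing subroutine:
    "draw" a segment, then call ourselves with a shorter segment."""
    if path is None:
        path = []
    if depth == 0:           # the base case: stop recursing
        return path
    path.append(length)      # "draw" a segment of this length
    return spiral(length * 0.5, depth - 1, path)   # the recursive call

print(spiral(100, 4))  # [100, 50.0, 25.0, 12.5]
```

The essential pattern, a base case plus a call to the subroutine itself on a smaller version of the problem, is the same whether the subroutine draws pictures or does anything else.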
Here are some of the important ideas in Chapter 7:
- Defining new subroutines in xTurtle with SUB and END SUB
- Dummy parameters and actual parameters
- Local variables
- Global variables
- Software engineering
- The software life cycle: Specification, Analysis, Design, Coding, Testing, and Maintenance
- Modules
- Top-down design and bottom-up design
- Recursion
There is a TEST this week on Friday. The test covers material from: Chapters 4, Section 1 plus the Halting Problem; Chapter 5, Sections 1 and 2; Chapter 6; and Chapter 7, Sections 1 and 2. (The test will not cover Section 3 from Chapter 7, but that section will be covered on the final exam.) There will be a quiz next week on the first two sections of Chapter 8. We will not cover Section 3 of Chapter 8. We will also skip Chapters 9 and 10 and will spend the last two weeks of the course on Chapters 11 and 12.
If we look at the history of programming languages, we can see several major themes in their evolution. These are covered in Section 1, along with a brief history of some of the major programming languages.
In the previous two chapters, you learned some of the basics of programming by looking at the simple language, xTurtle. This language has many features common to the great majority of programming languages (such as subroutines, loops, if statements, variables, assignment statements, and input/output). But it lacks one major important feature: data types.
In most "real" programming languages, each variable has a type that tells what sort of data that variable can hold. All languages have several built-in, basic data types such as integer, real, and string. They also provide the ability to define new data types that describe data structures. A data type can be thought of as both a blueprint for making data structures and as a roadmap to a data structure once it has been created.
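As an illustrative sketch (in Python rather than one of the languages named in the chapter), a programmer-defined record type acts as both blueprint and roadmap:

```python
from dataclasses import dataclass

@dataclass
class Student:      # a programmer-defined record type (the "blueprint")
    name: str       # each field has its own built-in type
    scores: list    # an array of numbers

# The type is also a "roadmap": given any Student, we know how to
# find its parts by name.
s = Student(name="Ada", scores=[92, 88, 95])
print(s.name, sum(s.scores) / len(s.scores))
```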
Important ideas in this chapter include:
- Themes in the evolution of programming languages: Support for abstraction ("put it in the language"), Increasing respect for data, Moving away from the machine
- Significant programming languages: FORTRAN, ALGOL, COBOL, Basic, Pascal, C, C++, Ada
- Abstraction: Control abstraction, Procedural abstraction, Data abstraction
- Algorithm
- Data types
- Data structures
- Arrays
- Records
- UNITs in Pascal