Typo disclaimer: they will happen because this keyboard is not very sensitive. If you want to alert me to any, please leave a comment like: Typo in p5s1: teh -> the. This will tell me to fix the typo in the first sentence (s1) of the fifth paragraph.
August 27, 2012 (This will not be included in the book :-P)
I had an opportunity to meet the professor for this class a few days previously. He immediately struck me as a total prick, but at least English is his first language. You need to understand that in mathematics departments there is worse than a fifty-fifty chance of this. He also sounds exactly like Mr. Mackey, which is a source of interminable amusement for me.
So imagine this Mr. Mackey sound-alike telling you that reading the textbook won’t do you much good because he won’t be following it very closely, and that Mathematica will not be an acceptable substitute for Matlab, and you have a good idea of our first meeting. On to the real stuff.
What is an approximation for pi?
A couple of common approximations are 3.14 and 22/7 (which is correct to three digits: 22/7 = 3.1428…). In fact, back in the days before fire was invented and when calculators were prohibitively expensive, it was normal for students to plug in 22/7 for pi when solving problems, so much so that a few of the dimmer ones believed (and probably still believe) that pi = 22/7.
But what if I said pi was approximately 28? Technically, it’s true, although I’d have a hard job convincing NASA to hire me for my brilliant sense of humor. The problem is that it’s not a very good approximation of pi.
A good numerical approximation is not very different from the number it is approximating. Any difference between the two is called error. There are two important kinds of error, called absolute error and relative error.
If p* (usually pronounced “p star”) is approximating another number p, then the difference between the two is called the absolute error. That is, absolute error = |p - p*|.
Example: The absolute error between pi and 22/7 is |pi - 22/7| ≈ 0.00126.
But this presents an incomplete picture of the error. An absolute error of one million would be remarkably good if you were measuring the Gross Domestic Product (in dollars), and an absolute error of 0.0001 would be terrible if you were measuring **insert something small when you’re feeling creative**. That’s where relative error comes in.
Again using p* as the approximation for the real number p, the relative error is the difference between the two divided by the real number: relative error = |p - p*| / |p|.
While it’s obviously important to understand the simple mathematical definitions, in practice these are often referenced in a purely theoretical sense because it is rarely possible to know what p should be. (Otherwise, we’d just be using p instead of trying to approximate it.) That said, there’s no better method for checking that a numerical algorithm is working properly than comparing it with the exact numbers (found algebraically) and computing the absolute and relative error.
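Both definitions are easy to check by hand or by machine. Here’s a short sketch in Python (standing in for Matlab, since that’s what the class actually uses) that computes the absolute and relative error of 22/7 as an approximation of pi:

```python
import math

p = math.pi        # the "exact" value we are approximating
p_star = 22 / 7    # the approximation

abs_err = abs(p - p_star)     # absolute error: |p - p*|
rel_err = abs_err / abs(p)    # relative error: |p - p*| / |p|

print(abs_err)   # roughly 0.00126
print(rel_err)   # roughly 0.000402
```

Since pi is small (order of magnitude 1), the absolute and relative errors are close to each other here; for a GDP-sized number the two would tell very different stories.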
Machine Storage of Numbers
**Note: Include a primer on binary numbers and floating point numbers, perhaps as appendices.**
Because computers have a finite number of bits, they can’t store nonterminating numbers like pi exactly. Such numbers have to be truncated in some way. There are two primary methods for this, called k-digit chopping and k-digit rounding.
K-digit chopping is exactly what it sounds like: write the number in normalized floating point form, keep the first k digits, and chop off everything after them.
**Note: Introduce floating point notation so that these examples will actually make sense.**
Example: Find , using
- 1-digit chopping arithmetic
- 3-digit chopping arithmetic
- 10-digit chopping arithmetic
and express the answers in base-10.
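To make the idea concrete, here is one way k-digit chopping (and its sibling, k-digit rounding) might be sketched in Python. The function names `chop` and `round_k` are my own inventions, not anything standard, and real machines do this in binary rather than base-10; this is just the base-10 picture from the definitions above:

```python
import math

def chop(x, k):
    """k-digit chopping: keep the first k significant decimal digits
    of x and throw the rest away."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))   # exponent of the leading digit
    scale = 10.0 ** (k - 1 - e)          # shift the first k digits left of the point
    return math.trunc(x * scale) / scale

def round_k(x, k):
    """k-digit rounding: same idea, but round the last kept digit
    instead of discarding what follows."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    scale = 10.0 ** (k - 1 - e)
    return round(x * scale) / scale

print(chop(math.pi, 1))     # 3.0
print(chop(math.pi, 3))     # 3.14
print(chop(math.pi, 5))     # 3.1415
print(round_k(math.pi, 5))  # 3.1416
```

Notice that chopping and rounding can disagree in the last digit (3.1415 versus 3.1416 for pi with k = 5). One honest caveat: Python’s own floats are binary, so values like 3.14 are themselves only approximations of the decimal results the definitions describe.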