Computers don’t understand numbers

Do computers understand anything? That’s a valid question. They don’t understand English, or love, or toast. That doesn’t mean you can’t have a computer interact with English (I’m not going to address love or toast here) and, in its own way, handle it better than people who do actually understand English – spellcheckers and grammar checkers, for example.

But numbers? That’s a bolder claim. Some may argue that a computer understands nothing _but_ numbers – the first step in allowing a computer to process anything is to turn it into numbers.

At the heart of the CPU are two key components (of which a modern CPU may have several of each) – the ALU (arithmetic logic unit) and the FPU (floating point unit). The ALU does integer arithmetic, and the FPU does floating point arithmetic. If a computer understands anything, integer and floating point arithmetic are definitely in that set.

Only thing is – integers and floats, as computers implement them, aren’t numbers. They’re concepts which are similar enough to numbers in most use-cases to fool us into thinking they behave like numbers, but they have enough exceptional behaviour to make that assumption dangerous.

Int =/= Integer

For the purpose of this section, I’m going to use int to talk about n-bit computer integers, including bytes, shorts and longs, and integer to talk about the infinitely-large number-theoretic set of values.

Ints are represented by a stream of binary digits of a fixed length – typically 8, 16, 32 or 64 bits, depending on the architecture. That limits the number of numbers the stream can represent: an 8-bit int can represent 2^8 (=256) different numbers, whereas a 32-bit int can represent 2^32 (=4294967296) different numbers. Any operation on ints then has to result in another int. This leads to two categories of problems:

1) You can’t represent non-integers.

Ints handle division just fine. 10/2 results in 5. The problem they have is when that division doesn’t result in another int.

In many languages, the expression 2/3 results in 0, not 0.666… That’s because any operation on ints has to result in an int: there’s no way of representing 0.666… in a binary stream where each possible combination represents an integer, and the decision was made when doing division to truncate – always round towards zero.

This point seems fair enough, as long as it’s clear you’re dealing with ints. The problem comes when representing numbers as ints is the default, meaning that if you ask for the value of 1/7, you’ll get 0 – and the standard workaround is to ask for 1.0/7 instead, forcing the division to happen in floating point. You have to use a trick to get numbers which behave like numbers.
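
To make the truncation concrete, here’s a quick illustration in Java (the same behaviour shows up in C, C# and plenty of other languages):

```java
public class IntDivision {
    public static void main(String[] args) {
        System.out.println(10 / 2);    // 5 - division that lands exactly on an int is fine
        System.out.println(2 / 3);     // 0 - the true result 0.666... is truncated towards zero
        System.out.println(1 / 7);     // 0 - same again
        System.out.println(1.0 / 7);   // ~0.142857... - the "make one operand a double" workaround
    }
}
```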

2) You can’t represent all integers.

I’m going to talk about 8-bit ints here, because they’ve got nice small, familiar limits. The same reasoning applies to more typical int sizes, only I’ll spend more time typing and double-checking numbers.

If you take a number and add one to it, you get a bigger number than the one you started with. That’s true of integers, but not of ints. Which makes sense: if you can only represent 256 numbers, there’s going to be a biggest number you can represent (255 if you don’t care about negative numbers, or typically 127 if you do).

Well, this seems reasonable, on the face of it. There are, after all, infinitely many integers, and therefore in order to represent all of them, you’d need an infinite amount of memory.

So if you’ve got a biggest number you can represent, what happens if you add 1 to it? Typically, you get the smallest number you can represent: 127 + 1 yields -128. With two’s complement representation the bits simply wrap around modulo 2^n, so that’s the easiest answer for the ALU to provide (it’s what you get when there’s no special-case logic for that operation), and it also, in its own way, makes the most sense.
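
Java, for example, defines integer overflow to wrap exactly like this (the cast is needed because Java promotes byte arithmetic to int before adding):

```java
public class Overflow {
    public static void main(String[] args) {
        byte b = 127;                        // the largest value an 8-bit signed int can hold
        b = (byte) (b + 1);
        System.out.println(b);               // -128 - wrapped around to the smallest value

        int big = Integer.MAX_VALUE;         // 2147483647, the 32-bit equivalent
        System.out.println(big + 1);         // -2147483648
    }
}
```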

That’s a case you’ll hit quite frequently if you’re using 8-bit ints: you’ll very quickly realise in general usage that you can’t represent 300. When the default int size is 32-bit, though, and you can represent up to a little over 2 billion, that’s different: you’ll have logic which handles all reasonable inputs until suddenly it doesn’t.

You also get weird behaviour at the very edges: finding the absolute value of an int, for example. abs(3) is 3, and abs(-3) is also 3. By its definition, you’d expect abs(x) >= 0 for every x. And if you try, you’ll find that abs(x) >= 0 for every single value except one: abs(-128) is -128, because +128 can’t be represented in an 8-bit signed int, so the negation wraps straight back around to -128.
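
The same edge case exists at every int size; with Java’s 32-bit ints, for instance, it sits at -2,147,483,648:

```java
public class AbsEdgeCase {
    public static void main(String[] args) {
        System.out.println(Math.abs(3));                  // 3
        System.out.println(Math.abs(-3));                 // 3
        // +2147483648 doesn't fit in a 32-bit int, so negating the minimum wraps back to itself
        System.out.println(Math.abs(Integer.MIN_VALUE));  // -2147483648
    }
}
```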

The thing is, whilst I don’t have an infinite amount of memory installed in my PC, I do have more than 32 bits. Most sensible languages can represent numbers larger than the ALU can handle – at added cost, of course: the larger the numbers are, the slower they’ll be and the more memory they’ll need (it’s worth pointing out that, as 32-bit ints, 0, 1 and 1,952,325,401 all occupy the same amount of memory). If they can do that… why isn’t that the default?
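
Java’s BigInteger is one example of this kind of arbitrary-size integer – it just isn’t the default:

```java
import java.math.BigInteger;

public class ArbitraryPrecision {
    public static void main(String[] args) {
        BigInteger max = BigInteger.valueOf(Integer.MAX_VALUE);   // 2147483647
        // No wrap-around: the result simply takes as much memory as it needs
        System.out.println(max.add(BigInteger.ONE));   // 2147483648
        System.out.println(max.pow(10));               // a 94-digit number, far beyond any machine int
    }
}
```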

Ints are nothing, however, compared to floats.

Floats =/= real numbers

Floats too are represented by a stream of binary digits, typically either 32 or 64 bits long (16-bit floats don’t give enough precision to be useful for general arithmetic, and some hardware also offers 80-bit extended precision). 32-bit floats are often referred to simply as floats, whereas 64-bit floats are usually called doubles. So we have the same basic root issue as with ints: you can only represent 2^n distinct numbers.

Floats, however, have far more practical issues than ints. The number is represented by three parts: sign, mantissa and exponent. All numbers are effectively represented in scientific notation: eg, 310,000 is 3.1E5 – the mantissa here is 3.1, and the exponent 5. Except, what with this being a computer and all, each of these is done in binary: 0.5 would be represented by 1E-1 (ie, 1 * 2^-1), and 0.25 by 1E-2. This means we can exactly represent any fraction whose denominator is a power of two.
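
You can inspect the exact value a double actually stores by feeding it into Java’s BigDecimal; fractions whose denominators are powers of two really do come out exactly:

```java
import java.math.BigDecimal;

public class PowersOfTwo {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the stored value digit for digit
        System.out.println(new BigDecimal(0.5));    // 0.5   - denominator 2, exact
        System.out.println(new BigDecimal(0.25));   // 0.25  - denominator 4, exact
        System.out.println(new BigDecimal(0.375));  // 0.375 - 3/8, still exact
    }
}
```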

It also means we can’t exactly represent any fraction whose denominator isn’t a power of two. Like, for example, 0.1. Or 0.01. This immediately makes floats a bad representation of non-whole numbers for general-purpose use, especially as the usual way to specify a float value is as a decimal: most of the time you write a float value less than 1, you have a subtly incorrect value before you’ve even done anything with it.

This is a pretty common lesson: most programmers are taught very early on not to use floating-point values to represent currency. It is, however, frankly nonsense that this needs to be taught at all – that in most languages the literal “0.01” in source code doesn’t actually represent 0.01.
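
The same BigDecimal trick shows just how subtly incorrect the literal 0.1 already is, and what that does to simple arithmetic:

```java
import java.math.BigDecimal;

public class NotQuitePointOne {
    public static void main(String[] args) {
        // The exact value the double literal 0.1 actually stores:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(0.1 + 0.2 == 0.3);   // false
        System.out.println(0.1 + 0.2);          // 0.30000000000000004
    }
}
```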

The usual advice instead is to use ints, where 1 is a penny and 100 is a pound. Makes sense, until I have to deal with around 2 billion pennies (£20,000,000) and run straight into the top of a 32-bit int…

With actual numbers, a + b = b is only true for a = 0. That’s not true for floats. You’re only representing a certain amount of precision: operations have an implicit rounding to them. So, for example, adding 0.000001 to 1,000,000 with 32-bit floats yields exactly 1,000,000. Loss of precision is to be expected, except it doesn’t always happen how you’d expect it, and it isn’t confined to the fractional part: with 64-bit doubles, 1.0 + 10^16 is exactly 10^16.
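
Where exactly the addition gets rounded away depends on the width of the float; a sketch of both cases in Java:

```java
public class AbsorbedAdditions {
    public static void main(String[] args) {
        float million = 1_000_000.0f;
        System.out.println(million + 0.000001f == million);   // true - the tiny addend is rounded away

        double tenToThe16 = 1e16;
        System.out.println(tenToThe16 + 1.0 == tenToThe16);   // true - even adding a whole 1 changes nothing
    }
}
```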

It’s not just that you get implicit rounding: it’s that you get implicit rounding after every operation. For actual numbers, (a + b) + c == a + (b + c) is true, and (a * b) * c == a * (b * c) is true. These are fundamental, defining properties of addition and multiplication on numbers. They don’t hold for floats – because of that implicit rounding.
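
For example, with doubles, grouping the same three additions differently gives visibly different answers:

```java
public class NotAssociative {
    public static void main(String[] args) {
        double a = 1e16, b = -1e16, c = 1.0;
        System.out.println((a + b) + c);   // 1.0 - the big values cancel first, so the 1.0 survives
        System.out.println(a + (b + c));   // 0.0 - the 1.0 is absorbed into -1e16 before the cancellation

        System.out.println((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3));   // false
    }
}
```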

Again, this is ingrained into the programmer’s toolbox: don’t compare floats for exact equality, because performing the same operations on the same numbers, just in a different order, can give different answers. Instead, check that one number’s within a small range of the other. This is a sensible thing to do when comparing theoretical values with a real-world measurement: it’s another thing entirely when all your values are theoretical.
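
A typical tolerance-based comparison looks something like this – the 1e-9 here is an arbitrary choice, and picking a sensible tolerance is a problem in its own right:

```java
public class CloseEnough {
    public static void main(String[] args) {
        double computed = 0.1 + 0.2;
        double expected = 0.3;

        System.out.println(computed == expected);                   // false - exact equality is too strict
        System.out.println(Math.abs(computed - expected) < 1e-9);   // true  - "within a tolerance" comparison
    }
}
```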

This is still while considering floats which represent actual numbers. Not all floats do. There are three special values which don’t represent numbers: POSITIVE_INFINITY (the result of, amongst others, dividing a positive number by zero), NEGATIVE_INFINITY (dividing a negative number by zero), and NOT_A_NUMBER (one way to get this is to subtract infinity from infinity).

Yes, that’s right: this number format can represent things which aren’t numbers. And, because these are valid floating-point values, you can still perform arithmetic on them. Most of this behaves in a surprisingly normal way: almost all operations involving NaN yield NaN.
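
In Java these special values live on the Double class, and producing and propagating them looks like this:

```java
public class SpecialValues {
    public static void main(String[] args) {
        System.out.println(1.0 / 0.0);    // Infinity  - no exception, just a special value
        System.out.println(-1.0 / 0.0);   // -Infinity

        double nan = Double.POSITIVE_INFINITY - Double.POSITIVE_INFINITY;
        System.out.println(nan);              // NaN
        System.out.println(nan + 1);          // NaN - almost any operation involving NaN yields NaN
        System.out.println(Math.sqrt(-1.0));  // NaN - another way to produce it
    }
}
```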

Where NaN gets interesting is the concept of ordering. Numbers are ordered: a given number is either less than, equal to, or greater than another number. Computer integers are ordered. Floats, by virtue of NaN, are not: NaN is neither less than, equal to, nor greater than 4, and it would be meaningless to say any of those should be true. Now, admittedly, the specification for floating-point numbers does provide a total ordering – but the way sorting algorithms are generally implemented means that in the vast majority of sort implementations for the vast majority of languages, sorting a list of floating-point numbers with NaNs in it will give less than useful results.
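
The comparison results below are what the floating-point standard mandates; the sorting behaviour is language-specific – Java’s built-in sort happens to shunt NaN to the end, but plenty of languages are far less predictable:

```java
import java.util.Arrays;

public class UnorderedNaN {
    public static void main(String[] args) {
        double nan = Double.NaN;
        System.out.println(nan < 4.0);    // false
        System.out.println(nan > 4.0);    // false
        System.out.println(nan == 4.0);   // false
        System.out.println(nan == nan);   // false - NaN isn't even equal to itself

        double[] values = { 3.0, Double.NaN, 1.0, 2.0 };
        Arrays.sort(values);              // Java sorts NaN after every other value
        System.out.println(Arrays.toString(values));   // [1.0, 2.0, 3.0, NaN]
    }
}
```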

Leaky Abstractions

It’s a truism that all meaningful abstractions are somehow leaky: that eventually they betray their implementation details and behave in a manner other than how you’d expect the abstraction to behave. That’s particularly true of ints and floats, which are ultimately an abstraction over the concept of numbers. For general-purpose use, they’re frankly bad abstractions, particularly when you consider how abstractions are used.

It’s one thing to know that adding two numbers in the two-billion range is liable to cause an overflow. It’s another thing entirely when a general-purpose function which internally adds two numbers, without considering how big those numbers are, is called by another function which passes in whatever values it was given, without knowing what’s going to be done with them.

The danger of leaky abstractions is that when you use them, you need to allow for their leakiness – which means you need to consider the internals of the abstraction, or at least the consequences thereof. And if you do that, then it’s no longer truly an abstraction.

Only thing is: numbers are important when programming. Really important. Too important to put up with bad abstractions.

Is there a better way?

Well, that depends what you mean by better. For Minstrel, I’m choosing to make the default number representation rational numbers: every number is an arbitrarily large integer divided by another arbitrarily large integer. That can exactly represent the result of any number of arithmetic operations on any number of precise values. 0.01 means exactly one divided by a hundred.
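
For illustration only – this is a sketch of the idea in Java, not Minstrel’s actual implementation, and every name in it is made up – a minimal rational type needs little more than two arbitrary-size integers kept in lowest terms:

```java
import java.math.BigInteger;

public class Rational {
    private final BigInteger num;
    private final BigInteger den;   // always kept positive

    public Rational(BigInteger num, BigInteger den) {
        if (den.signum() == 0) throw new ArithmeticException("zero denominator");
        if (den.signum() < 0) { num = num.negate(); den = den.negate(); }
        BigInteger g = num.gcd(den);                   // reduce to lowest terms
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    public static Rational of(long num, long den) {
        return new Rational(BigInteger.valueOf(num), BigInteger.valueOf(den));
    }

    public Rational add(Rational other) {              // a/b + c/d = (ad + cb) / bd
        return new Rational(num.multiply(other.den).add(other.num.multiply(den)),
                            den.multiply(other.den));
    }

    public Rational multiply(Rational other) {         // a/b * c/d = ac / bd
        return new Rational(num.multiply(other.num), den.multiply(other.den));
    }

    @Override
    public String toString() { return num + "/" + den; }

    public static void main(String[] args) {
        Rational tenth = Rational.of(1, 10);           // exactly 1/10 - no rounding on the way in
        Rational sum = tenth;
        for (int i = 0; i < 9; i++) sum = sum.add(tenth);
        System.out.println(sum);                       // 1/1 - adding 0.1 ten times really does give 1
    }
}
```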

Some have asked me what to do about irrational numbers. Well, let’s be frank: you can’t precisely represent irrational numbers with any numeric representation. That’s the definition of irrational numbers. So it’d be foolhardy to even try. If you want to know the value of sqrt(2), you’re not getting an exact answer.

So, what should you get? Usually, you get a rational value which is the closest value double-precision floating point can represent. That’s fair, given you’re living with the limitations of floating-point number representation – but you’re getting a rational number regardless, so that’s something easily represented with a rational number type. So I could default to something with the exact precision that double provides. How gloriously arbitrary would that be?

A better approach would be to specify the precision you need, and get the closest possible value to that given precision – so the same sqrt function can give you two decimal points when that’s all you need, and 1000 when you need that (close to never).
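
Java’s BigDecimal.sqrt (Java 9 and later) works along these lines, although it counts significant digits rather than decimal places – you state the precision you want, and you get the closest answer at that precision:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SqrtToPrecision {
    public static void main(String[] args) {
        BigDecimal two = BigDecimal.valueOf(2);
        // Same question, different precision requests:
        System.out.println(two.sqrt(new MathContext(3)));    // 1.41
        System.out.println(two.sqrt(new MathContext(30)));   // sqrt(2) to 30 significant digits
    }
}
```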

Then, at least, your numbers can behave like numbers, and when you can’t get an exact number, you can at least get what you need (and be made aware that you’re not getting a precise solution at the same time).

Same thing for pi, or e, or whatever. A program which prints pi to a million decimal places would simply be:

print[pi[decimalPlaces[1000000]]];

That sounds slow!

Is it slower than using machine types? Definitely – especially with the current, somewhat naive implementation. Is that a problem? Well, I don’t know. There’s a lot of room for optimisation within the type, and of all the things a significant piece of software spends its time doing, arithmetic is usually somewhere between a negligible fraction and none at all.

And if you want to use machine types, I don’t see any reason why there shouldn’t be support for that. Most experienced programmers know their way around the potential pitfalls of using them, and they’re undoubtedly more efficient both in terms of space and time. I don’t have a problem with machine types – I just think they’re a terrible default for numeric types.

Long story short: I want numbers to behave like numbers, up until the point where I run out of memory. For various reasons – decisions which mostly made a lot of sense at the time, although I’d argue they’re less compelling now – that’s not how things are done. It’s not unusual nowadays for languages to have arbitrary-size integers as their default representation; it doesn’t seem like a big ask to extend that to rationals.
