Numbers are as fundamental to computing as breath is to human life. Even programs that have nothing to do with math need to count the items in a data structure, display average running times, or use numbers as a source of randomness. Ruby makes it easy to represent numbers, letting you breathe easy and tackle the harder problems of programming.
An issue that comes up when you're programming with numbers is that there are several different implementations of "number," each optimized for a different purpose: 32-bit integers, floating-point numbers, and so on. Ruby tries to hide these details from you, but it's important to know about them because they often manifest as mysteriously incorrect calculations.
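The classic example of such a mysteriously incorrect calculation involves floating-point numbers, which can't represent most decimal fractions exactly. A quick sketch:

```ruby
# Floating-point numbers are binary approximations of decimal fractions,
# so arithmetic that looks simple can give a slightly "wrong" answer.
sum = 0.1 + 0.2
sum == 0.3        # => false
sum               # => 0.30000000000000004
```

This isn't a Ruby bug; it's inherent to the floating-point representation that nearly every language uses.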
The first distinction is between small numbers and large ones. If you've used other programming languages, you probably know that you must use different data types to hold small numbers and large numbers (assuming that the language supports large numbers at all). Ruby has different classes for small numbers (Fixnum) and large numbers (Bignum), but you don't usually have to worry about the difference. When you type in a number, Ruby sees how big it is and creates an object of the appropriate class:

    1000.class           # => Fixnum
    10000000000.class    # => Bignum
    (2**30 - 1).class    # => Fixnum
    (2**30).class        # => Bignum
When you perform arithmetic, Ruby automatically does any needed conversions. You don't have to worry about the difference between small and large numbers:
    small = 1000
    big = small * 10_000_000    # => 10000000000
    big.class                   # => Bignum
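The conversion works in both directions: a result that outgrows the small range is promoted, and a result that shrinks back down is demoted just as silently. A minimal sketch (class names as in the Ruby 1.8/1.9 line; Ruby 2.4 and later unify both classes under Integer):

```ruby
# A result too large for a machine word is promoted automatically;
# arithmetic stays exact with no overflow.
big = 2 ** 100
product = big * big            # exact 200-bit result

# A result small enough for the small-number range drops back down.
smaller = big / (2 ** 90)      # => 1024
```

In either direction you never call a conversion method yourself; the arithmetic operators handle it.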