Unlike many languages, JavaScript does not make a distinction
between integer values and floating-point values. All numbers in
JavaScript are represented as floating-point values. JavaScript
represents numbers using the 64-bit floating-point format defined by
the IEEE 754 standard,^{[1]} which means it can represent numbers as large as
±1.7976931348623157 × 10^{308} and as small as
±5 × 10^{−324}.
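These IEEE 754 limits are exposed in JavaScript itself as properties of the `Number` object, so you can verify them directly:

```javascript
// The largest and smallest (closest to zero) representable magnitudes
console.log(Number.MAX_VALUE);  // 1.7976931348623157e+308
console.log(Number.MIN_VALUE);  // 5e-324

// Exceeding MAX_VALUE overflows to Infinity
console.log(Number.MAX_VALUE * 2);  // Infinity
```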

The JavaScript number format allows you to exactly represent all
integers between −9007199254740992 (−2^{53})
and 9007199254740992 (2^{53}), inclusive. If
you use integer values larger than this, you may lose precision in the
trailing digits. Note, however, that certain operations in JavaScript
(such as array indexing and the bitwise operators described in Chapter 4) are performed with 32-bit integers.
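You can see this loss of precision for yourself by working just past the 2^53 boundary:

```javascript
// Every integer up to 2^53 is exactly representable...
console.log(Math.pow(2, 53));      // 9007199254740992

// ...but beyond it, odd integers cannot be represented exactly:
// 2^53 + 1 rounds back down to 2^53
console.log(Math.pow(2, 53) + 1);  // 9007199254740992
console.log(Math.pow(2, 53) + 2);  // 9007199254740994
```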

When a number appears directly in a JavaScript program, it’s
called a *numeric literal*. JavaScript supports
numeric literals in several formats, as described in the following
sections. Note that any numeric literal can be preceded by a minus
sign (-) to make the number negative. Technically, however, - is the
unary negation operator (see Chapter 4) and is
not part of the numeric literal syntax.

In a JavaScript program, a base-10 integer is written as a sequence of digits. For example:

```javascript
0
3
10000000
```

In addition to base-10 integer literals, JavaScript recognizes hexadecimal (base-16) values. A hexadecimal literal begins with “0x” or “0X”, followed by a string of hexadecimal digits. A hexadecimal ...
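For example, under the rules just described, each hexadecimal literal below begins with `0x` and evaluates to the base-10 value shown:

```javascript
// Hexadecimal digits run 0-9 plus a-f (or A-F) for values 10-15
console.log(0xff);    // 255: 15*16 + 15
console.log(0xCAFE);  // 51966
```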
