WorldOfTopics.com

# How information is encoded on a computer: differences between binary and hexadecimal numbers


## How information is encoded on a computer: differences between binary and hexadecimal numbers

We are all accustomed to the decimal number system, and many people do not even realize that this system is unsuitable for digital devices. Your computer operates with numbers just as a human does, but in the binary number system. What does this mean, and why the extra complexity?

All over the world, people use a so-called "positional" system: the value of a digit depends on its position within the number. This means that the digits 1, 4 and 2 together make the number one hundred and forty-two, but if you rearrange those digits, you get a completely different number. Each digit in a number has its own place, so we can easily read a written number and determine its value.
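The positional rule described above can be sketched in a few lines of Python (the `from_digits` helper is just for illustration): each digit is worth its face value multiplied by a power of the base determined by its position.

```python
def from_digits(digits, base=10):
    """Combine a list of digits (most significant first) into a number."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one position, then add the digit
    return value

print(from_digits([1, 4, 2]))  # 142
print(from_digits([4, 1, 2]))  # 412 - same digits, different positions
```

The same function works for any base, which is exactly why binary and hexadecimal behave like decimal, just with a different number of symbols per position.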

But the positional system was not always the way we see it today. For example, zero was invented long after the other digits. Until then, numbers were written down differently.

Roman numerals used repeated symbols to build up a number. For example, 10 is written as X, 30 as XXX, and 1123 in Roman notation is MCXXIII.
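As an illustration of this additive, non-positional scheme, here is a minimal Python sketch that builds a Roman numeral by greedily subtracting the largest available value (the `to_roman` helper is just for demonstration):

```python
# Symbol values in descending order, including the subtractive pairs (CM, XL, ...).
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for value, symbol in ROMAN:
        count, n = divmod(n, value)  # how many times this symbol repeats
        out.append(symbol * count)
    return "".join(out)

print(to_roman(10))    # X
print(to_roman(30))    # XXX
print(to_roman(1123))  # MCXXIII
```

Note how the symbol for 10 is simply repeated three times to make 30; position carries no meaning of its own.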

### How does the computer think?

The computer uses the simplest number system of all, the binary one: numbers are written using only two symbols, zero and one. This is dictated primarily by the computer's internal structure. There are millions of transistors inside the processor, and each of them, by design, has two states: on (current flows) or off (no current flows).

*Differences between binary, decimal and hexadecimal systems*

### Binary system

Computers cannot use the familiar human number system, because a single transistor cannot hold ten different states. Technically this is not feasible: even coaxing a transistor into a third (half-open) or any other intermediate state is very difficult, especially if it must switch at high frequencies without failures.

Microelectronics handles the binary system successfully. The computer operates only with zeros and ones. Each switching cell (transistor) represents one bit. Modern computers group bits into a standard unit of 1 byte, which equals eight bits and can therefore hold 2^8 = 256 distinct values, counting zero:

• 1 bit - 2 values (0 or 1);
• 4 bits - 16 values (0000 to 1111);
• 8 bits - 256 values (00000000 to 11111111).

1 bit is the minimum unit for digital devices.
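The counts in the list above all follow one formula: n bits can represent 2**n distinct values. A quick check in Python:

```python
# Each added bit doubles the number of representable values.
for bits in (1, 4, 8):
    values = 2 ** bits
    print(f"{bits} bit(s): {values} values, max {values - 1}")

# A byte (8 bits) therefore covers the range 0..255, i.e. 256 values.
```

This is why a byte tops out at 255: the 256 available patterns must also encode zero.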

The computer stores the number in binary form and then converts it into a human-readable form. For example, the number 142 in binary is 10001110.
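Python's built-in `bin` and `int` functions make it easy to verify this conversion in both directions:

```python
n = 142
print(bin(n))              # 0b10001110 - decimal to binary
print(int("10001110", 2))  # 142 - binary string back to decimal
```

The `0b` prefix in Python's output is simply a marker that the digits which follow are binary, analogous to the `0x` prefix used for hexadecimal.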

### Hexadecimal system

The hexadecimal system differs from the familiar decimal and from binary in the number of symbols per digit. It uses sixteen values, from 0 to 15, where the values from ten to fifteen are written with Latin letters.


This format is used in programming as a compact way to write one byte of information, as well as in web design, for example, to encode colors. Instead of a long binary notation of eight characters, only two hexadecimal digits are needed. The maximum two-digit value in this system is FF, which equals 11111111 in binary or 255 in decimal.

Sometimes you may see a 0x prefix in front of a value; it means the number that follows should be interpreted as hexadecimal: 0x8E is the decimal number 142.
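Python understands the same `0x` prefix directly, and its built-ins show how one byte maps between the three notations:

```python
print(0x8E)                # 142 - hex literal, evaluated as decimal
print(int("8E", 16))       # 142 - hex string to decimal
print(hex(255))            # 0xff - decimal to hex
print(format(255, "02X"))  # FF - two uppercase hex digits, as in color codes
```

The `"02X"` format spec pads to two digits, which is exactly the per-channel form used in web color codes such as #FF0000.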

### So why are there such difficulties?

Of course, anyone would prefer to use a single number system everywhere. But alas, that is not possible in the digital world: the familiar decimal system is simple, but the computer cannot handle it at the hardware level, while binary is nearly impossible for a human to read. The hexadecimal system makes the code somewhat more readable, though only for the initiated who know how to work with HEX editors.

Therefore, the hexadecimal system serves as a kind of bridge between machine code and human-readable notation.