While humans use the decimal (base-10) number system, computers operate in binary (base-2). Hexadecimal and octal are also widely used in programming and computing. This guide explains all three clearly.
Why Do Computers Use Binary?
Computers use binary because electronic circuits have exactly two states: on (1) and off (0). Binary maps directly to these physical states, making it the most reliable and efficient system for digital electronics.
Binary (Base-2)
Binary uses only the digits 0 and 1. Each position represents a power of 2, read right to left. For example, 1011 in binary is 1×8 + 0×4 + 1×2 + 1×1 = 11 in decimal.
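The positional weighting above can be sketched in a few lines of Python (the string "1011" is just the example value from the text):

```python
# Each bit is weighted by a power of 2, increasing right to left:
# 1011 = 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 8 + 0 + 2 + 1 = 11
value = sum(int(bit) * 2**power
            for power, bit in enumerate(reversed("1011")))
print(value)           # 11
print(int("1011", 2))  # Python's built-in base-2 parser agrees: 11
```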
Hexadecimal (Base-16)
Hex uses digits 0–9 and A–F (where A=10, B=11, C=12, D=13, E=14, F=15). It is commonly used for colour codes, memory addresses, and representing large binary numbers compactly.
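A quick illustration of why hex is compact: each hex digit corresponds to exactly four binary digits, so one byte is always two hex characters. A minimal sketch using Python's built-in conversions:

```python
print(int("FF", 16))  # each digit is 0-15: 15*16 + 15 = 255
print(hex(255))       # 0xff
# One hex digit per 4-bit group: 1111 1111 -> F F
print(f"{255:08b}")   # 11111111
```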
Octal (Base-8)
Octal uses digits 0–7. It was historically used in computing but is less common today. It is still used in Unix file permissions.
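The permission digits in a mode like 755 are just octal, one digit per rwx group, which a short Python snippet can confirm:

```python
print(int("755", 8))  # 7*64 + 5*8 + 5 = 493 in decimal
print(oct(493))       # 0o755
# 7 = rwx (111), 5 = r-x (101): owner rwx, group r-x, others r-x
print(f"{7:03b}", f"{5:03b}")  # 111 101
```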
Conversion Table
| Decimal | Binary | Hexadecimal | Octal |
|---|---|---|---|
| 0 | 0000 | 0 | 0 |
| 5 | 0101 | 5 | 5 |
| 10 | 1010 | A | 12 |
| 15 | 1111 | F | 17 |
| 16 | 10000 | 10 | 20 |
| 255 | 11111111 | FF | 377 |
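The table above can be reproduced with Python's format specifiers (`b`, `X`, `o` for binary, hex, and octal), which is a handy way to check any conversion:

```python
# Print each value from the table in all four bases
for n in (0, 5, 10, 15, 16, 255):
    print(f"{n:>7} | {n:>8b} | {n:>2X} | {n:>3o}")
```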
How to Convert Decimal to Binary
Repeatedly divide by 2 and record the remainders. For example, to convert 6:
6 ÷ 2 = 3 R 0
3 ÷ 2 = 1 R 1
1 ÷ 2 = 0 R 1
Read the remainders from bottom to top: 110
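The repeated-division procedure translates directly into a short Python function (the name `to_binary` is just for illustration):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record the remainder
        n //= 2                        # divide by 2 and continue
    # The last remainder is the most significant bit: read bottom to top
    return "".join(reversed(remainders))

print(to_binary(6))    # 110
print(to_binary(255))  # 11111111
```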
Real-World Uses
- Binary: CPU operations, boolean logic, file data
- Hexadecimal: CSS/HTML colour codes (#FF0000 = red), memory addresses, MAC addresses
- Octal: Unix/Linux file permissions (chmod 755)
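As a final example tying these uses together, a CSS colour code is just three bytes written in hex; a small sketch splitting `#FF0000` into its red/green/blue components:

```python
# Each pair of hex digits is one byte: RR GG BB
colour = "#FF0000"
r, g, b = (int(colour[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)  # 255 0 0  -> pure red
```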