Convert data storage units: bytes, KB, MB, GB, TB, PB.
| Unit | Name | 1 bit equals |
|---|---|---|
| B | Byte | 0.125 |
| KB | Kilobyte | 1.25e-4 |
| MB | Megabyte | 1.25e-7 |
| GB | Gigabyte | 1.25e-10 |
| TB | Terabyte | 1.25e-13 |
| PB | Petabyte | 1.25e-16 |
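As a minimal sketch (the dict and function names are illustrative, not from any library), a bit count can be expressed in any decimal (SI) storage unit with a small lookup table; binary (IEC) units such as KiB and MiB would divide by 1024 per step instead of 1000:

```python
# Decimal (SI) storage units in bytes. Binary IEC units (KiB, MiB, ...)
# would use 1024 ** n instead of 1000 ** n.
UNITS = {"B": 1, "KB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12, "PB": 1e15}

def bits_in_unit(bits: float, unit: str) -> float:
    """Express a bit count in the given storage unit (bits / 8 / unit size)."""
    return bits / 8 / UNITS[unit]

print(bits_in_unit(1, "B"))   # 0.125
print(bits_in_unit(1, "KB"))  # 0.000125
```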
Formula: Byte = Bit × 0.125
Multiply any bit value by 0.125 to get bytes. One bit equals 0.125 B.
Reverse: Bit = Byte × 8
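The formula and its reverse translate directly to code (a sketch; function names are illustrative):

```python
def bits_to_bytes(bits: float) -> float:
    """Convert bits to bytes: 1 bit = 0.125 B."""
    return bits * 0.125

def bytes_to_bits(num_bytes: float) -> float:
    """Reverse conversion: 1 B = 8 bit."""
    return num_bytes * 8

print(bits_to_bytes(64))    # 8.0
print(bytes_to_bits(1.25))  # 10.0
```

Because 0.125 is an exact power of two in binary floating point, both directions round-trip without error.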
Common bit values with real-world context — factor: 1 bit = 0.125 B
| Bit (bit) | Byte (B) | Context |
|---|---|---|
| 1 bit | 0.125 B | Single bit |
| 8 bit | 1 B | One byte |
| 16 bit | 2 B | Short integer (16-bit) |
| 32 bit | 4 B | Integer (32-bit) |
| 64 bit | 8 B | Double/pointer (64-bit) |
| 128 bit | 16 B | UUID / IPv6 address (128-bit) |
| 256 bit | 32 B | SHA-256 hash (256-bit) |
| 1,000 bit | 125 B | 125 bytes |
| 8,000 bit | 1,000 B | 1 KB |
| 1e+06 bit | 1.25e+05 B | 125 KB |
| 8e+06 bit | 1e+06 B | 1 MB |
| 1e+09 bit | 1.25e+08 B | 125 MB |
| 8e+09 bit | 1e+09 B | 1 GB |
| 1e+12 bit | 1.25e+11 B | 125 GB |
| 1e+15 bit | 1.25e+14 B | 125 TB |
Bits ÷ 8 = bytes. Always exact — 8 bits per byte by definition.
8 bits = 1 byte, 16 bits = 2 bytes, 32 bits = 4 bytes, 64 bits = 8 bytes.
100 Mbit/s internet = 100/8 = 12.5 MB/s download speed.
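The divide-by-8 rule is easy to check in a few lines (a sketch; the link speed and file size are example values):

```python
# Network speeds are quoted in bits per second; file sizes in bytes.
link_mbps = 100             # advertised link speed, megabits per second
speed_mb_s = link_mbps / 8  # megabytes per second

file_mb = 500               # example file size in megabytes
seconds = file_mb / speed_mb_s

print(speed_mb_s)  # 12.5
print(seconds)     # 40.0
```

This is why a "100 Mbit/s" connection downloads a 500 MB file in about 40 seconds, not 5.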
Bit-level conversion shows up across several domains:
- Low-level programming works at the bit level for register sizes, flag fields, and protocol frame analysis.
- Cryptography specifies key lengths in bits; AES-128, AES-256, and RSA-2048 are standard.
- Network protocol design lays out packet headers with bit-level field specifications.
- Digital electronics programs bit-level logic for custom circuits.
- Information theory analyzes entropy and bit-per-symbol efficiency of compression algorithms.
- Security analysis evaluates brute-force difficulty based on key size in bits.
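For the flag-field and packet-header cases above, masks and shifts carve a single byte into bit-level fields. A short sketch, using a hypothetical 1-byte header layout (4-bit version, 1-bit flag, 3-bit type):

```python
# Hypothetical 1-byte header: 4-bit version, 1-bit flag, 3-bit message type.
def pack_header(version: int, flag: int, msg_type: int) -> int:
    """Pack three bit-level fields into a single byte."""
    return (version & 0xF) << 4 | (flag & 0x1) << 3 | (msg_type & 0x7)

def unpack_header(byte: int) -> tuple[int, int, int]:
    """Split the byte back into its three fields."""
    return (byte >> 4) & 0xF, (byte >> 3) & 0x1, byte & 0x7

b = pack_header(version=4, flag=1, msg_type=5)
print(f"{b:08b}")        # 01001101
print(unpack_header(b))  # (4, 1, 5)
```

Real protocol headers (IPv4, TCP) use exactly this technique, just with more fields.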
The bit is the most fundamental unit of information in computing and communications, representing a binary value of 0 or 1. Claude Shannon formalized the bit in his landmark 1948 paper 'A Mathematical Theory of Communication'.
Bits define network speeds (Mbps, Gbps), pixel color depths (8-bit, 16-bit), and cryptographic key lengths. Internet connection speeds are quoted in bits per second (bps), not bytes per second.
Interesting fact: The term 'bit' was coined by John Tukey in 1947 as a contraction of 'binary digit'. A standard coin flip is a perfect analog for a single bit.
The byte is the fundamental unit of digital information, almost universally defined as 8 bits. The term was coined by Werner Buchholz in 1956 during the design of the IBM Stretch computer. Early computers used variable byte sizes; the 8-bit standard emerged through IBM's System/360 in 1964.
Bytes are the basic unit for file sizes, memory capacities, and data transfer rates in computing. A single ASCII character occupies one byte; a UTF-8 emoji typically takes 3-4 bytes.
Interesting fact: The word 'byte' was intentionally misspelled from 'bite' to avoid accidental misreading as 'bit'. A single byte can store 256 distinct values (0–255).
Converting bit to byte is a common task in computing, networking, and data management. Storage manufacturers, operating systems, and network equipment often express data sizes in different units — understanding the conversion is essential for comparing specifications, planning storage capacity, and interpreting network speed versus file size relationships.
As a practical reference: 5 bit = 0.625 B and 10 bit = 1.25 B. For larger quantities, 100 bit = 12.5 B. The reverse conversion uses the factor 8, so 1 B = 8 bit. Note that decimal prefixes (KB=1,000, MB=1,000,000) differ from binary prefixes (KiB=1,024, MiB=1,048,576) — always check which standard your software or hardware uses.
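The decimal-versus-binary distinction in the note above is easy to demonstrate (a minimal sketch; the constants follow the SI and IEC definitions):

```python
KB = 1_000   # SI decimal kilobyte
KiB = 1_024  # IEC binary kibibyte

size_bytes = 1_000_000_000   # a "1 GB" drive as marketed (decimal)
print(size_bytes / KB**3)    # 1.0  decimal gigabytes (GB)
print(size_bytes / KiB**3)   # ~0.93 binary gibibytes (GiB)
```

This gap is why a drive sold as 1 GB shows up as roughly 0.93 GiB in operating systems that report binary units.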
The bit-to-byte factor of 1 bit = 0.125 B is exact by definition (8 bits per byte), so that conversion introduces no error. Derived values for larger units are computed with IEEE 754 double-precision arithmetic and are accurate to at least 8 significant figures.