In computing, the bit is the smallest unit of data that can be stored. A bit can be set to 0 (off, no charge) or 1 (on, charged). Physically, a bit is most commonly stored as an electrical charge held above or below a reference level in a single capacitor within a memory device.
Some multiples of the bit are the nibble, kilobit (kb), megabit (Mb), gigabit (Gb) and terabit (Tb). The notation used to refer to a bit is the lowercase "b"; do not confuse it with the uppercase "B" notation (kB, MB, GB, TB), which refers to a byte.
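The bit/byte distinction above matters in practice when converting between the two notations. A minimal sketch in Python (the 100 Mb/s link speed is an illustrative value, not from the source):

```python
# Lowercase "b" counts bits, uppercase "B" counts bytes: 1 byte = 8 bits.
link_speed_megabits = 100            # e.g. a "100 Mb/s" network link
link_speed_megabytes = link_speed_megabits / 8  # same rate expressed in MB/s

print(link_speed_megabytes)  # 12.5
```

This is why a "100 Mb/s" connection downloads at most about 12.5 MB of data per second.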
Computers can be programmed to manipulate and store bits in different ways, but the most common grouping, defined in ISO/IEC 2382-1:1993, is 8 bits forming 1 byte (or octet).
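The grouping of 8 bits into one byte can be sketched as follows; the bit pattern chosen here (the ASCII code for "A") is an illustrative example, not taken from the source:

```python
# Pack 8 individual bits (most significant first) into one byte.
bits = [0, 1, 0, 0, 0, 0, 0, 1]  # 0b01000001

byte = 0
for b in bits:
    byte = (byte << 1) | b  # shift accumulated value left, append next bit

print(byte)       # 65
print(chr(byte))  # 'A' -- the ASCII character this octet encodes
```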
Bits are commonly transmitted and stored via light (reading and writing CDs and DVDs, fiber-optic cables), via electromagnetic waves (wireless networks), or via magnetic polarization (hard disk drives, HDDs).
In data communication (file transfers, communication between systems, the internet ...), the metric definition is used: 1 kilobit equals 1,000 bits. The binary definition (1 kilobyte equals 1,024 bytes) is used in areas such as data storage (hard disk, RAM, ROM, flash ...), but not to express bandwidth or throughput.
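The metric/binary split can be made concrete with a short sketch; the 1 MiB sample size is an illustrative value:

```python
# Metric (SI) vs. binary unit definitions for the "kilo" prefix.
KILOBYTE = 1000   # metric definition, used for data rates and communication
KIBIBYTE = 1024   # binary definition, common for memory and storage sizes

size_bytes = 1_048_576  # example: exactly 1024 * 1024 bytes

print(size_bytes / KILOBYTE)  # 1048.576 (metric "kB")
print(size_bytes / KIBIBYTE)  # 1024.0   (binary "KiB")
```

The roughly 2.4% gap per prefix step is why a drive marketed as "1 TB" (metric) reports a smaller capacity when an operating system measures it in binary units.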