All quantities in computer science. What are bits and bytes (kilobyte, megabyte, gigabyte, terabyte), as well as features of units of measurement of information

Modern computers accept text, numeric, graphic, and audio information. The amount of information stored in a computer is measured by its "length" (or "volume"), which is expressed in bits. A bit is the minimum unit of information (from English BInary digiT, "binary digit"). Each bit can take the value 0 or 1. A bit is also the name of one binary digit of a computer memory cell. The following units are used to measure the amount of stored information:

1 byte = 8 bits;

1 KB = 1024 bytes (KB is read as "kilobyte");

1 MB = 1024 KB (MB is read as "megabyte");

1 GB = 1024 MB (GB is read as "gigabyte").

Bit (from English binary digit; also a play on words: English bit, "a little")

According to Shannon, the amount of information in bits is the binary logarithm of the number of equally probable outcomes or, in the general case, the sum over all outcomes of the products of each outcome's probability and the negative binary logarithm of that probability.

One digit of binary code (binary digit). It can take only one of two mutually exclusive values: yes/no, 1/0, on/off, etc.

A basic unit of measurement of the amount of information, equal to the amount of information contained in an experiment that has two equally probable outcomes. It is identical to the amount of information in the answer to a question that admits only the answers "yes" or "no" (that is, the amount of information that allows the question posed to be answered unambiguously). One binary digit holds one bit of information.

In computing and data networks, the values 0 and 1 are usually transmitted by different voltage or current levels. For example, in TTL chips, 0 is represented by a voltage in the range 0 to +3 V, and 1 by a voltage in the range +4.5 to +5.0 V.

The data transfer rate of a network is usually measured in bits per second. Notably, as transmission speeds grew, the bit also acquired a metric expression: length. In a modern gigabit network (1 Gbit/s), roughly 30 centimeters of wire correspond to one bit, which has significantly increased the complexity of network adapters. Earlier, in one-megabit networks, the length of a bit (hundreds of meters) was almost always clearly greater than the length of the cable between two devices.

In computing, especially in documentation and standards, the word “bit” is often used to mean binary digit. For example: the first bit is the first binary digit of the byte or word in question.

At present the bit is the smallest possible unit of information in computing, but intensive research in the field of quantum computing involves qubits.

Byte (English byte) - a unit of measurement of the amount of information, usually equal to eight bits; it can take 256 (2^8) different values.

In general, a byte is a sequence of bits of fixed length - the minimum addressable amount of memory in a computer. In modern general-purpose computers, a byte equals 8 bits. To emphasize that an eight-bit byte is meant, descriptions of network protocols use the term "octet".

Sometimes a byte is a sequence of bits that make up a subfield of a word. Some computers can address bytes of different lengths; this is supported by the field-extraction instructions LDB and DPB on the PDP-10 and by the functions of the same name in Common Lisp.

In the IBM 1401 a byte was 6 bits, as in the Minsk-32; in the BESM it was 7 bits; and in some computer models made by Burroughs Computer Corporation (now Unisys) it was 9 bits. Many modern digital signal processors use bytes 16 bits long or larger.

The name was first used in 1956 by W. Buchholz while designing the IBM 7030, one of the first supercomputers, for a group of bits transmitted simultaneously in input-output devices (six of them); later, within the same project, the byte was extended to eight (2^3) bits.

Multiple prefixes for forming derived units of the byte are used unconventionally. First, diminishing prefixes are not used at all: units of information smaller than a byte have special names (nibble and bit). Second, the magnifying prefixes denote a factor of 1024 = 2^10 rather than 1000 (a kilobyte is 1024 bytes, a megabyte is 1024 kilobytes or 1,048,576 bytes, and so on for gigabytes, terabytes, and petabytes). The discrepancy grows with the size of the prefix. It would be more correct to use the binary prefixes, but in practice they have not caught on, perhaps because of their awkward sound: kibibyte, mebibyte, etc.

Sometimes the decimal prefixes are used in their literal sense, for example when stating the capacity of hard drives: for them a gigabyte can mean a million kibibytes, i.e. 1,024,000,000 bytes, or even simply a billion bytes, rather than the 1,073,741,824 bytes used, for example, in memory modules.
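The discrepancy between the decimal and binary meanings is easy to check with a short Python sketch (illustrative only; the variable names are ours):

```python
# "Decimal" gigabyte used on drive labels vs. the binary value used for memory.
decimal_gb = 10**9   # 1,000,000,000 bytes
binary_gb = 2**30    # 1,073,741,824 bytes

print(binary_gb - decimal_gb)           # 73741824 bytes of difference
print(round(decimal_gb / binary_gb, 3)) # 0.931: a "decimal" GB is ~93% of a binary GB
```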

Kilobyte (KB) - a unit of measurement of the amount of information equal to 1024 (2^10) standard (8-bit) bytes. Used to indicate the amount of memory in various electronic devices.

The name "kilobyte" is generally accepted but formally incorrect, since the prefix kilo- means multiplication by 1,000, not 1,024. The correct binary prefix for 2^10 is kibi-.

Table 1.2 - Multiple prefixes for forming derived units

Megabyte (MB) - a unit of measurement of the amount of information equal to 1,048,576 (2^20) standard (8-bit) bytes or 1024 kilobytes. Used to indicate the amount of memory in various electronic devices.

The name "megabyte" is generally accepted but formally incorrect, since the prefix mega- means multiplication by 1,000,000, not 1,048,576. The correct binary prefix for 2^20 is mebi-. Large corporations that produce hard disks take advantage of this situation: when labeling their products, they treat a megabyte as 1,000,000 bytes and a gigabyte as 1,000,000,000 bytes.

The most original interpretation of the term megabyte is used by floppy disk manufacturers, who take it to mean 1,024,000 bytes. Thus a floppy disk labeled 1.44 MB actually holds only 1440 KB, that is, 1.41 MB in the usual sense.

In this regard, it turned out that a megabyte can be short, medium and long:

short - 1,000,000 bytes

medium - 1,024,000 bytes

long - 1,048,576 bytes

A gigabyte is a multiple unit of measurement of the amount of information, equal to 1,073,741,824 (2^30) standard (8-bit) bytes or 1024 megabytes.

The SI prefix giga- is used here erroneously, since it means multiplication by 10^9; for 2^30 the binary prefix gibi- should be used. Large corporations that produce hard drives take advantage of this situation: when labeling their products, they treat a gigabyte as 1,000,000,000 bytes.

A machine word is a machine- and platform-dependent quantity, measured in bits or bytes, equal to the width of the processor registers and/or the width of the data bus (usually a power of two). The word size often also coincides with the minimum size of addressable information (the bit width of the data located at a single address). The machine word determines the following characteristics of the machine:

the bit width of the data processed by the processor;

the addressable data width (the width of the data bus);

the maximum value of the unsigned integer type directly supported by the processor: if the result of an arithmetic operation exceeds this value, an overflow occurs;

the maximum amount of random-access memory directly addressable by the processor.

The maximum value of a word n bits long is easily calculated by the formula 2^n − 1.
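As an illustration, this small Python sketch (the helper name is ours) evaluates the formula for common word sizes:

```python
def max_unsigned(n):
    """Maximum value of an unsigned word n bits long: 2**n - 1."""
    return 2**n - 1

print(max_unsigned(8))   # 255 for an 8-bit byte
print(max_unsigned(16))  # 65535 for a 16-bit word
print(max_unsigned(32))  # 4294967295 for a 32-bit word
```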

Table 1.3 - Machine word size on various platforms

Our high-tech age is distinguished by its broad possibilities. With the development of electronic computers, amazing horizons have opened up. Any news of interest can now be found on the global network completely free of charge, without leaving home. This is a breakthrough in technology. But how can so much data be stored in computer memory, processed, and transmitted over long distances? What units of information measurement exist in computer science, and how do you work with them? Nowadays, not only people directly involved in writing computer programs but also ordinary schoolchildren should know the answers to these questions. After all, this is the basis of everything.

Information in Computer Science

We are accustomed to thinking that information is all the knowledge conveyed to us. But in computer science this word has a slightly different definition. Information is the basic component of all electronic computing. Why basic or fundamental? Because computer technology processes data, stores it, and communicates it to people. The minimum unit in which information is measured is the bit. Information is stored on the computer until the user wants to view it.

We are used to thinking of information as a unit of language. That is true, but computer science uses a different definition: information is data about the state, properties, and parameters of objects in the environment around us. It is clear that the more we learn about an object or phenomenon, the more we realize how scanty our understanding of it is. But now, thanks to such a huge volume of freely accessible materials from all over the world, it has become much easier to study, make new acquaintances, work, and relax by reading books or watching films.

Alphabetical aspect of measuring the volume of information

When printing documents for work, publishing articles on websites, or maintaining a personal blog on the Internet, we do not think about how data is exchanged between the user and the computer itself. How is a machine able to understand commands, and in what form does it store all its files? In computer science the unit of information measurement is the bit, which can store a zero or a one. The essence of the alphabetical approach is that a text is measured as a sequence of characters. But do not confuse the alphabetical approach with the content of the text - these are completely different things. The volume of such data is proportional to the number of characters entered. It follows that the information weight of a character from the binary alphabet equals one bit. As with any other measure, there are different units of information measurement in computer science, and the bit is the minimum one.

Content aspect of calculating the amount of information

Information measurement here is based on probability theory. In this case, the question considered is how much data is contained in the message a person receives. Theorems of discrete mathematics come into play: depending on the probability of the event, one of two different formulas is used for the calculation. The units of measurement of information in computer science remain the same. Calculating the amount of information in text and graphics via the content approach is much harder than via the alphabetical approach.

Types of information processes

There are three main types of processes carried out in an electronic computer:

  1. Data input. Through input devices - a keyboard, an optical mouse, or others - the computer receives information, converts it into binary code, and records it on the hard drive in bits, bytes, and megabytes. To convert between units of information in computer science there is a table from which you can calculate how many bits are in one megabyte and perform other conversions; the computer does all of this automatically.
  2. Storing files and data in the device's memory. A computer can remember everything in binary form; binary code consists of zeros and ones.
  3. The third main process in an electronic computer is data transfer. It is also carried out in binary form, but information is displayed on the monitor screen in symbolic or other form familiar to our perception.

Encoding information and its units of measurement

The unit of information measurement is the bit, which is easy to work with because it can hold the value 0 or 1. How does a computer encode ordinary decimal numbers into binary code? Let's look at a small example that explains the principle of encoding information in computer technology.

Suppose we have a number in the usual decimal system: 233. To convert it into binary form, divide it by 2 until the quotient becomes less than the divisor (in our case, 2).

  1. We begin division: 233/2=116 with remainder 1. We write the remainder down separately; the remainders will form the digits of the resulting binary code.
  2. The second step: 116/2=58 with remainder 0, which is again written down separately.
  3. 58/2=29 with remainder 0. Don't forget to write down this 0 as well: lose even one digit and you get a completely different value. This code will later be stored on the computer's hard drive as bits, the minimum units of information in computer science. Even 8th graders can cope with converting numbers from decimal to binary and back.
  4. 29/2=14 with remainder 1. We add it to the binary digits already written down.
  5. 14/2=7 with remainder 0.
  6. A little more and the binary code will be ready: 7/2=3 with remainder 1, which we add to the answer.
  7. 3/2=1 with remainder 1. From here we write down two ones: one as the remainder, the other as the last quotient, which is no longer divisible by 2.

Remember that the answer is written in reverse order: the remainder from the first step becomes the last digit, the remainder from the second the penultimate one, and so on. Our final answer is 11101001.
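The whole procedure above can be sketched in Python (an illustration of the algorithm; the function name is ours):

```python
def to_binary(n):
    """Convert a decimal number to binary by repeatedly dividing by 2
    and collecting the remainders, then reading them in reverse order."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder of division by 2
        n //= 2                        # integer quotient for the next step
    return "".join(reversed(remainders))

print(to_binary(233))  # 11101001
print(bin(233)[2:])    # 11101001, using the built-in conversion
```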

This is recorded in the computer's memory and stored in this form until the user wants to look at it on the monitor screen. Bit, byte, megabyte, gigabyte - these are the units of measurement of information in computer science, and it is in these quantities that binary data is stored in a computer.

Converting a number back from binary to decimal

To carry out the reverse conversion from a binary value to the decimal number system, use the positional formula. We count the digits in the binary value starting from 0: in our case there are 8 digits, so numbered from zero they run from 0 to 7. Now multiply each digit of the code by 2 raised to the powers 7, 6, 5, ..., 0.

1·2^7 + 1·2^6 + 1·2^5 + 0·2^4 + 1·2^3 + 0·2^2 + 0·2^1 + 1·2^0 = 233. Here is our starting number, the one we had before conversion to binary code.
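The same positional sum is easy to check in Python (an illustrative sketch):

```python
binary = "11101001"

total = 0
for position, digit in enumerate(reversed(binary)):
    total += int(digit) * 2**position  # digit times 2 to the power of its position

print(total)           # 233
print(int(binary, 2))  # 233, using the built-in parser
```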

Now you know how a computer stores data and what its minimum unit of information is.

Minimum unit of information: description

As mentioned above, the smallest measure of information is considered to be the bit. The word is of English origin and translates as "binary digit". Looked at from another angle, a bit is a memory cell in an electronic computer that stores a 0 or a 1. Bits can be converted into bytes, megabytes, and even larger amounts of information; the computer itself performs this procedure when it saves binary code into the memory cells of the hard drive.

Some computer users may want to convert measures of digital information from one to another manually and quickly. Online calculators have been developed for this purpose; they instantly carry out an operation that could take a long time by hand.

Units of measurement of information in computer science: table of quantities

Computers, flash drives, and other storage and processing devices differ in memory capacity, which is usually measured in gigabytes. To compare the units of information measurement in computer science in ascending order, consult the main table of quantities.

Using the maximum unit of information

Nowadays the National Security Agency plans to use the maximum amount of information, called a yottabyte, to store all the audio and video material received from public places where video cameras and microphones are installed. At the moment the yottabyte is the largest unit of information measurement in computer science. Is it the limit? It is unlikely that anyone can give an exact answer now.

Amount of information

The amount of information as a measure of reducing knowledge uncertainty.
(Substantive approach to determining the amount of information)

The process of learning about the surrounding world leads to the accumulation of information in the form of knowledge (facts, scientific theories, and so on). Receiving new information leads to an increase in knowledge or, as is sometimes said, to a decrease in the uncertainty of knowledge. If some message leads to a decrease in the uncertainty of our knowledge, then we can say that the message contains information.

For example, after a pass/fail test or an exam you are tormented by uncertainty: you do not know what grade you received. Finally, the teacher announces the results, and you receive one of two information messages, "pass" or "fail"; after an exam, you receive one of four information messages: "2", "3", "4", or "5".

An information message about the grade for a pass/fail test halves the uncertainty of your knowledge, since one of two possible information messages is received. An information message about an exam grade reduces the uncertainty of your knowledge fourfold, since one of four possible information messages is received.

Clearly, the more uncertain the initial situation (the greater the number of possible information messages), the more new information we receive from an information message (the more times the uncertainty of knowledge decreases).

Amount of information can be considered as a measure of reducing knowledge uncertainty when receiving information messages.

The approach discussed above, treating information as a measure of the reduction in the uncertainty of knowledge, allows information to be measured quantitatively. There is a formula relating the number of possible information messages N and the amount of information I carried by the received message:

N = 2^I (1.1)

Bit. To quantify any quantity, you must first define a unit of measurement. Thus the meter is chosen as the unit for measuring length, the kilogram for mass, and so on. Similarly, to determine the amount of information you must introduce a unit of measurement.

The unit of the amount of information is taken to be the amount of information contained in an information message that reduces the uncertainty of knowledge by half. This unit is called the bit.

Returning to the message about the test results discussed above: there the uncertainty is halved, so the amount of information the message carries equals 1 bit.
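Formula (1.1) can be inverted with a binary logarithm, I = log2(N), which is easy to try in Python (an illustrative sketch; the function name is ours):

```python
import math

def information_bits(n_messages):
    """Amount of information I in bits for N equally probable messages: N = 2**I."""
    return math.log2(n_messages)

print(information_bits(2))   # 1.0 bit: "pass" or "fail"
print(information_bits(4))   # 2.0 bits: grades "2", "3", "4", "5"
print(information_bits(32))  # 5.0 bits
```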

Derived units for measuring the amount of information. The smallest unit of measurement of the amount of information is the bit; the next largest unit is the byte:

1 byte = 8 bits = 2^3 bits.

In computer science the system of forming multiple units of measurement differs somewhat from that accepted in most sciences. Traditional metric systems of units, for example the International System of Units (SI), use the factor 10^n for multiple units, where n = 3, 6, 9, etc., corresponding to the decimal prefixes kilo (10^3), mega (10^6), giga (10^9), and so on.

In a computer, information is encoded using the binary sign system, and therefore multiple units of measurement of the amount of information use the factor 2^n.

Thus, units of measurement of the amount of information that are multiples of a byte are entered as follows:

1 kilobyte (KB) = 2^10 bytes = 1024 bytes;

1 megabyte (MB) = 2^10 KB = 1024 KB;

1 gigabyte (GB) = 2^10 MB = 1024 MB.

Control questions

    1. Give examples of information messages that lead to a reduction in knowledge uncertainty.
    2. Give examples of information messages that carry 1 bit of information.

Determining the amount of information

Determining the number of information messages. Using formula (1.1) you can easily determine the number of possible information messages if the amount of information is known. Suppose that at an exam you draw an exam ticket, and the teacher says that the visual information message about its number carries 5 bits of information. To determine the number of exam tickets, it is enough to find the number of possible information messages about their numbers using formula (1.1): N = 2^I = 2^5 = 32.

Thus, the number of exam tickets is 32.

Determining the amount of information. Conversely, if the possible number of information messages N is known, then to determine the amount of information carried by a message it is necessary to solve equation (1.1) for I.

Imagine that you control the movement of a robot and can set the direction of its movement using the information messages "north", "northeast", "east", "southeast", "south", "southwest", "west", and "northwest" (Fig. 1.11). How much information will the robot receive after each message?

There are 8 possible information messages, so formula (1.1) becomes an equation for I: 8 = 2^I.

Let's factor the number 8 on the left side of the equation and present it in power form:

8 = 2 × 2 × 2 = 2^3.

Our equation then becomes: 2^3 = 2^I.

The left and right sides of the equation are equal when the exponents of 2 are equal. Thus I = 3 bits, i.e., the amount of information each information message carries to the robot is 3 bits.

Alphabetical approach to determining the amount of information

In the alphabetical approach to determining the amount of information, one abstracts away from the content of the information and treats the information message as a sequence of signs of a certain sign system.

Information capacity of a sign. Imagine that an information message must be transmitted over a communication channel from sender to recipient. Let the message be encoded using a sign system whose alphabet consists of N signs (1, ..., N). In the simplest case, when the message code is one sign long, the sender can send one of N possible messages "1", "2", ..., "N", and it will carry amount of information I (Fig. 1.5).

Fig. 1.5. Transfer of information

Formula (1.1) relates the number of possible information messages N and the amount of information I carried by the received message. In the situation under consideration, N is the number of signs in the alphabet of the sign system, and I is the amount of information carried by each sign: N = 2^I.

Using this formula, you can, for example, determine the amount of information that a sign carries in the binary sign system:

N = 2 ⇒ 2 = 2^I ⇒ 2^1 = 2^I ⇒ I = 1 bit.

Thus, in the binary sign system one sign carries 1 bit of information. Interestingly, the unit of measurement "bit" itself got its name from the English phrase "Binary digiT" - "binary digit".

The information capacity of the sign of the binary sign system is 1 bit.

The more signs the alphabet of a sign system contains, the more information one sign carries. As an example, let us determine the amount of information carried by a letter of the Russian alphabet. The Russian alphabet has 33 letters, but in practice only 32 are often used to convey messages (the letter "ё" is excluded).

Using formula (1.1), we determine the amount of information carried by a letter of the Russian alphabet:

N = 32 ⇒ 32 = 2^I ⇒ 2^5 = 2^I ⇒ I = 5 bits.

Thus, a letter of the Russian alphabet carries 5 bits of information (with an alphabetic approach to measuring the amount of information).

The amount of information a sign carries depends on the probability of receiving it. If the recipient knows in advance exactly which sign will arrive, the amount of information received is 0. Conversely, the less probable a sign is, the greater its information capacity.

In written Russian the frequency of letters differs: on average, 1000 characters of meaningful text contain about 200 occurrences of the letter "a" and a hundred times fewer of the letter "f" (only 2). Thus, from the point of view of information theory, the information capacity of the letters of the Russian alphabet differs (the letter "a" has the smallest and the letter "f" the largest).

The amount of information in the message. A message consists of a sequence of characters, each of which carries a certain amount of information.

If the signs carry the same amount of information, then the amount of information I_c in a message can be calculated by multiplying the amount of information I_z carried by one sign by the code length (the number of signs in the message) K:

I_c = I_z × K

Thus, each digit of binary computer code carries 1 bit of information. Consequently, two digits carry 2 bits, three digits 3 bits, and so on: the amount of information in bits equals the number of digits of the binary computer code (Table 1.1).
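A minimal Python sketch of this multiplication (the function name is ours):

```python
def message_bits(bits_per_sign, n_signs):
    """I_c = I_z * K: total information when every sign carries the same amount."""
    return bits_per_sign * n_signs

print(message_bits(1, 8))   # 8 bits in an 8-digit binary code
print(message_bits(5, 10))  # 50 bits in 10 letters of a 32-letter alphabet
```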

Table 1.1. The amount of information carried by a binary computer code

To measure length there are units such as millimeter, centimeter, meter, kilometer. It is known that mass is measured in grams, kilograms, centners and tons. The passage of time is expressed in seconds, minutes, hours, days, months, years, centuries. The computer works with information and there are also corresponding units of measurement to measure its volume.

We already know that the computer perceives all information through zeros and ones. A bit is the smallest unit of information, corresponding to a single binary digit (“0” or “1”).

A byte consists of eight bits. Using one byte you can encode one character out of 256 possible (256 = 2^8). Thus one character equals one byte, that is, 8 bits:

1 character = 8 bits = 1 byte.
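Python makes it easy to peek at the 8-bit pattern behind a character (an illustration for one-byte encodings such as ASCII):

```python
text = "bit"
for ch in text:
    code = ord(ch)  # numeric code of the character; 0..255 fits in one byte
    print(ch, code, format(code, "08b"))  # the character, its code, its 8-bit pattern

print(len(text.encode("ascii")))  # 3 characters -> 3 bytes
```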

Studying computer literacy involves consideration of other, larger units of measurement of information.

Byte table: 1 byte = 8 bits

1 KB (1 kilobyte) = 2^10 bytes = 1024 bytes (approximately one thousand bytes, 10^3 bytes)

1 MB (1 megabyte) = 2^20 bytes = 1024 kilobytes (approximately one million bytes, 10^6 bytes)

1 GB (1 gigabyte) = 2^30 bytes = 1024 megabytes (approximately one billion bytes, 10^9 bytes)

1 TB (1 terabyte) = 2^40 bytes = 1024 gigabytes (approximately 10^12 bytes). A terabyte is sometimes called a "ton".

1 PB (1 petabyte) = 2^50 bytes = 1024 terabytes (approximately 10^15 bytes).

1 EB (1 exabyte) = 2^60 bytes = 1024 petabytes (approximately 10^18 bytes).

1 ZB (1 zettabyte) = 2^70 bytes = 1024 exabytes (approximately 10^21 bytes).

1 YB (1 yottabyte) = 2^80 bytes = 1024 zettabytes (approximately 10^24 bytes).


In the table above, the powers of two (2^10, 2^20, 2^30, etc.) are the exact values of the kilobyte, megabyte, and gigabyte.
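A sketch of such a conversion in Python (illustrative only; the names are ours), using the exact binary factor of 1024 per step of the table:

```python
UNITS = ["byte", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_bytes(value, unit):
    """Exact size in bytes, multiplying by 1024 = 2**10 for each step up the table."""
    return value * 1024 ** UNITS.index(unit)

print(to_bytes(1, "KB"))  # 1024
print(to_bytes(1, "MB"))  # 1048576
print(to_bytes(1, "GB"))  # 1073741824
```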

The question arises: does the byte table have a continuation? In mathematics there is the concept of infinity, symbolized by a figure eight on its side: ∞.

Clearly, the byte table could be continued by adding ever-larger powers of 10: 10^27, 10^30, 10^33, and so on ad infinitum. But why would that be necessary? For now, terabytes and petabytes are enough; in the future, perhaps even a yottabyte will not be.

Finally, a couple of examples of devices that can store terabytes and gigabytes of information. There is a convenient "terabyte" - an external hard drive connected to the computer through a USB port - on which you can store a terabyte of information. It is especially convenient for laptops (where replacing the hard drive can be problematic) and for backing up information. It is better to make backups in advance, not after everything is lost.

Computer literacy exercises:

1) How many bytes does the phrase "Today is July 7, 2011" contain (without the quotes)?

2) How many bytes (kilobytes) does one page of text take if there are 60 characters in one line and 40 lines on a page? What is the volume of one book consisting of 100 similar pages?

3) A "terabyte" is an external hard drive that connects to a computer via a USB connector and has a capacity of 1 terabyte. The instructions for its use say that the disk can hold 250 thousand music files or 285 thousand photographs. What are the size of one music file and the size of one photograph, according to the manufacturers of this device?

4) How many similar music files can fit on one 700 megabyte CD?

5) How many similar photos can fit on a 4 gigabyte flash drive?

Solutions:

1) "Today " - with a space (but without quotes): 8 bytes; "July 7 " - with two spaces (without quotes): 7 bytes; "2011." - with a space and a period (without quotes): 7 bytes. Total: 8 + 7 + 7 = 22 bytes - that is how much the phrase "Today is July 7, 2011" "weighs".

2) One line contains 60 characters, so the volume of one line is 60 bytes. A page holds 40 such lines of 60 bytes each, so the volume of one page of text is 60 × 40 = 2400 bytes ≈ 2.4 kilobytes = 2.4 KB.

The volume of one book is 2400 × 100 = 240,000 bytes ≈ 240 kilobytes = 240 KB.

3) The size of one music file that, according to the manufacturers, can be recorded on the "terabyte": 1,000,000,000,000 : 250,000 = (cancel three zeros in the dividend and the divisor) 1,000,000,000 : 250 = 4,000,000 bytes = 4 megabytes = 4 MB.

The size of one photograph that, according to the manufacturers, can be recorded on the "terabyte": 1,000,000,000,000 : 285,000 = (cancel three zeros in the dividend and the divisor) 1,000,000,000 : 285 ≈ 3,508,771.93 bytes ≈ (rounding) 3.5 megabytes = 3.5 MB.

4) A 700-megabyte CD can hold 700 MB : 4 MB = 175 music files of no more than 4 MB each. Here megabytes can be divided by megabytes directly, but when working with different byte volumes it is better to convert everything into bytes first and only then perform the arithmetic.

5) A 4 GB flash drive can hold 4,000,000,000 : 3,508,771.93 ≈ 1,139.99 ≈ (rounding) 1,140 photos of no more than 3.5 MB each.

You can also calculate approximately. Then a 4 GB flash drive can hold 4,000,000,000 : 3,500,000 = (cancel five zeros in the dividend and the divisor) 40,000 : 35 ≈ 1,142.86 ≈ (round down) 1,142 photos of no more than 3.5 MB each.
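The arithmetic of these solutions is easy to verify with a few lines of Python (an illustrative check, using the manufacturers' decimal terabyte):

```python
# Exercise 2: one page and one book (1 character = 1 byte).
page_bytes = 60 * 40
print(page_bytes, page_bytes * 100)  # 2400 240000

# Exercise 3: per-file sizes claimed for the "terabyte" drive.
terabyte = 10**12
print(terabyte // 250_000)           # 4000000 bytes, i.e. 4 MB per music file
print(round(terabyte / 285_000))     # 3508772 bytes, roughly 3.5 MB per photo

# Exercises 4-5: how many such files fit on a CD and a flash drive.
print(700_000_000 // 4_000_000)            # 175 music files on a 700 MB CD
print(4_000_000_000 * 285_000 // terabyte) # 1140 photos on a 4 GB flash drive
```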

We constantly measure things: time, length, speed, mass. Each quantity has its own unit of measurement, and often several - meters and kilometers, kilograms and tons, seconds and hours. All this is familiar. But how do you measure information? A unit of measurement was invented for information too, and it was named the bit.

A bit is the smallest unit of information.

One bit contains very little information: it can take only one of two values (1 or 0, yes or no, true or false). Measuring information in bits is very inconvenient - the numbers turn out to be huge. After all, the mass of a car is not measured in grams.

For example, if we express the capacity of a 4 GB flash drive in bits, we get 34,359,738,368 bits. Imagine coming to a computer store and asking the salesperson for a flash drive with a capacity of 34,359,738,368 bits. It is unlikely that he would understand you.
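The number itself is easy to reproduce with a quick Python check (using binary gigabytes, as in the units table above):

```python
capacity_bytes = 4 * 2**30        # 4 GB expressed in bytes
capacity_bits = capacity_bytes * 8

print(capacity_bits)  # 34359738368 bits, the figure quoted above
```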

Therefore, in computer science and in everyday life, units derived from the bit are used. They all share a remarkable property: they are powers of two in steps of 10.

So let's take the number 2 and raise it to the zero power; we get 1 (any number to the zero power equals 1). That is one byte: 2^0 bytes.

There are 8 bits in one byte.

Now raise 2 to the 10th power - we get 1024. This is a kilobyte (KB).

There are 1024 bytes in one kilobyte.

If we raise 2 to the 20th power, we get a megabyte (MB).

1 MB = 1024 KB.

Name       Symbol  Power
byte       B       2^0
kilobyte   KB      2^10
megabyte   MB      2^20
gigabyte   GB      2^30
terabyte   TB      2^40
petabyte   PB      2^50
exabyte    EB      2^60
zettabyte  ZB      2^70
yottabyte  YB      2^80

Understanding this topic will allow you to successfully