Representation of data in a computer: binary coding of information

History of computer development

The earliest computing machine was the Analytical Engine. It was built from mechanical components, and its programming is associated with Ada Lovelace (hence the name of the Ada language). The next machine was the Mark I. Relays were used as memory elements, so the machine was slow (one operation at a time).

The Mark II (1946) used flip-flops as memory elements and performed hundreds of operations per second.

The first domestic (Soviet) computer was developed by Lebedev: the MESM, a small electronic calculating machine. Later the term "mainframe" was coined for a universal machine that solves a wide range of problems.

Supercomputers are the most expensive and fastest machines; they operate in real time.

They use water or gas cooling. They are programmed in assembly language, which runs directly on the processor core.

The IBM 360-390 series was also programmed in assembly language. It contained the ideas behind modern microprocessors.

CPU - the information-processing device; it may comprise several microprocessors.

Microprocessor - a processor implemented as a VLSI (very large scale integration) circuit.

Program - a sequence of commands executed by the processor.

Command (instruction) - an order to perform a certain action.

The first microprocessor was created in 1970; it was 4-bit and was called the MP 880.

The next processor was the 88.36.

Main characteristics of the microprocessor:

1) Data width - determines the amount of memory that can be connected to the processor.

2) Clock frequency - determines the internal speed of the processor and also depends on the clock frequency of the system-board buses.

3) Cache memory size - cache installed on the microprocessor substrate.

There are two levels:

1) L1 - located inside the core circuitry; it always operates at the processor's maximum frequency.

2) L2 - second-level memory connected to the microprocessor core via an internal bus.

4) Instruction set - the list and types of commands that the microprocessor executes automatically.

5) Operating supply voltage (power consumption)

6) Design features

7) Price

The von Neumann principles

Principles:

1) Data and commands are transmitted in binary code

2) Programs are executed linearly (sequentially)

3) The address of the subsequent command differs from the previous one by +1

4) Memory for storing data comprises main (operational) and external memory; it is connected to the user on one side and to the OS on the other.

SERIAL AND PARALLEL CODE

With a serial code, an n-bit item is transmitted sequentially, bit by bit, along a single conductor. The data transfer time is proportional to the number of bits: T = t*n.

With a parallel code, all n bits are transmitted simultaneously over n conductors. The transfer time equals the time of one bit, T = t, while the amount of equipment is n times greater.
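The two timing formulas above can be sketched as a small model. This is an illustration only; the function names and the unit time t are assumptions, not part of the text.

```python
# Timing model for serial vs. parallel transfer of an n-bit word.
# t is the time to transfer one bit over one conductor.

def serial_time(n_bits: int, t: float) -> float:
    """Serial code: bits travel one after another over a single wire, T = t*n."""
    return t * n_bits

def parallel_time(n_bits: int, t: float) -> float:
    """Parallel code: all n bits travel at once over n wires, T = t
    (at the cost of n times more conductors)."""
    return t

n, t = 16, 1  # a 16-bit word, one time unit per bit
print(serial_time(n, t))    # 16
print(parallel_time(n, t))  # 1
```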

COMPUTER BLOCK DIAGRAMS

The computer includes a number of devices interconnected by buses: address, data and control. Physically these buses take the form of an interface (cables or buses). There are several ways of connecting devices to one another.

The von Neumann structure uses the backbone construction method, i.e. a common bus.

Interface - a boundary that allows devices to be connected to one another in hardware or software.

Memory - intended for storing the initial data and the intermediate and final results.

Control device - generates all the control signals received by the other computer devices when processing information in accordance with the program. The control unit and the ALU together make up the processor, the information-processing device.

ROM - read-only memory. It serves only for reading, and retains its contents without consuming energy.

RAM - takes part in information processing in the ALU; it holds the numbers and commands on which actions are performed.

Computer - processes information in accordance with the program.

Presentation of data in a computer

Commands and data are represented in a computer in binary code; that is, all information resides in a homogeneous medium and is written into a bit grid, which reflects the physical dimensions of the computer's memory. In particular, the registers of a 32-bit computer are 32 bits wide. One binary digit is a bit, 8 binary digits are a byte, and 4 binary digits (half a byte) are a nibble.

Numbers can be represented in the following bit grids:

1) Half word – 2 bytes

2) Word – 4 bytes or 32 bits

3) Double word – 8 bytes or 64 bits

4) String – the number of words can reach 2^32, that is, 4 GB

Packed format

1) 2 single words

2) 2 double words

Data in modern computers are represented in a bit grid with a fixed point or with a floating point.

Fixed-point numbers are processed by an integer ALU. The point may be fixed at the beginning of the bit grid or at the end.

Floating-point numbers contain a mantissa and an exponent, each occupying its own group of digits.

Memory

Designed for storing data and programs.

Main characteristics:

1) Memory capacity - the number of bits, bytes or words stored simultaneously in the computer.

Kilo – 1024

Mega – 10^6

Giga – 10^9

Tera – 10^12

Peta – 10^15

2) Memory access time – the time during which memory is accessed for the purpose of writing or reading information.

3) Volatility or non-volatility when storing information

1) Non-volatile memory - ROM

2) Volatile memory - RAM, register, cache, etc.

4) Information storage time

5) Cost of storing one bit

Memory organization

It is one of the main parameters of a computer and has two meanings.

1) The number of words multiplied by the number of digits.

2) Memory can be one-dimensional, two-dimensional and three-dimensional.

a) One-dimensional memory - bits are written sequentially one after another onto the storage medium. Example: magnetic tape.

b) Two-dimensional memory - a matrix memory in which the elements are located at the intersections of the X and Y buses.

c) Three-dimensional memory - a cube made up of matrices, with the N cells arranged along the Z axis.

Modern semiconductor memory devices have a 3D organization and are implemented as integrated circuits on a chip.

To represent information in computer memory (both numeric and non-numeric), a binary coding method is used.

An elementary computer memory cell is 8 bits (1 byte) long. Each byte has its own number, called its address. The largest sequence of bits that a computer can process as a single unit is called a machine word. The length of a machine word depends on the processor's bit depth and can be 16, 32, 64 bits, etc.

BCD encoding

In some cases, when representing numbers in computer memory, a mixed binary-coded decimal number system is used, in which each decimal digit requires a nibble (4 bits) and the decimal digits 0 through 9 are represented by the corresponding binary numbers 0000 through 1001. For example, the packed decimal format, designed to store integers with 18 significant digits and occupying 10 bytes of memory (the most significant of which holds the sign), uses exactly this scheme.

Representing integers in two's complement

Another way to represent integers is the two's complement ("additional") code. The range of values depends on the number of memory bits allocated for their storage. For example, values of type Integer (all data type names here and below are given in the form accepted in the Turbo Pascal programming language; other languages have similar types under different names) lie in the range from -32768 (-2^15) to 32767 (2^15 - 1), and 2 bytes (16 bits) are allocated for storing them; values of type LongInt lie in the range from -2^31 to 2^31 - 1 and occupy 4 bytes (32 bits); values of type Word lie in the range from 0 to 65535 (2^16 - 1) and occupy 2 bytes; and so on.

As can be seen from the examples, the data can be interpreted as signed or unsigned numbers. When representing a signed quantity, the leftmost (most significant) digit indicates a positive number if it contains a zero, and a negative number if it contains a one.

In general, the bits of a machine word are numbered from right to left, starting from 0.

The two's complement code of a positive number coincides with its direct code. The direct code of an integer can be obtained as follows: the number is converted to the binary number system, and its binary notation is then padded on the left with as many leading zeros as the data type of the number requires.

For example, if the number 37 (10) = 100101 (2) is declared as a value of type Integer (sixteen-bit signed), its direct code is 0000000000100101; if it is a value of type LongInt (thirty-two-bit signed), its direct code is 00000000000000000000000000100101. For a more compact notation, the hexadecimal representation of the code is often used; the codes above can be rewritten as 0025 (16) and 00000025 (16) respectively.

The two's complement code of a negative integer can be obtained by the following algorithm:

  1. write down the direct code of the modulus of the number;
  2. invert it (replace ones with zeros, zeros with ones);
  3. add one to the inverse code.

For example, let us write down the two's complement code of the number -37, interpreting it as a value of type LongInt (thirty-two-bit signed):

  1. the direct code of the number 37 is 00000000000000000000000000100101;
  2. the inverse code is 11111111111111111111111111011010;
  3. the two's complement code is 11111111111111111111111111011011, or FFFFFFDB (16).
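The three-step algorithm above (direct code, invert, add one) can be sketched in Python. The function name and bit width parameter are illustrative, not from the text.

```python
# Two's complement ("additional") code of an integer, following the
# three-step algorithm: direct code -> invert -> add one.

def twos_complement(value: int, bits: int = 32) -> str:
    """Return the binary two's complement code of an integer as a bit string."""
    if value >= 0:
        return format(value, f'0{bits}b')                # direct code, zero-padded
    direct = format(-value, f'0{bits}b')                 # 1. direct code of |value|
    inverse = ''.join('1' if b == '0' else '0' for b in direct)  # 2. invert
    return format(int(inverse, 2) + 1, f'0{bits}b')      # 3. add one

code = twos_complement(-37)
print(code)                          # 11111111111111111111111111011011
print(format(int(code, 2), '08X'))   # FFFFFFDB
```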


In the physical world, any information must be represented somehow. When reading any article (book, review, note) published on the Internet or printed on paper, we perceive text and pictures. The image we see is focused on the retina of our eyes and transmitted in the form of electrical signals to the brain, which recognizes familiar symbols and thus receives information. In what form this information remains in our memory - in the form of images, logical diagrams or something else - may depend on the circumstances of its receipt, the goal set and the specific method of comprehension. Computer technology is more limited and works with a stream of zeros and ones (the so-called binary coding of information).

The base-two notation underlying everything was chosen historically. Back in the era of the first tube computers, engineers were thinking about how to encode information so that the cost of the whole device would be minimal. Since a vacuum tube has two possible operating modes (it either passes current or blocks it), base two seemed the most rational. With the move to semiconductor devices this conclusion could have been revisited, but engineers followed the well-worn path, preserving binary logic in ever more refined computers. Nevertheless, semiconductor physics also allows ternary coding of information: besides the absence of charge (ternary zero), both a positive (+1) and a negative (-1) charge are possible, corresponding to the three possible values of a trit, the elementary ternary memory cell. The same can be said of electric current: forward direction, reverse direction, or no current at all (again three values).

Choosing ternary would automatically solve the problem of encoding negative numbers, which in the binary system is solved by treating the first bit as a sign bit and introducing the so-called inverse and complement codes. Much has been written about the intricacies of this operation, both on the Internet and in the literature on assembly language. With ternary logic a number could be written, for example, like this: "+00-0+0+-". Here "+" is shorthand for the value +1, "-" for -1, and zero speaks for itself. Translated into ordinary notation this gives: +3^8 + 0 + 0 - 3^5 + 0 + 3^3 + 0 + 3^1 - 3^0 = 6561 - 243 + 27 + 3 - 1 = 6347. The advantages of ternary logic would also show when working with various kinds of data: where a question requires a one-word answer, a binary bit can carry one of two answers ("yes" or "no"), while a ternary trit can carry one of three ("yes", "no", "undefined"). Experienced programmers know how often one of three possible answers must be stored, so for the undefined value something has to be invented, for example an additional (binary) parameter recording whether the value has been defined at this point in time.
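The balanced-ternary example above ("+00-0+0+-") can be decoded with a few lines of Python. The helper below is a hypothetical illustration, not a real library function.

```python
# Decode balanced-ternary notation: '+' = +1, '-' = -1, '0' = 0,
# most significant digit first.

def balanced_ternary_to_int(s: str) -> int:
    digit = {'+': 1, '0': 0, '-': -1}
    value = 0
    for ch in s:
        value = value * 3 + digit[ch]   # Horner's scheme in base 3
    return value

print(balanced_ternary_to_int('+00-0+0+-'))  # 6347
```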

Binary coding of information is also inconvenient for working with graphic images. The human eye perceives three primary colors: blue, green and red, so each graphic pixel is ultimately encoded by four bytes, three of which give the intensities of the primary colors while the fourth is held in reserve. This approach obviously reduces the efficiency of computer graphics, but so far nothing better has been proposed.

From a mathematical point of view, a ternary computer should be the most efficient. The rigorous calculations are quite involved, but their result boils down to the following statement: the closer the base of a computer's native number system is to the number e (approximately 2.72), the higher its efficiency. It is easy to see that three is much closer to 2.72 than two. We can only hope that one day the engineers responsible for electronics production will turn their attention to the ternary number system. Perhaps that will be the breakthrough after which artificial intelligence is created?


One byte is enough to encode characters; in this case 256 characters can be represented (with decimal codes from 0 to 255). The character set of personal computers is most often an extension of the ASCII code (American Standard Code for Information Interchange).



When recovering a number from its two's complement code, it is first necessary to determine its sign. If the number turns out to be positive, simply convert its code to the decimal number system. For a negative number, the following algorithm must be performed:

1. subtract 1 from the code;

2. invert the code;

3. convert to the decimal number system and write the resulting number with a minus sign.

Examples. Let's write down the numbers corresponding to the additional codes:

a. 0000000000010111.

Since the most significant bit is zero, the result will be positive. This is the code for the number 23.

b. 1111111111000000.

This is the code of a negative number; we apply the algorithm:

1. 1111111111000000 (2) - 1 (2) = 1111111110111111 (2) ;

2. 0000000001000000;

3. 1000000 (2) = 64 (10), so the code represents the number -64.
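The decoding algorithm above can be sketched in Python for a code of any width; the function name is illustrative.

```python
# Recover a signed integer from its two's complement code:
# if the sign bit is 0, convert directly; otherwise subtract 1,
# invert, convert to decimal, and attach the minus sign.

def from_twos_complement(code: str) -> int:
    if code[0] == '0':                      # sign bit zero: positive number
        return int(code, 2)
    n = int(code, 2) - 1                    # 1. subtract 1 from the code
    inverted = n ^ ((1 << len(code)) - 1)   # 2. invert all bits
    return -inverted                        # 3. decimal value with a minus sign

print(from_twos_complement('0000000000010111'))  # 23
print(from_twos_complement('1111111111000000'))  # -64
```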

A slightly different method is used to represent real numbers in the memory of a personal computer. Let us consider the representation of quantities with floating point.

Any real number can be written in standard form M*10^p, where 1 ≤ M < 10 and p is an integer. For example, 120100000 = 1.201*10^8. Since each position of a decimal number differs from its neighbor by a power of 10, multiplying by 10 is equivalent to shifting the decimal point one position to the right, and dividing by 10 shifts it one position to the left. The example above can therefore be continued: 120100000 = 1.201*10^8 = 0.1201*10^9 = 12.01*10^7 ... The decimal point "floats" within the number and no longer marks the absolute boundary between the integer and fractional parts.

In this notation, M is called the mantissa of the number and p its exponent (order). To maintain maximum accuracy, computers almost always store the mantissa in normalized form, which here means that the mantissa is a number lying between 1 (10) and 2 (10) (1 ≤ M < 2). The base of the number system, as noted above, is 2. The way a floating-point mantissa is stored assumes that the binary point occupies a fixed place: in fact, it is assumed to follow the first binary digit, i.e. normalization makes the first bit of the mantissa equal to one, thereby placing its value between one and two. The space allotted to a floating-point number is divided into two fields: one contains the sign and value of the mantissa, the other the sign and value of the exponent.



An IBM PC personal computer with the 8087 math coprocessor supports the following real types (value ranges are given in absolute value):

Note that the most significant bit allocated for the mantissa is bit number 51, i.e. the mantissa occupies the low 52 bits. In format diagrams, a bar marks the position of the binary point. The point should be preceded by the bit of the integer part of the mantissa, but since this bit always equals one, it is not stored in memory (it is implied). The exponent is not stored as an integer in two's complement code. To simplify calculations and comparisons of real numbers, the exponent is stored as a biased number: a bias is added to the exponent before it is written to memory. The bias is chosen so that the minimum exponent value corresponds to zero. For example, for the Double type the exponent occupies 11 bits and ranges from 2^-1023 to 2^1023, so the bias is 1023 (10) = 1111111111 (2). Finally, bit number 63 holds the sign of the number.

From the above, the following algorithm for obtaining the representation of a real number in computer memory follows:

1. convert the modulus of the given number to the binary number system;

2. normalize the binary number, i.e. write it in the form M*2^p, where M is the mantissa (its integer part equals 1 (2)) and p is the exponent, written in the decimal system;

3. add the bias to the exponent and convert the biased exponent to the binary number system;

4. taking into account the sign of the given number (0 for positive, 1 for negative), write down its representation in computer memory.

Example. Let us write down the code of the number -312.3125.

1. The binary notation of the modulus of this number is 100111000.0101.

2. We have 100111000.0101 = 1.001110000101*2^8.

3. The biased exponent is 8 + 1023 = 1031; in binary, 1031 (10) = 10000000111 (2).

4. The number is negative, so the sign bit is 1, and the representation in memory is 1 10000000111 001110000101 0...0 (52 fraction bits in all), i.e. C073850000000000 (16).
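The worked example above can be checked against the actual 64-bit layout using Python's standard struct module, which packs a double in IEEE-754 format.

```python
# Verify the encoding of -312.3125 by unpacking the real bit layout of a double.
import struct

bits = int.from_bytes(struct.pack('>d', -312.3125), 'big')

sign = bits >> 63                    # bit 63: sign
exponent = (bits >> 52) & 0x7FF      # bits 52..62: biased exponent, 11 bits
mantissa = bits & ((1 << 52) - 1)    # bits 0..51: stored fraction (leading 1 implied)

print(sign)                      # 1  (negative)
print(format(exponent, '011b'))  # 10000000111  (8 + 1023 = 1031)
print(format(bits, '016X'))      # C073850000000000
```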

All information in a computer is stored in the form of sets of bits, that is, combinations of 0 and 1. Numbers are represented by binary combinations in accordance with the number formats adopted for work in a given computer, and the symbolic code establishes the correspondence of letters and other symbols to binary combinations.

There are three number formats for numbers:

    binary fixed point;

    binary floating point;

    binary coded decimal (BCD).

In fixed-point binary format, numbers can be represented as unsigned (codes) or signed. To represent signed numbers, modern computers mainly use two's complement code. As shown earlier, this means that for a given length of the bit grid one more negative number can be represented than positive ones. Although operations in a computer are performed on binary numbers, the more convenient octal, hexadecimal and decimal representations are often used to write them in programming languages, in documentation and on the display screen.

In binary-coded decimal format, each decimal digit is represented by its 4-bit binary equivalent. There are two main varieties of this format: packed and unpacked. In the packed BCD format, a string of decimal digits is stored as a sequence of 4-bit groups. For example, the number 3904 is represented as the binary number 0011 1001 0000 0100. In the unpacked BCD format, each decimal digit occupies the low tetrad of an 8-bit group (byte), while the contents of the high tetrad are determined by the encoding system used in the computer and are immaterial here. The same number 3904 in unpacked format occupies 4 bytes and looks like:

xxxx0011 xxxx1001 xxxx0000 xxxx0100.
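The packed and unpacked layouts described above can be sketched in Python; the helper names are illustrative.

```python
# Packed vs. unpacked BCD for a decimal number, e.g. 3904.

def to_packed_bcd(n: int) -> bytes:
    """Two decimal digits per byte, one per 4-bit tetrad."""
    s = str(n)
    if len(s) % 2:
        s = '0' + s                      # pad to an even number of digits
    return bytes(int(s[i]) << 4 | int(s[i + 1]) for i in range(0, len(s), 2))

def to_unpacked_bcd(n: int) -> bytes:
    """One decimal digit per byte; the high tetrad is left zero here."""
    return bytes(int(d) for d in str(n))

print(to_packed_bcd(3904).hex())    # '3904'     (two bytes: 0x39, 0x04)
print(to_unpacked_bcd(3904).hex())  # '03090004' (four bytes)
```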

Floating-point numbers are processed by a special coprocessor (FPU, floating-point unit), which, starting with the i486, is part of the microprocessor chip. Its data are stored in 80-bit registers. By controlling the coprocessor settings, you can change the range and accuracy of this type of data (Table 14.1).

Table 14.1.

Data type                       Size             Range                        Processing block
Unsigned integers               1 double word
Signed integers                 1 double word    -2147483648...+2147483647
                                1 quad word
Floating-point numbers:
  real number
  double precision                               ≈(0.18*10^309)
  with increased accuracy                        ≈(0.12*10^4933)
Binary-coded decimal numbers:
  unpacked                      1 byte
  packed                        1 byte
  packed                        10 bytes         0...(99...99), 18 digits

Organization of RAM

Main memory (OP) is the primary store of information. It is organized as a one-dimensional array of memory cells 1 byte in size. Each byte has a unique 20-bit physical address in the range from 00000h to FFFFFh (hereafter the hexadecimal number system is used for addresses, marked by the symbol h at the end of the code). Thus, the size of the memory address space is 2^20 = 1 MB. Any two adjacent bytes in memory can be treated as a 16-bit word. The low byte of a word has the lower address and the high byte the higher address, so the hexadecimal number 1F8Ah, occupying a word, is stored in memory as the sequence 8Ah, 1Fh. The address of a word is the address of its low byte. Therefore a 20-bit memory address can be regarded both as a byte address and as a word address.
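The byte order just described (1F8Ah stored as 8Ah, 1Fh) is little-endian, and Python's int.to_bytes makes it easy to see.

```python
# Little-endian storage of the 16-bit word 1F8Ah: low byte at the lower address.
word = 0x1F8A
print(word.to_bytes(2, 'little').hex())  # '8a1f' (memory order: 8Ah, then 1Fh)
print(word.to_bytes(2, 'big').hex())     # '1f8a' (big-endian, for comparison)
```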

Commands and data bytes and words may be placed at any address, which saves memory by filling it more fully. However, to save program execution time it is advisable to place data words in memory starting at an even address, since the microprocessor transfers such words in a single bus cycle. A word with an even address is said to be aligned on a word boundary. Unaligned data words with odd addresses are permissible, but their transmission requires two bus cycles, which reduces computer performance. Note that the required number of read cycles for a data word is initiated automatically by the microprocessor. Keep in mind that in stack operations the data words must be aligned and the stack pointer initialized to an even address, since such operations involve only data words.

The instruction stream is divided into bytes when the instruction queue inside the microprocessor is full. Therefore, command alignment has virtually no performance impact and is not used.

The address space of main memory is divided into segments. A segment consists of adjacent memory cells and is an independent, separately addressable memory unit, which in the basic personal computer architecture has a fixed capacity of 2^16 = 64 KB. Each segment is assigned a starting (base) address: the address of the first byte of the segment in the memory address space. The physical address of a cell is the sum of the segment address and the offset of the cell relative to the beginning of the segment (the intra-segment offset). 16-bit words are used to store the segment address and the offset.

To obtain the 20-bit physical address, the microprocessor automatically performs the following operations: the segment base address is multiplied by 16 (shifted 4 bits to the left) and summed with the offset (Fig. 14.3). The result is a 20-bit physical address. During summation a carry may occur out of the most significant bit; it is ignored. As a result the memory appears to be organized on a ring principle: the cell with the maximum address FFFFFh is followed by the cell with address 00000h.

Fig. 14.3. Scheme for obtaining a physical address

Segments are not physically tied to a specific memory address, and each memory cell can belong to several segments at the same time, since the base address of a segment can be determined by any 16-bit value. Segments may be contiguous, non-overlapping, partially or completely overlapping. At the same time, in accordance with the algorithm for calculating the physical address, the starting addresses of segments are always a multiple of 16.
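The address calculation and ring wrap-around described above can be sketched in a few lines; the function name is illustrative.

```python
# Real-mode physical address: segment * 16 + offset,
# with the carry out of bit 20 ignored (the "ring" organization).

def physical_address(segment: int, offset: int) -> int:
    return (segment * 16 + offset) & 0xFFFFF   # keep 20 bits, drop the carry

print(format(physical_address(0x1234, 0x0010), '05X'))  # 12350
print(format(physical_address(0xFFFF, 0x0010), '05X'))  # 00000 (wraps past FFFFFh)
```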

Logical and arithmetic foundations and principles of computer operation

Literature

Textbooks for the course

    Gurov V.V., Chukanov V.O. Fundamentals of the theory and organization of computers

    Varfolomeev V.A., Letsky E.K., Shamrov M.I., Yakovlev V.V. IBM eServer zSeries architecture and technologies Internet University of Information Technologies - INTUIT.ru, 2005

    Bogdanov A.V., Korkhov V.V., Mareev V.V., Stankova E.N. Architectures and topologies of multiprocessor computing systems Internet University of Information Technologies - INTUIT.ru, 2004

    Novikov Yu.V., Skorobogatov P.K. Fundamentals of microprocessor technology Internet University of Information Technologies - INTUIT.ru, 2006

