27.09.2018

Architecture of a Computer System

Kaliakparova Laura Yerlanovna
Student, KazUMOiMYa University
Computer system architecture is a fundamental discipline that studies the principles of selecting and interconnecting hardware components to build computing devices. Its goal is to design computers that best meet given requirements for functionality, performance, and cost. The course covers the key concepts: processors, memory, buses, and peripheral devices, as well as their integration into a single system. An understanding of computer architecture is essential for effective programming, for performance analysis, and for working with modern computing technologies. This material serves as a foundation for further in-depth study of computer science and engineering.


“Computer Architecture is the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.” - WWW Computer Architecture Page.

The term architecture, as applied to computer design, was first used in 1964 by Gene Amdahl, Gerrit A. Blaauw, and Frederick Brooks, Jr., the designers of the IBM System/360. They coined the term to refer to those aspects of the instruction set available to programmers, independent of the hardware on which the instruction set was implemented. The System/360 marked the introduction of families of computers, that is, a range of hardware systems all executing essentially the same basic machine instructions.

Instruction-set architecture

In the late 1970s, statisticians often had to be skilled FORTRAN programmers. Many were also sufficiently conversant with assembly-language programming for a particular computer that they wrote subprograms directly using the computer’s basic instruction set. The Digital VAX 11/780 was a typical scientific computer of the era. The VAX had over 300 different machine-level instructions, ranging from 2 to 57 bytes in length, and 22 different addressing modes. Machines such as the VAX, the Intel 80x86 family of processors (the processors on which the IBM PC and its successors are based), and the Motorola 680x0 processors (on which the Apple Macintosh is based) all had multiple addressing modes, variable-length instructions, and large instruction sets. By the middle of the 1980s, such machines were described as “complex instruction-set computers” (CISC). These architectures did have the advantage that each instruction/addressing-mode combination performed its special task efficiently, making it possible to fine-tune performance on large tasks with very different characteristics and computing requirements.

Hardware and Machine Organization

Up to this point we have talked largely about instruction-set aspects of computer architecture. The architectural advances of primary interest to statisticians today involve hardware and machine organization. The hardware architecture consists of low-level details of a machine, such as timing requirements of components, layouts of circuit boards, logic design, power requirements, and the like. Fortunately, few of these details affect the day-to-day work of statisticians aside from their consequences: processors continue to get smaller and faster, and memory continues to get larger and less expensive. At the level of machine organization, the computers we use are built of interdependent systems, of which the processor itself is just one. Others include memory and memory management systems, specialized instruction processors, busses for communication within and between systems and with peripheral devices, and input/output controllers. In multiprocessing architectures, the protocols for interaction between multiple processors (the multiprocessing control system) are included as well.

Floating-point computation

From the earliest days of digital computers, statistical computation has been dominated by floating-point calculations. “Scientific computers” are often defined as those which deliver high performance for floating-point operations. The aspects of computer design that make these operations possible have followed the same evolutionary path as the rest of computer architecture. Early computer designs incorporated no floating-point instructions. Since all numbers were treated as integers, programmers working with nonintegers had to represent a number using one integer to hold the significant digits coupled to a second integer to record a scaling factor. In effect, each programmer had to devise his or her own floating-point representation.
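The scaled-integer scheme described above can be sketched in a modern language. Python here is purely illustrative; the function names and the power-of-ten scale are choices made for this example, not a historical API:

```python
# A hand-rolled "floating point" number, as early programmers had to build it:
# one integer holds the significant digits, a second records a power-of-ten scale.

def make_scaled(digits, scale):
    """Represent the value digits * 10**scale as a (digits, scale) pair."""
    return (digits, scale)

def multiply(a, b):
    """Multiply two scaled numbers: multiply the digits, add the scales."""
    return (a[0] * b[0], a[1] + b[1])

def to_float(x):
    """Convert a (digits, scale) pair back to an ordinary float."""
    digits, scale = x
    return digits * 10.0 ** scale

# 3.14 is stored as 314 * 10**-2, and 2.5 as 25 * 10**-1.
a = make_scaled(314, -2)
b = make_scaled(25, -1)
print(multiply(a, b))  # (7850, -3), i.e. 7.85
```

The obvious cost is that every routine must carry and reconcile the scale factors by hand, which is exactly the burden that hardware floating-point instructions later removed.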
By the 1960s, some designs introduced instructions for floating-point operations, but many had none. (The IBM 1620 computer, for example, had fixed-precision integer arithmetic, but the programmer could control the number of digits of precision.) By the mid-1970s, virtually all scientific computers had floating-point instructions. Unfortunately for the practical statistician, the representation of floating-point numbers, the meaning of floating-point operations, and the results of an operation differed substantially from one machine to the next. The burden of knowing numerical analysis and good numerical algorithms fell heavily on the data analyst’s shoulders. In the early 1980s the Institute of Electrical and Electronics Engineers (IEEE) developed a standard for floating-point arithmetic [1]. To implement these standards, computer architects of that time developed a floating-point architecture separate from that of the principal processor, in effect moving the floating-point issues from the level of machine-specific definition of arithmetic to a common set of operations (an “instruction set”) whose output could be strictly defined. Examples include the Motorola 6888x and the Intel 80x87 floating-point processors (FPPs), which were designed in parallel with the 680x0 and 80x86 central processors, respectively. In later RISC-based architectures, floating-point processing is tightly coupled to the central instruction processor. The UltraSPARC-1 design incorporates an IEEE-compliant FPP which performs all floating-point operations (including multiplies, divides, and square roots). The PowerPC family also includes an integrated IEEE-compliant FPP. These processors illustrate the increased role of hardware and machine organization. The FPPs are logically external to the basic instruction processor.
Therefore the computer design must include channels for communication of operands and results, and must incorporate machine-level protocols to guarantee that the mix of floating-point and non-floating-point instructions is completed in the correct order. Today, many of these FPPs are part of the same integrated-circuit package as the main instruction processor. This keeps the lines of communication very short and achieves additional effective processor speed.

Parallel and Vector Architectures

The discussion above concerns the predominant computer architecture used by practicing statisticians, one based on the sequential single processor. The notion that speed and reliability could be enhanced by coupling multiple processors in a way that enabled them to share work is an obvious extension, and one that has been explored actively since at least the 1950s. Vector computers include instructions (and hardware!) that make it possible to execute a single instruction (such as an add) simultaneously on a vector of operands. In a standard scalar computer, computing the inner product of two vectors x and y of length p requires a loop within which the products x_i·y_i are calculated. The time required is that of p multiplications, together with the overhead of the loop. On a vector machine, a single instruction would calculate all p products at once. The highest-performance scientific computers available since 1975 incorporate vector architecture. Examples of early machines with vector capabilities include the CRAY-1 machine and its successors from Cray Research and the CYBER-STAR computers from CDC. Vector processors are special cases of parallel architecture, in which multiple processors cooperatively perform computations. Vector processors are examples of machines which can execute a single instruction on multiple data streams (SIMD computers).
In computers with these architectures, there is a single queue of instructions which are executed in parallel (that is, simultaneously) by multiple processors, each of which has its own data memory cache. Except for special purposes (such as array processing), the SIMD model is neither sufficiently flexible nor economically competitive for general-purpose designs.
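The inner-product contrast above can be sketched as follows. Python has no vector instructions, so the single `sum(map(...))` call merely stands in for the idea of one vector multiply-and-reduce instruction; on real SIMD hardware the p products would be computed simultaneously:

```python
from operator import mul

x = [1.0, 2.0, 3.0, 4.0]
y = [5.0, 6.0, 7.0, 8.0]

# Scalar style: an explicit loop, p multiplications plus loop overhead.
acc = 0.0
for i in range(len(x)):
    acc += x[i] * y[i]

# "Vector" style: one call, conceptually a single multiply-and-reduce.
dot = sum(map(mul, x, y))

print(acc, dot)  # 70.0 70.0
```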

Computers with multiple processors having individual data memory, and which fetch and execute their instructions independently (MIMD computers), are more flexible than SIMD machines and can be built by taking regular scalar microprocessors and organizing them to operate in parallel. Such multiprocessors are rapidly approaching the capabilities of the fastest vector machines, and for many applications have already supplanted them.
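A rough sketch of the MIMD organization, using Python threads as stand-ins for independent processors. This is an illustrative analogy only: CPython threads share one interpreter, so it models how the work is divided among independent instruction streams, not true hardware parallelism:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "processor" executes its own instruction stream
    # over its own portion of the data.
    return sum(v * v for v in chunk)

data = list(range(1, 101))
chunks = [data[i::4] for i in range(4)]  # deal the data out to four workers

# Four workers run independently on their own data (the MIMD idea).
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 338350, the same sum of squares a single processor would get
```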

Implementation

Once an instruction set and micro-architecture are designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps:

Logic Implementation designs the circuits required at a logic-gate level.

Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches, etc.) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at the logic-gate level, or even at the physical level if the design calls for it.

Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are created.

Design Validation tests the computer as a whole to see if it works in all situations and all timings. Once the design validation process starts, the design at the logic level is tested using logic emulators. However, this is usually too slow to run realistic tests. So, after making corrections based on the first test, prototypes are constructed using field-programmable gate arrays (FPGAs). Most hobby projects stop at this stage. The final step is to test prototype integrated circuits. Integrated circuits may require several redesigns to fix problems.

For CPUs, the entire implementation process is organized differently and is often referred to as CPU design.

Published at: https://www.prodlenka.org/metodicheskie-razrabotki/324374-architecture-of-a-computer-system
