Basic Computer Organization

In a computer system, the various parts are configured so that they work together to accomplish a given task. A computer can be thought of as an electronic brain that solves mathematical problems, supplies needed information, or issues control commands. A computer is a high-speed electronic machine that processes data. Computers are widely used to perform arithmetic calculations, but modern computers serve many other purposes as well.

Computer Organization

There are five main tasks.

  1. Inputting
  2. Outputting
  3. Controlling
  4. Storing
  5. Processing

Operational flow chart

The operational flow chart of computer organization generally involves three main stages: Fetch, Execute, and the Instruction Cycle. Below is an overview of each stage and how they fit together:

Fetch stage

Fetch the next instruction from memory: The program counter (PC) holds the address of the next instruction to be fetched. The PC value is loaded into the memory address register (MAR), and the instruction is fetched from memory into the memory data register (MDR).

Increment the program counter: After fetching the instruction, the PC is incremented to point to the next instruction in sequence.

Decode and Execute Stage

Decode the instruction: The fetched instruction is passed to the instruction decoder, which determines the type of instruction and the operands involved.

Execute the instruction: Based on the instruction type and operands, the appropriate operation is performed. This may involve arithmetic or logical operations, memory access, control flow changes, or other actions.

Instruction Cycle

Repeat the fetch and execute stages: After executing an instruction, the process repeats by going back to the fetch stage. The program counter is updated with the address of the next instruction, and the cycle continues.
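The cycle described above can be sketched as a short Python loop. The three-instruction program and the LOAD/ADD/HALT instruction set are invented purely for illustration:

```python
# Minimal fetch-decode-execute sketch. The instruction set (LOAD, ADD, HALT)
# and the program in memory are made-up examples.

memory = [
    ("LOAD", 5),   # acc = 5
    ("ADD", 7),    # acc = acc + 7
    ("HALT", 0),
]

pc = 0        # program counter: address of the next instruction
acc = 0       # accumulator
running = True

while running:
    # Fetch: read the instruction at the address held in PC
    opcode, operand = memory[pc]
    pc += 1                    # increment PC to point to the next instruction
    # Decode and execute
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # 12
```

Running it leaves 12 in the accumulator, the result of LOAD 5 followed by ADD 7.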

CPU Organization

The Central Processing Unit (CPU) is the primary component of a computer responsible for executing instructions and performing calculations. It is often referred to as the “brain” of the computer. The organization of a CPU typically consists of the following components:

  1. Control Unit (CU)
  2. Arithmetic Logic Unit (ALU)
  3. Registers
  4. Cache
  5. Bus Interface Unit (BIU)
  6. Memory Management Unit (MMU)

Hardwired Control Unit

Hardwired control is a mechanism that uses a finite state machine (FSM) to generate control signals. It is designed as a sequential logic circuit, and the final circuit is constructed by physically connecting components such as gates, flip-flops, and counters. For this reason it is called a hardwired controller.

The figure shows a 2-bit sequence counter used to generate control signals. The counter outputs are decoded to produce the required timing signals in sequential order.
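As a rough illustration, the counter-plus-decoder arrangement can be modeled in Python; the timing-signal names T0 through T3 follow the usual convention for the decoder outputs:

```python
# A 2-bit counter cycles through 00, 01, 10, 11. A 2-to-4 decoder turns each
# count into one of four timing signals T0..T3, exactly one active at a time.

def decode(count):
    """Return the four timing signals [T0, T1, T2, T3] for a 2-bit count."""
    return [1 if count == t else 0 for t in range(4)]

# Advance the counter through one full cycle and show the decoded signals
for count in range(4):
    print(count, decode(count))
```

Each clock pulse advances the count, so the active timing signal walks from T0 to T3 and wraps around.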

The main goals of hardwired control are to minimize circuit cost and to maximize speed. Here are some of the methods used to develop hardwired control logic.

  • Sequence Counter Method – The sequence counter method is the most convenient method used to develop medium complexity controllers.
  • Delay Element Method – This method uses synchronized delay elements to generate a series of control signals.
  • State Table Method – This method applies the traditional algorithmic approach to controller design using the classic state table method.

Microprogrammed Control Unit

A control unit whose binary control values are stored as words in memory is called a microprogrammed control unit.

The controller establishes a specific set of signals on each clock cycle of the system clock to cause an instruction to be executed. Each of these outputs produces one micro-operation, including register transfers. Thus, a set of control signals for a specific micro-operation is formed that can be stored in memory.

Each bit of a microinstruction is associated with one control signal. When the bit is set, the control signal is active; when cleared, it is inactive. These microinstructions are stored sequentially in an internal “control” memory. The control unit of a microprogrammed computer is, in effect, a computer within the computer.
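A minimal Python sketch of this bit-per-signal scheme; the signal names and the bit layout are assumptions chosen only for illustration:

```python
# One control signal per microinstruction bit: set = active, clear = inactive.
# Bit 0 (the least significant bit) corresponds to SIGNALS[0], and so on.
SIGNALS = ["PC_out", "MAR_in", "MEM_read", "MDR_out", "IR_in"]

def active_signals(microinstruction):
    """Return the names of the control signals whose bits are set."""
    return [name for i, name in enumerate(SIGNALS)
            if microinstruction & (1 << i)]

# 0b00011 activates PC_out and MAR_in, e.g. the first step of a fetch
print(active_signals(0b00011))
```

Sequencing through a list of such words, one per clock cycle, is what the microprogram in control memory amounts to.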

Stack Organization

Stacks are also known as last-in, first-out (LIFO) lists. The stack is an important feature of the processor: it stores data in such a way that the last item stored is the first one retrieved. A stack is a block of memory together with an address register that holds the stack address, known as the stack pointer (SP). The stack pointer always holds the address of the element at the top of the stack.

You can insert elements onto the stack or remove elements from it. The insert operation is called a push, and the remove operation is called a pop. On a computer stack, these operations are implemented by incrementing or decrementing the SP register.

Register Stack

A stack can be implemented as a block of memory words or as a collection of registers. Consider a 64-word register stack arranged as shown. The stack pointer register contains a binary number representing the address of the element at the top of the stack. Three elements, A, B, and C, have been placed on the stack.

General Register Organization

A set of flip-flops forms a register. Registers are small, high-speed storage locations within the CPU, alongside the combinational circuits that carry out data processing. Operands are placed in registers before being processed, and using registers speeds up program execution.

Registers perform two important functions in CPU operation:

  • They provide temporary storage, giving executing programs quick access to data when it is needed.
  • They hold information about the state of the CPU and of the currently executing program.

Single Organization

A single organization refers to the architectural design and structure of a computer system where a single central processing unit (CPU) performs all the processing tasks. This is in contrast to a multiple organization, where multiple processors or cores work together to execute instructions.

In a single organization, the CPU is responsible for fetching instructions from memory, decoding them, executing the necessary operations, and storing the results back in memory. It follows the von Neumann architecture, which is the foundation for most modern computer systems.

The single organization typically includes the following components:

  1. Central Processing Unit (CPU).
  2. Memory.
  3. Input/Output (I/O) Devices.
  4. System Bus.

Addressing Mode

An addressing mode is a way of specifying the operands of an instruction. A microprocessor’s job is to execute a series of instructions stored in memory to perform a specific task.

The task requires:

  • An operator or opcode that specifies the action to be performed.
  • Operands that define the data to be used in the operation.

Types of Addressing Modes

  1. Immediate
  2. Direct Addressing
  3. Register Addressing
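A hedged Python sketch of how each of these modes locates its operand; the memory and register-file contents are made up for illustration:

```python
# How the three addressing modes locate an operand.
memory = {100: 42}        # address -> stored value (example contents)
registers = {"R1": 7}     # register name -> stored value (example contents)

def fetch_operand(mode, value):
    if mode == "immediate":      # the operand is the value itself
        return value
    if mode == "direct":         # the value is a memory address
        return memory[value]
    if mode == "register":       # the value names a register
        return registers[value]
    raise ValueError(f"unknown mode: {mode}")

print(fetch_operand("immediate", 5))    # 5
print(fetch_operand("direct", 100))     # 42
print(fetch_operand("register", "R1"))  # 7
```

The same instruction field can thus mean a literal, an address, or a register name, depending on the mode bits.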

Instruction Formats

A computer carries out its work according to the instructions provided to it. A computer instruction is divided into groups called fields, and each field holds a different piece of information. Since everything in a computer is represented as 0s and 1s, the value in each field determines what the CPU does. The most common fields are:

  • An operation field that specifies the operation to be performed, such as add.
  • An address field that gives the location of an operand, that is, a register or memory location.
  • A mode field that specifies how the address of the operand is to be interpreted.
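For illustration, here is how such fields might be extracted from a 16-bit instruction word with shifts and masks. This particular layout (1 mode bit, 3 opcode bits, 12 address bits) is an assumption chosen for the example, not a fixed standard:

```python
# Split a 16-bit instruction word into mode, opcode, and address fields.
#   bit 15      : mode field (1 bit)
#   bits 14..12 : operation field (3 bits)
#   bits 11..0  : address field (12 bits)

def decode(word):
    mode = (word >> 15) & 0x1
    opcode = (word >> 12) & 0x7
    address = word & 0xFFF
    return mode, opcode, address

# Build a word with mode=1, opcode=0b010, address=0x0AB, then take it apart
word = (1 << 15) | (0b010 << 12) | 0x0AB
print(decode(word))  # (1, 2, 171)
```

Real instruction decoders do the same splitting in hardware, routing each field to the unit that consumes it.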

Instructions are of variable length depending on the number of addresses they contain. In general, CPU configurations are of three types according to the number of address fields.

  • Single accumulator organization
  • General register organization
  • Stack organization

Data Transfer Instructions

A data transfer instruction moves information from one location in the computer to another without changing its contents. The most common transfers occur between memory and processor registers, between processor registers and input or output devices, and among the processor registers themselves.

Data Manipulation Instructions

Data manipulation instructions operate on data and provide the computational capabilities of the computer. The data manipulation instructions of a typical computer are generally divided into three main types.

1. Arithmetic instructions
2. Logical and bit manipulation instructions
3. Shift instructions

Input/Output Subsystem

A computer’s I/O subsystem provides an efficient method of communication between the central system and the external environment. It handles all I/O operations of the computer system.

Peripheral Devices

Input or output devices connected to a computer are called peripherals. These devices are designed to read information into or out of memory under commands from the CPU and are considered part of the overall computer system.

Example: Keyboards, display devices, and printers are common peripherals.

There are three types of peripherals.

1. Input peripherals
2. Output peripherals
3. Input-Output peripherals

Bus Structure

A system bus typically has 50 to hundreds of individual lines, each dedicated to a specific function. These lines can be divided into three functional groups: data lines, address lines and control lines. Let’s discuss them one by one.

1. Data Lines
2. Address Lines
3. Control Lines

Programming Registers

Programming registers involves accessing and manipulating the data stored in the registers using assembly or machine language instructions. The specific registers available and their functionality depend on the processor architecture. Below are some common types of registers found in most processors:

  1. General-Purpose Registers.
  2. Accumulator.
  3. Program Counter (PC).
  4. Stack Pointer (SP).
  5. Instruction Register (IR).
  6. Status Register/Flags.

Memory Organisation

Memory is made up of cells, and each cell is identified by a unique number called its address. Each cell responds to control signals such as “read” and “write” generated by the processor when it is about to read or write an address. Whenever the CPU executes a program, that program must reside in memory, so instructions must be transferred from memory to the CPU. To access an instruction, the CPU generates a memory request.

  • Memory Request.
  • Word Size.

Memory Hierarchy

Memory devices are an essential component of any digital computer as they are essential for storing programs and data.

In general, memory blocks can be divided into two categories.

1. The memory unit that communicates directly with the CPU is called main memory. Main memory is often referred to as RAM (random-access memory).

2. A block of memory that provides backup storage is called secondary storage. For example, magnetic disks and magnetic tapes are the most commonly used secondary storage devices.

In addition to the basic classification of storage devices, the memory hierarchy includes all storage devices available in a computer system, from slow but large secondary memory to relatively fast main memory.

Main Memory

Main memory serves as the central storage device in a computer system. This is a relatively large and fast memory used to store programs and data while working.

The main technology used for main memory is based on semiconductor integrated circuits. The integrated circuit for main memory is divided into two main blocks.

1. Random Access Memory (RAM) integrated circuit chip
2. ROM (Read Only Memory) Integrated Circuit Chip

Auxiliary Memory

Auxiliary (secondary) memory is the cheapest, highest-capacity, and slowest storage in a computer system. It is where programs and data are kept for long-term storage or when not directly in use. The most common secondary storage devices in computer systems are magnetic disks and tapes.

Associative Memory

Associative memory can be thought of as a block of memory in which stored data can be identified for access by the content of the data itself rather than by its address or memory location.

Associative memory is often referred to as content-addressable memory (CAM).

When a write operation is performed in associative memory, no address or location is assigned to the word; the memory itself finds an unused empty location in which to store it.

Conversely, when a word is to be read from associative memory, the content of the word, or part of it, is specified. The memory locates the word that matches the specified content and marks it for reading.
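The match-by-content behaviour can be modeled in Python. A real CAM compares all stored words in parallel in hardware; the loop below only imitates the result:

```python
# Content-addressable lookup: every stored word is compared against the
# argument under a mask, and all matching words are returned. The stored
# words below are example contents.
words = [0b1010_1100, 0b1010_0011, 0b0111_1100]

def cam_search(argument, mask, words):
    """Return the stored words whose masked bits equal the argument's."""
    return [w for w in words if (w & mask) == (argument & mask)]

# Match on the high nibble only: two of the three words begin with 1010
print([bin(w) for w in cam_search(0b1010_0000, 0b1111_0000, words)])
```

The mask lets the search key specify only part of a word, which is how CAM supports partial-content matches.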

Cache Memory

Cache memory is one of the fastest memories. It is more expensive than main memory but cheaper than registers. Cache memory acts as a buffer between main memory and the processor and is matched to the speed of the processor. Because the most frequently used data and instructions are kept in the cache, the CPU does not need to access main memory constantly, which reduces the average memory access time.

Levels of Cache Memory

  1. Level 1 (L1) or Registers
  2. Level 2 (L2) or Cache Memory
  3. Level 3 (L3) or Main Memory
  4. Level 4 (L4) or Secondary Memory

Virtual Memory

The concept of virtual memory (VM) is similar to that of cache memory. While cache addresses the speed requirements of memory access on the CPU side, virtual memory addresses the capacity requirements of main memory (MM) by mapping it onto secondary memory, i.e. the hard drive. Both cache and virtual memory are based on the principle of locality of reference. Virtual memory creates the illusion of unlimited memory being available to the process/programmer.

In a virtual memory implementation, processes see resources from a logical perspective, while the CPU sees them from a physical perspective. Each program or process starts at a starting address of “0” (the logical view). However, there is only one real address zero in main memory, and many processes reside in main memory at any given moment (the physical view). Memory-management hardware provides the mapping between the logical and physical views.
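A minimal Python sketch of this logical-to-physical mapping through a page table; the 1 KiB page size and the table contents are assumed example values:

```python
# Logical-to-physical address translation through a page table.
PAGE_SIZE = 1024                   # assumed 1 KiB pages
page_table = {0: 3, 1: 7, 2: 1}    # logical page -> physical frame (example)

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # which logical page
    offset = logical_address % PAGE_SIZE     # position inside the page
    if page not in page_table:
        raise KeyError(f"page fault on page {page}")
    # Same offset, but in the physical frame the page is mapped to
    return page_table[page] * PAGE_SIZE + offset

# Logical address 1029 = page 1, offset 5 -> frame 7, offset 5
print(translate(1029))  # 7173
```

Every process can use logical address 0 because each one has its own table; a missing entry models a page fault, where the page must be brought in from secondary memory.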

Memory Management

Memory is an important part of a computer used to store data. Managing it is critical because the amount of main memory available to a computer system is very limited, and at any given moment many processes compete for it. Moreover, multiple processes run concurrently to improve performance, which requires keeping several processes in main memory at once, making efficient management all the more important.

Role of Memory Management

Here are the important roles of memory management in computer systems:

The memory manager tracks the state of memory cells, whether empty or allocated. It provides an abstraction over main memory so that software perceives a large amount of memory as being allocated to it.

A memory manager allows computers with small amounts of main memory to run programs that are larger than the size or amount of available memory. This is achieved by moving information back and forth between primary and secondary memory using the concept of paging.

The memory manager is responsible for protecting the memory allocated to each process from being corrupted by other processes. If this is not guaranteed, the system may exhibit unpredictable behaviour.

The memory manager must ensure that memory space is shared between processes. So two programs can reside in the same memory location, but at different times.

Hit/Miss Ratio

The hit/miss ratio refers to the ratio of cache hits to cache misses in a cache memory system. The cache is a small, fast memory component that stores frequently accessed data or instructions from a larger, slower main memory.

When the processor needs to access data, it first checks the cache. If the data is found in the cache (cache hit), it can be accessed much faster than if it had to be retrieved from the main memory. However, if the data is not present in the cache (cache miss), the processor must fetch it from the main memory, resulting in a longer access time.
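The effect of the hit ratio on performance can be shown with a short calculation; the 20 ns cache and 100 ns main-memory timings are assumed example values, not measurements:

```python
# Hit ratio and average access time for a simple one-level cache.
# Timing assumptions: 20 ns cache access, 100 ns main-memory access.

def average_access_time(hits, misses, cache_ns=20, memory_ns=100):
    total = hits + misses
    hit_ratio = hits / total
    # On a hit only the cache is touched; on a miss the cache is checked
    # first and then main memory is accessed as well.
    return hit_ratio * cache_ns + (1 - hit_ratio) * (cache_ns + memory_ns)

# 900 hits out of 1000 accesses: a 90% hit ratio
print(average_access_time(hits=900, misses=100))  # roughly 30 ns
```

Even a modest drop in hit ratio raises the average noticeably, which is why cache replacement policies matter.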

Magnetic Disk Performance

A magnetic disk, also known as a hard disk drive (HDD), is a type of non-volatile storage device used in computer systems for long-term data storage. It consists of one or more rotating disks coated with a magnetic material, along with read/write heads that move over the surface of the disks to access and modify data.

In terms of performance, magnetic disks have several characteristics that impact their overall speed and efficiency:

  1. Capacity.
  2. Sequential and Random Access.
  3. Data Transfer Rate.
  4. Seek Time.
  5. Latency.
  6. Fragmentation.
  7. Cache.
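Several of the factors above combine into the average time to read one block, which can be sketched as follows; the drive parameters in the example are assumed values for illustration:

```python
# Average time to read one block from a disk:
#   seek time + rotational latency + transfer time.

def disk_access_ms(seek_ms, rpm, block_kb, transfer_mb_per_s):
    rotation_ms = 60_000 / rpm        # time for one full revolution
    latency_ms = rotation_ms / 2      # average latency: half a revolution
    transfer_ms = block_kb / 1024 / transfer_mb_per_s * 1000
    return seek_ms + latency_ms + transfer_ms

# Assumed drive: 9 ms average seek, 7200 rpm, 4 KB blocks, 150 MB/s transfer
print(round(disk_access_ms(9, 7200, 4, 150), 3))  # 13.193
```

Note that seek and rotational latency dominate; the transfer of a small block is almost negligible, which is why sequential access is so much faster than random access.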

Magnetic Tape

A tape transport comprises the electromechanical and electronic components that make up the mechanism and control circuitry of a tape unit. The tape itself is a strip of plastic coated with a magnetic recording medium.

Bits are recorded as magnetic spots on the tape along several tracks. Seven or nine bits are recorded together across the tracks to form a character, along with a parity bit. A read/write head is mounted over each track so that data can be written and read as a sequence of characters. The tape can be stopped, started moving forwards or backwards, or rewound; however, it cannot be started or stopped fast enough between individual characters. For this reason, data is written in blocks called records, with gaps of unrecorded tape between records where the tape can stop.

The tape starts moving within a gap and reaches constant speed by the time it enters the next record. Each record on the tape has an identification bit pattern at its beginning and end. By reading the bit pattern at the beginning, the tape controller identifies the record number.

I/O Organization

In computer organization, I/O (Input/Output) organization refers to the design and management of input and output operations within a computer system. It involves the hardware and software components responsible for transferring data between the computer and external devices such as keyboards, displays, storage devices, and networks.

The primary goals of the I/O organization are:

  1. Device Independence.
  2. Efficient Data Transfer.
  3. I/O Controllers and Interfaces.
  4. Device Drivers.
  5. I/O Scheduling.

Peripheral Devices

A peripheral is defined as a device that provides input/output capabilities to a computer and acts as a secondary computing device without resource-intensive functions.

However, in general, peripherals are not required for a computer to perform its basic tasks and can be viewed as enhancing the user experience. A peripheral device is a device that is connected to a computer system but is not part of the computer system’s basic architecture. In general, more and more people are using the term “peripheral” in a broader sense to refer to devices outside the computer case.

Classification of Peripheral devices:

1. Input Devices
2. Output Devices
3. Storage Devices

I/O Interface

The method used to transfer information between internal memory and external I/O devices is known as the I/O interface. The CPU interacts with the peripherals attached to a computer system through special communication channels, which bridge the gap between the CPU and the peripherals. Between the CPU and a peripheral sits a special hardware component called an interface module, which controls and synchronizes all input and output transfers.

Mode of Transfer:

Binary information received from an external device is usually stored in a memory block. Information transferred from the CPU to external devices comes from memory blocks. The CPU simply processes the information, but the source and destination are always blocks of memory. Data transfer between CPU and I/O devices can be done in various modes.

Data transfer to and from a peripheral device can be done in one of three possible ways.

  • Programmed I/O.
  • Interrupt-initiated I/O.
  • Direct memory access (DMA).

Transfer Modes

In computer organization, data transfer can occur through various modes depending on the system architecture and the devices involved. Here are some common modes of data transfer:

  1. Programmed I/O (PIO).
  2. Interrupt-Driven I/O.
  3. Direct Memory Access (DMA).
  4. Programmed I/O with Interrupts.
  5. Burst Mode.

Priority Interrupt

A priority interrupt is a mechanism used to handle multiple interrupts that occur simultaneously or in rapid succession. It allows the system to determine which interrupt request should be serviced first based on a predetermined priority level assigned to each interrupt source.

When an interrupt occurs, the interrupt controller compares the priority of the new interrupt request with the priority of the currently executing interrupt or process. If the new interrupt has a higher priority, the controller interrupts the current process and transfers control to the higher-priority interrupt handler. This ensures that critical or time-sensitive tasks are given precedence over lower-priority tasks.

The priority level of each interrupt source is typically determined by hardware or software settings. Higher-priority interrupts are often assigned lower numerical values, with “0” representing the highest priority. Interrupt sources with the same priority level can be serviced in a predetermined order, such as based on their position in the interrupt request register.
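Under the lower-number-is-higher-priority convention described above, selecting the next interrupt to service reduces to finding the pending source with the smallest level, as this sketch shows (the source names and levels are invented examples):

```python
# Select the pending interrupt with the highest priority.
# Convention from the text: lower numbers mean higher priority, 0 highest.

def highest_priority(pending):
    """pending: dict mapping source name -> priority level.
    Returns the source to service next, or None if nothing is pending."""
    if not pending:
        return None
    return min(pending, key=pending.get)   # smallest level wins

pending = {"timer": 1, "disk": 3, "power_fail": 0}
print(highest_priority(pending))  # power_fail
```

A hardware priority encoder performs the same selection combinationally, in a single step rather than a scan.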

Direct memory access

DMA stands for Direct Memory Access and is a method of transferring data from the computer’s RAM to other parts of the computer without processing by the CPU. Most data entering or output from a computer is processed by the CPU, but some data does not require processing or can be processed by other devices.

In these situations, DMA can save processing time and is a more efficient way to move data from computer memory to another device. A device must be assigned to a DMA channel to use DMA. Each type of port on your computer has a set of DMA channels that can be assigned to each attached device. For example, PCI controllers and hard disk controllers each have their own set of DMA channels.

DMA Transfer Types

  1. Memory To Memory Transfer
  2. Auto initialize
  3. DMA Controller

Input-Output Processor

DMA data transfer mode reduces the load on the CPU when handling I/O operations. It also allows for parallelism in CPU and I/O operations. This parallelism is necessary to avoid wasting valuable CPU time when working with I/O devices that are much slower than the CPU. The concept of a DMA operation can be extended so that the CPU does not participate in I/O operations. This led to the development of special purpose processors called input/output processors (IOPs) or I/O channels.

An input/output processor (IOP) is like a CPU that handles the details of I/O operations. It has more features than a typical DMA controller. IOPs can retrieve and execute their own commands specifically to describe I/O operations. In addition to I/O-related operations, it can perform other processing operations such as arithmetic, logic, branching, and transcoding. The main memory block plays an important role. It communicates with the processor through direct memory access.

Advantages

In I/O-processor-based systems, I/O devices can access main memory directly, without intervention by the processor.
The IOP is used to address the problems that arise with the direct memory access (DMA) method.

Serial communication

Serial communication is a popular method for exchanging data between computers and peripheral devices. Serial transmission between sender and receiver follows rigorous protocols that ensure safety and reliability and ensure longevity. Many devices, from personal computers to mobile devices, use serial communication. Let’s take a closer look at the basics.

Serial communication transmits digital binary data one bit at a time. It uses a variety of serial communication interfaces and protocols, including RS232, RS485, SPI, and I2C.

I/O Controller

An I/O controller is a set of ICs that help transfer data between the CPU and the motherboard. The main purpose of this system is to support the interaction of control units (CUs) with peripherals. Simply put, an I/O controller helps connect and control various peripherals that are input/output devices. It is usually installed on the computer motherboard. However, you can also use it as a replacement accessory or add more peripherals to your computer.

I/O controllers are also referred to as channel I/O, direct memory access controllers, peripheral processors, or I/O processors.

I/O Controller (IOC) in Detail

As CPU speeds increased, faster data transfer between peripherals and control units was required. I/O controllers work by receiving commands from the CPU and then sending commands to the intended device. I/O controllers also control the transfer of data from peripherals. In this way, the I/O controller saves wasted CPU processing power when transferring data.

Faster I/O controllers allow faster communication with the CPU, resulting in faster processing speeds. I/O controllers are usually pre-installed on the computer motherboard. However, these devices may only work with some generic devices. Some unique devices may have separate I/O controllers. These devices must be connected to the computer through an expansion slot.

Asynchronous Data Transfer

The internal operations of the individual blocks of the digital system are synchronized using clock pulses. This means that the clock pulse is applied to all registers within the block. And all data transfers between the internal registers happen simultaneously during clock pulses. Now suppose that the two blocks of the digital system are designed independently of each other, such as the CPU and I/O interfaces.

Transfers between two modules are said to be synchronous if the registers of the I/O interface share a common clock with the CPU registers. In most cases, however, each block's internal timing is independent, and each block uses its own private clock for its internal registers. The two blocks are then said to be asynchronous to each other, and any data transfer between them is called an asynchronous data transfer.

However, asynchronous data transfer between two independent blocks requires that a control signal be passed between the interacting blocks so that they can specify when to send the data.

Asynchronous Data Transfer Methods

These two methods can provide an asynchronous way to pass data.

  1. Strobe control.
  2. Handshaking.

Strobe Control

Strobe control refers to a signal or control mechanism used to coordinate the transfer of data between devices. It is typically used in situations where the sender and receiver need to agree on when data is valid and can be latched or captured. The strobe signal acts as a synchronization mechanism, allowing the receiver to know when to sample the data being sent.

Handshaking

Handshaking is a communication protocol that allows devices to establish and maintain synchronization during data transfer. It involves a series of predefined signals or control lines exchanged between the sender and receiver to coordinate the flow of data.
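A toy Python model of a two-wire handshake, with a "valid" line driven by the source and an "accepted" line driven by the destination; the class and method names are invented purely for illustration:

```python
# Two-wire handshake sketch: the source raises "valid" when data is on the
# bus, and the destination raises "accepted" once it has latched the data.
# Each side waits for the other's signal before proceeding.

class Handshake:
    def __init__(self):
        self.data = None
        self.valid = False      # driven by the source
        self.accepted = False   # driven by the destination

    def source_send(self, value):
        self.data = value
        self.valid = True       # "data is on the bus"

    def dest_receive(self):
        if not self.valid:
            return None         # nothing to latch yet
        value = self.data
        self.accepted = True    # "data has been taken"
        return value

    def source_complete(self):
        if self.accepted:       # destination acknowledged
            self.valid = False  # release the bus for the next transfer
            self.accepted = False

bus = Handshake()
bus.source_send(0x2A)
print(hex(bus.dest_receive()))  # 0x2a
bus.source_complete()
```

Because each step waits on the other party's signal, neither side needs to know the other's clock rate, which is exactly what asynchronous transfer requires.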