Computer architecture

To understand what an instruction is, the next section discusses the notion of the modern computer as a Turing machine.

The data required for doing any job, such as adding two numbers, is fetched either from input devices like the keyboard or serial port, or from memory itself.

To add two numbers, the inputs are moved into processor registers, the add opcode is executed, and the result is moved to a memory unit.
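
As a rough illustration of this load-add-store sequence, here is a minimal C sketch; the names reg_a and reg_b and the memory array are stand-ins for processor registers and RAM, not features of any specific machine.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative model of the load-add-store sequence described above.
   reg_a/reg_b stand in for processor registers and memory[] for RAM. */
int main(void) {
    uint8_t memory[16] = {0};
    memory[0] = 25;              /* first operand, fetched from memory     */
    memory[1] = 17;              /* second operand, fetched from memory    */

    uint8_t reg_a = memory[0];   /* move first input into a register       */
    uint8_t reg_b = memory[1];   /* move second input into a register      */
    reg_a = (uint8_t)(reg_a + reg_b); /* the add opcode operates on registers */
    memory[2] = reg_a;           /* move the result back to a memory unit  */

    printf("result stored at memory[2] = %d\n", memory[2]);
    return 0;
}
```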

Pipelining is a technique of decomposing a sequential process into sub-operations, with each sub-operation executed in a dedicated segment that operates concurrently with all other segments.

A simple way of viewing the pipeline structure is to imagine that each segment consists of an input register followed by a combinational circuit.
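
A small software model may make this concrete: each segment below has an input register (r1, r2, r3) followed by a "combinational" step, and all segments operate once per clock tick on different items. The three stage functions are arbitrary placeholders, not a real datapath.

```c
#include <stdio.h>

static int stage1(int x) { return x + 1; }   /* segment 1's combinational circuit */
static int stage2(int x) { return x * 2; }   /* segment 2's combinational circuit */
static int stage3(int x) { return x - 3; }   /* segment 3's combinational circuit */

int main(void) {
    int input[5] = {10, 20, 30, 40, 50};
    int r1 = 0, r2 = 0, r3 = 0;              /* segment input registers            */
    for (int clock = 0; clock < 5 + 3; clock++) {
        int out = stage3(r3);                /* last segment produces a result     */
        r3 = stage2(r2);                     /* registers are updated back-to-front */
        r2 = stage1(r1);                     /* so each item advances one segment  */
        r1 = (clock < 5) ? input[clock] : 0; /* feed a new item while any remain   */
        if (clock >= 3)                      /* first result appears after 3 ticks */
            printf("clock %d: result %d\n", clock, out);
    }
    return 0;
}
```

Note the throughput: once the pipeline is full it delivers one result per tick, even though each item passes through three segments.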

The goal of memory system design is to achieve the best performance, allow memory to keep up with the processor, and have a reasonable memory cost compared to other components.

Dynamic random-access memory

Dynamic random-access memory (DRAM) is a type of random access semiconductor memory that stores each bit of data in a separate tiny capacitor within an integrated circuit.

Because the charge in these capacitors slowly leaks away, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge.

One of the largest applications for DRAM is the main memory (colloquially called the 'RAM') in modern computers and graphics cards (where the 'main memory' is called the graphics memory).

The advantage of DRAM is the structural simplicity of its memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM.

DRAM had a 47% increase in price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.[3]

The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing a cross (1) and an uncharged capacitor a dot (0).

In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors, and 4 diodes.

This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles.

This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size.
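
The following sketch shows the idea of the multiplexed scheme: one full cell address is split into a row half presented during the RAS cycle and a column half presented during the CAS cycle. The 12-bit/12-bit split and the 24-bit example address are assumptions chosen only for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define ROW_BITS 12   /* assumed row-address width    */
#define COL_BITS 12   /* assumed column-address width */

int main(void) {
    uint32_t full_address = 0x00ABC123;   /* example 24-bit cell address */
    uint32_t row = (full_address >> COL_BITS) & ((1u << ROW_BITS) - 1);
    uint32_t col = full_address & ((1u << COL_BITS) - 1);

    printf("pins during RAS cycle (row half): 0x%03X\n", (unsigned)row);
    printf("pins during CAS cycle (col half): 0x%03X\n", (unsigned)col);
    return 0;
}
```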

To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value.

During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right.[11]

For example, a system with 2^13 = 8,192 rows would require a staggered refresh rate of one row every 7.8 µs, which is 64 ms divided by 8,192 rows.
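
The quoted interval can be checked directly: spreading a 64 ms retention window over 2^13 rows gives about 7.8 µs per row, as the short calculation below shows.

```c
#include <stdio.h>

int main(void) {
    double retention_ms = 64.0;      /* refresh window from the text */
    int rows = 1 << 13;              /* 8,192 rows                   */
    double interval_us = retention_ms * 1000.0 / rows;
    printf("one row every %.2f us\n", interval_us);   /* about 7.81 us */
    return 0;
}
```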

A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment.

With a 100 MHz bus (a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles.

Minimum random access time has improved from tRAC = 50 ns to tRCD + tCL = 22.5 ns, and even the premium 20 ns variety is only 2.5 times better compared to the typical case (~2.22 times better).

In the 2000s, manufacturers were sharply divided by the type of capacitor used by their DRAMs, and the relative cost and long-term scalability of both designs has been the subject of extensive debate.

The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched in between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape.

In a former variation, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal.

The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source as it is physically close to the substrate surface.

However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline.

CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface are at or near the minimum feature size of the process technology (Kenner, pp. 33–42).

Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357).

Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which have many levels of interconnect above the substrate.

Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, pg.

By the second-generation, the requirement to increase density by fitting more bits in a given area, or the requirement to reduce cost by fitting the same amount of bits in a smaller area, lead to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 Kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p. 6).

These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read).

the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459).

1T DRAM is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as '1T DRAM', particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s.

This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits, since they are constructed with the same silicon on insulator process technologies.

The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area.

DRAM cell area is given as n·F², where n is a number derived from the DRAM cell design, and F is the smallest feature size of a given process technology.
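
A worked example of this metric follows; the factor n = 6 and the feature size F = 20 nm are illustrative values only, not claims about any particular process.

```c
#include <stdio.h>

int main(void) {
    double n = 6.0;        /* assumed design-dependent factor      */
    double F_nm = 20.0;    /* assumed smallest feature size, in nm */
    double area_nm2 = n * F_nm * F_nm;
    printf("cell area = %.0f nm^2 (= %g um^2)\n", area_nm2, area_nm2 * 1e-6);
    return 0;
}
```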

The bitline length is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline).

Besides ensuring that the lengths of the bitlines and the number of DRAM cells attached to them are equal, two basic architectures to array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays.

Because the sense amplifiers are placed between bitline segments, to route their outputs outside the array, an additional layer of interconnect placed above those used to construct the wordlines and bitlines is required.

The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share a single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids.

As process technology improves to reduce minimum feature sizes, the signal to noise problem worsens, since coupling between adjacent metal wires is inversely proportional to their pitch.

The majority of one-off ('soft') errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read/write them.

The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected.[22]
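
To show the mechanism, here is a minimal SECDED Hamming sketch on a 4-bit data nibble; real memory ECC typically protects a 64-bit word with 8 check bits, but the principle (parity bits, a syndrome locating a single flipped bit, and an overall parity bit to flag double errors) is the same.

```c
#include <stdio.h>
#include <stdint.h>

static int bit(uint8_t v, int i) { return (v >> i) & 1; }

/* Encode 4 data bits into an 8-bit codeword: bits 1, 2, 4 are Hamming parity,
   bits 3, 5, 6, 7 carry the data, and bit 0 is the overall (SECDED) parity. */
static uint8_t encode(uint8_t data) {
    uint8_t c = 0;
    c |= (uint8_t)((bit(data,0) << 3) | (bit(data,1) << 5)
                 | (bit(data,2) << 6) | (bit(data,3) << 7));
    c |= (uint8_t)((bit(c,3) ^ bit(c,5) ^ bit(c,7)) << 1);   /* p1 */
    c |= (uint8_t)((bit(c,3) ^ bit(c,6) ^ bit(c,7)) << 2);   /* p2 */
    c |= (uint8_t)((bit(c,5) ^ bit(c,6) ^ bit(c,7)) << 4);   /* p4 */
    int p = 0;
    for (int i = 1; i <= 7; i++) p ^= bit(c, i);
    c |= (uint8_t)p;                                         /* overall parity */
    return c;
}

/* Returns 0 = clean, 1 = single-bit error corrected, 2 = double error detected. */
static int check_and_correct(uint8_t *cw) {
    uint8_t c = *cw;
    int s = (bit(c,1) ^ bit(c,3) ^ bit(c,5) ^ bit(c,7))
          | ((bit(c,2) ^ bit(c,3) ^ bit(c,6) ^ bit(c,7)) << 1)
          | ((bit(c,4) ^ bit(c,5) ^ bit(c,6) ^ bit(c,7)) << 2);
    int overall = 0;
    for (int i = 0; i < 8; i++) overall ^= bit(c, i);
    if (s == 0 && overall == 0) return 0;
    if (overall == 1) {                       /* odd number of flips: single error */
        *cw = (uint8_t)(c ^ (s != 0 ? (1u << s) : 1u));
        return 1;
    }
    return 2;                                 /* syndrome set, parity even: double */
}

int main(void) {
    uint8_t cw = encode(0xB);                 /* encode data nibble 1011    */
    cw ^= (uint8_t)(1u << 6);                 /* inject a single-bit fault  */
    int r = check_and_correct(&cw);
    unsigned data = (unsigned)(bit(cw,3) | (bit(cw,5) << 1)
                             | (bit(cw,6) << 2) | (bit(cw,7) << 3));
    printf("status %d, decoded data 0x%X\n", r, data);   /* status 1, data 0xB */
    return 0;
}
```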

Recent studies give widely varying error rates with over seven orders of magnitude difference, ranging from 10^−10 to 10^−17 error/bit·h, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.[23][24][25]
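
The "one error per hour per gigabyte" end of this range can be sanity-checked with simple arithmetic, taking a gigabyte as 8×10^9 bits:

```c
#include <stdio.h>

int main(void) {
    double rate_per_bit_hour = 1e-10;   /* upper end of the quoted range */
    double bits_per_gb = 8e9;           /* assumed 8 x 10^9 bits per GB  */
    printf("expected errors per GB per hour: %.2f\n",
           rate_per_bit_hour * bits_per_gb);            /* about 0.8     */
    return 0;
}
```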

A 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard errors rather than soft errors.[26]

Large scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: the study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months.[28]

Although dynamic memory is only specified and guaranteed to retain its contents when supplied with power and refreshed every short period of time (often 64 ms), the memory cell capacitors often retain their values for significantly longer time, particularly at low temperatures.[29]

In particular, there is a risk that some charge can leak between nearby cells, causing the refresh or read of one row to cause a disturbance error in an adjacent or even nearby row.

Despite the mitigation techniques employed by manufacturers, commercial researchers proved in a 2014 analysis that commercially available DDR3 DRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors.[31]

For convenience in handling, several dynamic RAM integrated circuits may be mounted on a single memory module, allowing installation of 16-bit, 32-bit or 64-bit wide memory in a single unit, without the requirement for the installer to insert multiple individual integrated circuits.

Laptop computers, game consoles, and specialized devices may have their own formats of memory modules not interchangeable with standard desktop parts for packaging or proprietary reasons.

DRAM that is integrated into an integrated circuit designed in a logic-optimized process (such as an application-specific integrated circuit, microprocessor, or an entire system on a chip) is called embedded DRAM (eDRAM).

Embedded DRAM requires DRAM cell designs that can be fabricated without preventing the fabrication of fast-switching transistors used in high-performance logic, and modification of the basic logic-optimized process technology to accommodate the process steps required to build DRAM cell structures.

If the CAS line is driven low before RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open.

Page mode DRAM is a minor modification to the first-generation DRAM IC interface which improved the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access a different column.

In Page mode DRAM, after a row was opened by holding RAS low, the row could be kept open, and multiple reads or writes could be performed to any of the columns in the row.

Static column is a variant of fast page mode in which the column address does not need to be latched; rather, the address inputs may be changed with CAS held low, and the data output will be updated accordingly a few nanoseconds later.[36]

EDO DRAM, sometimes referred to as Hyper Page Mode enabled DRAM, is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active.

An evolution of EDO DRAM, Burst EDO DRAM, could process four memory addresses in one burst, for a maximum of 5‐1‐1‐1, saving an additional three clocks over optimally designed EDO memory.

Using a few bits of 'bank address' which accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress.

DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge.

DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively.
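
The prefetch factor links the internal array clock to the external transfer rate. The sketch below uses assumed example figures (a 200 MHz array clock and a 64-bit-wide module); with DDR3's 8n prefetch these give 1600 MT/s and a peak of 12.8 GB/s.

```c
#include <stdio.h>

int main(void) {
    double array_clock_mhz = 200.0;   /* assumed internal array clock        */
    int prefetch = 8;                 /* DDR3 prefetch factor (DDR2 uses 4)  */
    double transfers_mt_s = array_clock_mhz * prefetch;
    double module_bytes = 8.0;        /* assumed 64-bit (8-byte) wide module */
    printf("%.0f MT/s, peak %.1f GB/s\n",
           transfers_mt_s, transfers_mt_s * module_bytes / 1000.0);
    return 0;
}
```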

Reduced Latency DRAM is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications.

It is constructed from small memory banks of 256 KB, which are operated in an interleaved fashion, providing bandwidths suitable for graphics cards at a lower cost than memories such as SRAM.

It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour).

It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller as is the case with PSRAM.

Intel 8086

The Intel 8088, released July 1, 1979[3], is a slightly modified chip with an external 8-bit data bus (allowing the use of cheaper and fewer supporting ICs[note 1]), and is notable as the processor used in the original IBM PC design, including the widespread version called IBM PC XT.

The device needed several additional ICs to produce a functional computer, in part due to it being packaged in a small 18-pin 'memory package', which ruled out the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time).

The 8080 device was eventually replaced by the depletion-load-based 8085 (1977), which sufficed with a single +5 V power supply instead of the three different operating voltages of earlier chips.[note 4]

It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to counter the threat from the Zilog Z80 (designed by former Intel employees), which became very successful.

Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, and using the same basic microarchitecture elements and physical implementation techniques as employed for the slightly older 8085 (and for which the 8086 also would function as a continuation).

Marketed as source compatible, the 8086 was designed to allow assembly language for the 8008, 8080, or 8085 to be automatically converted into equivalent (suboptimal) 8086 source code, with little or no hand-editing.

Other enhancements included microcoded multiply and divide instructions and a bus structure better adapted to future coprocessors (such as 8087 and 8089) and multiprocessor systems.

This difficulty existed until the 80386 architecture introduced wider (32-bit) registers (the memory management hardware in the 80286 did not help in this regard, as its registers are still only 16 bits wide).

A single memory location can also often be used as both source and destination which, among other factors, further contributes to a code density comparable to (and often better than) most eight-bit machines at the time.

While perfectly sensible for the assembly programmer, this makes register allocation for compilers more complicated compared to more orthogonal 16-bit and 32-bit processors of the time such as the PDP-11, VAX, 68000, 32016 etc.

On the other hand, being more regular than the rather minimalistic but ubiquitous 8-bit microprocessors such as the 6502, 6800, 6809, 8085, MCS-48, 8051, and other contemporary accumulator based machines, it is significantly easier to construct an efficient code generator for the 8086 architecture.

Nine of these condition code flags are active, and indicate the current state of the processor: Carry flag (CF), Parity flag (PF), Auxiliary carry flag (AF), Zero flag (ZF), Sign flag (SF), Trap flag (TF), Interrupt flag (IF), Direction flag (DF), and Overflow flag (OF).

Rather than concatenating the segment register with the address register, as in most processors whose address space exceeds their register size, the 8086 shifts the 16-bit segment only four bits left before adding it to the 16-bit offset (16×segment + offset), therefore producing a 20-bit external (or effective or physical) address from the 32-bit segment:offset pair.
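
The calculation is simple enough to show directly; the sketch below forms the 20-bit address and also demonstrates that two different segment:offset pairs can name the same physical byte.

```c
#include <stdio.h>
#include <stdint.h>

/* 8086 address formation: (segment << 4) + offset, truncated to 20 bits. */
static uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return (((uint32_t)segment << 4) + offset) & 0xFFFFFu;
}

int main(void) {
    /* Two different segment:offset pairs aliasing the same physical byte. */
    printf("1234:0010 -> %05X\n", (unsigned)physical_address(0x1234, 0x0010));
    printf("1235:0000 -> %05X\n", (unsigned)physical_address(0x1235, 0x0000));
    return 0;
}
```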

Near pointers are 16-bit offsets implicitly associated with the program's code or data segment and so can be used only within parts of a program small enough to fit in one segment.

Some compilers also support huge pointers, which are like far pointers except that pointer arithmetic on a huge pointer treats it as a linear 20-bit pointer, while pointer arithmetic on a far pointer wraps around within its 16-bit offset without touching the segment part of the address.
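
The difference between the two arithmetic rules can be modelled as below; this is only an illustration of the idea, not actual compiler output, and the exact normalisation convention for huge pointers varied between compilers.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t seg = 0x2000, off = 0xFFF0;
    uint32_t step = 0x20;                      /* add 32 bytes to each pointer  */

    /* far pointer: only the 16-bit offset changes, wrapping modulo 2^16 */
    uint16_t far_off = (uint16_t)(off + step); /* wraps to 0x0010, segment kept */

    /* huge pointer: treated as a linear address, then renormalised */
    uint32_t linear = ((uint32_t)seg << 4) + off + step;  /* 0x2FFF0 + 0x20 = 0x30010 */
    uint16_t huge_seg = (uint16_t)(linear >> 4);
    uint16_t huge_off = (uint16_t)(linear & 0xF);

    printf("far : %04X:%04X\n", (unsigned)seg, (unsigned)far_off);       /* 2000:0010 */
    printf("huge: %04X:%04X\n", (unsigned)huge_seg, (unsigned)huge_off); /* 3001:0000 */
    return 0;
}
```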

To avoid the need to specify near and far on numerous pointers, data structures, and functions, compilers also support 'memory models' which specify default pointer sizes.

The tiny model means that code and data are shared in a single segment, just as in most 8-bit based processors, and can be used to build .com files for instance.

In principle, the address space of the x86 series could have been extended in later processors by increasing the shift value, as long as applications obtained their segments from the operating system and did not make assumptions about the equivalence of different segment:offset pairs.[note 12]

In practice the use of 'huge' pointers and similar mechanisms was widespread and the flat 32-bit addressing made possible with the 32-bit offset registers in the 80386 eventually extended the limited addressing range in a more general way (see below).

If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address boundary.

Compiled code for the 8086 typically uses the BP (base pointer) register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine.

Transfers of 16-bit or 8-bit quantities are done in a four-clock memory access cycle, which is faster for 16-bit quantities, although slower for 8-bit quantities, compared to many contemporary 8-bit based CPUs.

As instructions vary from one to six bytes, fetch and execution are made concurrent and decoupled into separate units (as it remains in today's x86 processors): The bus interface unit feeds the instruction stream to the execution unit through a 6-byte prefetch queue (a form of loosely coupled pipelining), speeding up operations on registers and immediates, while memory operations became slower (four years later, this performance problem was fixed with the 80186 and 80286).

However, the full (instead of partial) 16-bit architecture with a full width ALU meant that 16-bit arithmetic instructions could now be performed with a single ALU cycle (instead of two, via internal carry, as in the 8080 and 8085), speeding up such instructions considerably.

Combined with orthogonalizations of operations versus operand types and addressing modes, as well as other enhancements, this made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster (see below).

For example, the NEC V20 and NEC V30 pair were hardware-compatible with the 8088 and 8086 even though NEC made original Intel clones μPD8088D and μPD8086D respectively, but incorporated the instruction set of the 80186 along with some (but not all) of the 80186 speed enhancements, providing a drop-in capability to upgrade both instruction set and processing speed without manufacturers having to modify their designs.

Microprocessor - I/O Interfacing Overview

A microprocessor is the controlling unit of a micro-computer, fabricated on a small chip, capable of performing ALU (Arithmetic Logical Unit) operations and communicating with the other devices connected to it.

ALU performs arithmetical and logical operations on the data received from the memory or an input device.

The microprocessor fetches those instructions from the memory, then decodes and executes them until a STOP instruction is reached.

In RISC processors, each instruction requires only one clock cycle to execute, resulting in uniform execution time.

It is designed to minimize the number of instructions per program, ignoring the number of cycles per instruction.

The compiler has to do very little work to translate a high-level language into assembly level language/machine code because the length of the code is relatively short, so very little RAM is required to store the instructions.

Its architecture is designed to decrease memory cost, because larger programs need more storage, resulting in higher memory cost.

To resolve this, the number of instructions per program can be reduced by embedding the number of operations in a single instruction.

A coprocessor is a specially designed microprocessor, which can handle its particular function many times faster than the ordinary microprocessor.

It is a specially designed microprocessor having a local memory of its own, which is used to control I/O devices with minimum CPU involvement.

A transputer is a specially designed microprocessor with its own local memory and having links to connect one transputer to another transputer for inter-processor communications.

A transputer can be used as a single processor system or can be connected to external links, which reduces the construction cost and increases the performance.

This is done by sampling the voltage level at regular time intervals and converting the voltage at that instant into a digital form.
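
A minimal sketch of that sample-and-quantise step is shown below; the 8-bit resolution and 5 V reference are assumed example values, not those of any specific converter.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double vref = 5.0;                     /* assumed reference voltage     */
    const double levels = 256.0;                 /* 2^8 steps for 8-bit codes     */
    double samples[4] = {0.0, 1.25, 2.5, 4.99};  /* voltages at sampling instants */

    for (int i = 0; i < 4; i++) {
        int code = (int)floor(samples[i] / vref * levels);
        if (code > 255) code = 255;              /* clamp at full scale           */
        printf("%.2f V -> code %d\n", samples[i], code);
    }
    return 0;
}
```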

These registers can work in pairs to hold 16-bit data, and the pairing combinations are B-C, D-E and H-L.

The microprocessor increments the program counter whenever an instruction is executed, so that the program counter points to the memory address of the next instruction to be executed.

It is an 8-bit register having five 1-bit flip-flops, which hold either 0 or 1 depending upon the result stored in the accumulator.

When a microprocessor is executing a main program and whenever an interrupt occurs, the microprocessor shifts the control from the main program to process the incoming request.

It controls serial data communication by using these two pins: SID (Serial Input Data) and SOD (Serial Output Data).

The content stored in the stack pointer and program counter is loaded into the address buffer and address-data buffer to communicate with the CPU.

These are the instructions used to transfer the data from one register to another register, from the memory to the register, and from the register to the memory without any alteration in the content.

Maximum mode is suitable for a system having multiple processors, and Minimum mode is suitable for a system having a single processor.

The EU has no direct connection with the system buses; as shown in the above figure, it performs operations on data through the BIU.

These registers can be used individually to store 8-bit data and can be used in pairs to store 16-bit data.

It is a 16-bit register, which holds the offset from the start of the segment to the memory location where a word was most recently stored on the stack.

BIU takes care of all data and addresses transfers on the buses for the EU like sending addresses, fetching instructions from the memory, reading data from the ports and the memory as well as writing data to the ports and the memory.

Power supply and frequency signals

It uses a 5 V DC supply at VCC pin 40, and uses ground at VSS pins 1 and 20 for its operation.

AD0-AD7 carries the low-order byte data and AD8-AD15 carries the higher-order byte data.

It is available at pin 34 and used to indicate the transfer of data using data bus D8-D15.

It is an interrupt request signal, which is sampled during the last clock cycle of each instruction to determine if the processor considered this as an interrupt or not.

S0, S1, S2: These are the status signals that provide the status of operation, which is used by the Bus Controller 8288 to generate memory and I/O access control signals.

Interrupt is the method of creating a temporary halt during program execution and allows peripheral devices to access the microprocessor.

It is a single non-maskable interrupt pin (NMI) having higher priority than the maskable interrupt request pin (INTR), and it is of type 2 interrupt.

The INTR is a maskable interrupt because the microprocessor will be interrupted only if interrupts are enabled using set interrupt flag instruction.

If the interrupt is enabled and NMI is disabled, then the microprocessor first completes the current execution and sends ‘0’ on INTA pin twice.

The first ‘0’ means INTA informs the external device to get ready, and during the second ‘0’ the microprocessor receives the 8-bit interrupt type number, say X, from the programmable interrupt controller.

The interrupts from Type 5 to Type 31 are reserved for other advanced microprocessors, and interrupts from Type 32 to Type 255 are available for hardware and software interrupts.

These instructions are inserted into the program so that when the processor reaches there, then it stops the normal execution of program and follows the break-point procedure.

It is active only when the overflow flag is set to 1 and branches to the interrupt handler whose interrupt type number is 4.
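
Each interrupt type indexes a four-byte entry (handler offset, then segment) in the vector table at the bottom of memory, so the entry for a given type sits at physical address type × 4; the short loop below just prints that mapping for the first few types.

```c
#include <stdio.h>

int main(void) {
    for (int type = 0; type <= 4; type++)
        printf("type %d -> vector table entry at %05X\n",
               type, (unsigned)(type * 4));      /* type 4 (INTO) -> 00010 */
    return 0;
}
```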

The addressing mode in which the data operand is a part of the instruction itself is known as immediate addressing mode.

This addressing mode allows data to be addressed at any memory location through an offset address held in any of the following registers: BP, BX, DI and SI.

In this addressing mode, the offset address of the operand is given by the sum of the contents of the BX/BP register and an 8-bit/16-bit displacement.

In this addressing mode, the operand's offset address is found by adding the contents of the SI or DI register and an 8-bit/16-bit displacement.

In this addressing mode, the offset address of the operand is computed by adding the contents of a base register to the contents of an index register.
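
The offset calculations for these addressing modes reduce to small additions; the sketch below works through a few of them with made-up register contents and displacement.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t bx = 0x1000, si = 0x0010;    /* made-up register contents */
    uint16_t disp = 0x0005;               /* made-up displacement      */

    uint16_t reg_indirect  = bx;                      /* [BX]          */
    uint16_t based         = (uint16_t)(bx + disp);   /* [BX + disp]   */
    uint16_t indexed       = (uint16_t)(si + disp);   /* [SI + disp]   */
    uint16_t based_indexed = (uint16_t)(bx + si);     /* [BX + SI]     */

    printf("[BX]     = %04X\n", (unsigned)reg_indirect);
    printf("[BX+05]  = %04X\n", (unsigned)based);
    printf("[SI+05]  = %04X\n", (unsigned)indexed);
    printf("[BX+SI]  = %04X\n", (unsigned)based_indexed);
    return 0;
}
```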

A coprocessor is a specially designed circuit on a microprocessor chip which can perform the same task that the microprocessor performs, but much more quickly.

The 8086 and 8088 can perform most operations, but their instruction set cannot perform complex mathematical operations, so in these cases the microprocessor requires a math coprocessor, such as the Intel 8087, which can perform these operations very quickly.

Both share the same memory, I/O system, bus, control logic, and clock generator with the host processor.

A loosely coupled configuration consists of a number of microprocessor-based modules, which are connected through a common system bus.

The 8087 numeric data processor is also known as a math co-processor, numeric processor extension, and floating point unit.

When we are executing any instruction, we need the microprocessor to access the memory for reading instruction codes and the data stored in the memory.

The interfacing process includes some key factors to match with the memory requirements and microprocessor signals.

The interfacing circuit therefore should be designed in such a way that it matches the memory signal requirements with the signals of the microprocessor.

In this type of communication, the interface gets a single byte of data from the microprocessor and sends it bit by bit to the other system serially, and vice versa.

In this type of communication, the interface gets a byte of data from the microprocessor and sends its bits to the other system simultaneously, in parallel fashion, and vice versa.

In the Interrupt mode, the processor is requested service only if any key is pressed, otherwise the CPU will continue with its main task.

In the Polled mode, the CPU periodically reads an internal flag of the 8279 to check whether any key has been pressed.

If a FIFO contains a valid key entry, then the CPU is interrupted in an interrupt mode else the CPU checks the status in polling to read the entry.

Once the CPU reads a key entry, then FIFO is updated, and the key entry is pushed out of the FIFO to generate space for new entries.

In the encoded mode, the counter provides the binary count that is to be externally decoded to provide the scan lines for the keyboard and display.

In the decoded scan mode, the counter internally decodes the least significant 2 bits and provides a decoded 1 out of 4 scan on SL0-SL3.

This unit first scans for key closure row-wise; if a closure is found, the keyboard debounce unit debounces the key entry.

This unit acts as 8-byte first-in-first-out (FIFO) RAM where the key code of every pressed key is entered into the RAM as per their sequence.

The status logic generates an interrupt request after each FIFO read operation till the FIFO gets empty.

In the scanned sensor matrix mode, this unit acts as sensor RAM, where each row is loaded with the status of the corresponding row of sensors in the matrix.

This unit consists of display address registers, which hold the address of the word currently being read/written by the CPU from/to the display RAM.

When this pin is set to low, it allows read/write operations, else this pin should be set to high.

It is pulled up internally to keep it high until it is pulled low by a key closure. In the keyboard mode, this line is used as a control input and stored in the FIFO on a key closure.

This mode deals with the input given by the keyboard and this mode is further classified into 3 modes.

Using a DMA controller, the device requests the CPU to hold its data, address and control bus, so the device is free to transfer data directly to/from the memory.

These are the four individual channel DMA request inputs, which are used by the peripheral devices for using DMA services.

These are bidirectional data lines which are used to interface the system bus with the internal data bus of the DMA controller.

In the master mode, these lines are used to send higher byte of the generated address to the latch.

It is an active-low bidirectional tri-state input line, which is used by the CPU to read internal registers of 8257 in the Slave mode.

In the master mode, it is used to read data from the peripheral devices during a memory write cycle.

It is an active-low bidirectional tri-state line, which is used to load the contents of the data bus into the 8-bit mode register or the upper/lower byte of a 16-bit DMA address register or terminal count register.

In the master mode, it is used to load the data to the peripheral devices during DMA memory read cycle.

In the master mode, they are the four least significant memory address output lines generated by 8257.

It is the hold acknowledgement signal, which indicates to the DMA controller that the bus has been granted to the requesting peripheral by the CPU when it is set to 1.

It is the active-low memory read signal, which is used to read the data from the addressed memory locations during DMA read cycles.

It is the active-low three state signal which is used to write the data to the addressed memory location during DMA write operation.

This signal is used to latch the higher byte of the memory address generated by the DMA controller into the latches.

A microcontroller is a small and low-cost microcomputer, which is designed to perform the specific tasks of embedded systems, like displaying a microwave's information, receiving remote signals, etc.

It is built in a 40-pin DIP (dual inline package) with 4 KB of ROM, 128 bytes of RAM, and two 16-bit timers.

The system bus consists of an 8-bit data bus, a 16-bit address bus and bus control signals.

All other devices like program memory, ports, data memory, serial interface, interrupt control, timers, and the CPU are all interfaced together through the system bus.

By applying logic 0 to a port bit, the corresponding pin is connected to ground (0 V); by applying logic 1, the external output is left "floating".

In order to apply logic 1 (5 V) on this output pin, it is necessary to add an external pull-up resistor.

In this port, functions are similar to other ports except that the logic 1 must be applied to appropriate bit of the P3 register.

Interrupts are the events that temporarily suspend the main program, pass the control to the external sources and execute their task.

Each interrupt can be enabled or disabled by setting bits of the IE register and the whole interrupt system can be disabled by clearing the EA bit of the same register.

We can change the priority levels of the interrupts by changing the corresponding bit in the Interrupt Priority (IP) register as shown in the following figure.
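
The register-level view of this is a few bit operations. In the sketch below, IE and IP are plain variables standing in for the 8051 special-function registers (a real toolchain would declare them as SFRs); the bit positions follow the usual 8051 layout, with EA as bit 7 of IE.

```c
#include <stdio.h>
#include <stdint.h>

static uint8_t IE = 0x00;   /* interrupt enable register (stand-in)   */
static uint8_t IP = 0x00;   /* interrupt priority register (stand-in) */

int main(void) {
    IE |= (uint8_t)(1u << 0);        /* enable external interrupt 0              */
    IE |= (uint8_t)(1u << 1);        /* enable timer 0 overflow interrupt        */
    IE |= (uint8_t)(1u << 7);        /* EA: enable the whole interrupt system    */

    IP |= (uint8_t)(1u << 1);        /* raise timer 0 to the high priority level */

    IE &= (uint8_t)~(1u << 7);       /* clearing EA disables all interrupts      */

    printf("IE=%02X IP=%02X\n", (unsigned)IE, (unsigned)IP);
    return 0;
}
```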

The 8255A is a general-purpose programmable I/O device designed to transfer data using simple I/O or interrupt-driven I/O, as required under certain conditions.

The first mode is named Mode 0, the second Mode 1, and the third Mode 2.

It accepts input from the CPU address and control buses, and in turn issues commands to both of the control groups.

The Intel 8253 and 8254 are Programmable Interval Timers (PITs), designed for microprocessors to perform timing and counting functions using three 16-bit registers.

On command, it begins to decrement the count until it reaches 0, then it generates a pulse that can be used to interrupt the CPU.

In the above figure, there are three counters, a data bus buffer, Read/Write control logic, and a control register.

It is a tri-state, bi-directional, 8-bit buffer, which is used to interface the 8253/54 to the system data bus.

It is used to write a command word, which specifies the counter to be used, its mode, and either a read or write operation.
