Posted at 6:48 PM

====Memory hierarchy====

[[Image:NEC 128mb ram.JPG|right|thumb|One module of 128MB NEC SD-RAM.]]

Many computer systems have a memory hierarchy consisting of [[CPU register]]s, on-die [[Static random access memory|SRAM]] caches,

external [[cache]]s, [[DRAM]], [[paging]] systems, and [[virtual memory]] or [[swap space]] on a hard drive.

This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different [[access time]]s,

violating the original concept behind the ''random access'' term in RAM. Even within a hierarchy level such as DRAM,

the specific row, column, bank, rank, channel, or [[interleave]] organization of the components makes the access time variable,

although not to the degree that access times vary for rotating [[storage media]] or tape. (Generally,

the memory hierarchy follows the access time with the fast CPU registers at the top and the slow hard drive at the bottom.)
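The ordering described above can be sketched with some illustrative access times; the numbers below are rough orders of magnitude assumed for illustration, not measurements of any particular system:

```python
# Illustrative access times for each level of the memory hierarchy.
# The figures are order-of-magnitude assumptions, not measured values.
hierarchy = [
    ("CPU register",   0.3),           # nanoseconds
    ("L1 SRAM cache",  1.0),
    ("L2/L3 cache",    10.0),
    ("DRAM",           100.0),
    ("Hard drive",     10_000_000.0),  # ~10 ms
]

# The hierarchy is ordered by access time: each level is slower
# than the one above it.
for (name_a, t_a), (name_b, t_b) in zip(hierarchy, hierarchy[1:]):
    assert t_a < t_b, f"{name_a} should be faster than {name_b}"
```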

In most modern personal computers,

the RAM comes in the form of easily upgraded modules called '''[[DIMM|memory module]]s''' or '''[[DIMM|DRAM module]]s'''

about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or too small for current purposes.

As suggested above, ''smaller'' amounts of RAM (mostly SRAM) are also integrated in the [[CPU]] and other [[IC]]s on the [[motherboard]],

as well as in hard-drives, [[CD-ROM]]s, and several other parts of the computer system.


If a computer becomes low on RAM during intensive application cycles, the computer can resort to [[swapping]]. In this case,

the computer temporarily uses [[hard drive]] space as additional memory.

Constantly relying on this type of backup memory is called [[Thrash (computer science)|thrashing]],

which is generally undesirable because it lowers overall system performance. In order to reduce the dependency on swapping,

more RAM can be installed.


Computer memory types

* Flash memory

* Upcoming

o Racetrack memory

* Historical

o Williams tube

o Delay line memory

o Drum memory

o Magnetic core memory

o Plated wire memory

o Bubble memory

o Twistor memory
Read-only memory (usually known by its acronym, ROM) is a class of storage media used in computers and other electronic devices.

Because data stored in ROM cannot be modified (at least not very quickly or easily),

it is mainly used to distribute firmware (software that is very closely tied to specific hardware, and unlikely to require frequent updates).

Modern semiconductor ROM chips are not immediately distinguishable from similar chips like RAM modules, except by the part numbers printed on the package.

In its strictest sense, ROM refers only to mask ROM (the oldest type of solid state ROM),

which is fabricated with the desired data permanently stored in it, and thus can never be modified.

However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times;

they are still described as "read-only memory" because the reprogramming process is generally infrequent, comparatively slow,

and often does not permit random access writes to individual memory locations, which are possible when reading a ROM.

Despite the simplicity of mask ROM, economies of scale and field-programmability often make reprogrammable technologies more flexible and inexpensive,

so that mask ROM is rarely used in new products as of 2007.

A typical PROM comes with all bits reading as 1. Burning a fuse during programming causes its bit to read as 0.

The memory can be programmed just once after manufacturing by "blowing" the fuses (using a PROM programmer), which is an irreversible process.

Blowing a fuse opens a connection, while blowing an antifuse closes a connection (hence the name).

Programming is done by applying high-voltage pulses which are not encountered during normal operation (typically 12 to 21 volts).

Read-only means that, unlike the case with conventional memory, the programming cannot be changed (at least not by the end user).
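The one-way nature of fuse programming can be sketched in a toy model; the class and byte values below are hypothetical, chosen only to illustrate that bits can be cleared but never set again:

```python
class PROM:
    """Toy model of a fuse-based PROM: all bits start as 1, and
    programming can only clear bits to 0 (blowing a fuse is irreversible)."""

    def __init__(self, size):
        self.cells = [0xFF] * size  # every bit reads as 1 from the factory

    def program(self, addr, value):
        # ANDing with the old contents means bits can go 1 -> 0,
        # but never back from 0 -> 1.
        self.cells[addr] &= value

    def read(self, addr):
        return self.cells[addr]

rom = PROM(4)
rom.program(0, 0b1010_0101)
rom.program(0, 0b1111_0000)   # a second pass can only clear more bits
# rom.read(0) is the AND of both patterns: 0b1010_0000
```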


Characteristics of ROM:

* Reliability

* Stores data permanently

* Moderate price

* Built using integrated circuits, rather than discrete components

* Fast: access times are between 35 ns and 60 ns

EEPROM (also written E2PROM and pronounced e-e-prom or simply e-squared), which stands for Electrically Erasable Programmable Read-Only Memory,

is a type of non-volatile memory used in computers and other electronic devices to store small amounts of data that must be saved when power is removed,

e.g., calibration tables or device configuration.

When larger amounts of static data are to be stored (such as in USB flash drives) a specific type of EEPROM such as flash memory is more economical than

traditional EEPROM devices.

"Cache memory" redirects here. For the general use, see cache.

Diagram of a CPU memory cache


A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory.

The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations.

As long as most memory accesses are to cached memory locations,

the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.

When the processor needs to read from or write to a location in main memory,

it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache,

which is much faster than reading from or writing to main memory.
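The hit/miss behaviour described above can be sketched as follows; the latency figures are arbitrary illustrative units, not real hardware timings:

```python
# Minimal sketch of a cache in front of a slow "main memory".
# The latency numbers are illustrative assumptions, not hardware figures.
CACHE_LATENCY = 1      # arbitrary time units
MEMORY_LATENCY = 100

memory = {addr: addr * 2 for addr in range(256)}  # pretend main memory
cache = {}             # addr -> cached value

def load(addr, stats):
    if addr in cache:                 # cache hit: fast path
        stats["time"] += CACHE_LATENCY
    else:                             # cache miss: go to main memory
        stats["time"] += MEMORY_LATENCY
        cache[addr] = memory[addr]
    return cache[addr]

stats = {"time": 0}
for addr in [5, 5, 5, 7, 5, 7]:      # mostly repeated (cacheable) accesses
    load(addr, stats)
# two misses (5, 7) and four hits: total time 2*100 + 4*1 = 204,
# an average latency much closer to the cache than to main memory
```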

A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive,

[1] is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces.

Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk.

Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.[2]

An HDD is a rigid-disk drive, although it is probably never referred to as such. By way of comparison,

a so-called "floppy" drive (more formally, a diskette drive) has a disc that is flexible. Originally, the term "hard" was temporary slang,

substituting "hard" for "rigid", before these drives had an established and universally-agreed-upon name.

At one time, IBM's internal company term for an HDD was "file".

A floppy disk is a data storage medium that is composed of a disk of thin,

flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell.

Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive",

which is another term for a hard disk drive. Invented by IBM, floppy disks in 8-inch (200 mm), 5¼-inch (133⅓ mm),

and the newest and most common 3½-inch (90 mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange,

from the mid-1970s to the late 1990s. However, they have now been largely superseded by flash and optical storage devices.

CD-ROM (an abbreviation of "Compact Disc read-only memory") is a Compact Disc that contains data accessible by a computer.

While the Compact Disc format was originally designed for music storage and playback,

the format was later adapted to hold any form of binary data. CD-ROMs are popularly used to distribute computer software,

including games and multimedia applications, though any data can be stored (up to the capacity limit of a disc).

Some CDs hold both computer data and audio with the latter capable of being played on a CD player,

whilst data (such as software or digital video) is only usable on a computer (such as PC CD-ROMs). These are called Enhanced CDs.

Although many people use lowercase letters in this acronym, proper presentation is in all capital letters with a hyphen between CD and ROM.

It was also suggested by some, especially soon after the technology was first released, that CD-ROM was an acronym for "Compact Disc read-only-media",

or that it was a more 'correct' definition. This was not the intention of the original team who developed the CD-ROM,

and common acceptance of the 'memory' definition is now almost universal.

This is probably in no small part due to the widespread use of other 'ROM' acronyms such as Flash-ROMs and EEPROMs where 'memory' is usually the correct term.

Access time is the time delay or latency between a request to an electronic system, and the access being completed or the requested data returned.

* In a telecommunications system, access time is the delay between the start of an access attempt and successful access.

Access time values are measured only on access attempts that result in successful access.

* In a computer, it is the time interval between the instant at which an instruction control unit initiates a call for data or a request to store data,

and the instant at which delivery of the data is completed or the storage is started.

* In disk drives, disk access time is the time required for a computer to process data from the processor and then retrieve the required data from a storage device,

such as a hard drive. For hard drives, disk access time is determined by a sum of the seek time, rotational delay and transfer time.

o Seek time - is the time for the access arm to reach the desired disk track.

o Rotational delay - the delay for the rotation of the disk to bring the required disk sector under the read-write mechanism.

It greatly depends on rotational speed of a disk, measured in revolutions per minute (RPM).

o Transfer time - time during which data is actually read or written to medium, with a certain throughput.

Empirically, the average rotational latency in milliseconds for such a drive is about 30000/RPM (the time for half a revolution at the given rotational speed).

Taken together, these delays mean that the time between the CPU requesting data and the first byte being returned is commonly between 10 and 20 milliseconds.
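The 30000/RPM relation can be evaluated directly for common spindle speeds:

```python
def avg_rotational_latency_ms(rpm):
    # One revolution takes 60000/rpm milliseconds; on average the desired
    # sector is half a revolution away, hence 30000/rpm.
    return 30000 / rpm

for rpm in (4200, 5400, 7200, 10000, 15000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```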

In computing, memory latency is the time between initiating a request for a byte or word in memory until it is retrieved.

If the data are not in the processor's cache,

it takes longer to obtain them, as the processor will have to communicate with the external memory cells.

Latency is therefore a fundamental measure of the speed of memory: the lower the latency, the faster the read operation.

However, memory latency should not be confused with memory bandwidth, which measures the throughput of memory.

It is possible that an advance in memory technology increases bandwidth (an apparent increase in performance),

and yet latency increases (an apparent decrease in performance). For example, DDR memory has been superseded by DDR2,

and yet DDR2 has significantly greater latency when both DDR and DDR2 have the same clock frequency. DDR2 can be clocked faster,

however, increasing its bandwidth; only when its clock is significantly greater than that of DDR will DDR2 have lower latency than DDR.
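A rough sketch of the latency arithmetic behind this comparison; the cycle counts and bus clocks below are illustrative assumptions, not datasheet values:

```python
def latency_ns(cas_cycles, bus_mhz):
    # Latency in nanoseconds = cycles / frequency.
    # The inputs below are illustrative, not real module timings.
    return cas_cycles / bus_mhz * 1000

ddr_200  = latency_ns(3, 200)   # DDR at a 200 MHz bus, 3-cycle latency
ddr2_200 = latency_ns(5, 200)   # DDR2 at the same clock, 5-cycle latency
ddr2_400 = latency_ns(5, 400)   # DDR2 clocked twice as fast

# At the same clock, DDR2's extra cycles mean higher latency;
# only with a significantly faster clock does DDR2 win.
assert ddr2_200 > ddr_200 > ddr2_400
```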

Memory latency is also the time between initiating a request for data and the beginning of the actual data transfer. On a disk,

latency is the time it takes for the selected sector to come around and be positioned under the read/write head.

In computer technology, an opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed.

Their specification and format are laid out in the instruction set architecture of the processor in question

(which may be a general CPU or a more specialized processing unit). Apart from the opcode itself,

an instruction normally also has one or more specifiers for operands (i.e. data) on which the operation should act,

although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers,

as well as others (the x86 architecture for instance) with a more complicated, varied length structure. [1]

In computer programming languages, the definitions of operator and operand are almost the same as in mathematics.

Additionally, in assembly language, an operand is a value (an argument) on which the instruction, named by mnemonic,

operates. The operand may be a processor register, a memory address, a literal constant, or a label.

A simple example (in the PC architecture) is

MOV DS, AX

where the value in register operand 'AX' is to be moved into register 'DS'. Depending on the instruction, there may be zero, one, two, or more operands.

Execution of an instruction

The context in which execution takes place is crucial. Very few programs are executed on a bare machine.

Programs usually contain implicit and explicit assumptions about resources available at the time of execution.

Most programs are executed with the support of an operating system and run-time libraries specific to the source language that provide crucial

services not supplied directly by the computer itself. This supportive environment, for instance,

usually decouples a program from direct manipulation of the computer peripherals, providing more general, abstract services instead.


Basic five-stage pipeline in a RISC machine (IF = Instruction Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB = Register write back)


An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput

(the number of instructions that can be executed in a unit of time).

Pipelining assumes that, even with a single instruction stream (SISD), successive instructions in a program sequence will overlap in execution,

as suggested in the pipeline diagram above (vertical 'i' instructions, horizontal 't' time).

Most modern CPUs are driven by a clock. The CPU consists internally of logic and flip flops. When the clock signal arrives,

the flip flops take their new value and the logic then requires a period of time to decode the new values.

Then the next clock pulse arrives and the flip flops again take their new values,

and so on. By breaking the logic into smaller pieces and inserting flip flops between the pieces of logic,

the delay before the logic gives valid outputs is reduced. In this way the clock period can be reduced.

For example, the RISC pipeline is broken into five stages with a set of flip flops between each stage.

1. Instruction fetch

2. Instruction decode and register fetch

3. Execute

4. Memory access

5. Register write back

Hazards: When a programmer (or compiler) writes assembly code,

they make the assumption that each instruction is executed before execution of the subsequent instruction is begun.

This assumption is invalidated by pipelining. When this causes a program to behave incorrectly, the situation is known as a hazard.

Various techniques for resolving hazards such as forwarding and stalling exist.

A non-pipeline architecture is inefficient because some CPU components (modules) are idle while another module is active during the instruction cycle.

Pipelining does not completely eliminate idle time in a CPU, but making those modules work in parallel significantly improves program execution.

Processors with pipelining are organized internally into stages which can work semi-independently on separate jobs.

Each stage is organized and linked into a 'chain', so each stage's output feeds the next stage until the job is done.

This organization of the processor allows overall processing time to be significantly reduced.

Unfortunately, not all instructions are independent. In a simple pipeline, completing an instruction may require 5 stages.

To operate at full performance, this pipeline will need to run 4 subsequent independent instructions while the first is completing.

If 4 instructions that do not depend on the output of the first instruction are not available,

the pipeline control logic must insert a stall or wasted clock cycle into the pipeline until the dependency is resolved.

Fortunately, techniques such as forwarding can significantly reduce the cases where stalling is required.

While pipelining can in theory increase performance over an unpipelined core by a factor of the number of stages

(assuming the clock frequency also scales with the number of stages), in reality, most code does not allow for ideal execution.
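The cycle counts behind this speedup argument can be sketched as follows, assuming a five-stage pipeline:

```python
def cycles_unpipelined(n_instructions, stages=5):
    # Without pipelining, each instruction occupies the whole datapath
    # for all of its stages before the next one can start.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages=5, stalls=0):
    # With pipelining, once the pipe is full one instruction completes
    # per cycle, plus any stall (wasted) cycles from hazards.
    return stages + (n_instructions - 1) + stalls

n = 100
speedup = cycles_unpipelined(n) / cycles_pipelined(n)
# speedup approaches the stage count (5) but never quite reaches it,
# and hazards (stalls > 0) pull it down further
```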

Depending on the architecture, the operands may be register values, values in the stack, other memory values,

I/O ports, etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying,

logical operations, and program control, as well as special instructions (such as CPUID and others).

An instruction cycle (also called the fetch-and-execute cycle or

fetch-decode-execute cycle (FDX)) is the time period during which a computer reads and processes a machine language instruction from its memory, or

the sequence of actions that the central processing unit (CPU) performs to execute each machine code instruction in a program.

The name fetch-and-execute cycle is commonly used. The instruction must be fetched from main memory,

and then executed by the CPU. This is fundamentally how a computer operates, with its CPU reading and executing a series of instructions

written in its machine language. From this arise all functions of a computer familiar from the user's end.

The fundamental operation of most CPUs, regardless of the physical form they take,

is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in

some kind of computer memory. There are four steps that nearly all von Neumann CPUs use in their operation: fetch, decode, execute, and writeback.
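The fetch-decode-execute cycle can be sketched as a toy interpreter loop; the three-field instruction format here is a made-up illustration, not a real machine language:

```python
# Toy fetch-decode-execute loop over a hypothetical instruction format.
program = [
    ("LOAD", "A", 7),     # A = 7
    ("LOAD", "B", 5),     # B = 5
    ("ADD",  "A", "B"),   # A = A + B
    ("HALT", None, None),
]

registers = {"A": 0, "B": 0}
pc = 0                             # program counter

while True:
    instruction = program[pc]      # fetch the instruction at the PC
    op, dst, src = instruction     # decode it into opcode and operands
    pc += 1
    if op == "LOAD":               # execute, then write back the result
        registers[dst] = src
    elif op == "ADD":
        registers[dst] = registers[dst] + registers[src]
    elif op == "HALT":
        break
# registers["A"] is now 12
```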

Abbreviation for central processing unit, and pronounced as separate letters.

The CPU is the brains of the computer. Sometimes referred to simply as the central processor, but more commonly called the processor,

the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.

On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations,

the CPU is housed in a single chip called a microprocessor.

Since the 1970s, the microprocessor class of CPUs has almost completely overtaken all other CPU implementations.

The CPU itself is an internal component of the computer.

Modern CPUs are small and square and contain multiple metallic connectors or pins on the underside.

The CPU is inserted directly into a CPU socket, pin side down, on the motherboard.

Each motherboard will support only a specific type or range of CPU so you must check the motherboard manufacturer's specifications

before attempting to replace or upgrade a CPU. Modern CPUs also have an attached heat sink and small fan that go directly on top of the CPU to help dissipate heat.

Two typical components of a CPU are the following:

* The arithmetic logic unit (ALU), which performs arithmetic and logical operations.

* The control unit (CU), which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.

Abbreviation of arithmetic logic unit, the part of a computer that performs all arithmetic computations, such as addition and multiplication, and all comparison operations.

The ALU is one component of the CPU (central processing unit).

Short for control unit, it is a typical component of the CPU that implements the microprocessor instruction set.

It extracts instructions from memory and decodes and executes them, and sends the necessary signals to the ALU to perform the operation needed. Control Units

are either hardwired (instruction register is hardwired to rest of the microprocessor) or micro-programmed.

In computing, "word" is a term for the natural unit of data used by a particular computer design.

A word is simply a fixed-sized group of bits that are handled together by the machine.

The number of bits in a word (the word size or word length) is an important characteristic of a computer architecture.

The size of a word is reflected in many aspects of a computer's structure and operation.

The majority of the registers in the computer are usually word-sized. The typical numeric value manipulated by the computer is probably word sized.

The amount of data transferred between the processing part of the computer and the memory system is most often a word.

An address used to designate a location in memory often fits in a word.
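Word-sized arithmetic wrapping can be sketched as follows, assuming a 32-bit word:

```python
WORD_BITS = 32
WORD_MASK = (1 << WORD_BITS) - 1   # 0xFFFFFFFF for a 32-bit word

def add_words(a, b):
    # Arithmetic in a fixed-size word wraps around modulo 2**WORD_BITS,
    # just as it does in a word-sized hardware register.
    return (a + b) & WORD_MASK

add_words(0xFFFFFFFF, 1)   # -> 0 (the carry out of the word is lost)
```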

Also called clock rate, the speed at which a microprocessor executes instructions.

Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components.

The CPU requires a fixed number of clock ticks (or clock cycles) to execute each instruction. The faster the clock,

the more instructions the CPU can execute per second.

Clock speeds are expressed in megahertz (MHz) or gigahertz (GHz).

The internal architecture of a CPU has as much to do with a CPU's performance as the clock speed,

so two CPUs with the same clock speed will not necessarily perform equally. Whereas an Intel 80286 microprocessor requires 20 cycles to multiply two numbers,

an Intel 80486 or later processor can perform the same calculation in a single clock tick. These newer processors would therefore be roughly 20 times faster at such calculations than the older processors, even if their clock speeds were the same.

In addition, some microprocessors are superscalar, which means that they can execute more than one instruction per clock cycle.
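The 80286-versus-80486 comparison reduces to simple cycle arithmetic; the 66 MHz clock below is an illustrative assumption, and any common clock gives the same 20x ratio:

```python
def time_per_multiply_us(cycles_per_multiply, clock_mhz):
    # time = cycles / frequency (MHz gives microseconds directly)
    return cycles_per_multiply / clock_mhz

clock = 66.0                          # MHz, assumed the same for both CPUs
t_286 = time_per_multiply_us(20, clock)   # 20 cycles per multiply
t_486 = time_per_multiply_us(1, clock)    # 1 cycle per multiply
# t_286 / t_486 == 20: at the same clock, the 80486 multiplies 20x faster
```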

Like CPUs, expansion buses also have clock speeds. Ideally, the CPU clock speed and the bus clock speed should be the same so that neither component

slows down the other. In practice, the bus clock speed is often slower than the CPU clock speed, which creates a bottleneck.

This is why new local buses, such as AGP, have been developed.

(n.) A special, high-speed storage area within the CPU. All data must be placed in a register before it can be processed.

For example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register.

(The register can contain the address of a memory location where data is stored rather than the actual data itself.)

The number of registers that a CPU has and the size of each (number of bits) help determine the power and speed of a CPU.

For example a 32-bit CPU is one in which each register is 32 bits wide. Therefore, each CPU instruction can manipulate 32 bits of data.

Usually, the movement of data in and out of registers is completely transparent to users, and even to programmers.

Only assembly language programs can manipulate registers. In high-level languages,

the compiler is responsible for translating high-level operations into low-level operations that access registers.
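The register-based computation described above can be sketched in a toy model; the register names R0-R2 are hypothetical:

```python
# Sketch of register-based computation: operands must be moved into
# registers before the ALU can multiply them, and the result also
# lands in a register. Register names are hypothetical.
registers = {"R0": 0, "R1": 0, "R2": 0}

def load(reg, value):
    registers[reg] = value

def mul(dst, src_a, src_b):
    # The ALU reads both operands from registers and writes the
    # result back into a register.
    registers[dst] = registers[src_a] * registers[src_b]

load("R0", 6)
load("R1", 7)
mul("R2", "R0", "R1")
# registers["R2"] == 42
```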


