
Thursday, December 21, 2006

LFSR - Linear Feedback Shift Registers

A Linear Feedback Shift Register (LFSR) is a shift register whose output is cunningly manipulated and fed back into its input in such a way as to cause the circuit to endlessly cycle through a sequence of patterns.

Many-to-one implementations
LFSRs are simple to construct and are useful for a wide variety of applications, but are often sadly neglected by designers. One of the more common forms of LFSR is formed from a simple shift register with feedback from two or more points, or taps, in the register chain (Fig 1).


Fig 1. LFSR with XOR feedback path.

The taps in this example are at bit 0 and bit 2, and can be referenced as [0,2]. All of the register elements share a common clock input, which is omitted from the symbol for reasons of clarity. The data input to the LFSR is generated by XOR-ing or XNOR-ing the tap bits; the remaining bits function as a standard shift register. The sequence of values generated by an LFSR is determined by its feedback function (XOR versus XNOR) and tap selection. For example, consider two 3-bit XOR based LFSRs with different tap selections (Fig 2).


Fig 2. Comparison of alternative tap selections.

Both LFSRs start with the same initial value but, due to the different taps, their sequences rapidly diverge as clock pulses are applied. In some cases an LFSR will end up cycling round a loop comprising a limited number of values. However, both of the LFSRs shown in Fig 2 are said to be of maximal length because they sequence through every possible value (excluding all of the bits being 0) before returning to their initial values.

A binary field with 'n' bits can assume 2^n unique values, but a maximal-length LFSR with 'n' register bits will only sequence through (2^n – 1) values. This is because LFSRs with XOR feedback paths will not sequence through the value where all the bits are 0, while their XNOR equivalents will not sequence through the value where all the bits are 1 (Fig 3).


Fig 3. Comparison of XOR versus XNOR feedback paths.
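
To make the structure concrete, here is a minimal Verilog sketch of the 3-bit many-to-one LFSR of Fig 1, with taps at [0,2] and an XOR feedback path. The module name, port names, shift direction and reset behaviour are assumptions for illustration; the reset loads a non-zero seed because an XOR-based LFSR would lock up in the all-0s state.

module lfsr3 (
  input        clk,
  input        reset_n,
  output [2:0] q
);
  reg  [2:0] q_reg;
  wire       feedback = q_reg[0] ^ q_reg[2]; // XOR of the tap bits [0,2]

  always @(posedge clk or negedge reset_n)
    if (!reset_n)
      q_reg <= 3'b111;                 // any non-zero seed; all-0s would lock up
    else
      q_reg <= {q_reg[1:0], feedback}; // shift and feed the XOR result back in

  assign q = q_reg;
endmodule

Swapping the XOR for an XNOR gives the complementary behaviour shown in Fig 3; in that case the forbidden state becomes all 1s, so the seed must avoid that value instead.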


Monday, December 18, 2006

Verilog Free Simulator and Viewer

Icarus Verilog: This is the best free Verilog simulator out there; it is a simulation and synthesis tool. It operates as a compiler, compiling source code written in Verilog (IEEE-1364) into some target format. For batch simulation, the compiler can generate an intermediate form called vvp assembly, which is executed by the "vvp" command. Icarus continues to get better and better. It is now being used for real design work by companies as a simulator, and is starting to be useful as a synthesizer for a Xilinx FPGA flow as well.
Dinotrace: a freeware VCD waveform viewer from Veritools.

I have tested the above combination on Linux. Both work fine... and it is a good combination to start with.

Sample Design:
^^^^^^^^^^^^
//-----------------------------------------------------
module encoder_using_if(
binary_out , // 4-bit binary output
encoder_in , // 16-bit one-hot input
enable       // Enable for the encoder
);
//-----------Output Ports---------------
output [3:0] binary_out ;
//-----------Input Ports----------------
input enable ;
input [15:0] encoder_in ;
//------------Internal Variables--------
reg [3:0] binary_out ;
//-------------Code Start---------------
// binary_out defaults to 0, so any input not listed below
// (including 16'h0001) encodes to 0.
always @ (enable or encoder_in)
begin
  binary_out = 0;
  if (enable) begin
    if (encoder_in == 16'h0002) begin
      binary_out = 1;
    end else if (encoder_in == 16'h0004) begin
      binary_out = 2;
    end else if (encoder_in == 16'h0008) begin
      binary_out = 3;
    end else if (encoder_in == 16'h0010) begin
      binary_out = 4;
    end else if (encoder_in == 16'h0020) begin
      binary_out = 5;
    end else if (encoder_in == 16'h0040) begin
      binary_out = 6;
    end else if (encoder_in == 16'h0080) begin
      binary_out = 7;
    end else if (encoder_in == 16'h0100) begin
      binary_out = 8;
    end else if (encoder_in == 16'h0200) begin
      binary_out = 9;
    end else if (encoder_in == 16'h0400) begin
      binary_out = 10;
    end else if (encoder_in == 16'h0800) begin
      binary_out = 11;
    end else if (encoder_in == 16'h1000) begin
      binary_out = 12;
    end else if (encoder_in == 16'h2000) begin
      binary_out = 13;
    end else if (encoder_in == 16'h4000) begin
      binary_out = 14;
    end else if (encoder_in == 16'h8000) begin
      binary_out = 15;
    end
  end
end
endmodule

TestBench:
`timescale 1ns/1ps
`include "encoder_using_if.v"
module encoder_test;
wire [3:0] binary_out;
reg enable;
reg [15:0] encoder_in;

encoder_using_if encode(.binary_out (binary_out), .encoder_in (encoder_in) , .enable (enable));

initial begin
  #1 enable = 0;
     encoder_in = 16'h0;

  #2 enable = 1;
     encoder_in = 16'h0001;

  #3 encoder_in = 16'h0010;

  #1 $finish;
end

initial begin
  $monitor("Encoder out = %h \n", binary_out);
end

//always @ (enable or encoder_in)
initial begin
  $dumpfile( "foo.vcd" );
  $dumpvars( 0, encoder_test );
end
endmodule

Command Used:
^^^^^^^^^^^^^

iverilog testbench.v
./a.out
./dinotrace foo.vcd

Keywords: Verilog, free simulator, waveform viewer, Icarus Verilog, Dinotrace

LaTeX

What is TeX

TeX (pronounced "Tech") is mainly a low-level programming language aimed at typesetting documents. It is very powerful, but since you have to take care of everything yourself, it is difficult and time-consuming to use for long documents.

What is LaTeX

LaTeX (pronounced either "Lah-tech" or "Lay-tech") is a macro package created by Leslie Lamport based on the TeX typesetting language of Donald Knuth. Its purpose is to produce professional-looking and correctly typeset documents, in particular those with mathematical formulae. It is currently maintained by the LaTeX3 project. Numerous authors have contributed extensions, called packages or styles, to LaTeX. A number of these are usually bundled with a TeX/LaTeX software distribution or can be found in the Comprehensive TeX Archive Network (CTAN).

Since LaTeX comprises a group of TeX commands, LaTeX document processing follows a programming approach: one creates a plain text file in LaTeX markup, which is then read by the LaTeX macro package to produce the final document.

Obviously, this has its disadvantages compared with a WYSIWYG (What You See Is What You Get) program such as OpenOffice.org Writer or Microsoft Word.

  • One can't see the final result straight away.
  • One needs to know the necessary commands for LaTeX markup.
  • It can sometimes be difficult to obtain a certain 'look'.

On the other hand, there are certain advantages to the markup language approach:

  • The layout, fonts, tables, etc. are consistent throughout.
  • Mathematical formulae can be easily typeset.
  • Indexes, footnotes, references, etc., are generated easily.
  • It encourages correctly structured documents.

The LaTeX-like approach can be called WYSIWYM, i.e. What You See Is What You Mean: you can't see what the final version will look like while typing; you see only the logical structure of the document, and LaTeX takes care of the formatting for you.

The LaTeX document is a plain text file containing the content of the document, with additional markup. When the source file is processed by the macro package, it can produce documents in several formats such as DVI, PDF or PostScript.

Prerequisites

At a minimum, you'll need the following programs to edit LaTeX:

  • An editor (you can use a basic text editor like Notepad, but a dedicated LaTeX editor will be more useful).
    • On Windows, TeXnicCenter ([1]) is a popular free and open source LaTeX editor.
    • On *nix (including Mac OS X) systems, Emacsen and gvim provide powerful TeX environments for the tech-savvy, while Texmaker and Kile [2] provide more user-friendly development environments.
  • The LaTeX binaries and style sheets - e.g. MiKTeX [3] for Windows, teTeX [4] for Linux and teTeX for Mac OS X [5].
  • A DVI viewer to view and print the final result. Usually, a DVI viewer is included in the editor or is available with the binary distribution.

A distribution of LaTeX, with many packages, add-ins, editors and viewers for Unix, Linux, Mac and Windows can be obtained from the TeX users group at http://www.tug.org/texlive/.

Applications within a distribution

Here are the main programs you can expect to find in any (La)TeX distribution:

  • tex: the simplest compiler; it takes a TeX file and creates a DVI file
  • pdftex: takes a TeX file, but creates a PDF file
  • latex: the most used one; it takes a LaTeX file and creates a DVI file
  • pdflatex: creates a PDF from a LaTeX file
  • dvips: converts a DVI file to PostScript
  • dvipdfm: converts a DVI file to PDF

When LaTeX was created, the only format it could create was DVI; PDF support was added later by pdflatex, even though several people still don't use it. As is clear from this short list, PDF files can be created with both pdflatex and dvipdfm; however, the output of pdflatex is much better. DVI is an old format and does not support hyperlinks, for example, while PDF does, so passing through DVI carries the shortcomings of that format over to the PDF. Moreover, the general output will be better using only pdflatex.

Strictly speaking, the document you are writing should be slightly different according to the compiler you are using (latex or pdflatex), but as we will see later, it is possible to add a sort of abstraction layer so that you can ignore which one you are using; the applications will do everything by themselves.

Note that, since LaTeX is just a collection of macros for TeX, if you compile a plain TeX document with a LaTeX compiler (such as pdflatex) it will work, while the opposite is not true: if you try to compile a LaTeX source with a plain TeX compiler you will get nothing but errors.

The following diagram shows the relationships between the (La)TeX source code and all the formats you can create from it:

The boxed red text represents the file formats, the blue text on the arrows represents the commands you have to use, and the small dark green text under the boxes represents the image formats that are supported. Any time you pass through an arrow you lose some information, which might decrease the quality of your document. Therefore, in order to achieve the highest quality in your output file, you should choose the shortest route to reach your target format. This is probably also the most convenient way to obtain an output in your desired format. Starting from a LaTeX source, the best way is to use only latex for a DVI output or pdflatex for a PDF output, converting to PostScript only when it is necessary in order to print the document.

Most of the programs should already be within your LaTeX distribution; the others come with Ghostscript, which is free and multi-platform software as well.

Reference: http://en.wikibooks.org/wiki/LaTeX/Introduction

Script used for compilation and DVI to PDF conversion

run.pl:

#!/usr/bin/perl
# Compile a LaTeX source and convert the result to PDF via DVI and PostScript.
my $text1 = shift @ARGV or die "No tex file is given as input\n";
my @fname = split(/\./, $text1);   # split the file name at the dot

system("latex $text1");
system("dvips -Ppdf $fname[0].dvi");
system("ps2pdf $fname[0].ps $fname[0].pdf");
system("xpdf $fname[0].pdf");

Usage: ./run.pl filename.tex - gives you a PDF file as output directly

Keywords: Latex, Tex, PDF writer, dvi to pdf conversion

Thursday, December 14, 2006

RISC Vs CISC

RISC

The concept was developed by John Cocke of IBM Research during 1974. His argument was based on the notion that a computer uses only about 20% of its instructions, making the other 80% superfluous. A processor based on this concept would use fewer instructions, which would require fewer transistors and make it cheaper to manufacture. By reducing the number of transistors and instructions to only those most frequently used, the computer would get more done in a shorter amount of time. The term 'RISC' (short for Reduced Instruction Set Computer) was later coined by David Patterson, a professor at the University of California, Berkeley.

The RISC concept was used to simplify the design of the IBM PC/XT, and was later used in the IBM RISC System/6000 and Sun Microsystems' SPARC microprocessors. The latter CPU led to the founding of MIPS Technologies, who developed the MIPS RISC microprocessor (Microprocessor without Interlocked Pipeline Stages). Many of the MIPS architects also played an instrumental role in the creation of the Motorola 68000, as used in the first Amigas (MIPS Technologies was later bought by Silicon Graphics). The MIPS processor has continued development, remaining a popular choice in the embedded and low-end markets. At one time, it was suspected the Amiga MCC would use this CPU to reduce the cost of manufacture. However, the consumer desktop market is limited; only the PowerPC processor remains popular among the RISC alternatives. This is mainly due to Apple's continuous use of the series for its PowerMac range.

CISC

CISC (Complex Instruction Set Computer) is a retroactive definition that was introduced to distinguish the design from RISC microprocessors. In contrast to RISC, CISC chips have a large number of different and complex instructions. The argument for its continued use is that chip designers should make life easier for the programmer by reducing the number of instructions required to program the CPU. Because of the high cost of memory and storage, CISC microprocessors were considered superior due to the requirement for small, fast code. In an age of dwindling memory and hard disk prices, code size has become a non-issue (MS Windows, hello?). However, CISC-based systems still cover the vast majority of the consumer desktop market. The majority of these systems are based upon the x86 architecture or a variant. The Amiga, Atari, and pre-1994 Macintosh systems also use a CISC microprocessor.

RISC Vs. CISC

The argument over which concept is better has been repeated over the past few years. Macintosh owners have elevated the argument to a pseudo-religious level in support of their RISC-based God (the PowerPC sits next to the Steve Jobs statue on every Mac altar). Both positions have been blurred by the argument that we have entered a Post-RISC stage.
RISC: For and Against
RISC supporters argue that it is the way of the future, producing faster and cheaper processors - an Apple Mac G3 offers a significant performance advantage over its Intel equivalent. Instructions are executed over 4x faster, providing a significant performance boost! However, RISC chips require more lines of code to produce the same results and are increasingly complex. This increases the size of the application and the amount of overhead required. RISC developers have also failed to remain competitive with CISC alternatives. The Macintosh market has been damaged by several problems that have affected the availability of 500MHz+ PowerPC chips. In contrast, the PC-compatible market has stormed ahead and has broken the 1GHz barrier. Despite the per-clock advantages of the RISC processor, it cannot compete with a CISC CPU running at twice the clock speed.

CISC: For and Against
As discussed above, CISC microprocessors are more expensive to make than their RISC cousins. However, the average Macintosh is more expensive than the Wintel PC. This is caused by a factor that the RISC manufacturers have no influence over - market forces. In particular, the Wintel market has become the definition of personal computing, creating demand from people who have never used a computer previously. The x86 market has been opened up by the development of several competing processors from the likes of AMD, Cyrix, and Intel. This has continually reduced the price of a CPU over a matter of months. In contrast, the PowerPC Macintosh market is dictated by Apple. This reduces the cost of x86-based microprocessors, while the PowerPC market remains stagnant.

Post-RISC

As the world enters the 21st century, the CISC vs. RISC argument has been swept aside by the recognition that neither term is accurate in its description. The definitions of 'Reduced' and 'Complex' instructions have begun to blur: RISC chips have increased in complexity (compare the PPC 601 to the G4, for example) and CISC chips have become more efficient. The result is processors that are defined as RISC or CISC only by their ancestry. The PowerPC 601, for example, supports more instructions than the Pentium. Yet the Pentium is a CISC chip, while the 601 is considered to be RISC. CISC chips have also gained techniques associated with RISC processors. Intel describe the Pentium II as a CRISC processor, while AMD use a RISC architecture but remain compatible with the dominant x86 CISC processors. Thus it is no longer important which camp the processor comes from; the emphasis has once again been placed upon the operating system and the speed at which it can execute instructions.

EPIC

In the aftermath of the CISC-RISC conflict, a new enemy has appeared to threaten the peace. EPIC (Explicitly Parallel Instruction Computing) was developed by Intel for the server market, though it will undoubtedly appear in desktops over the next few years. The first EPIC processor will be the 64-bit Merced, due for release sometime during 2001 (or 2002, 2003, etc.). The market may be divided between combined CISC-RISC systems at the low end and EPIC at the high end.

Famous RISC microprocessors

ARM

One of the most well known RISC developers is the Cambridge-based Advanced RISC Machines (originally the Acorn RISC Machine project). Their ARM and StrongARM chips power the old Acorn Archimedes and the Apple Newton handwriting-recognition systems. Since the unbundling of ARM from Acorn, Intel have invested a considerable amount of money in the company and have utilized the technology in their processor designs.

Keywords: RISC, CISC, RISC Vs CISC, ARM, DSP Processor

Wednesday, December 13, 2006

IEEE 754 - Floating Point number representation at a Glance

IEEE 754 at a Glance
A floating-point number representation on a computer uses something similar to scientific notation, with a base and an exponent. A scientific representation of 30,064,771 is 3.0064771 x 10^7, whereas 1.001 can be written as 1.001 x 10^0.

In the first example, 3.0064771 is called the mantissa, 10 the exponent base, and 7 the exponent.

IEEE standard 754 specifies a common format for representing floating-point numbers in a computer. Two grades of precision are defined: single precision and double precision. The representations use 32 and 64 bits, respectively. This is shown in Figure 2.



Figure 2: IEEE floating-point formats

In IEEE 754 floating-point representation, each number comprises three basic components: the sign, the exponent, and the mantissa. To maximize the range of possible numbers, the mantissa is divided into a fraction and leading digit. As I'll explain, the latter is implicit and left out of the representation.

The sign bit simply defines the polarity of the number. A value of zero means that the number is positive, whereas a 1 denotes a negative number.

The exponent represents a range of numbers, positive and negative; thus a bias value must be subtracted from the stored exponent to yield the actual exponent. The single precision bias is 127, and the double precision bias is 1,023. This means that a stored value of 100 indicates a single-precision exponent of -27. The exponent base is always 2, and this implicit value is not stored.

For both representations, exponent representations of all 0s and all 1s are reserved and indicate special numbers:

  • Zero: all digits set to 0, sign bit can be either 0 or 1
  • ±∞: exponent all 1s, fraction all 0s
  • Not a Number (NaN): exponent all 1s, non-zero fraction. Two versions of NaN are used to signal the results of invalid operations, such as dividing zero by zero, and indeterminate results, such as operations with uninitialized operand(s).

The mantissa represents the number to be multiplied by 2 raised to the power of the exponent. Numbers are always normalized; that is, represented with one non-zero leading digit in front of the radix point. In binary, the only non-zero digit is 1, so the leading digit is always 1, allowing us to leave it out and use all the mantissa bits to represent the fraction (the digits after the radix point).

Following the previous number examples, here is what the single precision representation of the decimal value 30,064,771 will look like:

The binary integer representation of 30,064,771 is 1 1100 1010 1100 0000 1000 0011. This can be written as 1.110010101100000010000011 x 2^24. The leading digit is omitted, and the fraction—the string of digits following the radix point—is 1100 1010 1100 0000 1000 0011. The sign is positive and the exponent is 24 decimal. Adding the bias of 127 and converting to binary yields an IEEE 754 exponent of 1001 0111.

Putting all of the pieces together, the single representation for 30,064,771 is shown in Figure 3.


Figure 3: 30,064,771 represented in IEEE 754 single-precision format
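
As a cross-check of the walkthrough above, here is a small Verilog sketch that simply assembles the three fields. The module name and the $display formatting are assumptions for illustration only; the field values follow the walkthrough, with the fraction truncated to the 23 available bits, which is exactly the lost LSB discussed in the next section.

module ieee754_example;
  reg        sign;
  reg [7:0]  exponent;
  reg [22:0] fraction;
  reg [31:0] word;

  initial begin
    sign     = 1'b0;                              // positive number
    exponent = 24 + 127;                          // biased exponent = 151 = 8'b1001_0111
    fraction = 23'b1100_1010_1100_0000_1000_001;  // 24-bit fraction truncated to 23 bits
    word     = {sign, exponent, fraction};        // sign | exponent | fraction
    $display("IEEE 754 single-precision pattern: %b (0x%h)", word, word);
  end
endmodule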

Gain Some, Lose Some
Notice that you lose the least significant bit (LSB) of value 1 from the 32-bit integer representation—this is because of the limited precision for this format.

The range of numbers that can be represented with single-precision IEEE 754 representation is ±(2 - 2^-23) x 2^127, or approximately ±10^38.53. This range is astronomical compared to the maximum range of 32-bit integer numbers, which by comparison is limited to around ±2.15 x 10^9. Also, whereas the integer representation cannot represent values between 0 and 1, single-precision floating point can represent values down to ±2^-149, or approximately ±10^-44.85. And we are still using only 32 bits—so this has to be a much more convenient way to represent numbers, right?

The answer depends on the requirements.

  • Yes, because in our example of multiplying 30,064,771 by 1.001, we can simply multiply the two numbers and the result will be extremely accurate.
  • No, because as in the preceding example the number 30,064,771 is not represented with full precision. In fact, 30,064,771 and 30,064,770 are represented by the exact same 32-bit pattern, meaning that a software algorithm will treat the numbers as identical. Worse yet, if you increment either number by 1 a billion times, neither of them will change. By using 64 bits and representing the numbers in double-precision format, that particular example could be made to work, but even double-precision representation will face the same limitations once the numbers get big enough—or small enough.
  • No, because most embedded processor cores' ALUs (arithmetic logic units) support only integer operations, which leaves floating-point operations to be emulated in software. This severely affects processor performance. A 32-bit CPU can add two 32-bit integers with one machine-code instruction; however, a library routine involving bit manipulations and multiple arithmetic operations is needed to add two IEEE single-precision floating-point values. With multiplication and division, the performance gap just increases; thus for many applications, software floating-point emulation is not practical.
Keywords: IEEE 754, Floating point representation

Tuesday, December 12, 2006

Memory Types

Memory Types
There are several different technologies when it comes to memory.

ROM

This is read-only memory: memory that can only be read, but cannot be written to. ROM is used in situations where the data must be held permanently, because it is non-volatile memory - the data is "hard-wired" into the ROM chip. You can store the chip forever and the data will always be there. Besides, the data is very secure. The BIOS is stored on ROM because the user cannot disrupt the information.

There are different types of ROM, too:

  • Programmable ROM (PROM). This is basically a blank ROM chip that can be written to, but only once. It is much like a CD-R, onto which the data is burned once. Some companies use special machinery to write PROMs for special purposes.
  • Erasable Programmable ROM (EPROM). This is just like PROM, except that you can erase the ROM by shining a special ultraviolet light through a window atop the ROM chip for a certain amount of time. Doing this wipes the data out, allowing it to be rewritten.
  • Electrically Erasable Programmable ROM (EEPROM). Also called flash BIOS. This ROM can be rewritten through the use of a special software program. Flash BIOS operates this way, allowing users to upgrade their BIOS.

ROM is slower than RAM, which is why its contents are often shadowed into RAM to increase speed.

RAM
Random Access Memory (RAM) is what most of us think of when we hear the word memory associated with a computer. It is volatile memory, meaning all data is lost when the power is turned off. RAM is used for temporary storage of program data, allowing for optimum performance.

Like ROM, there are different types of RAM:

  • Static RAM (SRAM). This RAM will maintain its data as long as power is provided to the memory chips. It does not need to be re-written periodically. In fact, the only time the data in the memory is refreshed or changed is when an actual write command is executed. SRAM is very fast, but is much more expensive than DRAM. SRAM is often used as cache memory due to its speed.

    There are a few types of SRAM:

  • Async SRAM. An older type of SRAM used in many PCs for L2 cache. It is asynchronous, meaning that it works independently of the system clock. This means that the CPU often finds itself waiting for info from the L2 cache.
  • Sync SRAM. This type of SRAM is synchronous, meaning it is synchronized with the system clock. While this speeds it up, it makes it rather expensive at the same time.
  • Pipeline Burst SRAM. Commonly used. SRAM requests are pipelined, meaning larger packets of data are sent to the memory at once and acted on very quickly. This breed of SRAM can operate at bus speeds higher than 66MHz, so it is often used.
  • Dynamic RAM (DRAM). DRAM, unlike SRAM, must be continually re-written in order for it to maintain its data. This is done by placing the memory on a refresh circuit that re-writes the data several hundred times per second. DRAM is used for most system memory because it is cheap and small.

    There are several types of DRAM, complicating the memory scene even more:

  • Fast Page Mode DRAM (FPM DRAM). FPM DRAM is only slightly faster than regular DRAM. Before there was EDO RAM, FPM RAM was the main type used in PCs. It is pretty slow stuff, with an access time of 120 ns. It was eventually tweaked to 60 ns, but FPM was still too slow to work on the 66MHz system bus. For this reason, FPM RAM was replaced by EDO RAM. FPM RAM is not much used today due to its slow speed, but it is almost universally supported.
  • Extended Data Out DRAM (EDO DRAM). EDO memory incorporates yet another tweak in the method of access. It allows one access to begin while another is being completed. While this might sound ingenious, the performance increase over FPM DRAM is only around 30%. EDO DRAM must be properly supported by the chipset. EDO RAM comes on a SIMM. EDO RAM cannot operate on a bus speed faster than 66MHz, so, with the increasing use of higher bus speeds, EDO RAM has taken the path of FPM RAM.
  • Burst EDO DRAM (BEDO DRAM). Original EDO RAM was too slow for the newer systems coming out at the time. Therefore, a new method of memory access had to be developed to speed up the memory. Bursting was the method devised. This means that larger blocks of data were sent to the memory at a time, and each "block" of data not only carried the memory address of the immediate page, but info on the next several pages. Therefore, the next few accesses would not experience any delays due to the preceding memory requests. This technology increases EDO RAM speed up to around 10 ns, but it did not give it the ability to operate stably at bus speeds over 66MHz. BEDO RAM was an effort to make EDO RAM compete with SDRAM.
  • Synchronous DRAM (SDRAM). SDRAM became the new standard after EDO bit the dust. Its speed is synchronous, meaning that it is directly dependent on the clock speed of the entire system. Standard SDRAM can handle higher bus speeds. In theory, it could operate at up to 100MHz, although it was found that many other variable factors went into whether or not it could stably do so. The actual speed capacity of the module depended on the actual memory chips as well as design factors in the memory PCB itself.

    To get around the variability, Intel created the PC100 standard. The PC100 standard ensures compatibility of SDRAM subsystems with Intel's 100MHz FSB processors. The new design, production, and test requirements created challenges for semiconductor companies and memory module suppliers. Each PC100 SDRAM module required key attributes to guarantee full compliance, such as the use of 8ns DRAM components (chips) capable of operating at 125MHz. This provided a margin of safety in ensuring that the memory module could run at PC100 speeds. Additionally, the SDRAM chips must be used in conjunction with a correctly programmed EEPROM on a properly designed printed circuit board. The shorter the distance the signal needs to travel, the faster it runs; for this reason, there were additional layers of internal circuitry on PC100 modules.

    As PC speeds increased, the same problem was encountered for the 133 MHz bus, so the PC133 standard was developed.
  • RAMBus DRAM (RDRAM). Developed by Rambus, Inc. and endorsed by Intel as the chosen successor to SDRAM. RDRAM narrows the memory bus to 16 bits and runs at up to 800 MHz. Since this narrow bus takes up less space on the board, systems can get more speed by running multiple channels in parallel. Despite the speed, RDRAM has had a tough time taking off in the market because of compatibility and timing issues. Heat is also an issue, but RDRAM modules have heatsinks to dissipate it. Cost is a major issue as well, with manufacturers needing to make major facility changes to produce it and the product cost to consumers being too high for people to swallow.
  • DDR SDRAM. This type of memory is the natural evolution from SDRAM, and most manufacturers prefer it to Rambus because not much needs to be changed to make it. Also, memory makers are free to manufacture it because it is an open standard, whereas they would have to pay license fees to Rambus, Inc. in order to make RDRAM. DDR stands for Double Data Rate: DDR transfers data over the bus on both the rising and falling edges of the clock, effectively doubling the speed over that of standard SDRAM (a minimal sketch of the idea follows below). Of course, chipset support is necessary. DDR SDRAM is now the new standard in PC memory, with ever-increasing speeds coming out, to the point that it can compete quite well against Rambus.
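
To make the double-data-rate idea concrete, here is a minimal Verilog sketch of a DDR output stage: one bit is launched on the rising edge and another on the falling edge, so two data items cross the pin per clock period. The module and signal names are illustrative assumptions only, not part of any real DDR SDRAM interface.

module ddr_out (
  input  clk,
  input  d_rise,   // data to launch on the rising edge
  input  d_fall,   // data to launch on the falling edge
  output q         // double-data-rate output
);
  reg q_rise, q_fall;

  always @(posedge clk) q_rise <= d_rise;   // captured on the rising edge
  always @(negedge clk) q_fall <= d_fall;   // captured on the falling edge

  // While clk is high drive the rising-edge data; while low, the falling-edge data.
  assign q = clk ? q_rise : q_fall;
endmodule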
Ref: http://www.pcmech.com/show/memory/110/


Keywords: Memory, SSRAM, DDR SDRAM, Rambus