difference between cpu types

ukeitaro

What's the difference among Intel's different CPU types, such as the Pentium 4, Celeron, Pentium M, Pentium D, etc.? Is there a structural difference between the Pentium 1, 2, 3, and 4 other than the speed?

Same question for AMD processors like the Sempron, Duron, XP, and 64 (what does it mean for a CPU to be 64-bit?).

Which processors are better for gaming?
 
AMD, hands down. The Athlon 64 owns Intel in every sense of the word. I don't know the difference between a P1 and a P4, but I do know a P4 is much better than a P1, and the Pentium M is for laptops and such. Go with AMD.
 
The Intel cache is lower than AMD's. And the 64-bit Intel CPU is a patch over an actual 32-bit CPU.

AMD 64s are true 64-bit CPUs, and that's why you get lots of performance... not to mention the cache.
 
First off... for the Pentium line the difference is mostly clock speed, i.e. 3.4 GHz as opposed to 2.14 GHz. The Pentium M runs slower, but it's light on battery usage, and because it runs slower it stays cooler and doesn't have to throttle down to cool off. The Pentium D is dual core, with two CPU cores on one chip: twice the CPU, twice the processing. It's mostly clock speed, really; there is a difference in structure between the generations, but it comes down to architecture, FSB and clock speeds. As for AMD, they sport lower clock speeds but run fast, because they don't heat up so much. As for what 64-bit computing is:

In computer science, 64-bit is an adjective used to describe integers, memory addresses or other data units that are at most 64 bits (8 octets) wide, or to describe CPU and ALU architectures based on registers, address buses, or data buses of that size.

As of 2004, 64-bit CPUs are common in servers, and have recently been introduced to the (previously 32-bit) mainstream personal computer arena in the form of the AMD64, EM64T, and PowerPC 970 (or "G5") processor architectures.

Though a CPU may be 64-bit internally, its external data bus or address bus may have a different size, either larger or smaller, and the term is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses, and may occasionally be referred to as "64-bit" for this reason. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data. Without further qualification, however, a computer architecture described as "64-bit" generally has integer registers that are 64 bits wide and thus directly supports dealing both internally and externally with 64-bit "chunks" of data.

Registers in a processor are generally divided into three groups: integer, floating point, and other. In all common general purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory). The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.

Nearly all common general purpose processors (with the notable exception of the ARM and most 32-bit MIPS implementations) have integrated floating point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the AMD64 architecture defines an SSE unit which includes 16 128-bit wide registers, and the traditional x87 floating point unit defines 8 64-bit registers in a stack configuration. By contrast, the 64-bit Alpha family of processors defines 32 64-bit wide floating point registers in addition to its 32 64-bit wide integer registers.


Most CPUs are currently (c. 2003) designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory — the total amount of data the computer can keep in its working area — is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses, or 4 gigabytes of RAM memory, could be referenced. At the time these architectures were devised, 4 gigabytes of memory was so far beyond the typical quantities available in installations that this was considered to be enough "headroom" for addressing. 4-gigabyte addresses were considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.

However, with the march of time and the continual reductions in the cost of memory (see Moore's Law), by the early 1990s installations with quantities of RAM approaching 4 gigabytes began to appear, and the use of virtual memory spaces greater than the four gigabyte limit became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with Apple Computer's PowerMac desktop line as of 2003 and its iMac home computer line (as of 2004) both using 64-bit processors (Apple calls it the G5 chip), and AMD's "AMD64" architecture (and Intel's "EM64T") becoming common in high-end PCs.

32 vs 64 bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported through either a hardware compatibility mode (in which the new processors support an older 32-bit instruction set as well as the new modes), through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor die (as with the Itanium 2 processors from Intel). One significant exception to this is the AS/400, whose software runs on a virtual ISA which is implemented in low-level software. This software, called TIMI, is all that has to be rewritten to move the entire OS and all software to a new platform, such as when IBM transitioned their line from 32-bit POWER to 64-bit POWER.

While 64-bit architectures indisputably make working with huge data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably-priced 32-bit systems for other tasks.

Theoretically, some programs could well be faster in 32-bit mode. Instructions for 64-bit computing take up more storage space than the earlier 32-bit ones, so it is possible that some 32-bit programs will fit into the CPU's high-speed cache while equivalent 64-bit programs will not. However, in applications like scientific computing, the data being processed often fits naturally in 64-bit chunks, and will be faster on a 64-bit architecture because the CPU will be designed to process such information directly rather than requiring the program to perform multiple steps. Such assessments are complicated by the fact that in the process of designing the new 64-bit architectures, the instruction set designers have also taken the opportunity to make other changes that address some of the deficiencies in older instruction sets by adding new performance-enhancing facilities (such as the extra registers in the AMD64 design).

Pros and cons

A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of memory. This is not entirely true:

* Some operating systems reserve portions of each process' address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, Windows XP DLLs and userland OS components are mapped into each process' address space, leaving only 2 or 3 GB (depending on the settings) address space available under Windows XP, even if the computer has 4 GB of RAM. This restriction is not present in 64-bit Windows.
* Memory mapping of files is becoming more of a problem on 32-bit architectures, especially with the introduction of relatively cheap recordable DVD technology. A 4 GB file is no longer uncommon, and such large files cannot be memory mapped easily on 32-bit architectures. This is an issue, as memory mapping remains one of the most efficient disk-to-memory transfer methods when properly implemented by the OS.

The main disadvantage of 64-bit architectures is that relative to 32-bit architectures the same data occupies slightly more space in memory (due to swollen pointers and possibly other types and alignment padding). This increases the memory requirements of a given process, and can have implications for efficient processor cache utilisation. Maintaining a partial 32-bit data model is one way to handle this, and is in general reasonably effective.

Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common recurring problem is that some programmers assume that pointers (variables that store memory addresses) have the same length as some other data type. Programmers assume they can transfer quantities between these data types without losing information. Those assumptions happen to be true on some 32 bit machines (and even some 16 bit machines), but they are no longer true on 64 bit machines. The C programming language and its descendant C++ make it particularly easy to make this sort of mistake.

To avoid this mistake in C and C++, the sizeof() operator can be used to determine the size of these primitive types if decisions based on their size need to be made at run time. Also, limits.h in the C99 standard and climits in the C++ standard give more helpful info; sizeof() only returns the number of bytes, which is sometimes misleading, because the size of a byte is also not well defined in C or C++. One needs to be careful to use the ptrdiff_t type (in the standard header <stddef.h>) when doing pointer arithmetic; too much code incorrectly uses "int" or "long" instead.

Hope this helps... the Duron is AMD's old budget CPU, the Athlon XP is the obsolete AMD desktop CPU, and the new 64s are the faster ones. The best processors for gaming are AMD's, just because of the processing capabilities, faster FSB speeds, and 64-bit architecture... yeah.
 
ok, i'm understanding the differences better now, thx man.

what does the 3500+ on an amd64 mean? i think i've seen like 3100 or 3200 or something. is the higher the number the better?
 
Basically yeah, ranges from AMD 64 3000+ to 4000+, then you've got your AMD 64 FX-55 and 57 and then the dual core processors start, going from 4200 to 4800, might go further I'm unsure.
 
Yes, they are performance ratings (PRs). The PR of an Athlon 64 is supposed to relate to the clock speed in MHz of a Pentium 4, i.e. an Athlon 64 3200+ is meant to be equal to a 3.2 GHz P4. Of course, that's not a great way to compare, as P4s have shown to be better at video encoding and editing, whilst the 64 has the clear advantage in gaming.
 
kenlo said:
What's the difference among Intel's different CPU types, such as the Pentium 4, Celeron, Pentium M, Pentium D, etc.? Is there a structural difference between the Pentium 1, 2, 3, and 4 other than the speed?

Same question for AMD processors like the Sempron, Duron, XP, and 64 (what does it mean for a CPU to be 64-bit?).

Which processors are better for gaming?

The Intel Celeron and AMD Duron are the cheapest CPUs to get because they have lower FSB speeds and a cut-down L2 cache. They're a nice choice if you plan on upgrading a lot, but there's no comparing their performance to the likes of the Intel Pentium 4 or the AMD Athlon 64.

Then you have the Athlon XP and Sempron lines, which are processors with higher clock rates; that's it. The Athlon 64 is the leader in AMD's lineup, while the Pentium 4 is Intel's mainstream CPU. The Pentium D is new, supporting dual-core processing as well as 64-bit: Intel's strategy to compete with AMD's line of advanced 64-bit CPUs.

The Pentium M is a mobile processor for laptops and is currently matched up with AMD's Turion 64. You will also hear the Pentium M mentioned alongside Intel Centrino, because Centrino is the platform (CPU, chipset and wireless) that the Pentium M is part of. The Pentium M with Centrino technology is the best so far for mobile processing (notebooks): the CPU uses less power and has higher performance output.

That's it in a nutshell. If you want more specifics, just post it. Oh, Athlon 64 for gaming...
 
The Duron, Celeron and Athlon XP are old CPUs which you shouldn't get. The old Celeron has been replaced by the much better Celeron D as Intel's value CPU range, whilst the Duron and Athlon XP have been replaced by the Sempron and Athlon 64, respectively.

AMD's current chips generally consume less energy and run cooler than comparable Intel chips.

And just to let you know, VIA's new C7 CPU looks to be a good budget chip, as it consumes a maximum of 20 W, but around 0.1 W on average (if I remember correctly).
 