DirectX 10 + 1900XT

Shantei

Quick and simple question with a quick and simple answer, unless required explanation :D Will the x1900XT support DX10?
 
There are no DirectX 10 capable video cards out. You'll have to wait for the ATI R600 or the NVIDIA G80 (8800 series) to launch. NVIDIA is aiming for a November or December 2006 launch, and ATI is aiming for a December 2006 or January 2007 launch.
 
Trivium Nate said:
Well, when DX10 comes out we will know, won't we? It's too early to ask

It's not too early to ask... It's simple: all currently available GPUs are DX9 parts. No, they do not support DX10.
 
lhuser said:
Yeah, that R600 is going to suck 250W of your PSU alone...so imagine Crossfire! :eek:

I don't think it will hit 250W, as the PCI-E specification caps power draw at around 200W (or maybe lower) combined from the 6-pin plug and the slot connector itself.
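The power-budget argument above is just addition. A minimal sketch, assuming the commonly cited PCI-E 1.x figures of roughly 75W from the x16 slot and roughly 75W per 6-pin plug (these specific numbers are my assumption, not from the post):

```python
# Rough PCI-E board power budget (assumed figures: ~75W from the
# x16 slot, ~75W per 6-pin auxiliary plug).
SLOT_W = 75
SIX_PIN_W = 75

def board_power_limit(num_six_pin_plugs: int) -> int:
    """Maximum board power: slot budget plus all 6-pin connectors."""
    return SLOT_W + num_six_pin_plugs * SIX_PIN_W

print(board_power_limit(1))  # one plug:  150W
print(board_power_limit(2))  # two plugs: 225W
```

Under those assumed figures, even a two-plug card stays under 250W, which is the point being made.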
 
lhuser said:
Yeah, that R600 is going to suck 250W of your PSU alone...so imagine Crossfire! :eek:

Actually, both the R600 and G80 will use lots of power, up to 150-300W per card. That's mostly due to the massive transistor counts (the G80's GPU die is about 20x20mm and has nearly 700 million transistors) and faster components. More transistors and higher clocks mean more power and more heat.

The 8800 series will have two cards at launch: the 8800 GTX and the 8800 GT. The GT variant will have a traditional fan-cooled heatsink thanks to its dumbed-down specs. The GTX, however, will use a hybrid of a closed-circuit water-cooling loop and a fan, much like the ThermalTake Tidewater if you think about it. That's how hot the 8800 GTX runs. The G80 will still be 90nm because NVIDIA wants to avoid risks. Core clock speeds are estimated to be in the 700-850 MHz range, but it is rumored that the core can reach up to 1.5 GHz on future processes.

The R600 (no retail name has been announced by ATI) is rumored to be delayed until early 2007, so NVIDIA might gain a lead in the DirectX 10 race. The R600 will be THE largest GPU ever made and will be based on the 80nm process, though the 65nm process isn't out of the question yet. It will have 64 unified shader pipelines, 32 ROPs and 32 TMUs. :D It's designed very similarly to the Xbox 360's Xenos GPU, but obviously without the handy embedded DRAM framebuffer daughter-die. It'll have 512MB of GDDR4, rumored to be running faster than the X1950 XTX's memory clock of 1.0 GHz (2.0 GHz effective), so expect fast speeds.
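On the "1.0 GHz (2.0 GHz effective)" figure: GDDR memory transfers data on both clock edges, so the effective data rate is twice the base clock, and peak bandwidth follows from the bus width. A quick sketch of the arithmetic, assuming a 256-bit bus for the X1950 XTX (the bus width is my assumption, not stated in the post):

```python
def effective_rate_ghz(base_clock_ghz: float) -> float:
    """GDDR transfers on both clock edges (double data rate)."""
    return base_clock_ghz * 2

def bandwidth_gb_s(base_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: effective rate x bytes per transfer."""
    return effective_rate_ghz(base_clock_ghz) * bus_width_bits / 8

# Figures from the post: 1.0 GHz GDDR4; 256-bit bus assumed.
print(effective_rate_ghz(1.0))   # 2.0 GHz effective
print(bandwidth_gb_s(1.0, 256))  # 64.0 GB/s peak
```

That's why a "small" bump in base memory clock translates into a big jump in raw bandwidth.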

Before anyone goes ape shit crazy over the R600's 64 unified shader pipelines (some apparently think more pipelines = better), let me explain the whole concept of ATI's Unified Shader Architecture.

The traditional GPU has separate pixel and vertex shaders, each independently doing its own work. However, this can be very inefficient at times, because some games require more pixel processing than vertex processing, or vice versa. When that happens, the pipelines that aren't being used just hang there doing nothing. This is where the Unified Shader Architecture comes in. It has a large pool of shader processors (like the Xbox 360 Xenos GPU's 48 unified shader pipelines or the R600's 64 unified shader pipelines) that can do both pixel and vertex processing. This increases the efficiency of the GPU, because every unit can be working at all times. So if a certain game leans more on vertex processing, the GPU can devote more units to vertex work and fewer to pixel work, or vice versa. ATI has proudly embraced this architecture, and I'm glad they have, because GPUs can then fluctuate and adjust to how games are programmed. Very neat, indeed. :)
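The efficiency argument above can be shown with a toy model: count how many shader units actually stay busy under a lopsided workload. The unit counts and workload numbers below are made up purely for illustration:

```python
# Toy comparison: fixed pixel/vertex split vs a unified shader pool.
# All unit counts and job counts are invented for illustration.

def fixed_busy(pixel_jobs, vertex_jobs, pixel_units=16, vertex_units=8):
    """Dedicated units: each pool can only serve its own job type."""
    return min(pixel_jobs, pixel_units) + min(vertex_jobs, vertex_units)

def unified_busy(pixel_jobs, vertex_jobs, total_units=24):
    """Unified pool: any unit can take either pixel or vertex work."""
    return min(pixel_jobs + vertex_jobs, total_units)

# A vertex-heavy frame: 4 pixel jobs and 20 vertex jobs pending.
print(fixed_busy(4, 20))    # 12 of 24 units busy; the rest sit idle
print(unified_busy(4, 20))  # 24 of 24 units busy; nothing wasted
```

Same total hardware, same workload; the unified pool simply never strands units behind the wrong job type, which is exactly the inefficiency the fixed split suffers from.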
 