Brute force is the name of the game, and nVidia is the main player behind this game. Since its incorporation in 1993, nVidia has been the primary force revolutionizing the graphics card industry. Even though nVidia released the GeForce 4 GPU only two months ago, the NV25 was reportedly ready to launch as early as December 2001; however, the tremendous success of the GeForce 3 line of GPUs delayed the initial launch of the GeForce 4. The only notable threat to nVidia today is posed by ATI Technologies, which somehow manages to go head to head with nVidia in almost every respect.
nVidia released its new GeForce 4 line of GPUs in two flavours: the Titanium line for high-end performance seekers, and the MX series for budget-conscious gamers. When the GeForce 3 GPU was released, its programmable nfiniteFX engine opened up a whole new chapter in the gaming industry. So what has the new GeForce 4 GPU (code-named NV25) been given to replace the GeForce 3? Let's find out…
One of the primary improvements in the GeForce 4 GPU is the inclusion of two vertex shaders. Most vertex shader operations run in parallel with one another, so an additional vertex shader unit directly increases overall performance. At present we really don't see many games utilizing the full capabilities of the GeForce 4, but that doesn't mean game developers will simply sit idle; we can expect to see some really cool games that make full use of this GPU.
For those who are unaware of what exactly a vertex shader is, here's an extract from our GeForce 3 article:
"A 3D object in a scene is brought to life through a complex mesh of triangles. A triangle is composed of three vertices; a vertex is the point where two edges of the triangle meet. A vertex shader adds effects to each vertex, adding to the 3D effect of the scene. This procedure involves millions of mathematical calculations and is very complex, requiring very fast processors, so realism in this respect was limited until the GeForce 3 GPU came along. It incorporates a fully programmable vertex shader (embedded in the nfiniteFX engine); combined with the massive power of the GeForce 3 GPU and its 57 million transistors, this gives unprecedented flexibility to game developers, who can now build realistic 3D models, render them in real time, and add specific effects to each vertex using the nfiniteFX engine. This allows more complex scenes to be rendered, brings never-before-seen effects to 3D scenes, and takes 3D a step closer to reality. You will see examples as you read ahead with our review."
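To make the extract above concrete, here is a minimal Python sketch (our own illustration, not actual GPU shader code, and every name in it is our own invention) of the kind of per-vertex work a vertex shader does: transform each vertex by a matrix, then compute a simple per-vertex diffuse lighting term.

```python
# Illustrative sketch only: the per-vertex math a vertex shader performs,
# written as plain Python. Real shaders run this on the GPU, per vertex.

def transform(vertex, matrix):
    """Multiply a 3D vertex (x, y, z, 1) by a 4x4 row-major matrix."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(v[i] * matrix[i][j] for i in range(4)) for j in range(3))

def diffuse(normal, light_dir):
    """Lambert term: clamp(N . L, 0, 1), assuming unit-length vectors."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))

def shade_vertices(vertices, normals, matrix, light_dir):
    """Each vertex is processed independently, which is exactly why a
    second vertex unit can simply take half of the workload."""
    return [(transform(v, matrix), diffuse(n, light_dir))
            for v, n in zip(vertices, normals)]

# One triangle, translated +1 on x, lit from straight ahead (+z).
move_right = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1]]
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
norms = [(0, 0, 1)] * 3
print(shade_vertices(tri, norms, move_right, (0, 0, 1)))
```

Since the per-vertex calculations don't depend on each other, splitting the vertex list across two units roughly halves the time spent on this stage, which is the whole point of the GeForce 4's dual vertex shaders.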
Since the Chaintech card we're reviewing today is a GeForce4 MX 440, we should mention that the GeForce 4 MX chip does not include this second vertex unit; it's only offered in the Titanium series.
There have been no improvements or add-ons to the pixel shader; it remains as it was at the original launch of the GeForce 3. Note, however, that both the pixel shader and the vertex shader operate at a higher clock speed than on the GeForce 3. A pixel needs no introduction: it's the smallest element that makes up an image, and pixels determine how the image looks. nVidia introduced per-pixel shading with the GeForce 2 GPU, and the GeForce 3 took it to another level. The GeForce 3 has a programmable pixel shader that lets the game developer, rather than the GPU, determine how a pixel should be shaded, instead of the traditional approach of selecting the shading from a preset palette. This sets the stage for images to achieve realism as never before: games are now limited only by the creative minds of their developers, as opposed to what it was a few years back.
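As a rough sketch of that difference (again our own Python illustration, not shader code, and the palette values are made up for the example), fixed-function shading snaps each pixel to a preset shade, while a programmable pixel shader can run arbitrary math per pixel, here a simple Lambert term computed from a per-pixel normal:

```python
# Illustrative sketch only: fixed-function vs programmable per-pixel shading.

PALETTE = [0, 64, 128, 255]  # fixed-function: a handful of preset shade levels

def fixed_shade(level):
    """Old way: snap the pixel to the nearest preset shade."""
    return min(PALETTE, key=lambda p: abs(p - level))

def programmable_shade(normal, light_dir, base=255):
    """New way: compute the shade per pixel from arbitrary inputs,
    e.g. a per-pixel normal, as in bump mapping."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return int(base * max(0.0, n_dot_l))

print(fixed_shade(100))  # snaps to the nearest palette entry, 128
print(programmable_shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8)))
```

The fixed path can only ever produce one of the palette entries, whereas the programmable path produces a different value for every combination of normal and light direction, which is what lets developers, not the GPU, decide how each pixel ends up.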
The biggest improvement in the GeForce 4 GPU's performance comes from the new Lightspeed Memory Architecture II. nVidia seems to have optimized the memory architecture to extract every ounce of memory bandwidth it can get. Here's an extract from the nVidia site highlighting the new improvements integrated into the memory architecture.