
Thursday, January 23, 2014

XFX Radeon R9 290X Boost Edition review: a high-end graphics card with lower power consumption

After several months of retooling older technology as modern-day finely-specified pixel-shufflers, AMD has finally unleashed its brand new headliner, the R9 290X - or Hawaii, as it's more memorably called. As with the other new cards, this is the extra-performance 'X'-rated version, with a standard non-X chip quickly following. The first glance at the 290X shows it to be something of a monster on paper, packing a rather substantial 6.2 billion transistors. The Radeon HD 7970, by comparison, used just 4.31 billion, and even the truly heavyweight nVidia Titan offers only slightly more, at 7.1 billion. For all its hardware, though, the 290X isn't as gargantuan as you might think, and the size increase is relatively small given the additional firepower. See all graphics card reviews.

So what do you get for the extra kit? Well, a 512-bit memory interface, for a start, a move that comfortably puts the GTX 780's 384-bit version in the shade. The 290X doesn't stop there, ratcheting up the quantity of raster operators (ROPs) from the 780's 48 to a sizeable 64, while the 4GB of GDDR5 RAM is even more comprehensive than the 780's healthy 3GB - admittedly the extortionately priced Titan goes one better still, offering an eye-popping 6GB of RAM. The 290X does lose a little ground in terms of texture units, and its 176-strong complement is eclipsed by the 780's 192 - while the Titan packs a mighty 224. Having said that, 176 is still a very substantial number of texture units, adding to some very strong specifications overall. See also Group test: what's the best graphics card?

With some impressive hardware behind it, the R9 290X doesn't have to push the clock speeds too hard. It's rated at 1GHz, although that figure includes the Boost capabilities - AMD has been strangely reticent about revealing the standard base clock. The RAM is clocked at an effective 5GHz (or 1.25GHz before GDDR5's quad data rate is taken into account). This combines beautifully with that heady 512-bit interface to create a sterling memory bandwidth figure of 320GB/s. The memory clock itself is actually lower than on some of AMD's cheaper cards - even the humble 270X has a memory clock of 5.6GHz, while the 280X makes 6GHz. Nonetheless, not many cards can reach a memory bandwidth of 320GB/s - even the GTX 780 and Titan have to make do with 288.4GB/s. It also does very well on texture fill rates, and while it may be inferior to the nVidia cards in terms of the quantity of texture units, it compensates with a better core speed. Its headline figure of 176GT/s is significantly ahead of the GTX 780's 165.7GT/s. The Titan does stretch ahead with 187.44GT/s, but given that the latter card has a massive 224 texture units (to the 290X's 176), the difference is relatively small. Particularly so given the very steep price tag on the Titan. In terms of both specifications and fill/bandwidth rates, then, the 290X is stunning.
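The bandwidth and fill-rate figures above follow from simple arithmetic, and it's worth seeing how they're derived. The sketch below is a minimal illustration using the clock and bus figures quoted in this review (real cards vary their boost clocks, so treat the inputs as nominal):

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Memory bandwidth in GB/s: bus width in bytes times
    effective transfers per second (GDDR5 moves 4 bits per
    pin per memory-clock cycle, hence the 'effective' clock)."""
    return (bus_width_bits / 8) * effective_clock_ghz

def texture_fill_rate_gts(texture_units, core_clock_ghz):
    """Texture fill rate in gigatexels/s: one texel per unit per clock."""
    return texture_units * core_clock_ghz

# R9 290X: 512-bit bus, 5GHz effective memory clock (1.25GHz x 4)
print(memory_bandwidth_gbs(512, 5.0))               # 320.0 GB/s
# GTX 780 / Titan: 384-bit bus, ~6.008GHz effective memory clock
print(round(memory_bandwidth_gbs(384, 6.008), 1))   # 288.4 GB/s
# R9 290X: 176 texture units at the quoted 1GHz boost clock
print(texture_fill_rate_gts(176, 1.0))              # 176.0 GT/s
```

Note how the 290X's 320GB/s comes almost entirely from the wide 512-bit bus rather than the memory clock, which is why it beats the faster-clocked but narrower 384-bit nVidia cards.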

Also notable is the doubling of the number of geometry engines (from two to four), and the GCN (Graphics Core Next) 1.1 architecture has been tailored towards producing superior performance in high-intensity Compute applications. This was the area where Titan destroyed the competition, and the 290X is now also extremely strong here, essentially matching the Titan much of the way. This should serve it well in the future. For many current-day gamers, though, it'll be some of the other features brought in with GCN 1.1 that make the real difference. Positional audio is significantly improved through the inclusion of TrueAudio, while the new-look CrossFire XDMA works over the PCIe bus, allowing AMD to dispense with the need to fit bridges between the cards, and enhancing its ability to handle multiple monitors. There's also the question of Mantle. Only time will tell whether this can replace OpenGL or Direct3D as the main API. It gets good support in this card and, if AMD is successful, we could see a fragmentation of the games market for the first time in many years, with choosing nVidia over AMD serving to restrict your choice of games. This isn't necessarily welcome for players, and we wait with interest (and not a little trepidation) to see whether Mantle can secure the dominant position AMD hopes it will.

In terms of figures, the R9 290X is marginally ahead of even the Titan, making it the fastest single-chip GPU. In Crysis 3, it scores 49.5fps and 29.6fps (at 1920x1200 and 2560x1600) to the Titan's 49.2fps and 29.5fps respectively. The GTX 780 is only slightly further back, on 48.4 and 28.7fps. In the more straightforward Stalker: Call of Pripyat, the lead is slightly larger at the lower resolution - 122.2fps against 121.7fps for the Titan. However, the Titan does win by a single 0.1fps at the 2560x1600 resolution - the 290X gets 90.9fps to the nVidia's 91.0. The GTX 780 is quite some distance behind here, on 114.3fps and 82.2fps respectively. In BioShock Infinite, the 290X is very much the better card, recording figures of 93.1fps and 60.0fps as against the Titan's 91.9fps and 58.8fps. The GTX 780 is a few frames down again, this time with 89.5fps and 55fps. We also tried a resolution of 3840x2160 here, finding the R9 290X to be better again, if only by 0.4fps - 36.7fps to the Titan's 36.3fps. Only slightly faster than the Titan overall, the R9 290X is a hefty distance ahead of its closely-priced rival, the GTX 780.

Not that the R9 290X is a perfect card. It runs hotter than any other card we've tested to this point, and consumes rather more power than its TDP of 250 watts might suggest - drawing an average of 31 watts more than the 780, despite both cards having the same TDP. It's not a quiet card either, and despite XFX's best efforts, it's three decibels louder than the GTX 780 in testing.
