
Friday, October 9, 2009

ATI Radeon HD 5800 Series DirectX 11

Do you remember that crazy, six-by-30-inch monitor rig from AMD? Well, its upcoming ATI Radeon HD 5800 series graphics cards are what drive the uber display.

The two new cards, the ATI Radeon HD 5870 and the ATI Radeon HD 5850, are the first video cards in the industry to fully support DirectX 11. Beyond that tidbit, the flagship 5870 is capable of producing 2.72 TeraFLOPS of computing power, and both come equipped with 1GB of GDDR5 memory.
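For the curious, that 2.72 TeraFLOPS figure falls straight out of the shader math. Here's a quick back-of-the-envelope sketch in Python; the shader count and clock speed aren't stated in the post above, so treat them as our assumptions (they match the 5870's widely published specs):

    # Peak single-precision throughput: each stream processor can do a
    # fused multiply-add (2 floating-point ops) every cycle.
    stream_processors = 1600   # assumed: published HD 5870 shader count
    clock_hz = 850e6           # assumed: published HD 5870 core clock
    flops_per_cycle = 2        # multiply + add

    peak_tflops = stream_processors * clock_hz * flops_per_cycle / 1e12
    print(peak_tflops)         # -> 2.72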

And yes, each is capable of driving six 30-inch monitors at once, which is what AMD refers to as "Eyefinity" technology.

Friday, September 11, 2009

AMD Eyefinity Graphics Card Drives Six 30-Inch Monitors At Once

via Gizmodo by Sean Fallon

Good Lord, that is badass. What you are seeing here is the product of AMD's next-gen DirectX 11 graphics cards with an Eyefinity feature that allows you to use multiple monitors as a single display.

Specifics on the technology are being kept close to the vest, but a recent demonstration revealed, amazingly, that it runs on only one GPU. It also features several DisplayPort connectors. In this case, six 30-inch Dell displays were configured to run as a single 7680x3200 monitor.

Eyefinity is enabled through a combination of hardware and software being developed by AMD. On the hardware front, AMD's upcoming Radeons will sport between three and six display outputs of various types (DisplayPort, DVI, HDMI, etc.). Those outputs are managed by software currently dubbed SLS, or Single Large Surface. Using the SLS tool, users can configure a group of monitors to work with Eyefinity and essentially act as a single, large display.
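To make the "single large display" arithmetic concrete, here's a quick sketch (ours, not AMD's SLS tool) of how six 30-inch panels become one surface:

    # Six 2560x1600 panels in a 3-wide, 2-high grid, as in the demo.
    COLS, ROWS = 3, 2
    PANEL_W, PANEL_H = 2560, 1600   # native resolution of a 30-inch Dell

    surface_w = COLS * PANEL_W      # -> 7680
    surface_h = ROWS * PANEL_H      # -> 3200
    pixels = surface_w * surface_h  # -> 24,576,000, six times one panel

    print(f"{surface_w}x{surface_h}, {pixels / (PANEL_W * PANEL_H):.0f}x one panel")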

Maximum PC witnessed X-Plane 9 and Far Cry 2 running at full resolution on Eyefinity at 12-20 frames per second; sluggish, but keep in mind that a single GPU is pushing six panels' worth of pixels. HotHardware notes that an upcoming DX11 racing game, Dirt 2, was played at 7680x3200 with "perfectly acceptable frame rates" (although 12 fps is not what many would consider "acceptable"). They also claim that there are plans to integrate CrossFire support down the line, and that AMD has partnered with manufacturers to create ultra-thin-bezel displays designed specifically for use with Eyefinity. How long we will have to wait, and how insanely expensive all this will be, has yet to be determined.

Tuesday, June 2, 2009

ASUS Mars GPU may be the world's fastest

via DVICE by Kevin Hall on 6/1/09

ASUS decided to skip the incremental one-upmanship that's standard in the graphics card industry and knock it out of the park with its Mars 295 Limited Edition GPU. The gorgeous card boasts a 21% performance bump over NVIDIA's standard dual-GPU GTX 295, all while housed in a sweet-looking cooling sleeve. ASUS will only roll out a limited number of them and, if you can't see it up there in the corner, this one reads 1/1000.

All told, the new card boasts all 240 shader processors on each GPU, a full 512-bit GDDR3 memory interface, 32 memory chips for 4GB total (2GB accessible per GPU), and the same core/shader/memory clock speeds as the GTX 285 (648/1476/2400 MHz). By comparison, a traditional GTX 295 sports 896MB of GDDR3 per GPU on a 448-bit memory bus, with core/shader/memory clock speeds checking in at 576/1242/2000 MHz.
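As a rough cross-check, those bus widths and memory clocks translate directly into peak memory bandwidth. A quick sketch using the figures above (we're assuming the quoted memory clocks are effective data rates, as is conventional for GDDR3):

    # Peak memory bandwidth = (bus width in bytes) * (data rate per pin).
    def bandwidth_gbs(bus_bits, data_rate_mhz):
        return bus_bits / 8 * data_rate_mhz * 1e6 / 1e9

    print(bandwidth_gbs(512, 2400))  # Mars: ~153.6 GB/s per GPU
    print(bandwidth_gbs(448, 2000))  # stock GTX 295: ~112.0 GB/s per GPU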

Engadget, via VizWorld, via MaximumPC

Wednesday, May 13, 2009

Giz Explains: GPGPU Computing, and Why It'll Melt Your Face Off

via Gizmodo by Matt Buchanan on 5/13/09

No, I didn't stutter: GPGPU, or general-purpose computing on graphics processing units, is what's going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody's face melts! Here's how.

What a Difference a Letter Makes
GPU sounds (and looks) a lot like CPU, but they're pretty different, and not just 'cause dedicated GPUs like the Radeon HD 4870 here can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of the image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they're essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history (you could go all the way back to the Commodore Amiga, if you wanted to), but we're going to stick to the fairly recent: the last 10 years, when, as Nvidia's Sanford Russell says, GPUs started adding cores to distribute the workload. See, graphics calculations, the calculations needed to figure out which pixels to display on your screen as you snipe someone's head off in Team Fortress 2, are particularly suited to being handled in parallel.

An example Nvidia's Russell gave to illustrate the difference between a traditional CPU and a GPU: if you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it's a "serial" processor. Each step would be quick, but the whole job would take time because it has to go in order. A GPU, which is a "parallel" processor, "would tear [the book] into a thousand pieces" and read them all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because words are read simultaneously.
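Here's a toy version of that analogy in Python (our illustration, not Nvidia's code): tear the "book" into chunks and have a pool of workers count a word in all of the chunks at once, the way a GPU spreads one job across many cores.

    from multiprocessing import Pool

    WORD = "gpu"

    def count_in_chunk(chunk):
        # Each worker reads only its own torn-out pages.
        return chunk.count(WORD)

    if __name__ == "__main__":
        book = ("cpu gpu flops shader gpu core " * 200_000).split()
        workers = 8
        size = len(book) // workers + 1
        chunks = [book[i * size:(i + 1) * size] for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(count_in_chunk, chunks))
        print(total)  # same answer a serial scan would find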

All those cores in a GPU (800 stream processors in ATI's Radeon 4870) make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating-point operations per second, measured on current hardware in gigaflops and teraflops.) The general-purpose CPU is better at some stuff, though, as AMD's Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It's true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.

Explosions Are Cool, But Where's the General Part?
Okay, so the thing about parallel processing (using tons of cores to break stuff up and crunch it all at once) is that applications have to be programmed to take advantage of it. It's not easy, which is why Intel at this point hires more software engineers than hardware engineers. So even if the hardware's there, you still need the software to get there, and it's a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They're frameworks that make it way easier to use graphics cards for kinds of computing that aren't related to making zombie guts fly in Left 4 Dead. OpenCL is the "open standard for parallel programming of heterogeneous systems" standardized by the Khronos Group; AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it's pretty much an industry-wide thing. In semi-English, it's a cross-platform standard for parallel programming across different kinds of hardware (using both CPU and GPU) that anyone can use for free. CUDA is Nvidia's own architecture for parallel programming on its graphics cards.
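To give a taste of what that looks like in practice, here's a minimal vector-add sketch using the third-party pyopencl bindings (our illustrative example, not from the article). The kernel is plain OpenCL C; it runs once per element, in parallel, on whichever device the driver picks:

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()   # a GPU if you have one, else a CPU
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # One work-item per array element; each computes a single sum.
    program = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)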

OpenCL is a big part of Snow Leopard. Windows 7 will use some graphics card acceleration too (though we're really looking forward to DirectX 11). So graphics card acceleration is going to be a big part of future OSes.

So Uh, What's It Going to Do for Me?
Parallel processing is pretty great for scientists, but what about regular people? Does it make their stuff go faster? Not everything, and to start, it's not going too far from graphics, since that's still the easiest to parallelize. But converting, decoding and creating videos, stuff you're probably doing more now than you did a couple of years ago, will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there'll be less waiting for effects to propagate on giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you're just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

