Showing posts with label Intel. Show all posts

Friday, October 9, 2009

Alienware Quad-Core M15x Laptop

Alienware quad-core M15x laptop called 'world's fastest'

Intel's Core i7 processor kicks ass, as we already know. Alienware is busting out an update to its M15x laptop today with the new mobile version of the i7, which is to be officially unveiled today at the Intel Developer Forum.

The 15-inch Alienware M15x, which was introduced back in January, is a mobile gamer's paradise. You now have a choice of the speediest Intel mobile processors: a 1.6GHz Intel Core i7 720QM, a 1.73GHz Intel Core i7 820QM, and finally the world's fastest mobile processor, a 2GHz Intel Core i7 920XM. The illuminating wonder of a notebook will be configurable with an NVIDIA GeForce GTX 260M GPU (with 1GB of RAM) and up to 8GB of DDR3 RAM to take on the harshest of games.

As for aesthetics, the main chassis appears the same, though you can now get it in metallic red, silver or black. And for those that love to game after dark, the entire body lights up with customizable color accents.

Shockingly, the price is actually pretty reasonable for a high-end gaming system. It will start at $1,500 and will be configurable on Dell.com. For now it will ship with Vista, and with Windows 7 come October 22.

Intel Light Peak Optical Tech 10Gb/s

Light Peak

Today at IDF, Intel unveiled Light Peak technology, a plan for an extremely high-speed optical cable they hope will land on consumer products in 2010. Imagine transferring an entire Blu-ray disc in 30 seconds. And that's just the beginning.

In Intel's words:

Existing electrical cable technology in mainstream computing devices is approaching practical limits for speed and length, due to electro-magnetic interference (EMI) and other issues. However, optical technology, used extensively in data centers and telecom communications, does not have these limitations since it transmits data using light instead of electricity. Light Peak brings this optical technology to mainstream computing and consumer electronic devices in a cost-effective manner.

Light Peak delivers 10Gb/s speeds right now, and could conceivably go as fast as 100Gb/s within a decade or so. Those kinds of speeds are even sustained over a 100-meter distance, which is really impressive. Intel is currently working with hardware manufacturers (computers, handhelds, etc) to try to get the optical tech onto devices sometime in 2010.
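A quick back-of-the-envelope check on that Blu-ray claim, assuming a single-layer 25GB disc and Light Peak's raw 10Gb/s line rate (this ignores protocol overhead, which is why the quoted real-world figure of 30 seconds runs a bit longer):

```python
# Raw transfer time for a single-layer Blu-ray disc over a 10Gb/s link.
disc_bytes = 25 * 10**9        # 25GB single-layer Blu-ray disc
line_rate = 10 * 10**9         # Light Peak's 10Gb/s, in bits per second

seconds = disc_bytes * 8 / line_rate
print(seconds)                 # 20.0 seconds of raw transfer time
```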

Thursday, July 23, 2009

Intel 34nm X25-M SATA SSD (80GB, 160GB)

via Gizmodo by John Herrman on 7/21/09

It's been about a year since Intel's quick-but-pricey X-series SSDs hit the market, so it's about time for a refresh. And hey, look: It's a fresh pair of 34nm X25-M drives! (Spoiler: They're almost exactly like the last ones.)

Intel's got a lengthy spiel about how performance has been improved, albeit slightly, by the new fabrication process (they claim a 25% decrease in latency and slightly higher read/write performance), but the core of this upgrade, and the main benefit of switching to 34nm, is a lower price.

Looking again to Intel's claims, there's been a 60% decrease in price for the 80GB and 160GB models compared to their original launch prices, which is, strictly speaking, correct. Thing is, neither of the drives has sold for anything near its initial price for some time now, so although the new versions, priced at around $225 for the 80GB and $440 for the 160GB, will be more affordable than their predecessors, they won't be budget drives by any means.

It's been a year, so a capacity hike would've been nice. Without that, this feels like a transitional product: a necessary manifestation of solid-state storage's slow crawl toward affordability, if not something many people will be ready to buy. Accordingly, I expect the second generation of 34nm drives to be awesome, so please, be awesome.

Sunday, July 19, 2009

How Do They Make Modern Processors?

via Gizmodo by Jesus Diaz on 7/19/09

I knew that processors, like castles, are made of sand. But I didn't know they required stuff like ion implantation at more than 185,000mph, electroplating, and the creation of up to 20 metal layers of transistor connections in 500nm.

Thankfully, Intel has put together a slide show showing how the little things are made, from sand grains to the final packaging, going through all the dicing, the slicing, and the dancing.


Friday, June 19, 2009

Intel's Platform Power Management

via Gizmodo by Brian Lam on 6/19/09

Intel Research showed me a demo of their Platform Power Management system. Essentially, they're applying the smart, quick, hardware-level idling you find on a CPU to many other system parts. The result: systems that idle on a tenth of the juice.

The tech is applied to things like USB ports, which in USB 3.0 will go from polling devices (clock-based, always checking) to being managed via events, so they can sleep whenever they're not being used. And graphics, when the page isn't changing, can be run out of a frame buffer so the GPU and video RAM can sleep. When I say more sleep, I mean for additional milliseconds or longer. This adds up over the course of a day, when people stop to read or step away from their computers. In the past, the OS controlled the power savings, and that in turn required power to process, so you were using the system's power to manage power, keeping those other components from ever really turning off. By doing power management with more granularity, in hardware and software together, you can switch things on and off fast enough to fit in lots of "naps," and you can also do it with less processing overhead.
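The polling-versus-events shift is easy to see in software terms. A rough sketch (not Intel's actual mechanism; the function names and timings here are made up for illustration):

```python
import threading
import time

def poll_for_data(flag, interval=0.01):
    """Clock-based polling: wake on a timer and check, whether or not
    anything has happened, so the checker can never fully sleep."""
    wakeups = 0
    while not flag["ready"]:
        time.sleep(interval)
        wakeups += 1
    return wakeups

def wait_for_data(event):
    """Event-driven: stay blocked until the 'device' signals, so there
    are zero wasted wakeups in between."""
    event.wait()
    return 0

flag = {"ready": False}
event = threading.Event()

def device():
    time.sleep(0.2)            # data "arrives" after 200ms
    flag["ready"] = True
    event.set()

threading.Thread(target=device).start()
poll_wakeups = poll_for_data(flag)     # wakes repeatedly for nothing
event_wakeups = wait_for_data(event)   # wakes exactly when signaled
print(poll_wakeups, event_wakeups)
```

Every empty polling wakeup is power spent checking that nothing happened; the event-driven version spends none.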

I'm excited for this tech to go everywhere where there's a chip.


Friday, June 12, 2009

Six-Core Nehalem Processors Might Arrive This Year

via Gizmodo by Sean Fallon on 6/11/09


According to bit-tech, Intel is planning to release a six-core Nehalem processor sometime this year.

Rumor also has it that most board manufacturers have already added support, so those of you with Nehalem rigs can probably upgrade to the new chip with a BIOS update. Saving a little money is definitely a good thing in this situation, because if and when a six-core Nehalem is released, expect prices to be in excess of $1000.

Thursday, June 4, 2009

Intel Core i7, world's fastest desktop chip

via DVICE by Charlie White on 6/3/09
Intel flaunts its latest Core i7, world's fastest desktop chip

Time marches on. PC processors get faster. But this one is unusually quick, especially when overclocking. The cool-running 3.3GHz Intel Core i7-975 Extreme Edition is a $1000 quad-core chip currently aimed at gamers who can afford to spend $8000 on a PC, but don't let that discourage you. This will be an average processor a couple of years from now.

It starts getting geek-gasmically astonishing when you hear tales of the guys at Hot Hardware Review overclocking this sucker, revving it up beyond 4.1GHz. They reported the chip wasn't even breathing hard at that much-higher speed, and only noticed a "small voltage bump" when the chip reached its 50°C maximum temperature. And that was with a normal heatsink, no fancy liquid cooling required.

Get one of these babies, and you'll be living the future. For a short while.

Hot Hardware Review and PC Perspective, via Engadget

Saturday, May 23, 2009

Tiny UMID mbook M1

via Gizmodo by Dan Nosowitz on 5/23/09

We spotted Korean manufacturer UMID's new MID back in November, but now it's finally seeing release, with a few changed specs and a $599 price tag. But it probably won't change MID-haters' minds.

Occupying that perennially awkward space between a smartphone and a netbook, the mbook M1, like the Viliv S5, packs standard netbook components into a teeny space while remaining too large to be pocketable. It's a nice enough design, and the price is fair, but the sacrifices made to keep the gadget small are sure to annoy owners. Everything's been miniaturized: The headphone jack is a 2.5mm rather than the standard 3.5mm, and it includes only a mini-USB port, so you'll need an adapter for both audio and hardware input. Even the expansion slot has been miniaturized from the cheap and ubiquitous SDHC to micro-SDHC. The 16GB version will run you $599, and doubling your storage will cost an extra $150.

It includes the standard Windows XP, a 1.33GHz Intel Atom proc, a 16/32GB SSD, and 512MB of memory, with a 4.8" WVGA touchscreen at a reasonable 1024x600 resolution. In short, it's just about exactly the same guts as the Viliv S5, except with a keyboard and without the standard-size ports. Tiny, yes, but if you're not already pro-MID, the mbook M1 isn't going to convince you. [Dynamism]

Wednesday, May 13, 2009

Giz Explains: GPGPU Computing, and Why It'll Melt Your Face Off

via Gizmodo by Matt Buchanan on 5/13/09

No, I didn't stutter: GPGPU (general-purpose computing on graphics processing units) is what's going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody's face melts! Here's how.

What a Difference a Letter Makes
GPU sounds, and looks, a lot like CPU, but they're pretty different, and not just 'cause dedicated GPUs like the Radeon HD 4870 here can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they're essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history (you could go all the way back to the Commodore Amiga, if you wanted to) but we're going to stick to the fairly recent past. That is, the last 10 years, when, as Nvidia's Sanford Russell says, GPUs started adding cores to distribute the workload. See, graphics calculations (the calculations needed to figure out what pixels to display on your screen as you snipe someone's head off in Team Fortress 2) are particularly suited to being handled in parallel.

To illustrate the difference between a traditional CPU and a GPU, Nvidia's Russell gave this example: If you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it's a "serial" processor. It would be fast, but it would take time because it has to go in order. A GPU, which is a "parallel" processor, "would tear [the book] into a thousand pieces" and read them all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because the words are read simultaneously.
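Russell's book analogy translates into code pretty directly. A thread-based sketch of the idea (illustrative only; real GPU code looks nothing like this, and the "book" here is just a repeated sentence):

```python
from concurrent.futures import ThreadPoolExecutor

# Our "book": 9000 words, with "the" appearing 2000 times.
book = ("the quick brown fox jumps over the lazy dog " * 1000).split()

def count_serial(pages, target):
    # CPU-style: walk the whole book front to back, in order
    return sum(1 for w in pages if w == target)

def count_parallel(pages, target, pieces=8):
    # GPU-style: tear the book into pieces and count each piece at once
    size = len(pages) // pieces + 1
    chunks = [pages[i:i + size] for i in range(0, len(pages), size)]
    with ThreadPoolExecutor(max_workers=pieces) as pool:
        return sum(pool.map(lambda c: count_serial(c, target), chunks))

print(count_serial(book, "the"))    # 2000
print(count_parallel(book, "the"))  # 2000, same answer, counted in pieces
```

Each piece is counted independently, which is exactly what makes the work spreadable across hundreds of cores.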

All those cores in a GPU (800 stream processors in ATI's Radeon 4870) make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating-point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff though, as AMD's Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It's true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.

Explosions Are Cool, But Where's the General Part?
Okay, so the thing about parallel processing (using tons of cores to break stuff up and crunch it all at once) is that applications have to be programmed to take advantage of it. It's not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware's there, you still need the software to get there, and it's a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They're frameworks that make it way easier to use graphics cards for kinds of computing that aren't related to making zombie guts fly in Left 4 Dead. OpenCL is the "open standard for parallel programming of heterogeneous systems" standardized by the Khronos Group; AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it's pretty much an industry-wide thing. In semi-English, it's a cross-platform standard for parallel programming across different kinds of hardware, using both CPU and GPU, that anyone can use for free. CUDA is Nvidia's own architecture for parallel programming on its graphics cards.
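The programming model the two frameworks share is simple to sketch: you write one tiny "kernel" function that handles a single element, then launch it at every index of the data. A plain-Python illustration (real OpenCL and CUDA kernels are written in C-like dialects, and the launches actually run concurrently on the card rather than in a loop):

```python
def add_kernel(global_id, a, b, out):
    # one "work item": each invocation touches exactly one element
    out[global_id] = a[global_id] + b[global_id]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)

# on a GPU, these launches would execute in parallel across the cores
for gid in range(len(a)):
    add_kernel(gid, a, b, out)

print(out)  # [11.0, 22.0, 33.0, 44.0]
```

Because the kernel never looks at its neighbors, the hardware is free to run as many copies at once as it has cores.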

OpenCL is a big part of Snow Leopard. Windows 7 will use some graphics card acceleration too (though we're really looking forward to DirectX 11). So graphics card acceleration is going to be a big part of future OSes.

So Uh, What's It Going to Do for Me?
Parallel processing is pretty great for scientists. But what about regular people? Does it make their stuff go faster? Not everything, and to start, it's not going too far from graphics, since that's still the easiest to parallelize. But converting, decoding and creating videos (stuff you're probably doing now more than you did a couple years ago) will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there'll be less waiting for effects to propagate with giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you're just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.


Friday, May 8, 2009

Super Sport Car Evolution