Shall I say it again?
Goodbye NVIDIA!
Welcome to the world of the CGPU, or in other words Intel's Larrabee project. Many doubted whether Intel could ever take on Nvidia or ATI/AMD in graphics. The way Intel's engineers have approached the problem, however, is revolutionary, and it will almost certainly cause serious trouble for one company in particular, namely NVIDIA. They have essentially changed the rules of the game.
What is Larrabee?
Intel decided to talk about Larrabee last week to VR-Zone (nice catch guys), so I guess that makes it open season on info. VRZ got it almost dead on, the target is 16 cores in the early 2009 time frame, but that is not a fixed number. Due to the architecture, that can go down in an ATI x900/x600/x300 fashion, maybe 16/8/4 cores respectively, but technically speaking it can also go up by quite a bit.
What are those cores? They are not GPUs, they are x86 'mini-cores', basically small, dumb, in-order cores with a staggeringly short pipeline. They also have four threads per core, so a total of 64 threads per "CGPU". To make this work as a GPU, you need instructions, vector instructions, so there is a hugely wide vector unit strapped onto it. The instruction set, an x86 extension for those paying attention, will have a lot of the functionality of a GPU.
What you end up with is a ton of threads running a super-wide vector unit with the controls in x86. You use the same tools to program the GPU as you do the CPU, using the same mnemonics and the same everything. It also makes it a snap to use the GPU as an extension of the main CPU.
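Just to make the "same tools, same mnemonics" point concrete, here is a tiny sketch of my own. It uses today's SSE intrinsics as a stand-in, since Larrabee's actual (much wider) vector extension is not public; the point is simply that it is the same x86 compiler and the same x86 code driving the vector unit.

// Minimal sketch: ordinary x86 vector intrinsics (SSE) as a stand-in for
// the idea of "x86 plus a wide vector unit". Same toolchain as CPU code.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(16) float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    alignas(16) float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    alignas(16) float c[4];

    __m128 va = _mm_load_ps(a);      // one instruction works on 4 floats;
    __m128 vb = _mm_load_ps(b);      // a Larrabee-class unit would cover
    __m128 vc = _mm_add_ps(va, vb);  // far more lanes per instruction
    _mm_store_ps(c, vc);

    std::printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}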
Rather than taking the traditional 3D pipeline of putting points in space, connecting them, painting the resultant triangles, and then twiddling them, and simply making it faster, Intel is throwing that out the window. Instead you get the tools to do things any way you want; if you can build a better mousetrap, you are more than welcome to do so. Intel will support you there.
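For anyone who has not stared at that classic pipeline before, here is a rough sketch of my own of what "put points in space, connect them, paint the triangle" looks like as plain software. On a CGPU the whole thing would just be x86 code on the mini-cores, which is exactly why you would be free to rewrite any stage of it.

// Minimal sketch (my illustration, not Intel's code): rasterize one
// triangle into a tiny ASCII "framebuffer" using edge functions.
#include <cstdio>

struct Vec2 { float x, y; };

// Edge function: >= 0 when point p lies on the inner side of edge a->b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    const int W = 32, H = 16;
    char fb[16][33];                       // tiny ASCII framebuffer

    // "Points in space", already projected to screen coordinates here.
    Vec2 v0 = { 2, 2 }, v1 = { 29, 5 }, v2 = { 12, 14 };

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };
            // "Connect them, paint the resultant triangle": a pixel is
            // covered if it is on the inner side of all three edges.
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            fb[y][x] = inside ? '#' : '.';
        }
        fb[y][W] = '\0';
        std::puts(fb[y]);
    }
    return 0;
}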
Those are the cores, but how are they connected? That one is easy: a hugely wide bi-directional ring bus. Think four, not three, digits of bit width, and Tbps, not Gbps, of bandwidth. It should be 'enough' for the average user; if you need more, well, now is the time to contact your friendly Intel exec and ask.
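To see how "four digits of bit width" lands you in Tbps territory, here is a back-of-the-envelope calculation. Both the 1024-bit width and the 2 GHz ring clock below are my own guesses for illustration, not anything Intel has confirmed.

// Back-of-the-envelope only: ring width and clock are assumed numbers.
#include <cstdio>

int main() {
    const double bits_per_clock = 1024.0;  // "four digits of bit width" (assumed)
    const double clock_hz       = 2.0e9;   // assumed 2 GHz ring clock
    const double directions     = 2.0;     // bi-directional ring

    double tbps = bits_per_clock * clock_hz * directions / 1e12;
    double gbytes_per_s = tbps * 1e12 / 8.0 / 1e9;
    std::printf("%.1f Tbps (~%.0f GB/s)\n", tbps, gbytes_per_s);  // ~4.1 Tbps, ~512 GB/s
    return 0;
}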
As you can see, the architecture is stupidly scalable: if you want more cores, just plop them on; if you want fewer, delete nodes, not a big deal. That is why we said 16, but it could change, more or less, on a whim. The biggest problem is bandwidth usage as a limiter to scalability. 20- and 24-core variants seem quite doable.
In any case, the whole idea of a GPU as a separate chip is a thing of the past. The first step is a GPU on a CPU like AMD's Fusion, but this is transitional. Both sides will pull the functionality into the core itself, and GPUs will cease to be. Now do you see why Nvidia is dead?
So, in two years, the first steps to GPUs going away will hit the market. From there, it is a matter of shrinking and adding features, but there is no turning back. Welcome the CGPU. Now do you understand why AMD had to buy ATI to survive?
I think the article is clear enough... can you imagine terabytes per second of bandwidth?
And a small quote from VR-Zone:
...The performance? How about 16x the performance of the fastest graphics card out there now [referring to the G80], as claimed.
The whole thing here:
http://theinquirer.net/default.aspx?article=37548