
  • S_x96x_S

    addict

    in reply to fLeSs's post #48077

    > That's exactly why I'd be interested in which direction AMD takes its development next.

    this article:
    "A closer look at Intel and AMD's different approaches to gluing together CPUs"
    - Epycs or Xeons, more cores = more silicon, and it only gets more complex from here
    - While Intel eventually saw the wisdom of AMD's chiplet strategy, its approaches couldn't be more different.
    https://www.theregister.com/2024/10/24/intel_amd_packaging/

    According to that, AMD's future could look a lot like the MI300,
    i.e. a modular APU base: playing Lego with UDNA and CCD-based accelerator tiles.
    If it's all UDNA: an RTX 5090 rival.
    If it's all CCDs: a Xeon rival. (A rough tile-count sketch follows the quote below.)

    """
    Alternatively, the chipmaker could also pack more cores onto a smaller die. However, we suspect that AMD's sixth-gen Epycs could actually end up looking a lot more like its Instinct MI300-series accelerators.
    As you may recall, launched alongside the MI300X GPU was an APU that swapped two of the chip's CDNA3 tiles for a trio of CCDs with 24 Zen 4 cores between them. These compute tiles are stacked atop four I/O dies and are connected to a bank of eight HBM3 modules.
    Now, again, this is just speculation, but it's not hard to imagine AMD doing something similar, switching out all that memory and GPU dies for additional CCDs instead. Such a design would conceivably benefit from higher bandwidth and lower latencies for die-to-die communications too.
    Whether this will actually play out, only time will tell. We don't expect AMD's 6th-gen Epycs to arrive until late 2026
    """

    vs. Intel

    """
    "Intel's I/O dies are also quite a bit skinnier and house a combination of PCIe, CXL, and UPI links for communications with storage, peripherals, and other sockets. Alongside these, we also find a host of accelerators for direct stream (DSA), in-memory analytics (IAA), encrypt/decrypt (QAT), and load balancing.
    We're told that the placement of accelerators on the I/O die was done in part to place them closer to the data as it streams in and out of the chip."
    "Going off the renderings intel showed off earlier this year, Clearwater Forest could use up to 12 compute dies per package. The use of silicon interposers is by no means new and offers a number of benefits including higher chip-to-chip bandwidth and lower latencies than you'd typically see in an organic substrate. That's quite the departure from the pair of 144-core compute dies found on Intel's highest core count Sierra Forest parts.
    """

    On the AMD side, other things to keep an eye on:
    - Strix Halo's new IO die,
    - and Zen 5 / Turin's dual GMI links:
    """
    Looking a bit closer at these results, you’ll notice that the 9575F has significantly higher bandwidth to a CCD compared to the desktop Zen 5 parts. And the reason for this is the 9575F has GMI3-W which means that it has 2 GMI links to the IO die instead of the single GMI link that the 9950X gets.
    And this is not the only change to the GMI links on server. The GMI write link is now 32B per GMI link instead of the 16B per GMI link that you’d see on desktop Zen 5.
    """

  • gejala

    veteran member

    in reply to fLeSs's post #48077

    I think Zen 6 will get the new IO die plus a small IPC increase, and that will be good enough. Then with Zen 7 comes DDR6 and AM6, and that's where a more substantial redesign might happen.
