The node architecture is almost common to the three universities. A node consists of four quad-core AMD Opteron 8350 (Barcelona) processors. Since each core can issue two floating-point multiply-and-add instructions per cycle at 2.3 GHz, the peak performance of a processor is 36.8 GFlops. Each processor has a Direct Connect memory interface to an 8 GB DDR2-667 memory and three HyperTransport links to connect to the other processors. The HyperTransport links of two of the processors are also connected to chipsets that bridge each link to two PCI Express x8 links, each of which is in turn connected to an interface module for the node interconnect. This interconnect module is the only part that varies among the three universities: it is InfiniBand 4x DDR at U. Tsukuba and Kyoto U., while U. Tokyo adopts Myrinet 10G links. In total, a node has 147 GFlops of peak performance, 32 GB of memory with 42.6 GB/s aggregate bandwidth, and four interconnect links with 8 GB/s aggregate bandwidth for InfiniBand or 5 GB/s for Myrinet 10G.
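The peak-performance figures above follow from simple arithmetic; a minimal sketch of the derivation, using only the numbers stated in the text (constant names are illustrative):

```python
# Node peak-performance arithmetic for the Opteron 8350 node described above.
CLOCK_GHZ = 2.3       # core clock frequency
FMA_PER_CYCLE = 2     # two multiply-and-add instructions issued per cycle
FLOPS_PER_FMA = 2     # each multiply-and-add counts as two floating-point ops
CORES_PER_PROC = 4    # quad-core processor
PROCS_PER_NODE = 4    # four processors per node

core_gflops = CLOCK_GHZ * FMA_PER_CYCLE * FLOPS_PER_FMA   # ~9.2 GFlops per core
proc_gflops = core_gflops * CORES_PER_PROC                # ~36.8 GFlops per processor
node_gflops = proc_gflops * PROCS_PER_NODE                # ~147.2 GFlops per node

print(proc_gflops, node_gflops)
```

The same multiplication applies to memory (4 × 8 GB = 32 GB) and to the four interconnect links per node.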
Each site has its own configuration according to its mission and constraints. The system at U. Tsukuba has 648 nodes with a peak performance of 95.4 TFlops. U. Tokyo has the largest system among the members, with 952 nodes for 140.1 TFlops, while the system at Kyoto U. is the smallest, with 416 nodes for 61.2 TFlops, but it is combined with a fat-node subsystem of 8.96 TFlops and 7 TB of memory. All the InfiniBand links from the nodes at U. Tsukuba and Kyoto U. are connected through a switch complex that achieves full bisection bandwidth, so that all nodes can perform wire-speed communication simultaneously. U. Tokyo also provides this full-bisection interconnect for its 512-node and 128-node subsystems, while its 256-node and 56-node subsystems have a half-shrunk interconnect.
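The per-site peak figures are consistent with the node count times the 147.2 GFlops node peak derived earlier; a quick check (site names and node counts taken from the text):

```python
# Per-site peak performance: node count x 147.2 GFlops per-node peak.
NODE_GFLOPS = 147.2
NODES = {"U. Tsukuba": 648, "U. Tokyo": 952, "Kyoto U.": 416}

# Convert GFlops to TFlops and round to one decimal, as quoted in the text.
peak_tflops = {site: round(n * NODE_GFLOPS / 1000, 1) for site, n in NODES.items()}
print(peak_tflops)
```

(The Kyoto U. figure excludes its fat-node subsystem, which adds a further 8.96 TFlops.)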