- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
NEC Cluster cacau introduction
Revision as of 14:04, 8 December 2008
This platform enables the development and execution of parallel programs on Intel Xeon processors with Intel EM64T technology. The two major parallel programming standards, MPI and OpenMP, are supported. Please note that you must limit the execution time of your jobs during the daytime to guarantee the short turnaround times that are necessary for development.
Hardware and Architecture
The HWW Xeon EM64T cluster platform consists of one frontend node for interactive access (cacau.hww.de) and several nodes for the execution of parallel programs. The cluster comprises 210 dual-CPU nodes with 3.2 GHz/3.0 GHz Xeon EM64T CPUs and 2/8/16/128 GByte of memory per node, plus a 2-way frontend node with two Xeon EM64T 3.2 GHz CPUs and 6 GByte of memory. Additionally, a RAID system with 8 TByte and a GPFS with 15 TByte are available. The local disk of each node (58 GByte) serves as a scratch disk.
Features:
- Cluster of 210 dual-CPU SMP NEC server nodes with 2/8/16/128 GByte memory
- Frontend node: a 2-way NEC Express5800/120Rg-2 server with 6 GByte memory
- Node-node interconnect: Voltaire InfiniBand (switch: ISR9288) network + Gigabit Ethernet
- Disks: 8 TByte home/shared scratch + 1.2 TByte local scratch + 15 TByte GPFS parallel filesystem
- Batch system: Torque with Maui scheduler
- Operating system: Tao Linux release 1 (Mooch Update 2)
- NEC HPC Linux software packages
- Intel compilers
- Voltaire MPI
- Switcher/Modules environment
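Jobs are submitted through the Torque/Maui batch system listed above. A minimal job script might look like the following sketch; the queue name is taken from the node-type table below, while the node count, walltime, and program name (`a.out`) are placeholders, and the exact `mpirun` invocation depends on the installed Voltaire MPI:

```
#!/bin/bash
#PBS -q workq              # batch queue (see the node-type table)
#PBS -l nodes=2:ppn=2      # request 2 nodes, 2 processors per node
#PBS -l walltime=00:30:00  # limit execution time, especially during daytime

cd $PBS_O_WORKDIR          # Torque starts the job in the home directory
mpirun -np 4 ./a.out       # launch the MPI program on all allocated CPUs
```

Node types can be requested via the PBS properties from the table, e.g. `-l nodes=1:mem8gb:ppn=2` for an 8 GByte node; consult the site documentation for the exact options supported on cacau.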
- Peak performance: 3.9 TFLOP/s
- Cores/node: 4
- Memory: 1 TB
- Shared disk: 24 TB
- Local disks/node: 80 GB
- Number of nodes: 212
- Node-node data transfer rate: 10 Gbps InfiniBand (full bisectional: 20 Gbps)
| Type | Memory | Frequency | Cores | Disk | PBS queue | PBS properties | Interconnect | Nodes | Count |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 GB | 3.2 GHz | 2 | 80 GB | - | mem2gb | InfiniBand | noco001-075, noco109-204 | 172 |
| 2 | 8 GB | 3.0 GHz | 4 | 160 GB | workq | - | InfiniBand | noco075-106 | 32 |
| 3 | 8 GB | 3.2 GHz | 2 | 80 GB | - | mem8gb | InfiniBand | noco205-208 | 4 |
| 4 | 16 GB | 3.2 GHz | 2 | 80 GB | workq | mem16gb | InfiniBand | noco209-210 | 2 |
| 5 | 128 GB | 2.4 GHz | 8 | 1.7 TB | pp | - | GigE | pp2 - pp3 | 2 |
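The per-type counts in the table can be cross-checked against the aggregate figures quoted above; a small sketch (memory sizes and node counts copied from the table, no cluster access required):

```python
# Node types from the table: (memory in GB, cores per node, node count)
node_types = [
    (2, 2, 172),    # type 1
    (8, 4, 32),     # type 2
    (8, 2, 4),      # type 3
    (16, 2, 2),     # type 4
    (128, 8, 2),    # type 5
]

total_nodes = sum(count for _, _, count in node_types)
total_mem_gb = sum(mem * count for mem, _, count in node_types)

print(total_nodes)   # 212, matching the stated number of nodes
print(total_mem_gb)  # 920 GB, roughly the quoted 1 TB total memory
```

The five types sum to 212 nodes (210 compute nodes plus the two pp nodes), consistent with the "Number of nodes" figure.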