- Information in the HLRS Wiki is not legally binding and is provided without guarantee -

Hawk installation schedule

From HLRS Platforms

This page will be updated as new information becomes available. Please be aware that HPC systems are built from leading-edge components; if one of these components is delayed, the complete schedule will change. Do not take this schedule too seriously...

time frame            action
end of May 2019       Installation of a small test system
September 2nd - 6th   Maintenance and preparation of infrastructure (1st part)
November 19th         Reduction of the Cray XC40 Hazel Hen to ~4488 nodes and preparation of infrastructure for Hawk
January 17th 2020     Installation of 8 Hawk racks, storage, pre-/post-processing servers, ...; integration, testing, setup for users
until February 18th   Complete Hawk hardware installation
mid-February          Test phase for pilot users (~3 weeks). Due to the delayed delivery, the test phase will also be postponed. The filesystem will be ws9 (Hazel Hen).
February 24th 2020    Final shutdown and decommissioning of Hazel Hen and preparation of infrastructure for the complete installation of Hawk
February 25th 2020    The Hazel Hen workspace filesystem (ws9) has been fully integrated, with all workspaces, into the Hawk system
until March 1st       Testing and integration of additional racks into the first phase (2048 nodes)
March 9th             General availability for all users
until March 15th      Preparation of power and cooling facility for the second phase of Hawk
later                 Testing and integration of all remaining cabinets into the Hawk system (5632 nodes total)
even later            Acceptance and production
April / May           Data migration by users from ws9 (Hazel Hen) to the ws10 (Hawk) filesystem

Terms of Use

We are pleased to provide early user access to Hawk.

Please note that the system is still far from production status in terms of stability, performance, configuration, and usage.

  • Both the node configuration (such as NUMA domains per socket) and the InfiniBand configuration are not yet final, and both are subject to change.
  • This means that the performance of the system is not yet optimal. It also means that users should not yet optimize their applications based on the current setup.
  • No monitoring system is active yet. If a compute node breaks down, it may still be assigned to subsequent jobs, causing those jobs to fail.

The usage is granted under the following conditions:

  • Do not publish performance measurements of the current system configuration
  • Do not monopolize the system; give other users the opportunity to use it. Avoid long-running jobs.
  • If you encounter problems, please report them to the prepared trouble ticket system via email to:
  • Due to the current state of the system, your application may not yet run. In this case, please wait a few more days; the system is being improved very quickly.