- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Workflow and Job monitoring


Revision as of 11:55, 24 March 2020

For projects that want to implement an automatic workflow using the high-performance computer Hawk, or any other system at HLRS, this page provides guidelines on how to implement job monitoring and/or workflow handling.

  • Please be aware that you are not the only user on the system. Many other users perform tasks similar to yours. Even if each individual job works on its own, the sheer number of such tasks can cause massive malfunctions.
  • If you implement a monitoring system at your local site, do not run one instance (monitor task) per job. This will fail due to the large number of ssh requests and qstat commands hitting the compute server at HLRS.
  • Do not fire up ssh (e.g. to execute qstat) on the HLRS compute server in a loop without a delay. You should wait at least 3 minutes between two commands!
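The guidelines above can be sketched as a single monitoring loop that queries the state of all jobs with one batched ssh + qstat call and waits at least 3 minutes between polls. This is a minimal, hedged sketch: the host name hawk.hww.hlrs.de and the qstat output handling are assumptions — adapt them to the login node and scheduler setup assigned to your project.

```python
import subprocess
import time

# At least 3 minutes between two commands on the HLRS compute server.
POLL_INTERVAL = 180  # seconds


def build_qstat_command(user):
    """Build ONE ssh + qstat call covering ALL jobs of this user.

    Never issue one ssh/qstat per job. The host name below is a
    placeholder (assumption); use the login node of your project.
    """
    return ["ssh", f"{user}@hawk.hww.hlrs.de", "qstat", "-u", user]


def monitor(user, jobs_finished, run=subprocess.run, sleep=time.sleep):
    """Single monitor loop for all jobs -- not one instance per job."""
    while not jobs_finished():
        result = run(build_qstat_command(user),
                     capture_output=True, text=True)
        # Inspect result.stdout here to track the state of your jobs,
        # then wait before the next poll.
        sleep(POLL_INTERVAL)
```

The `run` and `sleep` parameters are injectable only so the loop can be exercised locally without contacting HLRS; in production the defaults apply.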