- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Aurora Access

Introduction

The main entry point for using the Aurora TSUBASA nodes is the vulcan cluster. The vector hosts (VH) are integrated into the PBSPro batch system of the vulcan cluster and have access to its filesystems.


Access

The following frontend/login nodes are available:

  • vulcan.hww.hlrs.de

The frontend nodes are intended as the single point of access to the entire cluster. Here you can set up your environment, move your data, edit and compile your programs, and create batch scripts. Interactive usage that leads to a high load, such as running your programs, is NOT allowed on the frontend/login nodes.


VH Access

The VH can only be accessed through the batch system.
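
For example, an interactive job on a VH could be requested roughly as follows. This is only a sketch: the select statement and in particular the node_type value for the Aurora nodes are assumptions here, please check the vulcan batch system documentation for the actual resource names.

qsub -I -l select=1:node_type=aurora -l walltime=00:30:00   # node_type value is a placeholder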

Compilation

Old SX-ACE executables cannot be reused. Codes have to be recompiled.

For compilation you can use the cross compilers running on the VH nodes (they are not available on the login nodes).

  • nfort
  • ncc
  • nc++
  • mpinfort
  • mpincc
  • mpinc++

The VE uses the ELF binary format.
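
As a minimal sketch of a compilation on a VH (the source file names and the -O2 optimization flag are only illustrative):

ncc -O2 hello.c -o hello                    # serial C code for the VE
mpinfort -O2 hello_mpi.f90 -o hello_mpi     # MPI Fortran code for the VE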

Execution

The standard execution model of the VE is that the entire executable runs on the VE card; it is therefore transferred from the host to the card and executed there, with access to the filesystems of the host.

The transfer happens implicitly by calling ./a.out, which in this simple case is transferred to VE0 and executed. Stdin/stdout/stderr are connected to the VH shell, so it feels as if the executable were running on the VH, but it is in fact running on the VE. For more control over which VE the executable runs on, one can use the

ve_exec -N 0 ./a.out 

command or set the

VE_NODE_NUMBER 

environment variable.
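
For example, to run the binary on VE 3 instead of the default VE 0, both of the following variants select the card explicitly:

ve_exec -N 3 ./a.out
VE_NODE_NUMBER=3 ./a.out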

In general there are many tools under

/opt/nec/ve/bin

which can be used together with the VE nodes. There is a

/opt/nec/ve/bin/top 

and

/opt/nec/ve/bin/ps

command showing the processes on the VEs (use VE_NODE_NUMBER to select the right VE, numbered 0-7). The VE configuration can be seen with

/opt/nec/ve/bin/lscpu
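
For example, to inspect a particular VE (here VE 2), a sketch based on the commands above:

VE_NODE_NUMBER=2 /opt/nec/ve/bin/ps     # processes on VE 2
/opt/nec/ve/bin/lscpu                   # VE configuration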