- Information in the HLRS Wiki is not legally binding and is provided without warranty -

Workspace migration

From HLRS Platforms
  
<!-- {{Warning
| text = This page originally described the steps necessary to migrate workspaces to the new filesystems back in 2017. It is mostly kept for documentation purposes.<br/>
Long-term, the most valuable information is the description of the utility ''pcp'', which allows copying directory structures in parallel on Lustre filesystems. Some of the original scripts shown on this page have been modified to account for changes in the HLRS environment. They are up-to-date as of July 2018.
}} -->

== User migration to new workspaces ==

With the installation of the HPE Apollo 9000 system (Hawk), a new fast workspace filesystem was integrated. For a transition period, the workspace filesystem of the predecessor system, the Cray XC40 Hazel Hen (Sonexion, ws9), remained mounted at the same time.

Users now have to migrate their workspaces located on the old filesystem to the new filesystems. Run the command ''ws_list -a'' on a frontend system to display the paths of all your workspaces. If a path matches one of the mount points in the following table, that workspace needs to be migrated to the new filesystem.
  
 
  
 
{| class="wikitable"
|-
! File System
! mounted on
|-
| ws9.0
| /lustre/cray/ws9/0
|-
| ws9.1
| /lustre/cray/ws9/1
|-
| ws9.2
| /lustre/cray/ws9/2
|-
| ws9.3
| /lustre/cray/ws9/3
|-
| ws9.4
| /lustre/cray/ws9/4
|-
| ws9.5
| /lustre/cray/ws9/5
|-
| ws9.6
| /lustre/cray/ws9/6
|-
| ws9.6p
| /lustre/cray/ws9/6
|}
  
== Before you start ==

Migrating large amounts of data consumes a lot of I/O resources. '''Please review and remove data you no longer need, or move it into [[High_Performance_Storage_System_(HPSS)| HPSS]].'''

== How to proceed ==
* From <font color=red>NovDecember XXth 2020 X0:00</font> on, new workspaces will be allocated on the replacement filesystems. Existing workspaces will still be listed.
* Workspaces located on the old filesystems cannot be extended anymore.
* If you have to migrate data from a workspace on one of the filesystems listed above, do not use the ''mv'' command to transfer the data: for large amounts of data it will fail due to time limits. We recommend the [[Workspace_migration#Using_a_parallel_copy_for_data_transfer | parallel copy program ''pcp'']] for large amounts of data in large files. If that fails, e.g. for millions of small files, the following command may help: ''rsync -a --hard-links Old_ws/ new_ws/''
* Take care when you create new batch jobs. If you have to migrate a workspace from an old filesystem to the new location, this takes time. Do not run any job on that workspace while the migration is in progress; this may result in inconsistent data.
* On <font color=red>January 31st 2021</font>, the "old" workspaces ws9.* will be disconnected from the HPE Apollo 9000 compute nodes. The filesystems will remain available on the frontend systems for data migration until February 14th 2021.
* On <font color=red>February 15th 2021</font>, all data on the old filesystems will be deleted.
  
== Operation of the workspaces ==

* No job of any group member will be scheduled for computation as long as the group quota is exceeded.
* Workspace usage is subject to accounting.
* The maximum lifetime of a workspace is currently 60 days.
* The default lifetime of a workspace is 1 day.
* Please read the related man pages or the online [[Workspace_mechanism | workspace mechanism document]].<BR>
: In particular, note that the workspace tools allow you to explicitly address a specific workspace file system using the <tt>-F</tt> option (e.g. <tt>ws_allocate -F ws14.1 my_workspace 10</tt>).
* To list your available workspace file systems, use <tt>ws_list -l</tt>.
* Users can restore expired workspaces using ''ws_restore''.

Please read https://kb.hlrs.de/platforms/index.php/Storage_usage_policy
 +
== Using a parallel copy for data transfer ==

''pcp'' is a Python-based parallel copy program using MPI. It can only be run on compute nodes via ''mpirun''.

''pcp'' is similar to ''cp -r'': give it a source directory and a destination, and it will recursively copy the source directory to the destination in parallel.

''pcp'' has a number of useful options; use ''pcp -h'' to see a description.

The program traverses a directory tree and copies the files it finds in parallel. It does not copy individual files in parallel (except that large files can be split into chunks, see the ''-b'' option below). It should be invoked via ''mpirun''.
 +
 +
=== Basic arguments ===

Run '''pcp''' with the '''-p''' option to preserve permissions, timestamps, and ownership.<br>
'''Do not''' use the '''-l''' and/or '''-ls''' options to enable file striping. WS14 is not a Lustre filesystem; using '''-l''' and/or '''-ls''' when copying files to WS14 leads to the following error:

 R0: All workers have reported in.
 Starting 256 processes.
 Will copy lustre stripe information.
 Files larger than 4096 Mbytes will be copied in parallel chunks.
 
 ERROR: You have asked me to set lustre striping attributes, but /lustre/hpe/ime is not a lustre filesystem
 Exiting.
 Total runtime in seconds: 9

'''-p''' : preserve permissions, timestamps, and ownership

'''-b C''': copy files larger than C Mbytes in C Mbyte chunks
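The chunking behind '''-b''' can be illustrated with plain shell tools. This is only a conceptual sketch of the idea (each chunk is an independent copy operation that pcp can hand to a different worker), using a hypothetical 10-Mbyte demo file and a 4-Mbyte chunk size:

```shell
# Conceptual sketch of "-b C" chunking -- this is NOT pcp itself.
C=4                          # chunk size in Mbytes (the job example below uses 4096)
SRC=/tmp/chunk_demo_src
DST=/tmp/chunk_demo_dst
dd if=/dev/zero of="$SRC" bs=1M count=10 status=none   # 10-Mbyte demo file

size=$(stat -c %s "$SRC")
chunks=$(( (size + C*1024*1024 - 1) / (C*1024*1024) )) # ceiling division

for (( i = 0; i < chunks; i++ )); do
    # each C-Mbyte chunk is copied independently; pcp distributes
    # these chunk copies over its MPI ranks
    dd if="$SRC" of="$DST" bs=1M count=$C skip=$((i*C)) seek=$((i*C)) \
       conv=notrunc status=none
done

cmp -s "$SRC" "$DST" && echo "chunked copy matches"
```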
 +
 +
=== Algorithm ===

pcp runs in two phases, or in three phases if the '''-p''' option is used:

Phase I is a parallel walk of the file tree, involving all MPI ranks in a peer-to-peer algorithm. The walk constructs the list of files to be copied and creates the destination directory hierarchy.

In phase II, the actual files are copied. Phase II uses a master-slave algorithm: R0 is the master and dispatches file copy instructions to the slaves (R1...Rn).

In phase III, permissions, timestamps and ownership are set on the destination as in the source directory (only if the '''-p''' option is used).

We highly recommend using the '''-p''' option.
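The phase structure can be mimicked with standard tools to make it concrete. This is a sketch on a hypothetical demo tree under /tmp, with ''xargs -P'' standing in for the MPI workers of phase II:

```shell
SRC=/tmp/pcp_demo_src
DST=/tmp/pcp_demo_dst
mkdir -p "$SRC/a/b"
printf 'x' > "$SRC/a/f1"
printf 'y' > "$SRC/a/b/f2"

# Phase I: walk the tree, recreate the directory hierarchy,
# and collect the list of files to copy
( cd "$SRC" && find . -type d ) | while read -r d; do mkdir -p "$DST/$d"; done
( cd "$SRC" && find . -type f ) > /tmp/pcp_demo_filelist

# Phase II: copy whole files, several at a time
# (pcp's master dispatches one file per worker; here xargs runs 4 copies at once)
xargs -P 4 -I{} cp "$SRC/{}" "$DST/{}" < /tmp/pcp_demo_filelist

# Phase III: replay the source directory timestamps on the destination,
# as pcp does for metadata when -p is given
( cd "$SRC" && find . -type d ) | while read -r d; do touch -r "$SRC/$d" "$DST/$d"; done
```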
 +
 +
=== Job Script example ===

Here is an example of a job script. Change SOURCEDIR and TARGETDIR according to your setup, and adjust the number of nodes and the wallclock time. Again, pcp does NOT parallelize a single copy operation; rather, the individual copy operations are distributed over the nodes.

 #!/bin/bash
 #PBS -N parallel-copy
 #PBS -l select=2:mpiprocs=128
 #PBS -l walltime=00:20:00
 
 module load pcp/2.0.0-39-ge19b
 
 SOURCEDIR=<YOUR SOURCE DIRECTORY HERE>
 TARGETDIR=<YOUR TARGET DIRECTORY HERE>
 
 sleep 5
 # number of allocated nodes times 128 MPI processes per node
 nodes=$(sort -u "$PBS_NODEFILE" | wc -l)
 (( cores = nodes * 128 ))
 
 time_start=$(date +%s)
 mpirun -np $cores pcp -p -b 4096 "$SOURCEDIR" "$TARGETDIR"
 time_end=$(date +%s)
 
 (( total_time = time_end - time_start ))
 echo "Total runtime in seconds: $total_time"
Output of a run with the script:

 R0: All workers have reported in.
 Starting 256 processes.
 Files larger than 4096 Mbytes will be copied in parallel chunks.
 
 Starting phase I: Scanning and copying directory structure...
 Phase I done: Scanned 168 files, 4 dirs in 00 hrs 00 mins 00 secs (6134 items/sec).
 168 files will be copied.
 
 Starting phase II: Copying files...
 Phase II done.
 
 Copy Statisics:
 Rank 1 copied 4.00 Gbytes in 1 files (15.85 Mbytes/s)
 Rank 2 copied 4.00 Gbytes in 1 files (15.69 Mbytes/s)
 Rank 3 copied 4.00 Gbytes in 1 files (15.72 Mbytes/s)
 ...
 Rank 253 copied 4.00 Gbytes in 2 files (24.91 Mbytes/s)
 Rank 254 copied 4.00 Gbytes in 2 files (24.94 Mbytes/s)
 Rank 255 copied 4.00 Gbytes in 2 files (7.34 Mbytes/s)
 Total data copied: 1.05 Tbytes in 433 files (1.81 Gbytes/s)
 Total Time for copy: 00 hrs 09 mins 50 secs
 Warnings 0
 
 Starting phase III: Setting directory timestamps...
 Phase III Done. 00 hrs 00 mins 00 secs
 Total runtime in seconds: 601

Latest revision as of 03:16, 21 November 2020


