- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Workspace migration




Warning: This page describes the necessary steps to migrate workspaces to the new filesystem.
At the end of 2021 all data on the old vulcan ws2 filesystem will be deleted. Follow the guide below to transfer your data to the new ws3 filesystem.


User migration to new workspaces

On the vulcan cluster a new workspace filesystem (ws3) has been integrated. The currently existing workspace filesystem (ws2) will be shut down at the end of 2021. Users therefore have to migrate their workspaces located on the old filesystem onto the new filesystem. Run the command ws_list -a on a frontend system to display the paths of all your workspaces. If a path matches a mount point in the following table, that workspace needs to be migrated to the new filesystem (see the sketch below the table).


File System   | mounted on
NEC_lustre    | /lustre/nec/ws2
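
A minimal check on a frontend system (a sketch, assuming the workspace paths appear in the ws_list -a output as described above):

# show only those of your workspaces that are still located on the old ws2 filesystem
ws_list -a | grep /lustre/nec/ws2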

Before you start

Migrating large amounts of data consumes a lot of I/O resources. Please review your data and remove what is no longer needed, or move it into HPSS.
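
To see how much data a workspace currently holds before migrating, something like the following can be used (a sketch; ws_find is assumed to be available on the frontends, and my_workspace is a placeholder name):

# resolve the workspace path and summarize its total size
du -sh $(ws_find my_workspace)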

How to proceed

  • From Sep ??th 2021 10:00 on, new workspaces will be allocated on the replacement filesystem. Existing workspaces will continue to be listed.
  • Workspaces located on the old filesystem cannot be extended anymore.
  • If you have to migrate data from a workspace on one filesystem to another, do not use the mv command to transfer the data. For large amounts of data this will fail due to time limits. Currently we recommend, e.g. for millions of small files or for large amounts of data, running the following command inside a single-node batch job: rsync -a --hard-links Old_ws/ new_ws/ (a sketch of such a job script follows after this list). The parallel copy program pcp is currently not available on vulcan.
  • Take care when you create new batch jobs. If you have to migrate your workspace from the old filesystem to the new location, this takes time. Do not run any job that uses the workspace while the migration is active; this may result in inconsistent data.
  • On Nov ??th 2021 the "old" workspaces on ws2 will be disconnected from the vulcan compute nodes. The filesystem will remain available on the frontend systems for data migration until Dec ??th 2021.
  • On Dec ??nd 2021 all data on the old filesystem will be deleted.
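
A minimal sketch of such a single-node rsync job (the node type, walltime and workspace paths are placeholders and must be adapted to your setup and the system's limits):

#!/bin/bash
#PBS -N ws-migration
#PBS -l select=1:node_type=hsw
#PBS -l walltime=24:00:00

# source and target workspace paths -- adapt to your own workspaces
OLD_WS=<YOUR OLD WORKSPACE PATH HERE>
NEW_WS=<YOUR NEW WORKSPACE PATH HERE>

# -a preserves permissions, timestamps and symlinks; --hard-links preserves hard links
rsync -a --hard-links "$OLD_WS"/ "$NEW_WS"/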

Operation of the workspaces on ws3:

  • No job of any group member will be scheduled for computation as long as the group quota is exceeded.
  • Accounting applies.
  • The maximum lifetime of a workspace is currently 60 days.
  • The default lifetime of a workspace is 1 day.
  • Please read the related man pages or the online workspace mechanism documentation; in particular, note that the workspace tools allow you to explicitly address a specific workspace filesystem using the -F option (e.g. ws_allocate -F ws3 my_workspace 10). See the sketch after this list.
  • To list your available workspace filesystems, use ws_list -l.
  • Users can restore expired workspaces using ws_restore.
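
A few example invocations of the workspace tools on a frontend system (a sketch; my_workspace is a placeholder, and the exact ws_restore syntax should be checked in its man page):

# allocate a workspace named my_workspace on ws3 for 10 days
ws_allocate -F ws3 my_workspace 10

# list the workspace filesystems available to you
ws_list -l

# list your expired workspaces that can still be restored
ws_restore -l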

Please read https://kb.hlrs.de/platforms/index.php/Storage_usage_policy

Using a parallel copy for data transfer

pcp is a Python-based parallel copy tool using MPI. It can only be run on compute nodes via mpirun.

pcp is similar to cp -r; simply give it a source directory and destination and pcp will recursively copy the source directory to the destination in parallel.

pcp has a number of useful options; use pcp -h to see a description.

This program traverses a directory tree and copies the files in the tree in parallel. It does not copy individual files in parallel (except that files larger than the chunk size given with -b are copied in parallel chunks). It should be invoked via mpirun.

We highly recommend using pcp only with an empty ~/.profile and ~/.bashrc! Furthermore, take care that only the following modules are loaded when using pcp (this can be achieved by logging into the system without modifying the list of modules and then loading only the pcp module):
1) system/pbs/19.1.1(default)
2) system/batchsystem/auto
3) system/site_names
4) system/ws/1.3.5b(default)
5) system/wrappers/1.0(default)
6) pcp
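
For example (a sketch, using the module names listed above):

# log in without modifying the default module list, then load only pcp
module load pcp
# verify that only the modules listed above are loaded
module list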


Basic arguments

Run pcp with the -p option to preserve permissions, timestamps, and ownership.

-p  : preserve permissions, timestamps, and ownership

-b C: Copy files larger than C Mbytes in C Mbyte chunks
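
For example, to copy with preserved metadata and 4096 Mbyte chunks for large files (paths are placeholders; the rank count must match the resources requested in your job, see the script below):

mpirun -np 40 pcp -p -b 4096 /path/to/source_ws /path/to/target_ws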

Algorithm

pcp runs in two phases, or in three phases if the -p option is used:

Phase I is a parallel walk of the source file tree, involving all MPI ranks in a peer-to-peer algorithm. The walk constructs the list of files to be copied and creates the destination directory hierarchy.

In phase II, the actual files are copied. Phase II uses a master-slave algorithm. R0 is the master and dispatches file copy instructions to the slaves (R1...Rn).

In phase III the permissions, timestamps, and ownership are set as in the source directory (if the -p option is used).

We highly recommend using the -p option.

Job Script example

Here is an example of a job script.

You have to change SOURCEDIR and TARGETDIR according to your setup. The number of nodes and the wallclock time should also be adjusted.

Again, pcp does NOT parallelize a single copy operation; instead, the copy operations are distributed over the nodes.

#!/bin/bash
#PBS -N parallel-copy
#PBS -l select=2:node_type=hsw:mpiprocs=20
#PBS -l walltime=00:20:00

module load pcp/2.0.0-39-ge19b

SOURCEDIR=<YOUR SOURCE DIRECTORY HERE>
TARGETDIR=<YOUR TARGET DIRECTORY HERE>

sleep 5
# number of allocated nodes and total number of MPI ranks
nodes=$(sort -u $PBS_NODEFILE | wc -l)
let cores=nodes*20          # 20 MPI ranks per node, matching mpiprocs=20 above

# measure the runtime of the copy in seconds
time_start=$(date +%s)
mpirun -np $cores pcp -p -b 4096 $SOURCEDIR $TARGETDIR
time_end=$(date +%s)

(( total_time = time_end - time_start ))
echo "Total runtime in seconds: $total_time"

Output of a run with the script

R0: All workers have reported in.
Starting 40 processes.
Files larger than 4096 Mbytes will be copied in parallel chunks.

Starting phase I: Scanning and copying directory structure...
Phase I done: Scanned 168 files, 4 dirs in 00 hrs 00 mins 00 secs (6134 items/sec).
168 files will be copied.

Starting phase II: Copying files...
Phase II done.

Copy Statisics:
Rank 1 copied 4.00 Gbytes in 1 files (15.85 Mbytes/s)
Rank 2 copied 4.00 Gbytes in 1 files (15.69 Mbytes/s)
Rank 3 copied 4.00 Gbytes in 1 files (15.72 Mbytes/s)
...
Rank 37 copied 4.00 Gbytes in 2 files (24.91 Mbytes/s)
Rank 38 copied 4.00 Gbytes in 2 files (24.94 Mbytes/s)
Rank 39 copied 4.00 Gbytes in 2 files (7.34 Mbytes/s)
Total data copied: 1.05 Tbytes in 433 files (1.81 Gbytes/s)
Total Time for copy: 00 hrs 09 mins 50 secs
Warnings 0

Starting phase III: Setting directory timestamps...
Phase III Done. 00 hrs 00 mins 00 secs
Total runtime in seconds: 601