Workspace migration

Warning: This page describes the necessary steps to migrate workspaces to another workspace filesystem.
On 18 October 2024 all data on the old Hawk ws10 filesystems will be deleted. Follow the guide below to transfer your data to the ws11 workspace filesystems.


User migration to new workspaces

On Hawk the workspace filesystem ws11 will be the new default workspace filesystem. The current default workspace filesystem (ws10) will be shut down on 18 October 2024. The policy settings of ws10 will be transferred to the ws11 filesystem. Users therefore have to migrate their workspaces from the ws10 filesystems onto the ws11 filesystem. Run the command ws_list -a on a frontend system to display the paths of all your workspaces. If a path matches one of the mount points listed in the following table, that workspace needs to be migrated to the ws11 filesystem.


File System   mounted on
ws10.0        /lustre/hpe/ws10/ws10.0
ws10.1        /lustre/hpe/ws10/ws10.1
ws10.2        /lustre/hpe/ws10/ws10.2
ws10.3        /lustre/hpe/ws10/ws10.3
ws10.3P       /lustre/hpe/ws10/ws10.3P
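
A quick way to check this on a frontend is sketched below (the grep pattern simply matches the ws10 mount points from the table; any hit is a workspace that still needs migration):

  ws_list -a | grep '/lustre/hpe/ws10/'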

Before you start

Migration of large amounts of data consumes a lot of I/O resources. Please review and remove data that is not needed any more, or move it into HPSS.
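
As a starting point for such a review, a simple sweep for large files that have not been accessed in a long time might look like this (the workspace path is a placeholder and the access-time threshold is arbitrary):

  # list the largest files not accessed for a year, biggest first
  find <YOUR WORKSPACE PATH HERE> -type f -atime +365 -printf '%s %p\n' | sort -rn | head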

How to proceed / Time schedule for the gradual shutdown of ws10

  • 2024-07-22 10:00:
    • The default workspace filesystem will be switched from ws10 to ws11. ws_* tools applied to ws10 then require the additional option '-F <workspacefilesystem>' (see the example after this list).
    • ws11 will get the same policy settings as ws10.
    • Quota limit settings for each user group will be moved from ws10 to ws11, and quota enforcement will be enabled on ws11! (Batch jobs will not be scheduled if your group has exceeded its quota limit.)
    • Max. number of extensions for workspaces on ws10 will be reduced from 3 to 1 (ws_allocate, ws_extend).
    • Max. duration for workspaces on ws10 will be reduced from 60 days to 40 days (ws_allocate, ws_extend).
  • 2024-08-19:
    • Max. duration for workspaces on ws10 will be reduced from 40 days to 10 days (ws_allocate).
    • Deactivation of ws_extend for ws10.
  • 2024-09-09:
    • Max. duration for workspaces on ws10 will be reduced from 10 days to 3 days (ws_allocate).
  • 2024-10-18:
    • Start removing all remaining data located on ws10.
    • Final shutdown.
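
The following lines sketch what the switch of the default filesystem means in practice; the filesystem name ws10.0 is only an example, use ws_list -l to see the names that exist for your account:

  ws_list -F ws10.0                     # list your workspaces on a specific ws10 filesystem
  ws_extend -F ws10.0 my_workspace 10   # extend a ws10 workspace (only while ws_extend is still active)
  ws_allocate my_new_workspace 30       # new allocations go to ws11 by default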

Important remarks

  • If you have to migrate data residing in workspaces from one filesystem to another, do not use the mv command to transfer the data. For large amounts of data this will fail due to time limits. For millions of small files or large amounts of data, we currently recommend running the following command inside a single-node batch job (see the job sketch after this list): rsync -a --hard-links Old_ws/ new_ws/
  • Please try to use the mpifileutils tools dcp or dsync (described below).
  • Take care when you create new batch jobs. If you have to migrate your workspace from an old filesystem to the new location, this takes time. Do not run any job while the migration process is active. This may result in inconsistent data.
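
A minimal sketch of such a single-node migration job (node type, walltime and the placeholder directories have to be adapted to your situation):

  #!/bin/bash
  #PBS -N ws-migration
  #PBS -l select=1:node_type=rome
  #PBS -l walltime=04:00:00

  SOURCEDIR=<YOUR OLD WORKSPACE ON ws10 HERE>
  TARGETDIR=<YOUR NEW WORKSPACE ON ws11 HERE>

  # -a preserves permissions and timestamps, --hard-links preserves hard links;
  # the trailing slashes copy the contents of SOURCEDIR into TARGETDIR
  rsync -a --hard-links ${SOURCEDIR}/ ${TARGETDIR}/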

Operation / Policies of the workspaces on ws11:

  • No job of any user-group member will be scheduled for computation as long as the group quota is exceeded.
  • Accounting.
  • Max. lifetime of a workspace is currently 60 days.
  • Default lifetime of a workspace is 1 day.
  • Max. number of workspace extensions is 3.
  • Please read the related man pages or the online workspace mechanism document; in particular, note that the workspace tools allow you to explicitly address a specific workspace file system using the -F option (e.g. ws_allocate -F ws11.0 my_workspace 10). A combined example follows this list.
  • To list your available workspace file systems use ws_list -l.
  • Users can restore expired workspaces using ws_restore.
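
A short combined example of these commands (workspace name and duration are arbitrary; see the man pages for the full option lists):

  ws_list -l                              # show the workspace filesystems available to you
  ws_allocate -F ws11.0 my_workspace 10   # allocate a workspace on ws11 for 10 days
  ws_list -a                              # show all your workspaces and their paths
  ws_restore -l                           # list expired workspaces that can still be restored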

Please read https://kb.hlrs.de/platforms/index.php/Storage_usage_policy

Using mpifileutils for data transfer

The mpifileutils suite provides MPI-based tools that handle typical tasks like copying, removing, and comparing large datasets, providing speedups of up to 50x compared to single-process jobs. The tools can only be run on compute nodes via mpirun.

dcp and dsync are similar to cp -r and rsync, respectively; simply provide a source directory and a destination, and dcp / dsync will recursively copy the source directory to the destination in parallel.

dcp / dsync have a number of useful options; use dcp -h or dsync -h to see a description, or consult the User Guide (https://mpifileutils.readthedocs.io/en/v0.11.1/).

It should be invoked via mpirun.

We highly recommend using dcp / dsync only with an empty ~/.profile and ~/.bashrc! Furthermore, make sure that only the following modules are loaded when using mpifileutils (this can be achieved by logging into the system without modifying the list of modules and loading only the modules openmpi and mpifileutils; see the short check after the module list):
1) system/site_names
2) system/ws/8b99237
3) system/wrappers/1.0
4) hlrs-software-stack/current
5) gcc/10.2.0
6) openmpi/4.1.4
7) mpifileutils/0.11
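
On a freshly started shell this amounts to something like the following (a sketch; the module names follow the list above):

  module load openmpi mpifileutils
  module list    # verify that only the modules listed above are loaded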


dcp

Parallel MPI application to recursively copy files and directories.

dcp is a file copy tool in the spirit of cp(1) that evenly distributes the work of scanning the directory tree and copying file data across a large cluster, without any centralized state. It is designed for copying files that are located on a distributed parallel file system, and it splits large file copies across multiple processes.

Run dcp with the -p option to preserve permissions, timestamps, and ownership.

-p  : preserve permissions, timestamps, and ownership

--chunksize C: copy files larger than C bytes in C-byte chunks (default is 4 MB)

We highly recommend using the -p option.
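
Put together, a typical dcp invocation inside a batch job could look like this sketch ($cores, SOURCEDIR and TARGETDIR are set as in the job script below; the chunk size is arbitrary):

  mpirun -np $cores dcp -p --chunksize 8MB ${SOURCEDIR}/ ${TARGETDIR}/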

dsync

Parallel MPI application to synchronize two files or two directory trees.

dsync makes DEST match SRC, adding missing entries to DEST and updating existing entries in DEST as necessary, so that SRC and DEST have identical content, ownership, timestamps, and permissions.

--chunksize C: copy files larger than C bytes in C-byte chunks (default is 4 MB)
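
After a transfer it can be worthwhile to verify the result; the mpifileutils suite also ships the compare tool dcmp, which is invoked the same way (a sketch with the same placeholder directories):

  mpirun -np $cores dcmp ${SOURCEDIR}/ ${TARGETDIR}/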


Job Script example

Here is an example of a job script.

You have to change SOURCEDIR and TARGETDIR according to your setup. The number of nodes and the wallclock time should also be adjusted.


#!/bin/bash
#PBS -N parallel-copy
#PBS -l select=2:node_type=rome:mpiprocs=128
#PBS -l walltime=00:20:00

module load openmpi mpifileutils

SOURCEDIR=<YOUR SOURCE DIRECTORY HERE>
TARGETDIR=<YOUR TARGET DIRECTORY HERE>

sleep 5
nodes=$(cat $PBS_NODEFILE | sort -u | wc -l)
let cores=nodes*128   # one MPI rank per core; matches mpiprocs=128 in the select line above

time_start=$(date "+%c  :: %s")
#mpirun -np $cores dcp -p --bufsize 8MB ${SOURCEDIR}/ ${TARGETDIR}/
mpirun -np $cores dsync --bufsize 8MB $SOURCEDIR $TARGETDIR
time_end=$(date "+%c  :: %s")

tt_start=$(echo $time_start | awk '{print $9}')
tt_end=$(echo $time_end | awk '{print $9}')
(( total_time=$tt_end-$tt_start ))
echo "Total runtime in seconds: $total_time"