- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

HPSS Introduction
= General =
The High Performance Storage System (HPSS) is developed by a collaboration consisting of several labs in the US and led by IBM. HPSS is a Hierarchical Storage Management (HSM) system designed to manage petabytes of data stored on disks and in tape libraries. More information on the system can be found here: http://www.hpss-collaboration.org/


Currently, HPSS at HLRS manages 500 TB of disk storage and more than 4 PB of tape storage, held in two copies.


= Hardware =
In the HPSS complex, 19 x86 servers form a distributed system consisting of a core server as well as disk and tape movers. Each system is equipped with 32 cores on 2 sockets and with 32 GB (movers) or 128 GB (core server) of main memory.

Metadata is held redundantly on an IBM Storwize V3700 device.

As disk cache, 4 SNA460 systems are used, controlling 500 TB of disk space.

The tapes are stored as two spatially separated copies in IBM TS3500 tape libraries holding approx. 4000 tapes. The first copy is stored on E07 "Jaguar" media, whereas the second copy is stored on LTO6 media.

= Software =
We are currently using HPSS version 7.4.2, distributed by IBM. HPSS is a Hierarchical Storage Management (HSM) system which is able to manage petabytes of data on disk and tape storage and is used by large HPC sites across the world. A map of users can be found here: http://www.hpss-collaboration.org/documents/HPSSWorldMap.pdf

= Usage =
For users, there are currently two ways to access the HPSS environment to store data: ftp and parallel ftp. More information about this is given in the [[User Access]] section. '''Please always remember''' that your data is stored on tape and may additionally reside on disk. In particular, if you have not accessed your data for a longer time, it will usually be located on tape only. This can result in some latency at the beginning of data transfers when reading data.

== Class of Service ==
The only control instrument visible to the outside is the class of service (COS). Each class of service has an ID, which is used to select the class of service. The current setup offers three classes of service for general use, with different characteristics:

* COS 112 is for files smaller than 2 GB
* COS 122 is for files larger than 2 GB but smaller than 8 GB
* COS 132 is for files larger than 8 GB

Depending on the class of service, the internal handling of files differs. Please find more information about this in the chapter Configuration Details below.
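As an illustration only, the size-based choice between the three classes of service can be sketched in Python. The `SITE SETCOS` command and the `upload` helper below are assumptions made for this sketch, not a documented HLRS interface; the behaviour at exactly 2 GB or 8 GB is also not specified in this wiki, so the boundary handling here is an assumption. Please consult the [[User Access]] section for the supported procedure.

```python
from ftplib import FTP

GB = 1024 ** 3  # thresholds taken from the class-of-service list above


def choose_cos(size_bytes: int) -> int:
    """Pick the class-of-service ID for a file of the given size.

    Boundary handling at exactly 2 GB / 8 GB is an assumption;
    the wiki does not specify it.
    """
    if size_bytes < 2 * GB:
        return 112
    if size_bytes < 8 * GB:
        return 122
    return 132


def upload(host: str, user: str, password: str, local_path: str,
           size_bytes: int) -> None:
    """Hypothetical upload sketch: select a COS, then transfer via ftp.

    'SITE SETCOS <id>' is an assumption about the server interface and
    may differ at HLRS; it is shown only to make the idea concrete.
    """
    ftp = FTP(host)
    ftp.login(user, password)
    ftp.sendcmd(f"SITE SETCOS {choose_cos(size_bytes)}")
    with open(local_path, "rb") as f:
        ftp.storbinary(f"STOR {local_path}", f)
    ftp.quit()
```

For example, a 4 GB file would be assigned COS 122 by `choose_cos`, matching the list above.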
 
= Configuration Details =
Here are some configuration details that are useful for users to know.
 
In our environment, we support three classes of service, each intended for a different file size range. The internal handling of the data differs depending on the class of service the file belongs to. Independent of the class of service used, each file is written to tape. For safety reasons, two copies of each file are written to different tape cartridges.
 
The three different classes of service are:

{| class="wikitable"
! Class of Service !! File size !! Temporary disk storage !! Tape, 1st copy !! Tape, 2nd copy
|-
| '''112''' || size < 2 GB || written to one RAID volume || one tape || a second tape
|-
| '''122''' || 2 GB < size < 8 GB || striped over 4 RAID volumes || one tape || a second tape
|-
| '''132''' || size > 8 GB || striped over 4 RAID volumes || striped over 4 tape cartridges || a different tape cartridge
|}
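The striping described above can be illustrated with a small sketch: consecutive chunks of a file are placed round-robin across 4 volumes. The chunk size and the round-robin placement policy are assumptions made for illustration; the actual HPSS stripe parameters are not given in this wiki.

```python
def stripe(data: bytes, volumes: int = 4,
           chunk_size: int = 4 * 1024 * 1024) -> list[bytes]:
    """Distribute consecutive chunks of `data` round-robin over `volumes`
    buffers, mimicking how a COS 122/132 file is spread over 4 RAID
    volumes. Chunk size and placement are illustrative assumptions."""
    parts = [bytearray() for _ in range(volumes)]
    for index, offset in enumerate(range(0, len(data), chunk_size)):
        parts[index % volumes].extend(data[offset:offset + chunk_size])
    return [bytes(p) for p in parts]
```

With a toy chunk size of 2 bytes, `stripe(b"abcdefgh", 4, 2)` places one chunk on each of the four volumes; a fifth chunk would wrap around to the first volume again.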

Latest revision as of 16:15, 24 February 2015
