- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Thread And Memory Pinning


Introduction

Virtual CPUs

Nodes on Linux clusters usually contain multiple CPUs, which themselves contain multiple cores. Linux enumerates all cores as separate virtual CPUs; the file /proc/cpuinfo lists information on the virtual CPUs in your system. If hyperthreading is supported by the CPUs and enabled in the system, each hyperthreading core is reported as a separate virtual CPU as well. The virtual CPUs are ordered by the following criteria (most to least significant):

  • virtual hyperthreading core id (if hyperthreading is present)
  • physical CPU (socket) number
  • core number within physical CPU
Example

Assume a system with 2 CPU sockets, each containing a quad-core CPU with 2-way hyperthreading enabled. The virtual CPUs of Linux then correspond to the hardware in the following way:

Virtual Linux CPU   CPU/Socket   Core within CPU   Virtual Hyperthreading Core
        0                0              0                       0
        1                0              1                       0
        2                0              2                       0
        3                0              3                       0
        4                1              0                       0
        5                1              1                       0
        6                1              2                       0
        7                1              3                       0
        8                0              0                       1
        9                0              1                       1
       10                0              2                       1
       11                0              3                       1
       12                1              0                       1
       13                1              1                       1
       14                1              2                       1
       15                1              3                       1
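
To check the actual mapping on a given machine, one can inspect the 'processor', 'physical id' and 'core id' fields of /proc/cpuinfo, for example with (assuming the usual x86 field names):

grep -E 'processor|physical id|core id' /proc/cpuinfo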

Organization of Physical Memory

In an x86 system containing more than one physical CPU, physical memory is usually shared between all CPUs (and the cores within them) and is fully cache-coherent. Still, the memory is organized into multiple banks, each associated with a specific CPU socket. Memory in a bank is accessible from all cores, but the access is faster when done from the CPU socket the bank is associated with. In the example above the system would contain 2 memory banks, one associated with socket 0, the other with socket 1. So the access speeds would be the following:

Virtual Linux CPU     Access speed to bank 0   Access speed to bank 1
0,1,2,3,8,9,10,11     fast                     slow
4,5,6,7,12,13,14,15   slow                     fast

This behavior makes it desirable to explicitly control the affinity of threads to physical CPUs and of memory to memory banks. The available methods to achieve this are explained in the following sections.

Command Line Tools

Linux provides command line tools to control the affinity of complete processes to specific virtual CPUs and memory banks. Of course this is not fine-grained enough to handle multi-threaded (e.g. OpenMP) processes, but it is a simple means for testing purposes or for pinning the processes of closed-source MPI-parallel software.

Taskset

The command taskset controls which virtual CPUs a process is allowed to use.

  • With the following syntax it sets the CPU affinity for a newly started process:

taskset <cpu mask> <command> [argument1] [argument2]...

  • With this syntax it sets the process affinity for a process with a given PID:

taskset -p <cpu mask> <pid>

CPU mask

The <cpu mask> is a number representing a bit mask of the allowed virtual CPUs: bit 0 corresponds to CPU 0, bit 1 to CPU 1, and so forth. The number may be given in hexadecimal if prefixed with '0x'.

Examples
CPU mask   Virtual CPUs the process is allowed to use
0x1        0
0x2        1
0x4        2
0x8        3
0x10       4
0x3        0, 1
0xf        0, 1, 2, 3
0xf0       4, 5, 6, 7
0xf0f      0, 1, 2, 3, 8, 9, 10, 11
0xf0f0     4, 5, 6, 7, 12, 13, 14, 15
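
For example, to restrict a newly started program (here the hypothetical binary ./my_app) to virtual CPUs 0-3, i.e. all cores of socket 0 in the example system above:

taskset 0xf ./my_app

To change the affinity of an already running process, e.g. one with PID 1234, to the same virtual CPUs:

taskset -p 0xf 1234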

Numactl

The tool numactl can pin a newly started process to specific virtual CPUs or 'virtual nodes'. A virtual node here usually means the combination of a physical CPU/socket and its associated memory bank. The command 'numactl -H' shows the virtual node layout numactl assumes for the system.

  • The generic syntax for numactl is:

numactl [options] <command> [argument1] [argument2]...

  • The following options might be useful:
Option                              Effect
--physcpubind=<cpu list>            binds the process to the virtual CPUs given in the list
--cpunodebind=<virtual node list>   binds the process to the physical CPUs associated with the virtual nodes given in the list
--membind=<virtual node list>       binds the process to the memory banks associated with the virtual nodes given in the list
Examples
  • Bind the process to execute only on virtual CPUs 1, 2, or 3

numactl --physcpubind=1,2,3 <command>

  • Bind the process to execute only on virtual node 1 (CPU in socket 1)

numactl --cpunodebind=1 <command>

  • Bind the process to use only memory from virtual node 0 (memory bank 0)

numactl --membind=0 <command>

  • Bind the process to execute only on, and use memory only from, virtual node 0

numactl --cpunodebind=0 --membind=0 <command>

C Interface

From within the program code, individual threads can be pinned to CPUs and individual regions of memory can be pinned to memory banks.

Pinning Threads and Processes

The function 'sched_setaffinity' defined in header file 'sched.h' can set the affinity of a process or thread to a set of CPUs. The function 'pthread_setaffinity_np' defined in header file 'pthread.h' does the same for a POSIX thread.
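
The following is a minimal sketch of pinning the calling process to one virtual CPU with 'sched_setaffinity' (assuming a GNU/Linux system; the feature test macro _GNU_SOURCE is required for the CPU_* macros, and virtual CPU 2 is just an example choice):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);      /* start with an empty CPU set */
    CPU_SET(2, &set);    /* allow virtual CPU 2 only */

    /* PID 0 means the calling process */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* from here on the process executes on virtual CPU 2 only */
    return 0;
}

For a POSIX thread, 'pthread_setaffinity_np' is used analogously; it takes the pthread_t handle of the thread instead of a PID, but the same cpu_set_t mask.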

Pinning Memory

The function 'set_mempolicy' defined in header file 'numaif.h' sets the memory bank affinity for memory subsequently allocated by the calling process. The function 'mbind' defined in the same header file binds a specific region of memory to a memory bank.
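
The following is a minimal sketch of binding one region of memory to memory bank 0 with 'mbind' (assuming the libnuma development files are installed and the program is linked with -lnuma; the region size is arbitrary):

#include <numaif.h>      /* mbind, MPOL_BIND */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 4096;

    /* mbind operates on whole pages, so allocate page-aligned memory */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    unsigned long nodemask = 1UL << 0;   /* bit 0 set: node/bank 0 */

    /* restrict the pages of buf to memory bank 0 */
    if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0) {
        perror("mbind");
        return 1;
    }

    /* pages of buf are placed on bank 0 when first touched */
    return 0;
}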