Mounting remote filesystems from the cluster

Introduction

The most hassle-free way to mount a remote filesystem from the cluster for reading and writing is sshfs. Since each cluster node is independent of the others, the mount must be made once per node, ideally at the beginning of your job script.

Setting up public key ssh authentication

Copy your public key from /home/username/.ssh/id_rsa.pub on nansen. On the machine you wish to log into, append this key to /home/username/.ssh/authorized_keys2 (a one-line way to do this is sketched after the verification step below). To verify that key authentication works:

  [user@nansen ~]$ ssh somehost hostname
  somehost
  [user@nansen ~]$ 

It should print the remote hostname without prompting for a password. If it prompts, enable public-key authentication on the target host or check your permissions (~/.ssh should be 755 and ~/.ssh/authorized_keys2 should be 644).
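
If the key is not yet in place, it can be appended in one step from nansen. A minimal sketch, where somehost stands in for your target host; ssh-copy-id, where installed, automates the same thing:

  # Append the local public key to the remote authorized_keys2 file.
  cat ~/.ssh/id_rsa.pub | ssh username@somehost \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys2'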

Push known_hosts from nansen to all of the cluster nodes

This assumes the target host's public key is already in ~/.ssh/known_hosts on nansen; it will be once you have logged into the target host from nansen at least once. The loop below pushes that file to all of the cluster nodes:

  for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16; do \
  scp ~/.ssh/known_hosts cluster$i:~/.ssh/ ; done
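
If the target host is not yet in known_hosts on nansen, logging into it once from nansen will add it; alternatively, ssh-keyscan can fetch the key (a sketch, with somehost again standing in for the target host; verify the fetched key out of band if the network is not trusted):

  ssh-keyscan somehost >> ~/.ssh/known_hosts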

Mounting

Always mount the filesystem under your directory in /home/user (where user is your username). This directory is local to each cluster node, so a mount made on one node is not visible on the others.

  mkdir /home/user/mountpoint
  sshfs username@machinename:/path/to/mount /home/user/mountpoint 

Since this must be done on each cluster node your job runs on, and you won't know in advance which nodes those will be, put the mount at the front of your job script, guarded by a check that the filesystem is not already mounted; one such guard is sketched below.
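
A minimal sketch of such a guard, assuming the mountpoint utility from util-linux is available on the nodes (username, machinename, and the paths are the placeholders from the example above):

  # Mount only if the filesystem is not already mounted on this node.
  MNT=/home/user/mountpoint
  mkdir -p "$MNT"      # -p: succeed even if the directory already exists
  if ! mountpoint -q "$MNT"; then
      sshfs username@machinename:/path/to/mount "$MNT"
  fi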

Unmounting

  fusermount -u /home/user/mountpoint
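
If a job leaves the filesystem mounted on several nodes, the same loop style used for known_hosts above can unmount it everywhere. A sketch, assuming nodes cluster01 through cluster16 and passwordless ssh between them; nodes without the mount will simply report an error, which is harmless:

  for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16; do \
  ssh cluster$i fusermount -u /home/user/mountpoint ; done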