Connecting Cursor to Offline HPC Compute Nodes via User-Space SSHD

Background

Connecting modern IDEs like Cursor to HPC compute nodes can be highly frustrating. Many SLURM clusters enforce strict security policies: they block inbound internet access, disable outbound internet on compute nodes (breaking tunneling tools like Ngrok or Pinggy), and require GSSAPI (Kerberos) or PAM for SSH authentication, which effectively disables standard SSH key authentication.

To work around these restrictions without root privileges, we can run a User-Space SSH Daemon (SSHD) directly on the compute node. Bound to a high port (e.g., 22222) and running under your own user account, it sidesteps the system-level Kerberos/PAM configuration and accepts connections authenticated with your local SSH keys.
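This assumes the compute nodes ship the standard OpenSSH server binary and allow user processes to listen on high, unprivileged ports. A quick optional sanity check on the login node (the path below is the common default; your cluster may install it elsewhere):

# Confirm the OpenSSH server binary is present (most distros install it at /usr/sbin/sshd)
ls -l /usr/sbin/sshd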

Step 1: Establish Internal Trust

Because you will be jumping through the login node to reach the compute node, and because the user-space daemon needs a host key it can read, you first need a key pair that lives on the cluster itself.

Log in to your HPC login node and generate an internal SSH key (it will later double as the daemon's host key). Then add it to your own authorized keys list so passwordless SSH works inside the cluster:

# Generate an internal ed25519 key with an empty passphrase (the -N "" and -f flags skip the prompts)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Add the public key to your authorized_keys to enable internal passwordless SSH
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Note: You also need to ensure your local computer’s public key (e.g., your Windows/Mac id_rsa.pub) is pasted into this ~/.ssh/authorized_keys file.
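One way to get that local key onto the cluster, run from your local machine; this is a sketch assuming your local key is ~/.ssh/id_rsa.pub and that you can already log in to the login node interactively:

# Option A: append the key automatically (requires one successful interactive login)
ssh-copy-id -i ~/.ssh/id_rsa.pub <your_username>@login.explorer.northeastern.edu

# Option B: print the key, then paste the line into ~/.ssh/authorized_keys on the cluster
cat ~/.ssh/id_rsa.pub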

Step 2: Create the SLURM Script

Next, create a SLURM batch script that will dynamically generate an SSH configuration file and launch the daemon on the allocated compute node.

Create a file named run_custom_sshd.slurm in your home directory:

cat << 'EOF' > ~/run_custom_sshd.slurm
#!/bin/bash
#SBATCH --job-name=cursor_sshd
#SBATCH --partition=jlab
#SBATCH --cpus-per-task=2
#SBATCH --mem=10GB
#SBATCH --time=30-00:00:00
#SBATCH --output=cursor_sshd_%j.log

echo "Allocated Node: $SLURM_NODELIST"

# 1. Dynamically generate a custom SSHD config
# We use port 22222, disable PAM, and force it to read your keys
cat << CONFIG > $HOME/my_sshd_config
Port 22222
HostKey $HOME/.ssh/id_ed25519
AuthorizedKeysFile $HOME/.ssh/authorized_keys
UsePAM no
StrictModes no
PidFile $HOME/my_sshd_${SLURM_JOB_ID}.pid
CONFIG

# 2. Launch the system's sshd in user-space mode
# -e writes logs to standard error, so startup errors show up in the SLURM log
# -f specifies our custom configuration file
/usr/sbin/sshd -e -f $HOME/my_sshd_config || { echo "ERROR: sshd failed to start"; exit 1; }

echo "User-space SSHD successfully started on port 22222!"

# Keep the job alive
sleep infinity
EOF
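Because HPC home directories are typically shared between login and compute nodes, you can sanity-check the generated configuration from the login node once the job is running; this sketch assumes that shared-home layout:

# Validate the generated config and host key without starting a daemon (-t = test mode)
/usr/sbin/sshd -t -f $HOME/my_sshd_config && echo "sshd config OK"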

Step 3: Submit the Job

Submit the script to the SLURM queue:

sbatch ~/run_custom_sshd.slurm

Once the job is running, check the output log (e.g., cat cursor_sshd_12345.log) to verify the daemon has started and to note the name of the allocated compute node (e.g., d4042).
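You can also ask SLURM directly which node the job landed on; the job name below matches the #SBATCH --job-name set in the script:

# List your cursor_sshd jobs with their ID, state, and allocated node
squeue -u $USER -n cursor_sshd -o "%.10i %.9T %R"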

Step 4: Configure Local SSH

On your local machine (Windows/Mac), open your SSH configuration file (~/.ssh/config). Set up a ProxyJump so your local machine routes traffic through the login node directly into your custom port on the compute node.

# 1. Define the Login Node
Host hpc-login
    HostName login.explorer.northeastern.edu
    User <your_username>

# 2. Define the Compute Node (jumping through the Login Node)
# UPDATE HostName: change d4042 to the node name from your log.
# Port 22222 is crucial: it targets your custom daemon, not the system sshd.
Host hpc-compute
    HostName d4042
    User <your_username>
    Port 22222
    ProxyJump hpc-login
    StrictHostKeyChecking no
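Before opening Cursor, it is worth confirming the whole chain from a local terminal; the command below should print the compute node's hostname (e.g., d4042) without prompting for a password:

ssh hpc-compute hostname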

Step 5: Connect via Cursor

Open Cursor, navigate to the Remote-SSH extension, and select hpc-compute to connect.

You now have a persistent, highly stable connection for your 30-day interactive jobs!
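For long-running sessions, you may also want client-side keep-alives in the hpc-compute block so idle connections are not silently dropped by intermediate firewalls; the values below are common defaults, not cluster-specific requirements:

    ServerAliveInterval 60
    ServerAliveCountMax 10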