
Code from Anywhere: Your SSH Guide to Remote Development in PyCharm

If you landed here, you're probably already using a remote (GPU) machine for your ML tasks, and you're sick of running git push from your laptop followed by git pull on your powerful GPU machine. Ask me how I know that 😊. And you're probably a PyCharm fan as well.
If so, you've landed in the right place.
There are several guides on the web for remote development in VS Code, but I haven't found one that works for PyCharm, so I decided to write this post.


Introduction

Welcome aboard the world of SSH-based remote development with PyCharm!
Imagine editing your DNN locally in PyCharm while harnessing the power of a remote Linux server—be it a GPU instance on Runpod, a cluster on Lambda AI, a bare-metal GPU box at Hetzner, or the new Shadeform.ai marketplace. With a secure SSH tunnel, you get:

  • Massive compute on demand
  • Consistent, shareable environments
  • Code safety—your laptop stays light!

No more fighting environment drift or lugging heavyweight workstations. In this guide, we’ll cover every SSH-specific step from host prerequisites to remote interpreter setup, plus tips on squeezing GPU performance from budget-friendly servers. Let’s supercharge your Python workflow! (runpod.io, lambda.ai, hetzner.com, shadeform.ai)


Prerequisites

Before you start, make sure your remote host meets PyCharm's requirements (a quick sanity-check script follows this list):

  • CPU & RAM: ≥ 4 vCPUs (x86_64 or arm64), 8 GB RAM. Higher clock speeds beat more cores for this use case.
  • Disk: ~10 GB free on local or block storage (avoid NFS/SMB).
  • OS: Ubuntu 18.04/20.04/22.04, CentOS, Debian, or RHEL.
  • Python & SSH: A running OpenSSH server on your Linux box and the desired Python version (e.g., /usr/bin/python3 or a virtualenv).
  • Your public SSH key: must be deployed on the server. If you are running on Runpod, Lambda, or Hetzner, it should be deployed automatically.
  • PyCharm version: this guide applies to PyCharm 2025.1.
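
Here is a minimal sketch of a sanity-check script you can copy to the server and run with the Python you plan to use as the remote interpreter. It relies only on the standard library; the torch import at the end is an assumption (remove it if PyTorch isn't installed yet).

    import platform
    import shutil
    import sys

    # Report the Python runtime PyCharm will use as the remote interpreter.
    print(f"Python    : {sys.version.split()[0]} at {sys.executable}")
    print(f"OS / arch : {platform.system()} {platform.release()} ({platform.machine()})")

    # PyCharm's remote backend wants roughly 10 GB of free disk space.
    total, used, free = shutil.disk_usage("/")
    print(f"Free disk : {free / 2**30:.1f} GB")

    # Optional: check that the GPU is visible (assumes PyTorch is installed).
    try:
        import torch
        print(f"CUDA      : {torch.cuda.is_available()} ({torch.cuda.device_count()} device(s))")
    except ImportError:
        print("CUDA      : torch not installed, skipping GPU check")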

Open SSH Configurations

  • In the bottom-left corner of the IDE, click Current Interpreter ▸ Add New Interpreter ▸ On SSH, then click Create SSH configuration.

Screenshot: Current Interpreter ▸ Add New Interpreter ▸ Create SSH configuration

Fill in Connection Details

  • Host: your server’s IP or hostname
  • Port: usually 22 (or custom)
  • Username: your SSH user
  • Click Next.
    Screenshot: fill in the connection details

Fill in Auth Details

  • Select Key pair and browse for your private SSH key. On a Mac the default location is /Users/YOUR_USER/.ssh (i.e. ~/.ssh). It's a hidden folder; press Command + Shift + . (period) in the file picker to show hidden folders if they aren't already visible. (A small snippet to locate candidate keys follows this step.)
  • Fill in the passphrase: if your SSH private key was generated with a passphrase.
  • Select Save passphrase
  • Click Next.
    Screenshot: fill in the auth details
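
If you can't remember which file is your private key, this is a small convenience sketch (assuming the default ~/.ssh location on your laptop); it lists the key files that are not .pub public keys.

    from pathlib import Path

    # The default (hidden) SSH folder in your home directory.
    ssh_dir = Path.home() / ".ssh"

    # Private keys are the id_* files without a .pub extension
    # (e.g. id_ed25519, id_rsa); the matching .pub file is the public key.
    for key_file in sorted(ssh_dir.glob("id_*")):
        if key_file.suffix != ".pub":
            print("candidate private key:", key_file)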

Introspecting SSH server

  • If you filled in all your parameters correctly, you should see a blank box; just click Next.
    Screenshot: Introspecting SSH server

Project directory and Python runtime configuration

  • Select the same environment you use locally
  • I leave all the other parameters at their defaults
  • Click Create
    Screenshot: Project directory and Python runtime configuration

Running, Testing & Debugging Remotely

  • Run Configurations: your existing Run/Debug configurations automatically use the SSH interpreter (a quick verification snippet follows below).
  • Breakpoints & Console: set breakpoints locally; the debugger runs over SSH, showing remote stack frames and variables.
  • Remote Python Console: open a Python console that executes commands on the server.
    Screenshot: Running, Testing & Debugging Remotely
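
To convince yourself that a run really executes on the GPU server and not on your laptop, you can run a tiny sketch like the one below with the SSH interpreter selected. The torch import is an assumption; drop that part if PyTorch isn't installed on the server.

    import socket

    # The hostname printed here should be your GPU server, not your laptop.
    print("running on:", socket.gethostname())

    # Assumes PyTorch is installed in the remote interpreter's environment.
    try:
        import torch
        if torch.cuda.is_available():
            print("GPU:", torch.cuda.get_device_name(0))
        else:
            print("no CUDA device visible")
    except ImportError:
        print("torch not installed in this environment")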

Deployment & Remote Host Tool Window

  • Auto-Upload on Save
    • When you make a change to your code, it is automatically deployed to the server; there is nothing else you need to take care of
    • You write code on your laptop, and it gets automatically executed on your GPU server!
    • Optionally, you may want to open a terminal on your GPU server. In the bottom-left corner of the IDE, click Terminal and choose your newly created SSH connection from the drop-down
      Screenshot: open a terminal on your GPU server

Licensing & Limitations

  • License: SSH interpreters require PyCharm Professional (Community Edition doesn't support them).
  • Limitations: only Linux servers are supported as SSH backends; no remote Windows/macOS interpreters yet, but hey, I hope you are not using a Windows server to test your ML projects!

Troubleshooting Tips

  • SSH Connection Errors: verify firewall rules and the correct port, and test with ssh -v user@host (a small connectivity check follows this list).
  • Interpreter Setup Failures: ensure the SSH config is selected in the SSH Interpreter wizard; see JetBrains support threads for similar issues (intellij-support.jetbrains.com).
  • Performance Tuning: if the remote IDE backend lags, increase its JVM heap in ~/.cache/JetBrains/RemoteDev/*.vmoptions.
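
Before digging into PyCharm settings, it can help to rule out plain networking problems first. The sketch below uses only the Python standard library; HOST and PORT are placeholders you need to replace with your own values.

    import socket

    HOST = "your.server.ip"   # placeholder: your server's IP or hostname
    PORT = 22                 # placeholder: your SSH port, if not the default

    # If this fails, the problem is networking/firewall, not PyCharm.
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as conn:
            banner = conn.recv(64).decode(errors="replace").strip()
            print("SSH port reachable, server banner:", banner)
    except OSError as exc:
        print("cannot reach the SSH port:", exc)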

Conclusion & Next Steps

You’ve now unlocked the ability to code from anywhere, tapping into remote CPUs or GPUs without leaving PyCharm. 🚀 Next, you might explore:

  • Container-based development (Docker, Kubernetes)
  • JetBrains Gateway for zero-install remote work
  • Collaborative coding via Code With Me

As for compute, cost-effective GPU backends:

  • Runpod (pay-per-second from $0.00011/s) (runpod.io)
  • Lambda AI (H100 at $1.85/hr) (lambda.ai)
  • Hetzner bare-metal GPUs (e.g. RTX-powered servers from €0.295/hr) (hetzner.com)
  • The Shadeform.ai marketplace (A100 80 GB PCIe at $1.20/hr up to H200 SXM5 at $2.45/hr) (shadeform.ai).

Happy AI coding! 😄

This post is licensed under CC BY 4.0 by the author.