Clipboard to VMware vSphere and Proxmox

Capturing Proxmox VM traffic remotely from a Windows host using Wireshark

This guide will show you how, step-by-step. We’ll use a single command to pipe traffic from your Proxmox host directly into your local Wireshark GUI.

The Big Picture: How It Works

This technique uses a powerful combination of tools:

  1. SSH: We’ll open a secure shell connection to the Proxmox host.
  2. dumpcap: This is the command-line capture engine that Wireshark itself uses. We’ll run it on the Proxmox host to do the actual packet capture.
  3. The tap Interface: Proxmox creates a virtual network interface on the host (like tap122i0) for every network adapter on a running VM. We’ll tell dumpcap to listen to this specific interface.
  4. The Pipe (|): We’ll “pipe” the raw packet data from the remote dumpcap command, through the encrypted SSH tunnel, and directly into our local Wireshark application.

Step 1: Install Wireshark on Windows

This one is simple. If you don’t already have Wireshark, go to wireshark.org and download the official Windows installer.

Run the installer, and make sure to let it add Wireshark to your system’s PATH if it asks. This will make running it from the command line easier.

Step 2: Install dumpcap on Your Proxmox Host

This is the most important step on the server-side. The dumpcap utility isn’t installed on Proxmox by default, but it’s available in the standard repositories as part of the wireshark-common package.

  1. SSH into your Proxmox host as root:
     ssh root@<your-proxmox-ip>
  2. Update your package lists and install wireshark-common:
     apt update
     apt install wireshark-common

When it asks if non-superusers should be able to capture packets, you can select “Yes” for convenience, but since we’ll be connecting as root, it doesn’t really matter.

Why this package? This package provides /usr/bin/dumpcap. This is crucial because it’s in the default SSH PATH, avoiding many “command not found” errors that can happen when trying to use tcpdump (which is in /usr/sbin).
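Before leaving the SSH session, it's worth confirming the install actually landed where a non-interactive shell can see it. A minimal check (safe to run on any machine, since it only reports what it finds):

```shell
# Report whether dumpcap is reachable via PATH, the same way a
# non-interactive SSH session would see it. On a freshly provisioned
# Proxmox host this should report /usr/bin/dumpcap.
if command -v dumpcap >/dev/null 2>&1; then
  status="found at $(command -v dumpcap)"
else
  status="missing - install wireshark-common"
fi
echo "dumpcap: $status"
```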

Step 3: Find Your VM’s Network Interface Name

You can’t just capture from eth0. You need to find the specific tap interface that Proxmox has assigned to your VM.

  1. On your Proxmox host, find your VM’s ID:
     qm list
     You’ll see a list of your VMs. Let’s say the one you want to monitor is VM 122.
  2. Now, list the network interfaces associated with that VM ID:
     ls /sys/class/net/ | grep tap122
  3. The output will be something like tap122i0. This is the interface name you need. (The i0 corresponds to net0 in the VM’s hardware tab, i1 would be net1, and so on.)
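Because the naming scheme is predictable, you can also build the interface name directly from the VM ID and NIC index. A minimal sketch, using VM 122 from the example above:

```shell
# Build the tap interface name from the VM ID and NIC index:
# tap<VMID>i<N>, where N matches netN in the VM's Hardware tab.
vmid=122
nic=0
iface="tap${vmid}i${nic}"
echo "$iface"   # prints tap122i0
# On the Proxmox host, confirm the interface really exists:
# ls /sys/class/net/ | grep "tap${vmid}"
```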

Step 4: Run the All-in-One Capture Command

Now it’s time to put it all together. Open a Command Prompt (cmd.exe) or PowerShell on your Windows machine.

Navigate to your Wireshark installation directory. This is the most reliable way to ensure Windows can find wireshark.exe.

Now, run the following command, replacing the IP and interface name with your own:

ssh root@192.168.0.XXX "dumpcap -i tap122i0 -P -w - -f 'tcp port 443 and not port 22'" | wireshark -i - -k -p

Wireshark should pop open on your desktop and immediately start showing a live capture of all HTTPS traffic from VM 122.

Breakdown of the Magic Command

Here’s what each part of that command does:

Remote Part (on Proxmox)

  • ssh root@192.168.0.XXX: Connects to your Proxmox host as root.
  • dumpcap: Runs the capture utility on the host.
  • -i tap122i0: Tells dumpcap to listen only to the interface for VM 122.
  • -P: Uses the modern pcapng format.
  • -w -: Writes the packet data to standard output (the console) instead of a file.
  • -f 'tcp port 443 and not port 22': This is your capture filter. This example captures all HTTPS traffic (tcp port 443) but crucially ignores your own SSH traffic (not port 22) so you don’t capture the capture itself!

Local Part (on Windows)

  • |: The pipe. This takes all the output from the ssh command…
  • wireshark: …and pipes it directly into your local wireshark.exe.
  • -i -: Tells Wireshark to read from standard input (the pipe) instead of a local network card.
  • -k: Starts the capture immediately.
  • -p: Tells Wireshark not to put the interface into promiscuous mode. (With a pipe this has no practical effect, since the actual capture happens remotely in dumpcap.)

And that’s it! You now have a powerful, low-impact way to debug traffic from any VM on your Proxmox host without ever having to log into the VM itself.

Speedrunning a Terraform setup in Proxmox

SRC: https://registry.terraform.io/providers/Telmate/proxmox/latest

root@proxmox:~# pveum role modify terraform_role -privs "Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.PowerMgmt SDN.Use"

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install terraform

SRC: https://registry.terraform.io/providers/Telmate/proxmox/latest/docs/resources/vm_qemu
[Main.tf]
terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.2-rc05"
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://192.168.0.xxx:8006/api2/json"
  pm_user             = "terraform"
  pm_api_token_id     = "terraform@pve!terraformAPI"
  pm_api_token_secret = "19ca604a-xxxx"
  #pm_password        = "NotMyPazzsword"
  pm_tls_insecure     = true
  pm_debug            = true
}

resource "proxmox_vm_qemu" "my_vm" {
  name        = "my-vm"
  target_node = "threadripper"
  clone       = "UbuntuWorkstation1"
  cores       = 2
  memory      = 2048

  disk {
    storage = "local-zfs"
    type    = "disk"
    slot    = "scsi0"
    size    = "32G"
  }
}
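If you'd rather keep the token secret out of main.tf, the Telmate provider can also pick its credentials up from environment variables. A sketch (the values are the same placeholders used in the provider block above):

```shell
# Export provider credentials instead of hard-coding them in main.tf.
# Values below are the placeholder examples from the provider block.
export PM_API_URL='https://192.168.0.xxx:8006/api2/json'
export PM_API_TOKEN_ID='terraform@pve!terraformAPI'
export PM_API_TOKEN_SECRET='19ca604a-xxxx'
# With these set, pm_api_url / pm_api_token_id / pm_api_token_secret
# can be removed from the provider "proxmox" block.
```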

user@user-Standard-PC-i440FX-PIIX-1996:~$ nano main.tf
user@user-Standard-PC-i440FX-PIIX-1996:~$ terraform init
Initializing the backend…
Initializing provider plugins…

  • Finding telmate/proxmox versions matching "3.0.2-rc05"…
  • Installing telmate/proxmox v3.0.2-rc05…
  • Installed telmate/proxmox v3.0.2-rc05 (self-signed, key ID A9EBBE091B35AFCE)
    Partner and community providers are signed by their developers.
    If you’d like to know more about provider signing, you can read about it here:
    https://developer.hashicorp.com/terraform/cli/plugins/signing
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run “terraform init” in the future.

Terraform has been successfully initialized!

user@user-Standard-PC-i440FX-PIIX-1996:~$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:

  + create

Terraform will perform the following actions:

# proxmox_vm_qemu.my_vm will be created

  + resource "proxmox_vm_qemu" "my_vm" {
      + additional_wait           = 5
      + agent                     = 0
      + agent_timeout             = 90
      + automatic_reboot          = true
      + automatic_reboot_severity = "error"
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = (known after apply)
      + bootdisk                  = (known after apply)
      …

user@user-Standard-PC-i440FX-PIIX-1996:~$ terraform apply
proxmox_vm_qemu.my_vm: Creating…
proxmox_vm_qemu.my_vm: Still creating… [00m10s elapsed]
proxmox_vm_qemu.my_vm: Still creating… [00m20s elapsed]
proxmox_vm_qemu.my_vm: Creation complete after 29s [id=threadripper/qemu/155]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Install Axolotl on Ubuntu with RTX Pro 6000 support

Apply the Blackwell Proxmox host crash fix to:

/etc/modprobe.d/nvidia-graphics-drivers-kms.conf
Solution: https://forum.level1techs.com/t/do-your-rtx-5090-or-general-rtx-50-series-has-reset-bug-in-vm-passthrough/228549/35

Adding a Proxmox Node that already contains guest Virtual Machines to a Cluster

On node1 (with guests)

Create a new cluster or get join information.

On node2 (with guests)

Copy the node directories over to node1:
scp -r /etc/pve/nodes/* node1:/etc/pve/nodes (e.g. scp -r /etc/pve/nodes/* 192.168.x.x:/etc/pve/nodes)

Then remove the local copies and join the cluster:
rm -r /etc/pve/nodes/*

NOTE: The joining machine will sync its VMs from the cluster’s host, including its PCI mappings and firewall rules. If you have any of these on the joining node, back them up beforehand!
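The note above can be turned into a quick backup step before joining. A hedged sketch — the BACKUP_DIR override is hypothetical, and the /etc/pve paths shown are assumptions about where mappings and firewall rules live on your node, so adjust to what is actually there:

```shell
# Back up node-local config that a cluster join can overwrite.
# BACKUP_DIR is a hypothetical override; defaults to ~/pve-backup.
backup="${BACKUP_DIR:-$HOME/pve-backup}"
mkdir -p "$backup"
for d in /etc/pve/mapping /etc/pve/firewall; do
  # Copy each directory only if it exists on this node.
  if [ -e "$d" ]; then cp -a "$d" "$backup/"; fi
done
echo "backed up to $backup"
```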