Proxmox 8.0 - PCIe Passthrough Tutorial
- added Oct 4, 2023
- Grab yourself a Pint Glass, designed and made in house, at craftcomputing.store
Virtualization is great, but sometimes you just need access to physical hardware. If only there were a way to allow a virtual machine bare-metal access to PCIe cards in your server. OH WAIT! THERE IS! Whether you need access to a storage controller, graphics card, network card, or any other PCIe device, this is the video for you.
But first... What am I drinking???
Sierra Nevada (Chico, CA) Torpedo Imperial IPA (8.6%)
Written Documentation can be found here: drive.google.com/file/d/1rPTK...
Links to items below may be affiliate links for which I may be compensated
Parts from today's build:
AUDHEID 8-Bay NAS Chassis: amzn.to/47iCsxb
ERYING i7-11700B 8-Core (Non-ES): s.click.aliexpress.com/e/_Dci...
Leven DDR4 2x16GB 2666 UDIMM: amzn.to/3OmCnjv
Flex ATX 1U 500 Watt: amzn.to/3Qw9EeB
Silicon Power A60 1TB: amzn.to/44XANM1
ASM1064 8-Bay SATA Controller w/ Cables: amzn.to/3KAqULZ
Follow me on Mastodon @Craftcomputing@hostux.social
Support me on Patreon and get access to my exclusive Discord server. Chat with myself and the other hosts on Talking Heads all week long.
/ craftcomputing - Science & Technology
This tutorial series is top notch. Thank you so much, Jeff!
Proxmox really should just make these options available in the UI.
Truly. I just don't think these things occur to them when they're processing feature adds and the like. They can be slow to adopt, like Debian, which is what it's based on.
Right? They have MOST of the UI, they just need the initialization bit to be UI-driven as well.
A full-featured product like Proxmox should have all of its functions available through its UI. "Popping under the hood" with a terminal is an ugly solution, no matter how powerful it might be.
It's stupid easy in ESXi, too bad Broadcom killed it.
@@Solkre82 That's where I'm coming from too. Moving from ESXi to Proxmox - if my passthrough setup can be replicated in PVE...
@@manekdubash5022 I'm sure it can, just not as simple. I archived my ESXi 8 ISOs and Keys so I'm not worried about moving for a few years.
Who knows, Broadcom might decide to do good.. HAHAHAHA my sides hurt!
Excellent guide.
Do not forget to deselect Device Manager -> Secure Boot Configuration -> Attempt Secure Boot in the VM UEFI BIOS when installing TrueNAS. Access it by pressing the "Esc" key during the boot sequence. Otherwise you will get access denied on the virtual installation disk.
5 months later, this comment just saved me some headache.
@@wirikidor MERCI !!!
Jeff - Just wanted to give an extreme thank you for the quality and content of your videos. I just finished up my TrueNAS Scale build using your guidance and it worked like a charm. I did use an Audheid as well, but the K7 8-bay model. I went with an LSI 9240-8i HBA (flashed P20 9211-8i IT Mode) and the instructions on Proxmox 8 you provided were flawless and easily had my array of 4TB Toshiba N300's available via the HBA in my TrueNAS Scale VM. Lastly, a shout out to your top-notch beer-swillery as I am an avid IPA consumer as well! (cheers)
I've been waiting for this. I already have 2 Erying systems as my Proxmox cluster, after your first video on this, and they've been working perfectly for me, but when you originally said you couldn't get HBA passthrough to work properly, I held off buying a 3rd, as I wanted the 3rd for exactly what you've done in this video, and to have a 3rd node for ceph. Now that I can see you figured it out using a sata card, I'm off to order all the bits for the 3rd node.
Thank You, and after I order everything, I'll pop into your store to buy some glassware to show some appreciation.
"Don't virtualize truenas"
*Chuckles in 4 virtualized truenas servers in production*
STOP SAYING TH....
Wait.... nevermind :-D
Just like Stockton Rush always said.
REAL Men ALWAYS test in production.
@@sarahjrandomnumbers Lmao rip
These tutorials are so much more useful than Network Chuck's, and you don't seem like a shill trying to sell me something constantly.
Network Chuck is only good for ideas not how-to guides. He’s more of a cyber influencer to me.
This is actually such a good point. I barely/rarely watch Network Chuck anymore. He just feels fake to me now. Almost unwatchable. I haven't seen one of his videos in months.
seems like a good starting point for newbies or kids. I won't knock him for making the stuff sound exciting but I definitely grew out of his style.
I can't fucking stand that guy. "Look at my beard! Look, I'm drinking coffee! Buy my sponsored bullshit!"
Great video, I enjoy your server content a lot when it's this kind of set up.
Wish after so many years there was a simple gui option for this. Appreciate the guide!
Been searching for this for the past week or so. Love your work Jeff. Cheers
Me too, since the upgrade failed on my HP Z440 with a Xeon 2690 and a Tesla M40 24GB. Cheers
Hey Jeff, I'm from Central Oregon and have been watching your channel for quite a while now. Thank you so much for the videos - please, please, more Proxmox videos, show any and everything. Great content :) I'm trying to learn all the ins and outs of Proxmox.
I really like these series on proxmox
Thank you! Every time I'm stuck on a project in my home lab, you tend to have just the video I need, and you explain it very well!
Exactly what I had been looking for. Thanks for sharing.
I had to reinstall proxmox for the first time in over a year. This guide was very much needed today. Thanks
Thanks Jeff, you saved me a LOT of frustrating research :-) I just managed to passthrough a couple of network interfaces to a microvm within my NixOS server, and it just took me a couple of hours, I expected to spend all night on it :-D
As always, you're Jeff... Is there a situation where you aren't Jeff? Like maybe Mike? Or Chris?
I kind of like being Jeff.
@@CraftComputing Yeah it would be weird if you woke up as Patrick from STH.
That would be weird. I'd be a whole foot shorter.
@@CraftComputing Depends if you're cosplaying as an admin that day or not
@@CraftComputing Me too
Darn it. I should have done this video. I got it working about a month ago. Great information!! So many people discouraged me from doing it as they said it wouldn't work. It works great for me.
I just have to say, I spent hours trying to get my GPU to passthrough correctly, and your one comment on Memory Ballooning just fixed it! Thank you so much! I didn't even see anything about that mentioned in any of the official documentation!
An impressive, to-the-point, and yet detail-packed tutorial!
Thank you so much for this! Just what I was looking for
Just getting into my own homelab after watching for a while. Got an old ThinkCentre that I'm going to have a tinker with before fully migrating a Windows 11 PC with Plex etc. This video series is great
Thanks Jeff. As always an excellent and succinct guide. Cheers for making the effort! 😊
Thanks Jeff, great tutorial!
Thank you for this. I couldn't get hardware transcoding working properly. I turned off ballooning on the VM and BAM! It works. HUZZAH!
Thank you for the write-up, especially addressing upfront EFI vs legacy boot config for IOMMU (intel_iommu=on).
Great video 👍
Kindest regards, neighbours and friends.
Can we just take a step back and marvel at how not only is this all possible, but it also won't cost a dime in software?
Thank you sir! Just by adding a new physical NIC to TrueNAS, my write speed increased 3x on my ZFS pool! I had saturated the single onboard NIC with a lot of LXCs and VMs.
Thank you for this update. This is one of the more challenging tasks for me in Proxmox, and I was only successful through sheer dumb luck the last time I did this.
The good news? It's still deployed, and the only things I have changed are the GPUs and storage controller.
Quality stuff again. I was excited when I saw the thumbnail that I'd finally see how to properly pass through an NVMe SSD to a TrueNAS VM. Unfortunately, that didn't happen this time.
Hope that you will cover that as well at some point, and if you could explain how to get the TrueNAS VM to put the HDDs to sleep, that would be just the cherry on top.
Cheers Jeff.
Another little addition to this. It seems that you still need to add GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the /etc/default/grub boot config file if using the legacy GRUB boot menu. The legacy GRUB boot menu is still the default if installing ext4 onto a single drive.
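For anyone following along on a legacy-GRUB install, the change described above looks roughly like this - a sketch assuming a stock Debian/Proxmox layout; `iommu=pt` is an optional extra some guides add:

```shell
# Edit /etc/default/grub so the default kernel line includes the IOMMU flag:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# (on recent kernels, AMD hosts generally have the IOMMU enabled by default)

# Then regenerate the GRUB config and reboot:
update-grub
reboot
```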
This has been a life saver. I finally was able to passthrough my 6700 XT for jellyfin hardware encoding.
My friend I've been struggling with exactly this thx for the vid
It took some good amount of hours to figure things out - but at the end it was worth it! I'm using GPU passthrough to run some language models locally
Good stuff keep up the good work.
Thank you. One day, one day, I'll do a setup like this.
Awesome content, Man!
For your next tutorial I'd love to see you get some VMs running with their storage hosted on the truenas VM!
Awesome - thanks so much. Exactly what I want to do for a virtual Plex server :-)
Hey Jeff, I had issues passing through a GPU with the exact same hardware until I pulled the EFI ROM off the GPU and loaded it within the VM config PCI line. Adding the flag bootrom=“” to the line in the VM config pointed to the rom should do it. I think this is because the GPU gets ignored during the motherboard EFI bootup so the VROM gets set to legacy mode. When trying to pass it into an EFI VM it won’t boot since the VROM doesn’t boot as EFI
Could you explain a little more on how you got that working? I still can't get GPU passthrough working on my 11900h ES erying mobo.
Also did you mean "romfile=" ?
After looking at his documentation, I think you're onto something here.
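For anyone wanting to try the ROM-file approach from this thread, a rough sketch of the usual procedure - the device address `01:00.0` and the filename are placeholders, not values from the video:

```shell
# Dump the card's ROM via sysfs (run as root; replace 01:00.0 with your GPU's address)
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom            # enable reads of the ROM
cat rom > /usr/share/kvm/my-gpu.rom
echo 0 > rom            # disable reads again

# Then reference it on the hostpci line in /etc/pve/qemu-server/<vmid>.conf:
#   hostpci0: 01:00.0,pcie=1,romfile=my-gpu.rom
# Proxmox resolves romfile relative to /usr/share/kvm/.
```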
Hey Jeff, quick tip: you can use the YouTube chapters in the timeline to add timings so people can easily skip to where they need help.
The SponsorBlock extension lets you skip ads and see where you should start - try it.
Been waiting for this. All the pcie passthrough write ups are old and outdated, and the only one that worked for me on prox 7.4 was yours.
Tutorials: update-grub
Proxmox 8.0: "What's a grub?"
@@CraftComputing Exactly!
Quickly, for clarification's sake: q35 means UEFI and i440fx (or whatever) is BIOS boot?
Half the tutorials say to do one or the other, and this is the first time I've heard it mentioned otherwise, unless I just forgot 😅.
@@lilsammywasapunkrock
Both machine types support bios and uefi.
The primary difference between q35 and i440fx is that q35 uses PCI-e while i440fx uses the old PCI.
If I remember correctly, I was able to use PCI-e passthrough with i440fx but only for one device at a time.
I personally don't see any point in using i440fx in modern systems with modern host operating systems.
^^^ Bingo
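To make the q35 discussion concrete, a passthrough VM's config (`/etc/pve/qemu-server/<vmid>.conf`) typically ends up with lines like these - the VM ID, device address, and options are illustrative, not from the video:

```shell
# /etc/pve/qemu-server/100.conf (fragment, values illustrative)
machine: q35                          # PCIe-native chipset, needed for pcie=1 below
bios: ovmf                            # UEFI firmware; pair it with an EFI disk
balloon: 0                            # ballooning off, as recommended for passthrough
hostpci0: 0000:01:00,pcie=1,x-vga=1   # pass all functions of the device at 01:00
```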
This is super, thank you so much🍻🍺
Fantastic video! I'm considering setting up a Proxmox server for Home Assistant and other services. I'm contemplating running Blue Iris in a virtual machine within Proxmox. What are your thoughts on this? Alternatively, I'm also considering a dedicated NVR. My concern with the VM approach is that I might need to pass through the video card to the VM, which could potentially cause issues with other VMs that require video transcoding or other resources. What's your take on this dilemma? 😁
Great information!
Perfect video, thanks a bunch. I got GPU passthrough working on my Dell Precision T3600 with my GTX 8800.
Thanks Jeff.
This exact functionality I got working with an Nvidia P400 in Proxmox v7. I hadn't upgraded to 8 for fear of going through this again. Now I may have to take the dive.
Flash the vBIOS to force the GPU into UEFI mode and disable Legacy mode at boot?
Do you need to alter any of those CLI strings depending on chipset-connected PCIe lanes vs. direct CPU lanes?
Thanks for the video.
FYI, the instructions don't work if you're using GRUB. These instructions appear to be specific to systemd-boot.
You'll need to look in /etc/default/grub rather than /etc/kernel/cmdline to make the kernel command line changes.
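A quick way to tell which of the two cases applies to your install, using standard Proxmox tooling - the paths match the comment above:

```shell
# Shows whether the host boots via systemd-boot or GRUB
proxmox-boot-tool status

# systemd-boot (typical for ZFS-on-root): edit the single line in
# /etc/kernel/cmdline, then apply it with:
proxmox-boot-tool refresh

# GRUB (typical for ext4/LVM installs): edit GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then apply it with:
update-grub
```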
You're a damn wizard! :v Thxx Mr Magical Pants!
SR-IOV and IOMMU are completely orthogonal features and enabling one will not magically make the other work. SR-IOV simply lets the kernel use a standard way of telling PCI-E devices to split themselves into virtual functions. SR-IOV does not require an IOMMU, and IOMMU does not require SR-IOV.
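To illustrate that point: creating virtual functions is just a sysfs write to the device, independent of any IOMMU kernel options. The interface name here is an example, not from the video:

```shell
# Ask the NIC driver to create 4 virtual functions (example: one port of an i350)
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# The VFs then show up as ordinary PCI devices:
lspci | grep -i "virtual function"
```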
After the first video, I took the plunge on new server hardware... one of the ASMedia controllers passes through fine, but the other doesn't - so waiting on an LSI HBA to arrive.
Thanks Jeff... and Thanks Jeff
It'd be interesting to see a Pt 3 / hear your thoughts on further VMs (Plex for example); installed on Proxmox or through TrueNAS
thanks Jeff
Great video! Waiting for one about SR-IOV, I tried using virtual functions on my Intel I350-T4 NIC and got nowhere with it
You ever get PCIE pass through working for the x16 slot? Looking forward to part 4 😊
Did you try plugging an HDMI dummy dongle in to simulate a screen connected to the card? I once needed this to make GPU passthrough work - with one dongle on the video output of the motherboard and one on the GPU.
But if I remember well, I had trouble if the GPU dongle was connected at boot. I finally dropped GPU passthrough since I wasn't using it much, and at one point it stopped working.
Hello, side question: are you using an Erying mobo with Proxmox, and if so, how are you finding it? Are the Intel E-cores working alright? And which one are you using? How was virtualization? Some of them seem to be a bit dodgy with VT-x.
Which case are you using for your file server?
I'm looking to build my first NAS and think it looks great!
I've already got 10Gb/s fiber networking ready to go. Just need the server now. lol
Here's a link to the full build of this server, with a parts list and links in the description: czcams.com/video/NPaDCiyY3Kw/video.html
@@CraftComputing Thank you very much for the information!
I'm currently running an ACS override because my IOMMU groups suck. Does your guide deal with that as well? I need to pass a GPU and a storage controller to two different VMs, but I have only one PCIe slot that is totally separated, the other two seem to share groups with some onboard devices (USB, sound cards etc). I know my devices aren't technically separated, but it works.
Are you planning a video on USB and/or PCI passthrough to LXC containers? Something about cgroups and permissions - I never could get it to work.
Jeff - Thanks for this video.
One thing that almost every tutorial points out is not to pass through the primary (i)gpu and to blacklist kernel modules on the host.
With my new Intel N305-based firewall box, it just works fine with the iGPU. I also didn't blacklist the i915 kernel module as some advise.
I've tested it for a week now and there is no issue with HW transcoding in Emby. Also no glitches or crashes.
Not sure if they changed anything on those newer Chipsets or if something in proxmox 8.0 / 8.1 changed.
Or if it's just never been true from the beginning?
Jeff, my Nvidia Tesla K80 has two GPUs on it, along with a PCIe bridge device. Do you have to pass through the PCIe bridge to allow you to pass through the two discrete GPUs?
Great video! I wrote a hookscript a while ago to aid in PCIe passthrough. I found it useful specifically with a Ryzen system with no iGPU. It dynamically loads and unloads the kernel and vfio drivers, so when, say, a Windows gaming VM is not in use, the Proxmox console will re-attach when the VM stops. Could be useful for other devices too! If anyone is interested, let me know - I'll try to point you to the GitHub gist. I don't think YouTube likes my comment with an actual link. :)
What's the name of the repo? We'll just search for it.
@@jowdyboy Yes, seconded - sounds useful. Any idea if it works with NVidia?
I use it with Nvidia, I've tried to post several comments, but I'm assuming they keep getting flagged.
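Since the gist link keeps getting eaten, something in the spirit of the script described above might look like this - a sketch with assumed driver names and paths, not the commenter's actual code:

```shell
#!/bin/bash
# Save as /var/lib/vz/snippets/gpu-hook.sh (executable), then attach it with:
#   qm set <vmid> --hookscript local:snippets/gpu-hook.sh
vmid="$1"; phase="$2"

case "$phase" in
  pre-start)
    # Free the GPU from the host before the VM claims it
    modprobe -r nvidia_drm nvidia_modeset nvidia 2>/dev/null
    modprobe vfio-pci
    ;;
  post-stop)
    # Hand the GPU (and the host console) back after the VM shuts down
    modprobe -r vfio-pci 2>/dev/null
    modprobe nvidia
    ;;
esac
exit 0
```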
A particular reason not to pass through disks before installing is to make it easier not to mess up the installation drive, so it's good advice indeed.
Excellent tutorial on PCI passthrough.
Could you mention how to pass through the motherboard SATA and an NVMe drive?
God I wish I was at home right now so I can follow along
Hi Jeff. Got this working and was super excited. But I am struggling when trying to use e.g. NVIDIA Moonlight, since it actually mirrors the main/only connected display, which is locked to some bogus/low resolution. This limits the max available streaming resolution as well. Do you have any good idea on how to solve it? I have read a little bit online about how some people tend to use SPICE for better resolutions, but it can't expand to my native resolution either, which is 3840x1080. This was the only reason this actually didn't work for me :-(
I'm not even close to an expert, but to do this on my Lenovo NUC I used a 4K 60Hz HDMI dummy plug. With a monitor plus the plug connected, it was set to two displays, and I set the highest resolution the HDMI dummy plug could go, matching the device I was connecting to with Moonlight. I might even have used mirror instead of extend display, so that if the primary wasn't plugged in I still had a desktop. Then, after this was set, I could manipulate the resolutions up to the max of the plug - but I don't know how this would be possible with a Tesla card without HDMI or DisplayPort. I'm sure you already found a workaround. With all that bitrate, the WiFi or Ethernet needs to be decently fast for a lag-free and smooth Moonlight experience.
@@spyghetti Hi, thanks for replying. I found a workaround and got it to work. This should've been mentioned in the video by Jeff, since it is actually important to be able to get around it. Maybe this is somehow different with a headless GPU/enterprise-level GPUs/server GPUs than it is with my old 1070. But the solution I found was to install a dummy display driver of some sort. Different drivers may work - I saw that there are several possibilities out there to choose from. But as long as Windows has some sort of display with a correct "native resolution", it works. :-)
Question: with GPU passthrough I have no issues passing through a primary GPU, but I do need to have the vBIOS in the configuration. Though I am using unRAID.
Thanks for the great video! I am hoping to try this on my Dell R720 with a Windows VM.
Also quick question, can you pass through Sata ports to a VM? Or does this only work for PCIe hardware?
Usually you can pass through the integrated sata controller of your motherboard, but it depends on the exact motherboard/chipset layout whether it will work or not. I have a Kontron ITX system with an integrated i7 CPU, and for me passing through the onboard sata controller works perfectly, but usually the onboard SATA controller shares its lanes/IOMMU group with some other devices, so it could cause some issues depending on the exact layout of your board.
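A quick way to see how isolated a controller really is before trying to pass it through - this just walks the standard sysfs layout:

```shell
# Print every IOMMU group and the devices in it
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"   # e.g. 00:17.0 SATA controller ... [8086:a352]
  done
done
```

If the SATA controller shares a group with USB or audio devices, as described above, the whole group moves together unless you resort to the ACS override.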
Sure this has been asked, but what's the benefit of doing pcie passthrough for ZFS when PVE does ZFS for you and presents it to the VM?
Thanks for the video:
Is there some DB with 100% working configuration/hardware for GPU passthrough that definitely works?
Not sure if I missed it and it was addressed in the video but my scenario is similar to what's done in the video, 1 VM with TrueNAS passing through the SATA controller to the drives for the sweet sweet ZFS setup and another VM to host all my home server stuff like jellyfin, qbittorrent and elasticsearch.
In this case, what would be the best way to connect the ZFS pool between one VM to another?
Hey Jeff, have you ever tried Unraid? Would like to know your point of view on it.
Hi Jeff,
Are there any drawbacks (i.e. Performance) not blacklisting your GPU from the host Proxmox O.S.? Currently I have GPU pass through working but I didn't black list that GPU from the host O.S. and everything seems to be working without issues.
Thanks!
Same here. I did everything except the Proxmox blacklist and got it working in a Win11 VM.
I also checked the "PCI Express" box on the pass-through model in Proxmox for the video card. It did not work without this.
Additionally, my 1070 GTX needed a dummy HDMI plug (or external monitor) to initialize correctly.
If you can convert a video or see apps use CUDA without crashing the VM, then no - you are completely golden.
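For reference, the blacklist step this thread is skipping is normally just a modprobe config - driver names depend on the GPU, with NVIDIA shown as an example:

```shell
# /etc/modprobe.d/blacklist-gpu.conf - stop the host from claiming the card
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm

# Rebuild the initramfs so it takes effect at boot, then reboot:
#   update-initramfs -u -k all
```

As the comments above suggest, it is often unnecessary when the host never initializes the card; it mainly guards against the host driver grabbing the GPU before vfio-pci does.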
Awesome video, thank you. Btw, how do I go about it if I want to pass through a RAID array to a Windows Server VM?
I searched the whole internet, including AI, and nothing worked. THANK YOU SO SO MUCH for this VIDEO!!!!
I have a question: does this fix the IOMMU grouping? I have issues with the PCI slots and the NIC being in the same group. I am in the process of upgrading the BIOS on my board, but the question still stands…
ASRock decided to put all my PCIe slots in one group and the three M.2 slots in another. Needless to say, I'm pissed.
Hey Jeff. Do you know if those NVMe-to-SATA adapters can do the same? Or will the drives just appear as a few individual disks?
Yes they can be passed through. Quite a few people buy a NAS case like a jonsbo n3 and a mini-pc with a laptop cpu like the 5800u, then use a nvme to sata adapter and a breakout cable to the backplane. They then passthrough the nvme slot and all the drives follow. The downside is no ECC memory.
Curious... could it be possible to set up a server JUST to host drives and directly connect to a TrueNAS VM or otherwise to allow it access to the additional drives?
No, TrueNAS needs the storage controller to access the drives directly.
Is the physical-disk-to-VM passthrough feature in Proxmox so much worse for virtualizing TrueNAS than PCIe passthrough of the storage controller? What EXACTLY would ZFS do differently?
Thanks
This worked like a charm for me!
Turned a spare gaming laptop into a remote access gaming server.
For me the graphics card worked, and I removed the errors on my Nvidia card by not adding the sub-features of the card, like USB-C and the audio device, as advised in this tutorial. It gives an error saying I added the card twice if I do.
I set up VGA passthrough (what we called it then) back in 2013. I ran Xen, had one GPU for a Windows VM, another GPU for a linux VM, and a cheap GPU for console on the host/dom0.
Back then it was really messy with card and driver support. Nvidia supported it on Quadro but not on GeForce, so some people took a soldering iron to their GeForce cards to get them to identify as Quadro cards. Then it worked. I used AMD, which worked for setting it up, but not for taking it back down cleanly, as the driver didn't manage to reset properly. As a result, if I needed to boot any of the VMs, I needed to boot the whole system.
Still though, I could play windows games in a VM with only a ~2% performance drop, and some charming artifacting in the top left corner, while leaving anything serious to linux, without having to reboot. Though if not for the tinkering in and of itself, I should have done what I recommended on the forums, "just get two computers".
EFI-booted host: the cards don't have EFI firmware on them, so the vBIOS doesn't get mirrored into memory.
Get a dump of the vBIOS and add it as a vBIOS file in the PCI device section of your VM config.
DOH! You're probably right.
I would love an explanation of this comment or further resources. I don't understand efi, vbios, why and how that gets mirrored, or really anything that was said.
@@dozerd42 When a physical system boots, it copies the contents of your video card BIOS (vBIOS) into main system memory, into the memory region reserved for communicating with the card.
Some cards have a uefi firmware in addition or instead of a traditional vbios.
Without it though, the card won't initialize the display output during boot.
In this case, the cards didn't initialize during boot at all, so providing the video bios to the VM gives it an opportunity to initialize the card on its own.
While you can technically usually boot cards without supplying it, what will often happen is that the in memory copy will become overwritten in some cases - like if that memory region is needed for texture storage at some point.
When that happens it's necessary to reload the vbios from the card, but if you don't supply the vbios separately, sometimes this reload fails, which will hard lock your host.
Man, I ran TrueNAS in a VM for years now. I never ran into issues.
Would be interested in an LXC tutorial with GPU passthrough/sharing to it... especially with something like an Intel NUC with only one integrated GPU, or maybe just sharing/passthrough of the integrated GPU in general.
It's not passthrough for LXC - it'd just be using the host GPU directly in a virtual environment. It's the same kernel.
All of my Supermicro motherboards support PCI passthrough in the BIOS, and it works for network cards, but not for video cards. Video card passthrough is not supported on all of my Supermicro motherboards.
Can you check into that?
Is your motherboard shadowing the vBIOS? Google was my friend for creating a vBIOS file that you add to the PCI line in the conf file - oh, and I had to reset the PCI slot on VM boot with a hookscript. Took days to sort, but now I know it's no issue. Probably missed this in the video while typing the comment :)
there is no /etc/kernel/cmdline
I have had the same config running for 10 months now, with cron shutting everything down at night, and I have had no problems with my TrueNAS VM. So in a homelab environment it's the best option if you want to have Proxmox as a hypervisor.
Hi. I am currently investigating the idea of creating a Proxmox server to run various things, including macOS, since I definitely need/want that one for audio. I can't really find a clear answer, so I feel like asking you this: is it feasible to have low-latency audio on a VM? Not remotely - locally of course, through a USB audio interface. I feel like PCI passthrough of a dedicated USB card could give me something viable, but I'm not completely sure. Maybe I can just pass through my USB controller on the motherboard? But in the end, will it provide me something usable for real-time audio treatment, as in "I plug my guitar into the audio interface, and I hear its sound, processed by the computer, on my loudspeakers in real time with a low latency, under, say, 15/30ms"?
Thank you for explaining why you virtualize your file server. I do it through cli on proxmox and wondered why you would do it through a vm. But HW passthrough of the sata controller makes sense. And I'm even thinking about trying how you do yours now.
Can someone help me? At 14:50 you mention the vfio config. You show ####.####,####.#### . Which hex IDs are those? From the graphics card and the audio controller? Or the graphics card and the subsystem? Which IDs do you choose? In your written tutorial you don't specify it either... please? Thank you!
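In case it helps: those are PCI vendor:device ID pairs as printed by `lspci -nn` - usually the GPU's VGA function and its HDMI audio function, comma-separated (not the subsystem IDs). A small sketch for pulling an ID out of an lspci line; the sample line and IDs are examples, not from the video:

```shell
# lspci -nn prints IDs in brackets at the end of each line, e.g.:
#   01:00.0 VGA compatible controller [0300]: NVIDIA ... [10de:1d01] (rev a1)
# This pulls out the last vendor:device pair on a line:
extract_id() {
  sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
}

gpu_id=$(echo '01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)' | extract_id)
echo "$gpu_id"   # 10de:1d01

# Both functions then go on one line, e.g. in /etc/modprobe.d/vfio.conf:
#   options vfio-pci ids=10de:1d01,10de:0fb8
```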
was there going to be a vid about changing the repo and getting rid of the stock error message?
Thank you for sharing your experience! It was incredibly helpful in getting GPU passthrough to work. However, I needed to make a few adjustments:
In Proxmox 8, /etc/kernel/cmdline does not exist. Instead, I entered the settings in /etc/default/grub as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.modeset=0 intel_iommu=on iommu=pt video=efifb:off pci=realloc vfio-pci.ids=10de:1d01"
It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere. These are crucial because many motherboards use shadow RAM for PCIe Slot 1, which can hinder GPU passthrough if not configured properly. With this setup, I believe all your GPUs should function correctly. Additionally, I had to blacklist the NVIDIA drivers.
I was able to pass through an RTX A2000 with my Erying i9 12900H motherboard. I populated 2 of the 3 NVMe ports though.
You definitely CAN pass through your primary GPU to a VM...
I've been running a setup like this for a few years now. The 'disadvantage' is that a monitor on the Proxmox host is not available any more, and until the VM boots, the screen says 'loading initramfs'.
Yes, definitely - and the Proxmox UI is accessed from another device anyway, as it usually isn't a thing to run the UI on the Proxmox server's own GPU.
It can be handy though to have another means of connecting a GPU to the system if the SSH-interface is messed up - I use a thunderbolt eGPU in such circumstances...