Here's how the Minisforum MS-01 WILL replace ALL of your servers! (XCP-ng vs Proxmox vs ESXi)
- Uploaded 22 May 2024
- 🔥 vip-scdkey 30% discount code: Gear
Windows 11 Pro ($22): biitt.ly/FjKfa
Windows 10 Pro ($16): biitt.ly/sXVT9
Windows 11 Home ($22): biitt.ly/9gHsm
Office 2016 Pro ($27): biitt.ly/CmicL
Office 2019 Pro ($48): biitt.ly/WZwtJ
www.vip-scdkey.com/
The MinisForum MS-01 could be the perfect addition to, or replacement for, all your homelab needs. We take a close look at how each of the three most popular hypervisors runs on this very strange Mini PC. This is an XCP-ng vs Proxmox vs ESXi showdown! Which will I end up choosing?
MS-01 Review: • This Mini PC can repla...
MinisForum MS-01: store.minisforum.com/products... (not an affiliate link)
US: amzn.to/4ao8LMc
AU: amzn.to/4cLoTZL
UK: amzn.to/43JldUp
Chapters
00:00 - MS-Oh YEAH!
01:31 - A little upgrade
03:29 - This is NOT a tutorial or a how to guide
04:33 - XCP-ng testing configuration
09:17 - XCP-ng observations & notes
12:06 - Proxmox testing configuration
16:20 - Proxmox observations & notes
19:05 - VMware ESXi testing configuration
21:22 - VMware ESXi observations & notes
25:51 - Fork Broadcom but why ESXi?
28:39 - TrueNAS quick migration
30:31 - Memory Allocation & Management Observations with all Hypervisors
34:34 - We're just scratching the surface
Hey! Olivier here (CEO of Vates and creator of both Xen Orchestra & XCP-ng): thanks for the review! Indeed, XO Lite will gain more and more features progressively. And yes, it makes a lot more sense to use Xen Orchestra once you have more than one host, because it's a central console able to manage thousands of VMs and hosts from a single point.
Thanks again for testing it! Let me know anytime if you need anything regarding the XCP-ng/XO couple :)
Keep up the great work, and thank you for providing a great solution. The only advice I would offer: if you want to expand your offering, make it as easy to set up as Proxmox. That's the number one complaint I hear from people in the industry, and I know people who develop at Rancher who are missing out because of it.
Hey Oliver! Didn't expect to see you here! Thanks so much for watching!
Hi Olivier, do you have a roadmap for when NVIDIA vGPUs will be supported on XCP-ng? My client base is in engineering, and their visual libraries are developed on NVIDIA architectures; changing to AMD is not feasible.
@@luminaire7085 It is possible, but every time I post the answer with a link, my comment doesn't get published :'( Check the XCP-ng forum post from "splatunov" in the thread called "Nvidia Tesla P4 for vgpu and Plex encoding".
@@luminaire7085 It's already possible. I've posted the link to the how-to multiple times, but my comment never makes it through.
I'm going to assume you never installed the microcode patch for the big/little cores in Proxmox? Craft Computing has an in-depth series on big/little core testing in Proxmox.
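For reference, here's a rough sketch of how that microcode update is usually applied on a Proxmox host. This assumes Debian 12 / Proxmox VE 8, where the `intel-microcode` package lives in the `non-free-firmware` component; your sources.list layout may differ, so treat the `sed` line as an illustration rather than a one-size-fits-all command.

```shell
# Add the non-free-firmware component to the Debian repo line
# (assumption: a stock "main contrib" sources.list on Debian 12).
sed -i 's/main contrib/main contrib non-free-firmware/' /etc/apt/sources.list

# Install the Intel microcode package and reboot so it loads at early boot.
apt update
apt install -y intel-microcode
reboot

# After the reboot, confirm the kernel picked up the new microcode:
dmesg | grep -i microcode
```

This is host configuration, so run it on the Proxmox node itself, not inside a VM.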
Loving this addition to the channel ‼ (was gonna say 'twist' but suppose it's more of a return-to-roots type thing). Your enthusiasm shines through even more than with the desktop stuff. Nice one. Keep it going.
re: memory management with XCP-NG and Proxmox
You can configure the VMs (in either of those hypervisors) to use a (memory) ballooning device (or not).
If you enable memory ballooning, both XCP-NG and Proxmox will dynamically allocate memory out of what's available.
The risk is that if you oversubscribe the RAM, VMs can crash (due to running out of memory) or hit other stability problems.
re: "it's not that obvious"
When you create the VM (I know this for Proxmox, and I would imagine it's similar for XCP-NG), you have the option to tick a checkbox that says whether you want the VM to use a ballooning device or not. I forget whether it's in the advanced options, but it is there.
(I tend to always create my VMs with the advanced checkbox ticked, so I don't remember if it's shown otherwise.)
But it's there.
And if you forget to set it when creating the VM, you can always go back into Proxmox, edit the VM's memory configuration, enable it, and then start the VM.
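The same thing can be done from the Proxmox CLI. A minimal sketch (VM ID 100 is just an example): ballooning is driven by giving the VM a maximum memory size plus a minimum the balloon may shrink it down to.

```shell
# Allow VM 100 up to 8 GiB, with the balloon able to reclaim down to 2 GiB.
# Setting --balloon 0 instead disables the ballooning device entirely.
qm set 100 --memory 8192 --balloon 2048

# Verify the resulting memory configuration:
qm config 100 | grep -E 'memory|balloon'
```

With `--balloon` set equal to `--memory` (the default behavior), the VM keeps its full allocation and the balloon is only used for reporting.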
For Windows, you'd need to install the virtio guest tools.
For Linux, I think that you need to install qemu-guest-agent.
Not sure for FreeBSD.
(But if you're using TrueNAS Scale, which is Debian-based, you would install qemu-guest-agent.)
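For a Debian-based guest on Proxmox, the guest-agent setup above looks roughly like this (a sketch; VM ID 100 is hypothetical):

```shell
# Inside the Debian/Ubuntu guest: install and start the agent.
apt update && apt install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host: expose the agent channel to VM 100
# so the host and guest can actually talk to each other.
qm set 100 --agent enabled=1
```

The agent channel usually only appears in the guest after a full VM stop/start, not just a reboot from inside the guest.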
Awesome video fams, love seeing virtualisation stuff
You just got a new sub for looking at this machine, and for even knowing Wendell's channels. With your knowledge, I'm glad I found you.
Thanks for showing the LSI card inside; I have that MS-01 with XCP-ng and was wondering how to extend storage. An LSI card with an external connector seems logical. Any suggestions for a disk enclosure?
Our setup is pretty DIY. We have an old Silverstone 3U chassis with 16 bays on the front. I have the backplane connected up to a SAS expander inside the chassis that is plugged into one of those cheap GPU-mining PCIe slot extenders. czcams.com/video/kUDIfakYV7U/video.htmlsi=ddAbyrLiqUaF5xNi&t=282 if you wanna see how I put it together. It's not pretty, but it works amazingly!
Fellow VMware admin here. For a single host, I'd recommend Proxmox (ESXi is no longer an option for homelabbers), but for more than one, XCP-ng is better, unless you need maximum performance (XCP-ng's current SMAPIv1 storage stack limits speed, with the newer v3 forthcoming to improve this). As an FYI, I've heard that Broadcom is removing embedded management in future versions of vSphere, while XCP-ng is actually adding it with XO Lite, which is limited in features but present in the 8.3 beta.
I agree completely. I really hate what Broadcom is doing; I'm so gutted by it. For now I'm going to stay with ESXi because it's going to save me some time over the next year, and I don't get a lot of downtime to migrate everything. I quite like the direction XO Lite is going. I HOPE they make it as feature-rich as possible.
More than 1?
@@dansanger5340 More than one host: clustering/pools.
A bit behind on tech news it seems.
ESXi via VMUG Advantage (about $180/year with coupon) is as much an option for homelabbers as it ever was. It's still the best hypervisor all around, and with vCenter, included in the Advantage subscription, its management and feature set are best-of-breed.
MS-01: a chef's kiss of a mini PC for the homelab.
Minisforum stuff is so fucking cool
I hope they make a Ryzen 8000 version of the MS-01. That would make them even cooler.
@@GearSeekers yessss that's what I'm waiting for too
Would like to see how you've got it set up with your JBOD.
It's hard when you're invested in a product, having trained on it and used it extensively, and then they disappoint you with horrible decisions. I hope Proxmox will live up to your expectations in the future and let you move on from ESXi!
I feel like a whole part of my life has been ripped out and erased. I'm honestly so gutted by it.
@@GearSeekers Don't worry, Proxmox runs as stable as ESXi. I have been running gaming VMs, NAS VMs, Pi-hole VMs, pfSense, OPNsense, Nextcloud… My servers are up 24/7, and I have never had a problem that was down to Proxmox. One was on for over four years without a problem, and the only two hardware failures were not Proxmox's fault!
Have you had any success with a solution for the random shutdown of proxmox?
FYI, about your machine turning off: something I've experienced quite recently is that my DDR5 was a bit flaky, and with DDR5 it seems the Linux kernel prefers to just hard reset rather than risk corruption (possibly because the on-die ECC is "not great"). Swapping the RAM fixed the problem. I'm not promising anything, but it's worth a try.
18:18 I'm having the same turn-off issue too!! I was trying to find logs to see if they tell me why. I felt this thing was too good to be actually just good!
Very interesting! Was it just with Proxmox, or are you running a different OS?
@@GearSeekers Proxmox running Windows 10 or 11. I started noticing it when using TestDisk to recover a drive I accidentally formatted over. It would run for days, then I'd check on it and couldn't find it on the network until I restarted. It's become very annoying. I can't tell if the system is overheating, or maybe it's the power supply, or even Windows freaking out trying to do updates, since I have it blocked from the internet. Running Linux VMs doesn't seem to cause any issues at all.
@@GearSeekers I also did the microcode thing, as well as SR-IOV, with middling results. I also notice that sometimes I log onto Proxmox to find that an attempted Proxmox update took out the entire system, and all my VMs show a question mark symbol until I restart. This thing is quickly becoming a regret for me.
Have thermals been OK for you? There are some reports of the paste job being bad, and of people lowering temps by 10°C (regular paste) or 20°C (liquid metal) by repasting.
Thermals have been pretty good so far. Put it into production this week and will do an update soon :)
He stuck with ESXi (bleh). Saved you 36 minutes.
All three hypervisors support memory ballooning with a single checkbox. That said, PCIe passthrough needs direct memory access (the guest's memory must stay pinned), so ballooning is impossible no matter which hypervisor you use.
👍🏻👍🏻👍🏻👍🏻👍🏻
I love my mum; a sandwich _and_ this video? How _did_ she know?
Magic!
Did you try Nutanix Community Edition, Nic? Don't get me wrong, I am an ESXi fanboy.
I just got done cramming an Intel X710 PE310G4SPI9L-XR-CX3 4-port 10GbE SFP+ card into three of these. For fuck's sake, they didn't give us much room to work with. The card fits and works like a charm, but the only way I could get the four ports in the back to fit was to take off the rear cover. It was either that or cut out that extruded bit of plastic on the left of the port opening; it would fit like a charm if that weren't in the way.

Love XCP-NG myself. The Proxmox kids tend to tell me that XCP-NG takes too much work to install, but I agree, XCP-NG is more powerful. I am a bit concerned about the limited cooling with an LSI or SFP+ card, because both tend to get super hot, but I'll probably build a custom case, fit them all in, and route air through the MS-01s with auxiliary fans.

The bit about Unraid being shit is a bit inflammatory for the YouTube community, but I wouldn't have said it any differently. It's fine for folks who just want a rock-solid solution for setting up NFS/SMB and the ability to run a few Docker images without much, if any, knowledge, but dude, I can't handle the loss of bandwidth from my 10GbE cards in Unraid. Networking is shit in Unraid. I absolutely love networking in XCP-NG; to me that's one of its biggest benefits. Setting up complex VLANs is so much easier in XCP, and in my experience easier than in Proxmox. Good video!
XCP-ng I think is pretty easy to deploy and install. Proxmox is okay but if I had to pick between the two based on feature set and if I had multiple hosts I'd probably go with XCP-ng. It definitely feels more geared towards serious workloads.
Unraid is garbage? I guess that's your opinion, but I strongly suggest trying Unraid 6.12 or the 6.13 public beta when it's released; it may change your mind. Interesting to know why you think it's garbage, though.
The FUSE filesystem is a straight up bad idea for any serious server filesystem.
Low performance by nature, and borderline negligent from a data integrity standpoint. Completely unsuitable as a home server.
If you ignore that and use ZFS, then all you're doing is paying money for basically the same features as all the open-source hypervisors, but Unraid doesn't support clustering or workload migration, despite both KVM and Docker workloads supporting it.
I'm LOVING unraid so far for my uses.
Unraid is not designed for mission critical use. It's for hobbyist use. I would never recommend anyone use it for serious workloads.
@@GearSeekers I actually agree with this, being an Unraid user of many years. I actually used ESXi 8 for anything remotely critical, and that served me very well. Such a shame hardware sensor info is lacking for the MS-01 in ESXi, given that it has no IPMI.
Wow! I think this is the first time I've seen this opinion, with which I agree, in print. RAID 4 (striping with a dedicated parity disk) was a bad idea and Unraid's so-so related implementation remains so.
If only these were made in North America (US or Canada) instead of a foreign actor.
Name a single computer that is "Made in North America (US or Canada)".
North America will do, if customers are willing to pay more, like a 2x price increase.
wait why is unraid garbage?
Yeah, seems a bit much, right? It has pros and cons like any other solution, but it's not "trash".
Why is Unraid garbage? I have not seen any of your videos, so I don't know if you have done a video on the why. If you haven't, can you make one?
Not really; this is just more sponsored garbage. Why do I say this? Because everybody and his brother is reviewing these. You can do better with a more traditional mobo and going AMD, particularly on enterprise and the higher end. AMD is crushing Intel in all market segments. Is this just random luck? Probably not.
At this price point with that CPU, you can't do it with a traditional mobo. Minisforum has an AMD version at $399/$599, but it trades the networking for a full PCIe 5.0 slot and two M.2 slots; to get an HBA and a networking card in there, you go down to one M.2, which lowers your redundancy if you wanted to use dual M.2 for the OS/virtual machines.
@@msolace580 Fair enough, a pretty good argument, but next-gen chips will shift the equation a bit; not in the immediate future, but soonish (more than likely). You can likely get a refurb with gobs more memory and cores, but it will use more energia; it won't be the Jeb Bush version (lo energia).
You sound new here, so let me explain a few things. First of all, we live in an apartment, and we run this channel from that apartment. I need all of our gear to be small and quiet; we don't have a lot of space for big servers, so decommissioning our larger server and using this to replace it makes more sense. If there were a Mini PC with dual SFP+ 10GbE and a PCIe slot that had an AMD CPU, I would much prefer that over the oddities of these hybrid Intel CPUs. I got my hands on the MS-01 purely because it has 3x M.2 slots, dual SFP+ ports, and a single PCIe slot. With enough RAM it makes for the perfect host for all of the VMs we need. We have three separate storage servers, and we are moving one of them from bare metal to a VM on this server connected up to our disk shelf. Again, the MS-01 is the perfect replacement for that server, as it's smaller, quieter, runs cooler, and consumes less power.
There is no conspiracy. People are using and reviewing the MS-01 because it's good. If you like AMD, that's cool. If you like Intel, that's also cool. You should be more focused on the right tool for the job, not on who makes the silicon in a CPU. The MS-01 is the right tool for this job; I don't care what CPU it has.
Unraid is garbage! Unsubscribe. JK, you do you. I disagree.
Unraid is for basic setups. It's got a lot of unnecessary overhead, and in the industry no one takes it seriously. The reason is that it's a hodgepodge of packages you can easily install yourself, yet you have to pay for it. Proxmox can do everything Unraid does, for free.
It's not just the ESXi licensing change that's frustrating; it's that Broadcom and its vendors don't update their source code. The kernel has moved from 5.10.x all the way to 6.8, yet Broadcom's NIC source code dates from the 2.6.x/3.1 kernel era. No wonder it can't match NVIDIA or Intel; even Chinese DPU vendors keep their source code updated and open to audit. This is sad for what used to be a great company.