3 Node Hyperconverged Proxmox cluster: Failure testing, Ceph performance, 10Gb mesh network

  • Published 21 Jul 2024
  • Why stop at 1 server? This video goes over Proxmox clusters, what they can do, and how failure is handled.
    Thanks to QSFPTEK for providing the network cables and transceivers used in this video. These products are available in the links below:
    10G SFP+ DAC: www.qsfptek.com/product/34865...
    SFP-10G-T: www.qsfptek.com/product/10009...
    SFP+-10G-SR: www.qsfptek.com/product/30953...
    OM3: www.qsfptek.com/product/99661...
    Let me know if you have any ideas that I can do with this cluster. I'd love to try more software and hardware configurations. This video also skipped many of the details when setting up this cluster. Let me know if you want me to go into more details with any parts of this video.
    00:00 Intro
    00:32 Hardware overview
    00:57 Networking hardware setup
    03:06 Software overview
    03:30 Ceph overview
    05:06 Network configuration
    06:30 Advantages of a cluster
    07:10 Ceph performance
    08:45 Failure testing
    09:00 Ceph drive failure
    09:28 Network link failure
    10:00 Node failure
    11:06 Conclusion
    Full Node Specs:
    Node0:
    EMC Isilon x200
    2x L5630
    24GB DDR3
    4x500GB SSDs
    Node1:
    DIY server with Intel S2600CP motherboard
    2x E5 2680 v2
    64GB
    5x Sun F40 SSDs (20x 100GB SSDs presented to the OS)
    Node2:
    Asus LGA 2011v3 1u server
    1x E5 2643 v4
    128GB DDR4
    4x500GB SSDs
  • Science & Technology

Comments • 97

  • @ZimTachyon
    @ZimTachyon 1 year ago +30

    You are genuinely great at presenting this content. You first hinted at not using a switch which caught my attention right away. Then you showed the triangle configuration to answer how you anticipate it would work. Finally you asked and answered the same questions I had like how do you avoid loops. Excellent presentation and extremely valuable.

  • @pauliussutkus526
    @pauliussutkus526 1 year ago +7

    Would love to have a video with a detailed explanation of how you set up Proxmox, put the nodes in a cluster, set up the mesh network using FRR, and check connectivity between the nodes (iperf3 or ip -6 route). Adding a subnet for the cluster (Ceph). Setting up Ceph and best practices for using 2 copies of data or 2+1 (parity). Also how to avoid failures. I think a full tutorial on that would be great, or you could divide it into parts. Anyway, good job.
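    A minimal sketch of the kind of FRR OpenFabric configuration the Proxmox "Full Mesh Network for Ceph Server" wiki page (linked in a reply further down) describes for this routed mesh: each node carries its cluster IP on the loopback and runs fabricd over its two direct links. The interface names, loopback address, and NET below are placeholders, not values from the video.

    ```
    # /etc/frr/frr.conf (fragment, one node) - requires fabricd=yes in /etc/frr/daemons
    interface lo
     ip address 10.15.15.50/32        # this node's cluster/Ceph IP (placeholder)
     ip router openfabric 1
     openfabric passive
    !
    interface ens19                   # direct 10G link to the second node
     ip router openfabric 1
     openfabric hello-interval 1
    !
    interface ens20                   # direct 10G link to the third node
     ip router openfabric 1
     openfabric hello-interval 1
    !
    router openfabric 1
     net 49.0001.1111.1111.1111.00    # must be unique per node
    ```

    After restarting FRR, "vtysh -c 'show openfabric topology'" should list the other two nodes, and iperf3 between the loopback addresses is a quick bandwidth check.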

  • @iamweave
    @iamweave 11 months ago +6

    Nice. Usually I watch instructional videos at 1.25x or 1.5x -- yours is the first one I thought I was going to have to run it at lower than 1x!

  • @TooLazyToFail
    @TooLazyToFail 1 year ago +3

    This is a cool project! I (unintentionally) learn things that help me at work every time you post one of these.

  • @bluesquadron593
    @bluesquadron593 1 year ago +7

    Running this kind of setup on three identical EliteDesk SFF nodes with a dedicated M.2 drive for Ceph. Even with a single 1Gb connection to a router, everything works great. Ceph likes memory, so I have to run with at least 24 GB.

  • @davidkamaunu7887
    @davidkamaunu7887 1 year ago +2

    Awesome presentation! and @03:13 free range chickens in the background!! 🤠👏

  • @shivex
    @shivex 1 year ago

    Great video, very cool to see all this in action. You are spot on with your content, loving it!

  • @allards
    @allards 11 months ago +3

    Thank you for this video, I had never heard of the Proxmox Full Mesh Network Ceph feature before.
    I recently bought three mini PCs for the purpose of building a Proxmox HA cluster. I was planning on getting a small 2.5Gb switch for the storage.
    Since the mini PCs have two 2.5Gb ports I will use them in a full mesh network, buying separate USB-C to Ethernet adapters for the LAN connectivity.
    For my homelab such a setup is more than powerful enough.
    Going to have a lot of fun (and frustration 😅) with an advanced Proxmox setup and a Kubernetes cluster on top of it.

  • @MikeDeVincentis
    @MikeDeVincentis 1 year ago +6

    Nice job. Explained very well.

  • @martyewise
    @martyewise 9 months ago

    Thanks! Super vid! Searching for parts and planning construction of my own PVE cluster.

  • @subpixel2234
    @subpixel2234 9 months ago +2

    Great content. I'd really like to watch a deep dive on network setup that covers separate networks for Ceph (>=10Gb), VM access outside of the cluster, and an intra-cluster management network (…)

  • @GapYouIn2
    @GapYouIn2 5 months ago

    Good stuff! Good to see someone show that you can just grab commodity hardware from wherever and make a cluster that is fault tolerant, lol. I run several Ceph clusters and it definitely gets better with scale, but it still leaves a lot to be desired. Interesting to see it so well integrated with Proxmox.

  • @dtom19
    @dtom19 1 year ago +4

    Love the content. I currently have a 4-node cluster in production with PVE and Ceph, with VM storage on SSDs and cold storage on spinning rust with an SSD DB/WAL, but I would like to see something on EC pools. I know you can create 4+2 on 3 nodes by placing the chunks in pairs per host, but I can't quite get my head around the CRUSH rule for it. The logic behind this is to increase storage efficiency.
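    For what it's worth, the usual way to express "4+2 with two chunks per host across three hosts" is an EC profile with failure domain osd plus a CRUSH rule that first picks 3 hosts and then 2 OSDs inside each. A rough sketch, with the profile, rule, and pool names made up for illustration:

    ```
    # erasure-code profile: k=4 data + m=2 coding chunks
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=osd

    # rule added to the decompiled CRUSH map (crushtool), then re-injected:
    # pick 3 hosts, then 2 OSDs in each -> 6 chunks, 2 per host
    rule ec42_by_host {
            id 2
            type erasure
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step choose indep 3 type host
            step chooseleaf indep 2 type osd
            step emit
    }

    # pool using that profile and rule
    ceph osd pool create ecpool 128 128 erasure ec42 ec42_by_host
    ```

    Note that losing one host with this layout loses exactly m=2 chunks: the data stays readable, but with the default min_size of k+1 the PGs stop accepting writes until the host returns or the chunks recover elsewhere.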

  • @MikeDent
    @MikeDent 1 year ago

    Amazing knowledge and enthusiasm. I think Proxmox should employ you.

  • @Ronaaronhunt
    @Ronaaronhunt 1 year ago +12

    Great content. It would be great if you could cover maintenance of the cluster, things like upgrading a hard drive and/or replacing one of the cluster PCs if there is a hardware failure.

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago +11

      Thanks for the idea. I'll start planning for this video soon

    • @jamescross2652
      @jamescross2652 1 year ago +2

      @@ElectronicsWizardry And also updating it. I assume that should be straightforward? If a node reboot is required, do you also just do that? For ZFS I found you could just plug in a new drive, it finds it, and you can increase your storage that way. You cannot, however, decrease it. And if you want to upgrade all drives, I suspect it might be better to build a new array and then migrate to it. It would be good if you could just pull a drive, replace it with one twice the size, and have it deal with it.

    • @dtom19
      @dtom19 1 year ago +1

      Ceph prefers drives of the same size and type. It will work without them being identical, but performance will suffer. Having said that, if you want to increase the size of all disks, it's not too bad: mark an OSD as out, let Ceph rebuild its PGs, stop the OSD, destroy it, replace it with a bigger drive, and create a new OSD on the new drive.
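      A rough command-level sketch of that procedure (the OSD id 3 and /dev/sdX are placeholders; Proxmox can also do most of this from the GUI):

      ```
      ceph osd out 3                            # let Ceph rebalance data off OSD 3
      ceph -s                                   # wait until all PGs are active+clean again
      systemctl stop ceph-osd@3                 # stop the daemon on the node hosting it
      ceph osd purge 3 --yes-i-really-mean-it   # remove it from the CRUSH map and OSD list
      # swap in the bigger drive, then create a new OSD on it:
      pveceph osd create /dev/sdX
      ```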

  • @joshuamaserow
    @joshuamaserow 1 year ago

    Well done dude. You leveled up your game. Glad I subscribed.

  • @MickeyMishra
    @MickeyMishra 1 year ago +1

    I love it when old hardware gets used. Sure, it may take more power, and mixing and matching may be hard to do, but it's overall a better idea for uptime. The chances that 3 sets of gear from different product lines fail at the same time? Yeah, not going to happen!
    It's wonderful that more people are using DAC cables. I stepped away from home server stuff years ago, but it's nice seeing other folks keep the hobby alive.

  • @TheRaginghalfasian
    @TheRaginghalfasian 1 year ago

    Great video, thanks for making it. I thought when you said you were going to cause a failure you would just cut the power to one of them.

  • @alejandroberistain4831

    Awesome video, thank you for sharing!

  • @cmacpher2009
    @cmacpher2009 8 months ago

    Since you asked: how about a reliable, VL-intensive OLTP database using no-data-loss log shipping and very fast failover on a multi-node active/passive HA cluster config with enterprise-class database products like Oracle and HANA? Hit it hard with every server hardware, OS, network, database, heartbeat, corruption, simulated WAN, DC environment, and disaster failure scenario you can come up with. Show that this product can compete in enterprise environments. Perhaps it can. Enjoy the challenge. I look forward to viewing more of your videos. Amazing talent you have, loved the chickens.

  • @markjones9180
    @markjones9180 1 year ago

    Awesome video, learnt a lot, thanks for sharing!!!

  • @rocketi05
    @rocketi05 1 year ago +2

    I love the random chickens behind you. Great content!

  • @lquezada914
    @lquezada914 1 year ago +3

    You're a champ - thanks for all that information in such a short time. I am currently working with passthrough, trying to get my RTX 2060 to be detected in a Windows 11 VM. Hopefully I can figure it out by this week.

    • @hotrodhunk7389
      @hotrodhunk7389 10 months ago +1

      Did ya get it? For me Windows just doesn't want GPU passthrough in a VM.

    • @lquezada914
      @lquezada914 10 months ago

      @@hotrodhunk7389
      When I had my lab set up I did get it to work, but you have to make sure the GPU is compatible with Linux, as some cards have more stable drivers than others. I tried with a 2060 and a 1070. The 1070 worked fine; the 2060 gave me trouble but it did work.
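      For anyone following along, the usual Proxmox PCIe passthrough checklist looks roughly like this; an Intel host is assumed, and the PCI address 01:00 and VM id 101 are placeholders:

      ```
      # /etc/default/grub - enable the IOMMU (amd_iommu=on on AMD), then update-grub and reboot
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

      # /etc/modules - load the VFIO modules at boot
      vfio
      vfio_iommu_type1
      vfio_pci

      # find the GPU's PCI address
      lspci -nn | grep -i nvidia

      # attach the whole GPU to VM 101 (q35 machine type recommended when using pcie=1)
      qm set 101 -hostpci0 01:00,pcie=1,x-vga=1
      ```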

  • @CD3WD-Project
    @CD3WD-Project 1 year ago

    Great video. I just finished getting our last VMs off a way overpriced Nutanix cluster, I was looking at putting Proxmox on it, and you have me sold. No, I did not buy the Nutanix; they got it 6 months before I took over.

  • @reasoningCode
    @reasoningCode 1 year ago +1

    Love your content!

  • @GuillermoPradoObando
    @GuillermoPradoObando 1 year ago

    Great work, thanks for sharing it with us

  • @TheBlaser55
    @TheBlaser55 11 months ago

    WOW I have been looking at something like this for a while.

  • @RyouConcord
    @RyouConcord 6 months ago +1

    god dang wizard indeed. your content is rad man

  • @flahiker
    @flahiker 1 year ago +1

    Great video. A suggestion to test your setup is to simulate a power outage and see how the cluster responds. I have a 3-node Proxmox cluster running Ceph and I am setting up an extra cheap PC to run NUT to manage the UPS. My goal is to simulate a power outage (unplug the UPS) and have the cluster gracefully shut down, then restart when power is restored.

    • @Mr.Leeroy
      @Mr.Leeroy 8 months ago

      What is the point of a 4th PC if it is still a single point of failure as the NUT master? Just connect the UPS to any of the 3 nodes.
      With a UPS that has a network card you could probably access it from any node. In the case of a USB UPS, some sort of hardware hack is probably required, like an Arduino-controlled USB switch based on ATX PS_ON logic.
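      A rough sketch of the NUT side of this, assuming the node holding the USB cable acts as the server and the others as clients; the UPS name, IP, and password are placeholders:

      ```
      # /etc/nut/ups.conf on the node with the USB connection
      [apc]
          driver = usbhid-ups
          port = auto

      # /etc/nut/nut.conf (netserver here, netclient on the other nodes)
      MODE=netserver

      # /etc/nut/upsmon.conf on every node (needs a matching user in upsd.users)
      MONITOR apc@10.0.0.10 1 upsmon secretpass master   # use "slave" on the client nodes
      SHUTDOWNCMD "/sbin/shutdown -h +0"
      ```

      "upsc apc@10.0.0.10" from any node is a quick check that the clients can see the UPS.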

  • @shephusted2714
    @shephusted2714 1 year ago +1

    Try adding NVMe and think about going to 40G or bridged/bonded 10G, then see where you get the best performance boosts - good video! Dual-port 40G ConnectX cards on eBay are about 50 bucks if you shop around. This would likely prevent any disk I/O issues, but you would also probably have to go to NVMe to really take advantage of it. Since hardware is getting more reasonable and the refurb market is burgeoning, I think upgrading the cluster is a good way to go before adding nodes. Looking forward to updates and follow-ups. Please talk about Proxmox backup and show how long a backup of the cluster takes before and after upgrades - this topic is great for SMBs who need five-nines uptime!

  • @roybatty2268
    @roybatty2268 1 year ago +2

    You have chickens. You are cool! Love your channel.

  • @gustersongusterson4120

    How do you have to configure VMs to have HA on nodes with different hardware? I've read a little about this but haven't gone in depth and I'm curious. Great video, thanks for it!

  • @bhupindersingh3880
    @bhupindersingh3880 1 year ago

    Great video, looking forward to more stuff

  • @krzycieslik6650
    @krzycieslik6650 1 year ago

    Could someone tell me where I can find instructions on how I'm supposed to configure Ceph with this cache? I have a similar problem with 1.53 MB/s in CrystalDiskMark...

  • @syav7998
    @syav7998 1 year ago

    Very nice 👍

  • @psutkus
    @psutkus 1 year ago

    Is there a way to kill 1 node (for example, the electricity is gone) and still have a working VM without any interruption? Or does fencing happen first and after that the VM restarts and loads up? That usually takes around 1-2 minutes. I would like to know if there are any suggestions for getting 0 downtime, since the data is in shared Ceph.

  • @esra_erimez
    @esra_erimez 1 year ago

    Wow, this is impressive! My company gave me some old InfiniBand cards I'd love to try this with. I just need the servers

  • @marcorobbe9003
    @marcorobbe9003 6 months ago

    Hi and thanks for your great videos 🙏
    I am planning to set up an HA cluster with three ZimaBoards or something in that range for home automation (Node-RED, Grafana, ...).
    Right now I am starting with Proxmox and am stuck on one topic: (how) is it possible to share data between VMs or containers?
    I think Proxmox will run on its own disk. The VMs and containers are on a separate SSD - later on Ceph storage.
    When I set up a container, that container gets its own virtual disk assigned that is placed on the external SSD / the Ceph storage.
    Is it possible, and how, to have a folder / disk area / partition - let's call it a "shared folder" - where different containers and maybe also VMs can read and write data?
    Later on there may be a container with a simple NAS software solution or just an SMB share that gives me access to that "shared folder" via LAN, so I can look at the data or back it up from time to time.
    Yes, I could run an external NAS and connect an SMB share to the containers, but that is not what I want.
    I would be very happy if someone could help me out with how to do that.
    Thanks a lot
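    For containers the usual answer is a bind mount of a host directory; VMs can't use bind mounts and need a network share (SMB/NFS) or a shared filesystem such as CephFS instead. A minimal sketch with made-up container IDs and paths:

    ```
    # on the Proxmox host
    mkdir -p /mnt/shared

    # bind-mount the same host directory into containers 101 and 102 as /shared
    pct set 101 -mp0 /mnt/shared,mp=/shared
    pct set 102 -mp0 /mnt/shared,mp=/shared

    # unprivileged containers see shifted UIDs/GIDs, so either configure an ID mapping
    # or relax permissions on the host directory for a quick test:
    chmod 777 /mnt/shared
    ```

    On a Ceph-backed cluster the shared directory can live on CephFS so it stays reachable from every node.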

  • @martyewise
    @martyewise 5 months ago

    Thanks again. I've got a small cluster up and running with ceph, etc. I'm trying to work out some of the network config using SDN... Not a lot of info out there that I've been able to find going over those details... Any chance you've got a video in the pipeline going over SDN in detail?
    Thanks again for your time and effort on this stuff. 🙂

    • @ElectronicsWizardry
      @ElectronicsWizardry 5 months ago

      A networking/SDN video is planned for the future. I'm glad you like my videos.

  • @shephusted2714
    @shephusted2714 1 year ago +1

    You can and probably should upgrade this to a "warlock pentagram" topology - still no switch needed, but you will gain overall cluster robustness and be able to scale out storage - it is just a double triangle. Going to 40G or 100G will become more commonplace, and with no switch needed you save a couple grand right there. Consider making the management network 2.5GbE.

  • @rodfer5406
    @rodfer5406 1 year ago

    You’re the man*** 👍

  • @jamescross2652
    @jamescross2652 1 year ago

    Suppose one of your nodes is gone and you have a bare-metal replacement. How easy is it to get that back into the cluster? We have a 3-node system without Ceph, using replication; it works fine, but if a node dies then HA starts the VM on a new node and it's obviously slightly behind, by up to 15 minutes. Because of our antiquated VM that's a problem, because it can't redo those transactions. But if we have shared storage, it will just be the inconvenience of a reboot, which we can deal with much more easily, if I understand correctly. We're in a position to build a new cluster; the risk is that Ceph is new to us.
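    Roughly, replacing a dead node comes down to removing it from corosync (and from Ceph, if it ran monitors/OSDs) and then joining the fresh install; the hostname, OSD id, and IP below are placeholders:

    ```
    # on a surviving cluster member: drop the dead node
    pvecm delnode oldnode

    # if it ran Ceph, clean up its services too
    ceph osd purge 7 --yes-i-really-mean-it
    ceph mon remove oldnode

    # on the freshly installed replacement: join the existing cluster
    pvecm add 10.0.0.11          # IP of any existing member

    # then recreate its Ceph monitor/OSDs (GUI or pveceph)
    pveceph mon create
    pveceph osd create /dev/sdX
    ```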

  • @leftblank131
    @leftblank131 9 months ago

    Yea, but has it eliminated side fumbling?

  • @Ingeanous
    @Ingeanous 1 year ago

    You're talking a little fast to follow... but I get it... that means you are passionate about the subject!

  • @tim_allen_jr
    @tim_allen_jr 5 months ago

    You're the Merlin of computers.🧠📈✨️

  • @davidkamaunu7887
    @davidkamaunu7887 1 year ago

    I would try making an IPFS cluster with that hardware.

  • @colorxlabs7200
    @colorxlabs7200 1 year ago +1

    Great info as always! Any chance you’d experiment with harvester?

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago +2

      Thanks for introducing me to harvester. It looks like a cool project, and I want to start testing it soon.

  • @rbjohnson78
    @rbjohnson78 1 month ago +1

    Did you set up Ceph before FRR? In the mesh document, FRR is set up first and then Ceph.
    I'm trying to get FRR working right now, but it doesn't appear the steps in the document actually brought up the interfaces.

    • @rbjohnson78
      @rbjohnson78 1 month ago +1

      I figured it out. I had to set the interfaces I was using to auto-start.
      Side note: great job on the basics. A step-by-step video would be great for newbies such as myself.
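      For reference, the "auto" stanzas in /etc/network/interfaces are what bring the point-to-point links up at boot so FRR has something to run over; the interface names are placeholders:

      ```
      # /etc/network/interfaces (fragment)
      auto ens19
      iface ens19 inet manual

      auto ens20
      iface ens20 inet manual
      ```

      After "ifreload -a" the fabricd neighbors should show up again in vtysh.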

  • @pankajjoshi4206
    @pankajjoshi4206 1 year ago

    1. Does it increase speed?
    2. How do I connect more than one? Please show.
    I have 20 old dual-core PCs in my lab; how can I use these processors in parallel?
    Thank you

  • @EusebioResende
    @EusebioResende 1 year ago

    Great video. Will it work in similar fashion with containers running on the nodes?

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago +1

      Yeah, containers will work in almost the same way as VMs here. The only difference I know of is that containers can't be live-migrated between nodes; you would need to shut down the container before moving it. All other features like HA and shared storage will work fine.
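      In practice that means using restart mode for containers; a quick example with placeholder IDs and node names:

      ```
      # VMs can move while running:
      qm migrate 101 node2 --online

      # containers are stopped, moved, and started again on the target:
      pct migrate 200 node2 --restart
      ```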

  • @LiebJohnson
    @LiebJohnson 1 year ago

    Putting together a three-node Ceph cluster which needs to be power-efficient, quiet, and have ~50TB of available storage. Would love a parts recommendation.

    • @martyewise
      @martyewise 6 months ago

      Not sure if you've already worked this out, but I recently put together a small cluster along those lines... Not sure it can quite achieve your desired 50TB with this config, but it can get pretty close (if you're willing to spend the $ on the SSDs)...
      I used 3 Dell Precision 3430/3431 SFF systems I bought on eBay for a decent price. They're i7-8700 (6c/12t) w/ 64GB RAM. I found some decent dual 10GbE NICs I installed into the PCIe x16 slot in each (these NICs are really only PCIe x8), and added an M.2 NVMe adapter in the remaining PCIe x4 slot in each. This gives me 2x NVMe SSD slots in each that I filled with 2TB SSDs, and 2x SATA slots in each for additional SATA SSDs. I found some 64GB SATA DOMs for boot devices that replace the DVD in each system. If I were to max out all the NVMe and SATA SSDs with the largest available devices, it could get close to what you're looking for.
      The problem I found is that with non-enterprise/server-class hardware you're likely to run out of PCIe lanes (only 16 total on most available consumer CPUs/mobos). Moving to server hardware will mean more noise and power consumption than I was comfortable with (a zillion tiny little fans in most 1U servers generate a lot of noise!).
      I'm not sure if this is what you have in mind, but this arrangement seemed to be the "sweet spot" for me in terms of cost/performance/power/noise. I haven't had it up and running for long, so I don't have a ton of experience with it yet, but things look promising. I'm currently working through the SDN config for the cluster (there doesn't seem to be a lot of info on these details available).
      Good luck. Have fun.😃

  • @Burntcrayon-cb7eh
    @Burntcrayon-cb7eh 1 year ago +1

    It would be cool to see you do some HPC setups with this hardware. I'm currently using 4 dual-CPU Xeon E5 compute nodes linked with InfiniBand to run computational fluid dynamics simulations in parallel over MPI, but you rarely see this type of content on YouTube. I would be interested in the different settings and hardware optimizations that can be done on these types of setups to increase performance, etc.

    • @banzooiebooie
      @banzooiebooie 1 year ago

      Reading your comment makes me want to see your video! But yes, please explain more of this.

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago

      I don't have much experience in the HPC field or with how to correctly set up, test, and use these programs. Do you know of any good resources that cover these topics?

    • @Burntcrayon-cb7eh
      @Burntcrayon-cb7eh 1 year ago

      @@ElectronicsWizardry It is a pretty vast field with many different technologies for many different use cases, but at least what I use requires extremely low latency, so I use InfiniBand (…)

  • @Darkk6969
    @Darkk6969 1 year ago +1

    I've run something like this for work: a 4-node cluster with Ceph. The only issue I had with Ceph was the rebuild performance. It would slow almost all the VMs to a crawl, and sometimes the VMs would stall and crash. I think my issue was a combination of things, like large 8TB drives with no cache and the 10-gig network going through a switch. Each node had two 10-gig connections to the switch. Running VMware with vSAN right now and have plans to go back to Proxmox with better hardware. Not sure if I'll use Ceph again, or maybe ZFS replication instead.

    • @angelg3986
      @angelg3986 1 year ago

      Why do you plan to replace VMware with Proxmox?

    • @Darkk6969
      @Darkk6969 1 year ago

      @Dyeffson Dorsaint I had two separate dedicated 24-port 10-gig switches just for Ceph traffic, without any connection to other networks. I did it this way on purpose to isolate Ceph traffic from everything. I was able to manage the switches using the dedicated management port.

    • @LampJustin
      @LampJustin 1 year ago +1

      @@Darkk6969 The reason for the slowdowns is that you have to limit the rebuild traffic. You can set a limit so it won't use that much bandwidth. Since you mentioned HDDs: Ceph is pretty slow with small numbers of drives, and you'll definitely want to put your DB/WAL on an SSD. If you want something "simple" like vSAN you could also use LINBIT DRBD9; they have a Proxmox integration. Since it's simple block replication it's fast and great for NVMe or SSDs. Reads are local, so you'll get full speed. It just does not do EC.
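      The knobs usually involved are the per-OSD backfill/recovery limits; the values below are only examples, and newer releases also expose mClock profiles for the same purpose:

      ```
      # classic throttles - fewer concurrent backfills/recoveries per OSD
      ceph config set osd osd_max_backfills 1
      ceph config set osd osd_recovery_max_active 1
      ceph config set osd osd_recovery_sleep_hdd 0.2   # small pause between recovery ops on HDDs

      # Quincy and later: an mClock profile that favors client I/O over recovery
      ceph config set osd osd_mclock_profile high_client_ops
      ```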

  • @arthurd6495
    @arthurd6495 6 months ago

    nice

  • @KLANGOBRA
    @KLANGOBRA 11 months ago

    Thanks!

  • @homehome4822
    @homehome4822 1 month ago

    Would you be able to create a stretched cluster via Tailscale? Or would it have too much latency to work?

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 month ago +1

      Depends on what you're doing with the cluster. I could see some programs working that just need to sync a state (like if you want to manage multiple Proxmox servers as one), but storage clusters would likely perform very badly due to the latency and limited bandwidth.

  • @TheTechnologyStudioTTS
    @TheTechnologyStudioTTS 10 months ago

    Could you post a network diagram so I can build the same setup? I am not sure how to do the mesh network

    • @ElectronicsWizardry
      @ElectronicsWizardry 10 months ago

      Take a look at this page on the Proxmox wiki. pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. It goes over a lot of the information you could want to know for a setup like this. I ran a 10G link between all nodes in the cluster, and a 1gbe link to a main network switch.

  • @chaxiongyukonhiatou607
    @chaxiongyukonhiatou607 4 months ago

    Very good video. Could you show your connection topology?

    • @ElectronicsWizardry
      @ElectronicsWizardry 4 months ago

      I don't have a diagram, but I'll try to explain it here. Each node has a dual 10GbE NIC and a 1GbE NIC. The 1GbE NICs on the servers are all connected to a switch, and that low-bandwidth network is used for internet access, management, and so on.
      Then there are 10GbE links between every pair of servers. For example, if the servers are A, B, and C, the links would be A to B, A to C, and B to C. These links are routed using FRR so the shortest path is taken, but traffic will take an alternate path in case of a failure.
      Hopefully this helps explain my setup.

    • @chaxiongyukonhiatou607
      @chaxiongyukonhiatou607 4 months ago

      @@ElectronicsWizardry Thank you so much for your explanation

  • @MrEtoel
    @MrEtoel 7 months ago

    I want to try something similar, but my Proxmox cluster consists of 3 Intel 13th-gen NUCs with only one NVMe drive each. Would it still be feasible to run Ceph? I guess I need to use what you demonstrated in your GParted/dd video to resize the NVMe drive, because I made the mistake of assigning it all to LVM. That would be a cool video. My NUCs have 2 Thunderbolt ports (40Gbit) each; imagine if I could use those for Ceph links. That would be awesome.

    •  6 months ago

      The 13th-gen i3/i5/i7 NUC features a B-key 2242 M.2 SATA SSD slot. Try that for the Proxmox installation. Relatively inexpensive 256GB M.2 SATA SSDs should be sufficient for the OS and images. This leaves the NVMe free for Ceph.

  • @DocMacLovin
    @DocMacLovin 10 months ago

    Imagine finding one of those in a dark server room in the last corner. Brrr. Creepy.

  • @KILLERTX95
    @KILLERTX95 8 months ago

    Just saying, if the Ceph SSD you're using has "power protection" it massively improves performance. It's like a better version of write caching and speeds things up immensely.
    To avoid confusion, power protection isn't a UPS 😂. In this case it's power-loss protection, a feature of the SSD usually found on enterprise SSDs.

  • @TheOnlyEpsilonAlpha
    @TheOnlyEpsilonAlpha 10 months ago

    Impressive. I wonder: you called up that web UI over a direct IP, right? A reasonable addition, to make that fault-tolerant as well, would be to set up load balancing for the web UI, so you would have a DNS name for the interface that routes to a functional node at all times.
    Or do you have something like a VIP running already, which routes to a functional node via a virtual IP?

    • @ElectronicsWizardry
      @ElectronicsWizardry 10 months ago

      Yeah, the web UI is accessed over a direct IP, and a node failure would take out that web interface. I didn't go into the details of how to deal with this, but using DNS or a proxy may be a good idea.
      I plan on going over Ceph and other HA topics in Proxmox in later videos.
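      One hedged example of the proxy approach: an HAProxy instance (itself ideally paired with keepalived so it doesn't become a new single point of failure) balancing TCP connections to the web UI on all three nodes; the node IPs are placeholders:

      ```
      # /etc/haproxy/haproxy.cfg (fragment)
      frontend pve_gui
          bind *:8006
          mode tcp
          default_backend pve_nodes

      backend pve_nodes
          mode tcp
          balance source                # keep a session on the same node
          option ssl-hello-chk          # simple TLS-level health check
          server node0 10.0.0.10:8006 check
          server node1 10.0.0.11:8006 check
          server node2 10.0.0.12:8006 check
      ```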

  • @banzooiebooie
    @banzooiebooie 1 year ago +1

    The funny thing is that in the real world, many companies use this (well, they use ESXi, but it is almost the same thing... just more expensive) to create VMs to run Kubernetes/OpenShift in a cluster. That is failover on top of failover.

  • @curtalfrey1636
    @curtalfrey1636 1 year ago

    Nice, man. I've got 2 Dell 5755 laptops, a Z590/i7 server, and 3 other PCs that I need help setting up, if you've got time to help.

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago +1

      Sure. What parts would you like to have more help on? I also have my email in my about page if you want to send me a message.

    • @curtalfrey1636
      @curtalfrey1636 1 year ago

      @@ElectronicsWizardry sent message, thanks 😁

    • @gamerboyznet1597
      @gamerboyznet1597 1 year ago

      @@ElectronicsWizardry this is Curt Alfrey. On my other account 😁

  • @AdrianuX1985
    @AdrianuX1985 1 year ago

    +1

  • @enderst81
    @enderst81 1 year ago +2

    Try GlusterFS instead of Ceph. Should get better speeds.

    • @ElectronicsWizardry
      @ElectronicsWizardry 1 year ago +3

      Thanks for the suggestion. I'll try it out on my hardware and see how it works.

    • @MarkConstable
      @MarkConstable 1 year ago +1

      GlusterFS is file-system storage only for VMs, i.e. qcow2. It does not provide VM volume storage like ZFS or Ceph.

  • @ShimoriUta77
    @ShimoriUta77 7 months ago

    The content is awesome, thanks bro.
    But, how can a dude look so young yet so old at the same time. So beautiful yet so ugly 😂 The duality of men