LINBIT
Highly Available NFS for Proxmox With LINSTOR Gateway
LINSTOR Gateway makes it easy to create highly available NFS exports for Proxmox. Attaching an NFS share to your Proxmox cluster is a convenient way to add reliable storage for backups, ISO images, container templates, and more.
Highly Available NFS for Proxmox With LINSTOR Gateway (blog):
linbit.com/blog/highly-available-nfs-for-proxmox-with-linstor-gateway/
How to setup LINSTOR on Proxmox VE (blog):
linbit.com/blog/linstor-setup-proxmox-ve-volumes/
How to setup LINSTOR on Proxmox VE (video):
czcams.com/video/pP7nS_rmhmE/video.html
9 other things you can do with LINSTOR and Proxmox (video):
czcams.com/video/F9xBANiSX0c/video.html
Continue the conversation over at the new LINBIT forums:
forums.linbit.com/
00:00 - Introduction
00:34 - Required Packages
00:46 - Installing LINSTOR Gateway
01:06 - Portblock Workaround
01:18 - Configuring LINSTOR Satellites
01:52 - Configuring LINSTOR Gateway
02:08 - Deploying NFS
02:28 - Adding Highly Available NFS to Proxmox
03:05 - Outro
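
For reference, the workflow in the video condenses to roughly the following two commands (the IP addresses, export name, size, and Proxmox storage ID are placeholders, and the export path assumes LINSTOR Gateway's default /srv/gateway-exports/<name> layout — see the blog post above for the exact steps):

    # on the LINSTOR Gateway node: create the highly available NFS export
    linstor-gateway nfs create proxmox-nfs 192.0.2.100/24 500G --allowed-ips 192.0.2.0/24

    # on a Proxmox VE node: attach the export as shared storage
    pvesm add nfs linstor-nfs --server 192.0.2.100 --export /srv/gateway-exports/proxmox-nfs --content backup,iso,vztmpl
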
Views: 245

Videos

Benefits of DRBDmon
75 views • 2 months ago
LINBIT Lead Developer Robert Altnoeder introduces the benefits of DRBDmon in this clip from our recent Community Meeting. Monitoring & Performing Actions on DRBD Resources in Real-Time Using DRBDmon linbit.com/blog/monitoring-performing-actions-on-drbd-resources-in-real-time-using-drbdmon/
Replicating Data Between Data Centers with LINBIT SDS
80 views • 2 months ago
LINBIT Solution Architect Yusuf Yıldız provides an overview of the challenges our clients can face when replicating data between data centers. The clip is from our recent Community Meeting. Learn more about LINBIT SDS: linbit.com/software-defined-storage/
Upcoming LINSTOR Features
113 views • 2 months ago
Here's LINBIT Lead Developer Gabor Hernadi providing an insight into some exciting LINSTOR features currently in development. The clip is from our recent Community Meeting. Learn more about LINSTOR: linbit.com/linstor/
Open Source LINBIT VSAN Technical Overview
177 views • 2 months ago
Here's LINBIT Lead Developer Christoph Böhmwalder providing a technical overview of LINBIT VSAN from our recent Community Meeting. LINBIT VSAN is a turnkey SDS appliance that manages highly available NVMe-oF, iSCSI, and NFS data stores. It uses LINBIT software from LINBIT SDS and LINBIT HA to provide a unified storage cluster management experience with a simple GUI. Learn more about open source...
LINBIT Community Meeting - June 2024
125 views • 2 months ago
Topics of the Q2 '24 meeting: - LINBIT VSAN - Technical Overview - LINSTOR Updates - Use Case Showcase - 3-Site Cluster - DRBDmon Showcase LINBIT runs a quarterly, worldwide community meeting to update everyone on their latest software developments. This gives each developer the chance to pitch their latest progress, answer questions, and get input from the community and team. LINBIT’s roots ru...
Create a Highly Available iSCSI Target With DRBD & Pacemaker
293 views • 3 months ago
In this video we go through an overview of our iSCSI High Availability Clustering tech guide, showcasing all the major steps required to create a highly available iSCSI cluster. The guide assumes you're using RHEL 9 or a related distribution such as AlmaLinux. Regardless of which OS you choose, you can adapt these steps in this guide to fit your needs. Download the tech guide: linbit.com/tech-g...
Using DRBDmon to Monitor & Perform Actions on DRBD Resources in Real-time
180 views • 4 months ago
One convenient way to work with and check DRBD® and its resources is by using DRBDmon. DRBDmon is an open source utility included with the drbd-utils software package for LINBIT customers, or else you can build the utility from its source code within the drbd-utils project page on GitHub. DRBDmon is CLI-based but works with the concept of displays, similar to windows or panes, and supports keyb...
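
If you are not a LINBIT customer with package access, building DRBDmon from the drbd-utils sources follows the usual autotools flow, roughly like this (a sketch; check the project README for the currently required build dependencies and configure options):

    git clone https://github.com/LINBIT/drbd-utils.git
    cd drbd-utils
    ./autogen.sh
    ./configure
    make && sudo make install
    drbdmon
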
10 Things You Can Do With LINSTOR & Proxmox
3.6K views • 5 months ago
In this video we go over 10 things you can do with LINSTOR and Proxmox. Highlights include important features and best practices for unlocking the full potential of your cluster's storage. For more content focused on configuring LINSTOR and Proxmox, see the links below: How to setup LINSTOR on Proxmox VE (blog): linbit.com/blog/linstor-setup-proxmox-ve-volumes/ How to setup LINSTOR on...
LINBIT Community Meeting - Q1 2024: VMware Talk w/ LINBIT Partners
483 views • 5 months ago
This time, instead of sharing our software developments, we will have an open discussion regarding VMware and how that affects the industry, the hypervisor market, and, ultimately, the customers. Joining this discussion will be Giles Sirett, CEO from ShapeBlue - the CloudStack company, Alberto Picón, Principal Cloud Technologist from OpenNebula and Marc-André Pezin, the Director Marketing Opera...
OpenNebula & Hyper-Converged Storage Using LINBIT SDS
428 views • 6 months ago
This video will demo integrating LINBIT SDS (LINSTOR® and DRBD®) with OpenNebula, to provide fast, scalable, and highly available storage for your virtual machine (VM) images. After completing the integration, you will be able to easily live-migrate VMs between OpenNebula nodes, and have data redundancy, so that if the storage node hosting your VM images fails, another node, with a perfect repl...
Kubernetes Persistent Storage Using LINBIT SDS
153 views • 6 months ago
LINBIT® SDS is the product name for LINBIT’s LINSTOR® and DRBD® software, and the plugins, drivers, and utilities that work with them. Together, these combine to make a software-defined storage (SDS) solution that you can use on its own or else integrate it with other platforms and environments. One popular LINBIT SDS integration is with Kubernetes. To get you started quickly, you can follow th...
Using VDO with DRBD on RHEL 9
253 views • 7 months ago
Combining DRBD and VDO by layering DRBD over VDO will provide both data deduplication and synchronous replication of thin-provisioned volumes to your application’s storage. Volumes created within the same volume group will be deduplicated, creating a deduplication domain that spans the entire volume group. Container and virtual machine (VM) images are an excellent use case for such storage topo...
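
As a rough sketch of that layering (volume group, node names, sizes, and addresses are placeholders; see the video and tech guide for the full procedure), a VDO logical volume is created first and then used as the backing disk of a DRBD resource:

    # deduplicated, thin-provisioned VDO volume inside an existing volume group
    lvcreate --type vdo --name vdo0 --size 500G --virtualsize 1T vg_data/vdopool0

    # /etc/drbd.d/r0.res (identical on both nodes), layering DRBD over the VDO volume
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/vg_data/vdo0;
        meta-disk internal;
        on node-a { node-id 0; address 192.0.2.1:7789; }
        on node-b { node-id 1; address 192.0.2.2:7789; }
        connection-mesh { hosts node-a node-b; }
    }

    # then, on both nodes:
    drbdadm create-md r0 && drbdadm up r0
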
LINBIT Community Meeting - January 2024
149 views • 7 months ago
Topics of the Q1 '24 meeting: -Restoring quorum after reboot of a degraded 3-node cluster -LINBIT SDS Cloudstack updates -Storage Pool Mixing -Operator V2 - Helm deployments - Prometheus integration LINBIT runs a quarterly, worldwide community meeting to update everyone on their latest software developments. This gives each developer the chance to pitch their latest progress, answer questions, ...
Deploying a Highly Available NFS Cluster on RHEL 9 with DRBD Reactor
400 views • 7 months ago
Highly Available KVM Virtualization Using DRBD & Pacemaker
796 views • 8 months ago
Deploy a Highly Available MariaDB Service Using LINSTOR & DRBD Reactor
346 views • 9 months ago
Deploy a High-Availability (HA) Nagios XI Cluster Using DRBD, Pacemaker, & Corosync
340 views • 9 months ago
Highly Available NFS Exports with DRBD & Pacemaker
821 views • 10 months ago
Jenkins High Availability & Disaster Recovery at Scale Using EKS & LINSTOR
505 views • 10 months ago
Encrypted Replication With DRBD and kTLS
248 views • 10 months ago
Geo-clustering with Pacemaker & DRBD Proxy
551 views • 11 months ago
Simplified Cluster Resource Management with DRBD Reactor
533 views • 11 months ago
DRBD Basics Training - Understand High Availability & Software-Defined Storage Setups Using DRBD
1.5K views • 11 months ago
LINBIT Community Meeting - September 2023
218 views • 11 months ago
High Availability KVM Virtualization Using Pacemaker & DRBD On RHEL 9 or AlmaLinux 9
1.8K views • a year ago
LINBIT GUI
697 views • a year ago
Introducing the LINBIT GUI
1.3K views • a year ago
DRBD Reactor vs Pacemaker
823 views • a year ago
How to Setup LINSTOR on Proxmox VE
13K views • a year ago

Comments

  • @MR-vj8dn
    @MR-vj8dn 2 days ago

    Interesting. How can you use LINSTOR without LVM, for a less complicated approach?

    • @mattkereczman_lb
      @mattkereczman_lb 2 days ago

      You can either use LVM, thin LVM, ZFS, or thin ZFS as backing storage for LINSTOR's storage-pools. LVM (and ZFS) provide the logical volume management tools needed to pool and partition the physical storage, while also adding a bunch of functionality (like dedupe, compression, striping, caching, etc). There are no plans for LINSTOR to support raw partitions at this time.
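
      As a minimal illustration (the node, volume group, and pool names here are made up), registering thin LVM backing storage with LINSTOR looks something like this:

        lvcreate --size 500G --thin vg_nvme/thinpool                  # thin pool on the satellite node
        linstor storage-pool create lvmthin node-a pool_ssd vg_nvme/thinpool
        linstor storage-pool list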

    • @MR-vj8dn
      @MR-vj8dn a day ago

      @@mattkereczman_lb Okay, thank you for your quick response. Could you tell me how LINSTOR relates to Vates XOSTOR?

  • @BrianHellman
    @BrianHellman 9 days ago

    Nice work! Very well done.

  • @yamanalsayed5858
    @yamanalsayed5858 17 days ago

    Nice *hit !!

  • @nalixl
    @nalixl 20 days ago

    Does anyone have experience with how this stacks up against GlusterFS in terms of stability? And am I correct in assuming that one would only be able to mount disks from one of the DRBD nodes themselves, not from any other host on the network?

    • @mattkereczman_lb
      @mattkereczman_lb 19 days ago

      I cannot speak to GlusterFS stability. DRBD is a block device, where GlusterFS is a filesystem. In the context of Proxmox, the DRBD devices aren't really mounted by the hosts; they're directly attached to the virtual machines as block devices. The virtual machine can then do whatever it likes with the DRBD device, which most likely means formatting it as its root filesystem. The virtual machines can only run on one host in the Proxmox VE cluster at a time, and the DRBD device cannot be accessed from anywhere else in the cluster while the virtual machine is running. Technically, with the virtual machine stopped, you could use something like kpartx to scan the DRBD device to find and map the partitions the virtual machine created on the DRBD device and mount those on a host, but I would think you would do something like that for disaster recovery purposes only.
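
      If you ever do need to inspect a stopped VM's disk from a host for recovery, that would look roughly like this (the DRBD device number and mount point are hypothetical):

        kpartx -av /dev/drbd1000                        # map the partitions inside the DRBD device
        mount -o ro /dev/mapper/drbd1000p1 /mnt/recovery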

    • @nalixl
      @nalixl 19 days ago

      @@mattkereczman_lb Thank you for your time. I guess I didn't ask the question correctly. What I meant to ask was: can one mount a virtual DRBD block device from a machine that is not a DRBD host itself? And yes, I'm aware of the difference between DRBD block devices and the GlusterFS file system. The reason I asked was that Gluster is the only option for a 2-node-with-arbiter distributed system supported by Proxmox. The requirements for Ceph are quite staggering for a small setup. LINSTOR appears to be a very good alternative, but I have learned from my time with Gluster that you really don't want to have to dig deep into the software to find out about that one particular bug that is keeping your cluster from running stable.

    • @mattkereczman_lb
      @mattkereczman_lb 19 days ago

      @@nalixl Aha, I understand. No, you cannot access the DRBD device from a host that isn't a part of the LINSTOR/DRBD cluster. However, because of how DRBD works internally, the DRBD devices are backed by simple LVM (or ZFS) volumes that can be accessed from any of the hosts that had a replica of the DRBD device, even if you completely destroy the LINSTOR database and DRBD device metadata. You can think of DRBD as RAID 1 mirroring between hosts, so you have a full copy of the block device on each peer, and there is no "distribution algorithm" or "data striping" to consider when you need to recover from catastrophic failures.

  • @jamesrowland1508
    @jamesrowland1508 a month ago

    How does this compare with db cluster services at the sql level? Im guessing this is more for read only and not gonna sync read/writes intelligently between node failures.. I can dream tho :)

    • @mattkereczman_lb
      @mattkereczman_lb a month ago

      Databases typically have transactional replication capabilities "baked in". DRBD is a block replication tool, so it replicates the underlying storage as the blocks are written to by the database or filesystem. DRBD has no concept of what a "database transaction" would be. DRBD is active/passive in nature, so you can only have one active node accessing the block device at a time, meaning only one instance of the database would be running in the cluster. DRBD is often used to replicate a database within an appliance or application stack where many services and filesystems need to be replicated. It's simpler to use DRBD to replicate everything, rather than using DRBD for some replication and SQL replication for the database.

  • @gangadharmatta131
    @gangadharmatta131 a month ago

    can you please share the jenkins setup files that you used ? or your github .

    • @linbit
      @linbit a month ago

      Sorry for the late reply. Everything you need is right here - github.com/kermat/linstor-jenkins-eks-assets

  • @aashiqs4867
    @aashiqs4867 2 months ago

    I have tried this and it's working. Can you guide me or create a video showing the disaggregated method (LINSTOR satellites and KVM hosts on separate nodes)? I am getting an error when I try it.

    • @mattkereczman_lb
      @mattkereczman_lb 2 months ago

      You will still need the KVM hosts to have the LINSTOR satellite software and DRBD installed on them, and they need to be added to the LINSTOR cluster. The LINSTOR satellites that are also KVM hosts do not need to have storage in them that LINSTOR is managing; they will use DRBD's "diskless" attachment (or client mode) to read and write to a "diskful" replica on one of the LINSTOR satellites that has storage.
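
      A rough sketch of that with the LINSTOR client (host name, address, and resource name are placeholders, and the exact diskless flag can vary between client versions):

        linstor node create kvm-host-1 192.0.2.21 --node-type satellite
        linstor resource create kvm-host-1 vm-100-disk-1 --drbd-diskless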

  • @Antonzubkoff
    @Antonzubkoff 2 months ago

    720p...

  • @Antonzubkoff
    @Antonzubkoff 2 months ago

    Not working for me. Every time get error - Request failed. (530) Failed to add data store: No host up to associate a storage pool with in cluster 1

  • @RumenBlack
    @RumenBlack 3 months ago

    I gotta say the documentation you provide is pretty frustrating. From my understanding, DRBD is open source, but all the documentation and videos reference your paid portal script.

    • @linbit
      @linbit 2 months ago

      As a company that develops open source software, paying customers keep the company and software development going. For this reason, customers enjoy advantages like expert support, access to prebuilt packages, and the portal script that you mentioned for registering nodes and providing access to customer-only package repositories. Our technical guides and user's guides do reference the node registration script as the easiest way to install LINBIT software, that is, from our prebuilt packages. You can do this either as an existing customer or by contacting us about free trial access.

      That said, almost all the software LINBIT develops is open source: DRBD, LINSTOR, DRBD Reactor, and others. You can freely install this software from the source code in their respective GitHub repositories: github.com/linbit Once built from source and installed, you can use our technical guides and user's guides as if you had installed from prebuilt packages. If you have issues installing from source, you are welcome to ask for help from the community of users at our forum: forums.linbit.com/, or through our community mailing list: lists.linbit.com/listinfo/drbd-user

      Also, if you just want to try our software out, we have a PPA with DEB packages here: launchpad.net/~linbit/+archive/ubuntu/linbit-drbd9-stack/+packages. We don't officially support the packages in this repository but we provide them for convenience and for testing.

      Thanks for your feedback. We will review our user's guides and improve them so that they can better serve all of our users.
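
      For example, on Ubuntu the (unsupported) PPA mentioned above can be added and used like this:

        add-apt-repository ppa:linbit/linbit-drbd9-stack
        apt update
        apt install drbd-dkms drbd-utils linstor-controller linstor-satellite linstor-client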

  • @mpbraj
    @mpbraj 3 months ago

    Hi, I have managed to install LINSTOR with CloudStack successfully. On the management host, while adding the 2nd host, "linstor resource list" doesn't show any activity, but the host gets added successfully... any ideas?

    • @linbit
      @linbit 3 months ago

      Hi, best place to ask technical questions is forums.linbit.com/ where our community and developers meet up

  • @stanislavakinshin2373
    @stanislavakinshin2373 3 months ago

    Thank you for the great video!!! One note I want to add here: LINSTOR should be in SATELLITE mode as in the video, not COMBINED (controller and satellite together); otherwise CloudStack sees LINSTOR but can't create volumes, and therefore can't create the system VMs. CloudStack version 4.19.0.1, LINSTOR (satellite, controller) 1.27, linstor-client 1.22, drbd-dkms 9.2.9, Ubuntu 20.04 Focal.

    • @linbit
      @linbit 3 months ago

      We can safely say that SATELLITE vs. COMBINED is definitely not at fault. They are basically the same; on a COMBINED node, a controller could also be running.

  • @mpbraj
    @mpbraj 4 months ago

    Hi... thanks for the videos, very informative. I am getting stuck at installing drbd-dkms, any idea? I have tried this on Ubuntu 20.04 and Ubuntu 22.04 and am getting similar errors on both:

    Building for 5.4.0-177-generic
    Building initial module for 5.4.0-177-generic
    ERROR: Cannot create report: [Errno 17] File exists: '/var/crash/drbd-dkms.0.crash'
    Error! Build of drbd.ko failed for: 5.4.0-177-generic (x86_64)
    Consult the make.log in the build directory /var/lib/dkms/drbd/9.2.9~rc.1-1ppa1~focal1/build/ for more information.
    dpkg: error processing package drbd-dkms (--configure):
     installed drbd-dkms package post-installation script subprocess returned error exit status 7
    Errors were encountered while processing:
     drbd-dkms
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    • @mpbraj
      @mpbraj 4 months ago

      And while installing "apt install linstor-controller linstor-client" I get this error and the process does not complete successfully:

      ------------------------------
      Deleting module version: 9.2.9~rc.1-1ppa1~focal1 completely from the DKMS tree.
      ------------------------------
      Done.
      Loading new drbd-9.2.9~rc.1-1ppa1~focal1 DKMS files...
      Building for 5.4.0-177-generic
      Building initial module for 5.4.0-177-generic
      ERROR: Cannot create report: [Errno 17] File exists: '/var/crash/drbd-dkms.0.crash'
      Error! Build of drbd.ko failed for: 5.4.0-177-generic (x86_64)
      Consult the make.log in the build directory /var/lib/dkms/drbd/9.2.9~rc.1-1ppa1~focal1/build/ for more information.
      dpkg: error processing package drbd-dkms (--configure):
       installed drbd-dkms package post-installation script subprocess returned error exit status 7
      Setting up python-linstor (1.22.0-1ppa1~focal1) ...
      Setting up linstor-client (1.22.0-1ppa1~focal1) ...
      Setting up libjs-jquery (3.3.1~dfsg-3) ...
      Setting up libjs-underscore (1.9.1~dfsg-1ubuntu0.20.04.1) ...
      Setting up libjs-sphinxdoc (1.8.5-7ubuntu3) ...
      Setting up python-natsort-doc (7.0.1-1) ...
      Processing triggers for man-db (2.9.1-1) ...
      Errors were encountered while processing:
       drbd-dkms
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    • @linbit
      @linbit 4 months ago

      @@mpbraj It is a RC version so bugs and errors are expected. We have already released 9.2.9 on April 30, and will add it to PPA soon, to replace 9.2.9-rc1. Will be fixed with the full release I believe!

    • @mpbraj
      @mpbraj 4 months ago

      ​@@linbit Thanks for the update...what would be the work around if I have to implement it now...I am working on Virtualization HCI solution where I have to provide HA in regards to the Storage...on Ubuntu 22 or 24

    • @linbit
      @linbit 3 months ago

      ​@@mpbraj We released v9.2.9 for 22.04 (Focal) a few days ago - it should work now. I hope that helps, but if you need more feedback from our devs then I recommend heading to the LINBIT forum. It's better suited to help with technical matters forums.linbit.com/

  • @GrishTech
    @GrishTech 4 months ago

    Is there an article on how to deal with Proxmox upgrade issues? New kernel and it's just failing to recompile. Proxmox itself fails to upgrade the kernel because it's calling DKMS to recompile the DRBD module, but the DRBD module tries to use the newest kernel, which isn't installed. Thus the loop.

    • @linbit
      @linbit 4 months ago

      One of our customers opened a ticket for this issue, because Proxmox suddenly upgraded the kernel version from 6.5 to 6.8, and only DRBD v9.2.9-rc supports the v6.8 kernel at the moment. So there are three options right now:

      1. Use the v6.5 kernel instead of v6.8 (a kernel-pinning sketch follows below)
      2. Use v6.8 + DRBD v9.2.9-rc
      3. Wait for a few days; we will release DRBD v9.2.9 next Monday (planned)

      Hope that helps!
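
      For option 1, pinning the 6.5 kernel looks roughly like this on recent Proxmox releases (the exact version string depends on what is installed on your host):

        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 6.5.13-5-pve
        reboot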

  • @GrishTech
    @GrishTech 4 months ago

    This is fantastic. Have you guys considered having Proxmox add this to their storage types so it's available alongside Ceph for other kinds of workloads? Most workloads just need a simple replica, where Ceph is just overkill.

    • @linbit
      @linbit 4 months ago

      Thanks for your suggestion - will pass to the team 🤔

  • @GrishTech
    @GrishTech 4 months ago

    Compared to ceph, I assume linstore is less cpu intensive?

    • @linbit
      @linbit 4 months ago

      ABSOLUTELY - far less CPU intensive compared to Ceph!

  • @johnsirmans965
    @johnsirmans965 5 months ago

    Hi. Would this solution be usable in order to do cross-site replication, between 2 remote clusters ?

    • @linbit
      @linbit 5 months ago

      Technically speaking, yes. One would most likely need to use our asynchronous replication mode for real time replication between remote clusters. A better solution might involve using LINSTOR's backup shipping feature, allowing you to push snapshots from one cluster to another on a scheduled basis such as every hour, etc.

  • @somniumism
    @somniumism 5 months ago

    Thanks for the video. Are there any plans to make such a video about LINSTOR on Proxmox with ZFS? I didn't quite understand that part in your documentation.

    • @linbit
      @linbit 5 months ago

      You can use ZFS zPools instead of LVM volume groups for configuring LINSTOR's storage pools. In this configuration, ZFS zVols become the backing storage for replicated disk images instead of LVM logical volumes.
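
      A minimal sketch with made-up pool and node names:

        zpool create tank mirror /dev/sdb /dev/sdc              # on the satellite node
        linstor storage-pool create zfs node-a pool_zfs tank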

  • @redetermine
    @redetermine 5 months ago

    Seems like a great product, I will definitely keep you in mind as our business continues to scale!

  • @omgnowairly
    @omgnowairly 5 months ago

    This is solid.

  • @jmhcxh
    @jmhcxh 5 months ago

    I encountered a problem during testing. I have 2 nodes and a diskless node. It seems that there is a problem with the quorum. When one of my storage nodes fails, the entire storage cannot work. Please tell me what to do

    • @rycodge
      @rycodge 5 months ago

      It sounds like you most likely need to make the LINSTOR controller highly available. If you're losing the control plane when your storage node fails, Proxmox can no longer tell LINSTOR what to do with the storage when you perform new actions such as starting a VM or allocating new storage.

      1) Make the LINSTOR controller highly available.
      2) Add each possible controller IP address (should be your two storage nodes) to the PVE storage configuration for LINSTOR.

      I would link the URLs for the relevant sections of our LINSTOR User's Guide, but I think my previous comment got flagged for doing so. A quick Google search for the LINSTOR User's Guide should get you to the information needed for steps 1 & 2 above.
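
      For step 2, the relevant part of /etc/pve/storage.cfg ends up looking something like this (the storage ID, resource group, and addresses are placeholders; check the User's Guide for the options your plugin version supports):

        drbd: linstor_storage
            content images, rootdir
            controller 192.0.2.11,192.0.2.12,192.0.2.13
            resourcegroup pve_rg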

    • @jmhcxh
      @jmhcxh 5 months ago

      @@rycodge Thank you very much, but that's not how it works. I have two storage nodes and a diskless node. I installed the controller on the diskless node. When I shut down a storage node, the pve storage and vm became unavailable. I don't know what I did wrong. Are there parameters that need to be specially adjusted?

    • @rycodge
      @rycodge 5 months ago

      @@jmhcxh Hmm, do you actually have a "three node" cluster configured in Proxmox? For HA you'll need to set up a QDevice on the quorum node. If you power down one storage host, do VMs (backed by LINSTOR) continue to run on the other node?
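
      If the third machine is meant to be a vote-only quorum node for Proxmox, the usual approach is a QDevice, roughly (the IP is a placeholder; corosync-qnetd must be installed on the quorum host and corosync-qdevice on the cluster nodes first):

        pvecm qdevice setup 192.0.2.53
        pvecm status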

    • @jmhcxh
      @jmhcxh 5 months ago

      @@rycodge It's because LINSTOR lost a storage node and the remaining storage cannot be read or written. It's not a problem with the PVE platform.

    • @rycodge
      @rycodge 5 months ago

      @@jmhcxh 'linstor resource list' and 'drbdadm status' (run from each node) will inform you of the current status of each replicated resource in the cluster. The status of the resources would point you in the direction to take action. There's a number of things that could be causing this, for example a quorum node that cannot reach one of the storage nodes (in the replication/DRBD network for quorum), that would cause the remaining LINSTOR node to lose quorum for any resources that were active on the node that was down, and refuse to do anything until the other "diskfull" node is powered back up.

  • @Glatze603
    @Glatze603 5 months ago

    That's really interesting! Thanks a lot.

  • @kwnstantinos79
    @kwnstantinos79 5 months ago

    "Administer Your Cluster with the LINSTOR GUI" is free or needs subscription; GOOD JOB BY THE WAY - KEEP WALKING ;-)

  • @JohnSmith-yz7uh
    @JohnSmith-yz7uh 5 months ago

    I wonder why Proxmox chose Ceph over LINSTOR. It would be great if Proxmox supported both. Setup within the Proxmox web GUI would be great.

    • @BobHannent
      @BobHannent 22 days ago

      It's already hard to get Enterprise customers to adopt Proxmox. Ceph is a tech that has higher market recognition amongst traditionalists, so it's an easier sell. Plus Proxmox would likely need a relationship with another company (Linbit) in order to be able to offer it as an enterprise license, they don't really need to do that for Ceph. A formal partnership between Linbit and Proxmox would be a good idea, and it could be offered as part of the installation wizard.

  • @TechVirt1
    @TechVirt1 5 months ago

    Can I do this with 2 nodes?

    • @linbit
      @linbit 5 months ago

      This should help answer your question - czcams.com/video/M5VD1xXCrh0/video.html

    • @linbit
      @linbit 5 months ago

      Also, Proxmox does need three nodes to have a true cluster with HA capabilities and we need three nodes to leverage quorum. But yes, technically speaking you can install LINSTOR, even on one node only (wouldn't make sense because you can't use replication with one node), but two nodes also works, just less robust and less functionality overall in the Proxmox cluster.

  • @waldmensch2010
    @waldmensch2010 6 months ago

    great, on point

  • @OpenNebula
    @OpenNebula 6 months ago

    Great video! 😍

  • @waldmensch2010
    @waldmensch2010 6 months ago

    nice software but the progressive price scaling is not nice :-/

  • @linbit
    @linbit 6 months ago

    For more information: linbit.com/linbit-vsan/

  • @easydezeindiankichen4909
    @easydezeindiankichen4909 6 months ago

    Hi, very good and interesting. One basic question: proxmox-1 has failed and the VM moved to proxmox-0, but it doesn't look like HA, because it is taking 1-3 minutes to start the VM, booting up freshly. In this scenario, how can we ensure no interruption during the failover?

    • @tariq4846
      @tariq4846 6 months ago

      Use Ceph

    • @linbit
      @linbit 6 months ago

      To start, there will always be an interruption during a failover. In the best case (theoretical), this will be analogous to the same time it takes to hard reboot a VM. Proxmox has its own timeouts for determining if a host node is down and when to react accordingly and migrate VMs. This is not unique to LINSTOR, and we suggest looking into documentation from Proxmox on how to decrease the time it takes for VMs to failover between hosts.

    • @mattkereczman_lb
      @mattkereczman_lb 19 days ago

      @@tariq4846 That wouldn't change anything. For that to be the case, it would mean that a virtual machine's memory content was being persisted to disk, which would be very slow.

  • @jmhcxh
    @jmhcxh 6 months ago

    Can it be used in 4 nodes or 5 nodes? What is the storage distribution strategy to avoid using diskless?

    • @linbit
      @linbit 6 months ago

      Yes, of course. LINSTOR can support 4, 5, or many more nodes. A three-node cluster is simply the minimum size for a proper Proxmox cluster and the ability to enable Proxmox's high availability features (which pair nicely with LINSTOR). Avoiding diskless operation is as simple as making sure each node where VMs are intended to run has a storage pool available backed by physical storage. Alternatively, in environments with a mix of VM hosts with and without backing storage, selected VMs can be restricted to run only on the hosts that contain physical backing storage.

    • @jmhcxh
      @jmhcxh 6 months ago

      @@linbit I know this, for example, I have 5 pve storage nodes and 3 replicas. I can only know where the vm is stored in the linstor controller. But the pve gui does not know this, and diskless situations cannot be avoided during live migration. I want all vms to run on nodes with storage replicas, or can linstor automatically perform storage migration? I have no idea

    • @linbit
      @linbit 6 months ago

      @@jmhcxh When configuring the LINSTOR plug-in in '/etc/pve/storage.cfg' one can use 'preferlocal' as mentioned in our UG here: linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-proxmox-ls-configuration. This will steer Proxmox to initially run VMs where their backing storage is. I'd also recommend looking into setting "Auto-Diskful" (and related options) on the LINSTOR resource group used by Proxmox as described here: linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-linstor-auto-diskful

  • @BillMac1966
    @BillMac1966 6 months ago

    Why would you use this over Ceph??? Ceph is built-in, less complex to configure, and configurable entirely via the GUI. I just don't see the advantage of using Linstor.

    • @linbit
      @linbit 6 months ago

      While LINSTOR's setup process is more involved, the better write performance of LINSTOR could be why someone would choose LINSTOR over Ceph - this blog post should help you understand too: linbit.com/blog/how-does-linstor-compare-to-ceph/

    • @ewenchan1239
      @ewenchan1239 6 months ago

      @@linbit Three things from the blog post:

      1) "The downside to CRUSH is a complete loss of all data when Ceph isn’t operating normally." Is this actually true??? There have been times when I've rebooted my entire cluster, and as the nodes come back online, it will initially report that Ceph ISN'T operating normally, but my VMs and containers are able to launch just fine. I would challenge you to present data that supports this statement. (N.B.: I'm using erasure-coded Ceph pools rather than replicated Ceph pools. But in the case of replicated Ceph pools, that statement would likely be LESS true, because it is replicated.)

      2) "If replication overhead is a bigger concern than the system’s complexity, consider Ceph." The blog post talks almost exclusively from the perspective of a replicated pool. Does LINSTOR have and support an erasure-coded pool?

      3) "If the complexity of the system makes you feel uneasy about recovering from failure scenarios, consider LINSTOR." Ceph CAN be (quite) complicated, as there are concepts that users, operators, and admins will need to learn. But conceptually, the concepts that Ceph introduces make decent sense given what they're trying to accomplish (and how). I haven't deliberately tried to break my Ceph setup, but I have shut down nodes, one at a time, which conventional theory would suggest should take down the Ceph erasure-coded pool. But once the nodes come back up and online (from a reboot and/or shutdown/cold start), Ceph checks the pools for consistency, and after about 10 minutes or so (the nodes are connected to each other only via GbE and I am only using Intel N95 processors, so they're not the fastest around), Ceph reports the pools as being healthy. So, in a case like that, it seems to handle it just fine. But like I said, I haven't actually tried breaking Ceph by doing something that I wasn't really supposed to.

    • @nexus1972
      @nexus1972 6 months ago

      Also, DRBD allows for a 2-node setup @@linbit

    • @GrishTech
      @GrishTech 4 months ago

      @@nexus1972 right. But in a scenario where there are two nodes, I think you would want a witness to prevent split brain issues

    • @funkyyyy_fresh
      @funkyyyy_fresh 23 days ago

      @@linbit Thank you -- I just heard of linstor and heard of ceph/guster before.. was wondering about a chart so thank you :)

  • @waldmensch2010
    @waldmensch2010 6 months ago

    nice to see more Proxmox support 🙂

  •  6 months ago

    8:56 - In the /etc/pve/storage.cfg file you add a single IP address as the controller. What happens to the storage pool when that controller goes down?

      @linbit 6 months ago
      @linbit Před 6 měsíci

      No resources could be created/modified/deleted during the controller failure, all existing resources are still working as before. When the controller comes back online, everything is automatically operational.

      @linbit 6 months ago
      @linbit Před 6 měsíci

      However, the controller can be made highly available across the nodes, and a virtual IP will float between the nodes for HA functionality.

    •  6 months ago

      So that would be fine for static resources like ISO files, etc. but would be problematic for VM disk files (esp for running VMs), is that correct?

    •  6 months ago

      Ah yes, HA controller between the nodes ("combined" type I presume) with virtual IP sounds great.

      @linbit 6 months ago
      @linbit Před 6 měsíci

      @ As far as "what happens to the storage pool" - Even if the LINSTOR Controller "goes down" the storage pools are still active, currently running VMs continue to run within the cluster. However, modifying or creating new resources, such as provisioning new storage for a VM, will not work until the LINSTOR Controller is available again.

  • @SpookyLurker
    @SpookyLurker 7 months ago

    Does this not work on Proxmox 8.1.3?

    • @linbit
      @linbit 7 months ago

      From one of the team: "I'm currently running a cluster on the latest Proxmox (8.1.4). So yes, it works on the latest Proxmox releases. The YouTube video happened to launch right before Proxmox 8.0 came out, but not much has changed."

  • @waldmensch2010
    @waldmensch2010 7 months ago

    What is the performance cost of using VDO?

    • @mattkereczman_lb
      @mattkereczman_lb 7 months ago

      For VDO to deduplicate a write, each new write has to be compared against VDO's index to check whether there is already a matching block. If there is a match, the index is written to, but the block is not rewritten. If there isn't a match, the index gets written to as well as the block. This means each write amplifies into many reads and writes (index and block). This amplification is more tolerable with fast SSDs or NVMe, but could be painful on HDDs.

      For VDO to compress data, each write requires some CPU time, which comes at the cost of additional latency. Some CPUs can compress data faster than others, and some data can be compressed more heavily than other data.

      If the dataset can be deduped at a good ratio, the saved space and "block writes" are probably worth the hit to performance, but if the data cannot be deduped well, then it won't be worth the impact to performance. The same is true for compression. Testing with your specific hardware and your specific datasets is the only way to know how well it will perform and whether it's the right choice.

  • @MrKMV34
    @MrKMV34 7 months ago

    Wedos has replaced LINSTOR with a proprietary solution after all: czcams.com/video/otr55vmKf30/video.htmlsi=cSvGWC6Cn0grkdft 27:57

  • @faberfox
    @faberfox 8 months ago

    So, now that RHEL is a bad word, I'm guessing you're regretting parting ways with proxmox... I'd love to see drbd properly integrated for two node clusters, where Ceph is overkill, any chance of that ever happening?

    • @linbit
      @linbit 8 months ago

      Thanks for the comment. We never parted ways with Proxmox, is there something that leads you to believe that's the case? Here is a recent blog/video we did on how to integrate our SDS solution with Proxmox linbit.com/blog/linstor-setup-proxmox-ve-volumes/

    • @marconwps
      @marconwps 6 months ago

      Linstor in proxmox COOL!!

  • @raul6236
    @raul6236 9 months ago

    The iSCSI LUN is not showing a disk in Disk Management after the iSCSI connection in Windows. A Linux client shows the LUN, though. Compatibility issue???

  • @chrisjchalifoux
    @chrisjchalifoux 9 months ago

    Thank you for the video

  • @leiw324
    @leiw324 9 months ago

    Hello, I followed the "LINSTOR cluster with LINSTOR Gateway" guide, but I can't download linstor-gateway with wget - ERROR 404 Not Found. Also, you are using dnf to install linstor-gateway; how can you install LINSTOR on CentOS or Rocky Linux? Thanks!

    • @leiw324
      @leiw324 8 months ago

      Hello, I installed the gateway from source code; after creating the iSCSI target, I cannot ping the virtual IP.

  • @kermatog
    @kermatog 9 months ago

    RHELatives 😂

  • @walterlucerotkdo
    @walterlucerotkdo 9 months ago

    Great work LINBIT team!!. Amazing video!. THANKS!. Hugs from Argentina💪

  • @clarkkentgwapo1
    @clarkkentgwapo1 9 months ago

    Can we implement this to sync 2 nodes in different locations? Not over a LAN.

    • @linbit
      @linbit 9 months ago

      Short answer: DRBD Proxy linbit.com/drbd-proxy/

  • @user-nl6tu4kk3z
    @user-nl6tu4kk3z 10 months ago

    Have you measured performance? I have this idea that doesn't let me sleep at night. I want to build a k8s cluster using Chinese CM4 clones, and Ceph seems to be "too much" for them to handle. LINSTOR is probably the way to go. Also, thanks for sharing the repository.

    • @linbit
      @linbit 10 months ago

      @user-nl6tu4kk3z I wrote about running LINBIT SDS (LINSTOR + DRBD) on this same Le Potato cluster on LINBIT's blogs: linbit.com/blog/kubernetes-at-the-edge-using-linbit-sds-for-persistent-storage/ I wrote about performance a bit there, and I can tell you I was up against the limits of this small cluster's resources while running LINBIT SDS (and it is pretty light on resources). I happen to have LINBIT SDS running along side Ceph in a virtualized k8s cluster, and can see that the rook-ceph namespace is using 10x the CPU and 2.4x the memory that the linbit-sds namespace is using, so I'm not confident rook-ceph would even deploy into my Le Potato cluster. Good luck with whichever storage solution you choose for your k8s cluster! Don't hesitate to contact LINBIT (linbit.com/contact-us/) to chat about your project :) - Matt

  • @waldmensch2010
    @waldmensch2010 10 months ago

    What about the latency of the kernel NFS server vs. NFS-Ganesha?

    • @linbit
      @linbit 10 months ago

      Not sure we can help with this question. Our guess is there is some context switching between kernel and userspace, where with Ganesha everything stays in user space.

  • @bensatunia8842
    @bensatunia8842 10 months ago

    It would be nice to hide the non-formatted disk once the DRBD device with a filesystem is created, so you won't see drives D: and E: in Explorer.

  • @roman.brunetti
    @roman.brunetti 11 months ago

    Great, but if I add these resources I get errors:

    pcs cluster cib drbd_cfg
    pcs -f drbd_cfg resource create shares ocf:linbit:drbd drbd_resource=r0 \
        op start interval=0s timeout=240s \
        stop interval=0s timeout=100s \
        monitor interval=31s timeout=20s role=Unpromoted \
        monitor interval=29s role=Promoted
    pcs -f drbd_cfg resource promotable shares promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
    pcs -f drbd_cfg resource status
    pcs resource status

    Cluster Summary:
      * Stack: corosync
      * Current DC: node2 (version 2.1.5-9.el9_2.3.alma.1-a3f44794f94) - partition with quorum
      * Last updated: Thu Oct 12 00:26:13 2023
      * Last change: Thu Oct 12 00:25:27 2023 by root via cibadmin on node1
      * 2 nodes configured
      * 2 resource instances configured

    Node List:
      * Online: [ node1 node2 ]

    Active Resources:
      * No active resources

    Failed Resource Actions:
      * shares start on node1 returned 'error' at Thu Oct 12 00:25:28 2023 after 99ms
      * shares start on node2 returned 'error' at Thu Oct 12 00:25:16 2023 after 147ms

    [root@node1 ~]# drbdadm status r0
    r0 role:Secondary
      disk:UpToDate
      node2 role:Secondary
        peer-disk:UpToDate

    [root@node1 ~]#

  • @roman.brunetti
    @roman.brunetti 11 months ago

    An implementation into PowerShell would be great 👍

    • @linbit
      @linbit 11 months ago

      Thanks for the feedback! Informed the team.