Don't use local Docker Volumes

  • Published 7 Jun 2024
  • How to avoid using local Docker Volumes and connect them to a remote NFS Storage Server like QNAP, Synology, etc.? I will show you how to create NFS Docker Volumes in Portainer and connect them to my TrueNAS server. #Docker #Linux #Portainer
    Teleport Tutorial: • How I secure my Server...
    Teleport-*: goteleport.com/thedigitallife
    Follow me:
    TWITTER: / christianlempa
    INSTAGRAM: / christianlempa
    DISCORD: / discord
    GITHUB: github.com/christianlempa
    PATREON: / christianlempa
    MY EQUIPMENT: kit.co/christianlempa
    Timestamps:
    00:00 - Introduction
    01:17 - Why not store Docker Volumes locally?
    03:03 - What is an NFS Server
    03:30 - Advantages of NFS Servers
    04:29 - What to configure on your NAS?
    06:02 - Advertisement-*
    06:35 - Create NFS Docker Volumes
    11:02 - Migrate existing Volumes to NFS
    ________________
    All links with "*" are affiliate links.
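For reference, the NFS-backed Docker volumes shown in the video can also be created from the CLI. A minimal sketch, assuming a NAS reachable at 192.168.0.10 exporting /mnt/pool/docker (address, path, volume name, and NFS version are all placeholders for your own setup):

```
# Create a Docker volume whose data lives on an NFS export
# (server address, export path, and nfsvers are placeholders).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.0.10,rw,nfsvers=4 \
  --opt device=:/mnt/pool/docker/mydata \
  mydata

# Verify the driver options were stored as expected.
docker volume inspect mydata
```

The `local` driver with `type=nfs` simply wraps a regular NFS mount, so anything your host's mount.nfs supports can go in the `o=` option string.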

Comments • 258

  • @NathanFernandes
    @NathanFernandes 2 years ago +9

    Brilliant! I was just looking at something like this today, but instead I was trying mounting my synology as a cifs volume, nfs is so much easier and this video helped me set it up under 5mins. Thank you!

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks! :) You're welcome

    • @therus000
      @therus000 1 year ago +1

      Did it work for you?
      I just followed the instructions, but I can't create subfolders as volumes.
      I get error 500. I tried mapping root to admin, but got the same problem, error 500. If I create the volume in the root of the NFS folder, it works, but I'm not comfortable with that.
      Any help? I'm on DSM 7.

    • @a5pin
      @a5pin 1 year ago

      @@therus000 Hey I have the same problem, did you manage to resolve this?

  • @anthonyjhicks
    @anthonyjhicks 2 years ago +4

    This was awesome. Perfect timing as exactly the next step I wanted to make with my volumes on Portainer.

  • @derp0ps
    @derp0ps 1 year ago +2

    I followed this for my Windows NAS to share to Docker, and this is so much easier than doing host-level mounts. Thank you so much

    • @christianlempa
      @christianlempa  1 year ago

      You’re welcome :)

    • @derp0ps
      @derp0ps 4 months ago

      @@christianlempa Hello again, I was wondering if by chance you knew how to specify this in a docker-compose file. I'm trying to make a template and this seems to be my only stopping point at the moment

  • @blairhickman3614
    @blairhickman3614 1 year ago +1

    I wish I found this video last week. It would have saved me hours trying to mount a NFS share on my Ubuntu server. I ran into the user permission issue also and it took a lot of searching to find the answer.

  • @MichaelWDietrich
    @MichaelWDietrich 23 days ago +1

    Great walkthrough and howto. Thanks for that. Nevertheless, a small criticism at this point. NFS volumes and snapshot backups on the target NAS IMHO do not replace an application-based backup. Of course (for example in the event of a power failure, but also due to other technical and organizational problems) the volumes on the NFS can be destroyed and become inconsistent in the same way as those on the local machine. This is even more likely because the writing process on the NFS relies on more technical components. That's why I also do, and highly recommend, application-based backups with at least the same frequency. If the application's backup algorithm is written sensibly, it will only complete the backup after a consistency check, and then it is clear that at least this backup is not corrupted and can be restored without data loss.

  • @yangbrandon301
    @yangbrandon301 1 year ago

    Thank you. A great tutorial for NFS.

  • @ilco31
    @ilco31 2 years ago +1

    This is great to know - I'm currently looking into how to back up my more info-sensitive Docker containers, like Vaultwarden or Nextcloud - great video

    • @dastiffmeister1
      @dastiffmeister1 2 years ago

      I mainly use bind mounts for my persistent docker storage (had MAJOR issues with docker databases over cifs or nfs) and an awesome docker image for volume backups: offen docker-volume-backup
      It stops the desired containers before a backup, creates a tarball, sends it to an S3 bucket on my truenas server, spins up the stopped containers and lastly a cloud sync task on my truenas encrypts the data before pushing the backup to the cloud.

    • @christianlempa
      @christianlempa  2 years ago

      Glad it was helpful!

    • @christianlempa
      @christianlempa  2 years ago

      What were the issues with the DBs?

  • @nixxblikka
    @nixxblikka 2 years ago

    Nice video, looking forward to true nas content!

  • @HoshPak
    @HoshPak 2 years ago +24

    Some food for thought...
    Most NAS systems have two network interfaces. You could attach the NAS directly to the server on a separate VLAN and optimize that network for I/O, enabling jumbo frames etc. That basically makes it a SAN without the redundancy.
    Using iSCSI instead of NFS is also an option and might be preferred for database workloads, I assume.

    • @kamilkroliszewski689
      @kamilkroliszewski689 2 years ago +1

      Exactly - we used NFS as storage for databases and it didn't handle it very well. Sometimes you get locks etc.

    • @christianlempa
      @christianlempa  2 years ago +6

      Thanks for your experience, guys - I have just a little experience with it. FYI, I was playing around with jumbo frames on a direct connection between my PC and TrueNAS, and it worked pretty well. VLANs are a topic for a future video as well :D So stay tuned!

    • @macntech4703
      @macntech4703 2 years ago +3

      I also think that iSCSI might be the better choice compared to NFS or SMB, at least in a homelab environment.

    • @HoshPak
      @HoshPak 2 years ago

      @@christianlempa I've been through that endeavor just recently. When I searched for tagged VLANs on Linux, the documentation was hopelessly outdated, referring to deprecated tools.
      My advice: disable anything that messes with net devices even remotely (e.g. NetworkManager) and go straight for systemd-networkd. I've built a VLAN-aware bridge which functions just like a managed switch. Virtual NICs from KVM attach to it automatically, as do Docker containers. This is also a pretty good way to use tagged VLANs inside VMs, which is otherwise hard to do on KVM.
      If you'd like some help at the beginning, let me know. We will get you started. :)

    • @gordslater
      @gordslater 2 years ago

      Last time I did this I had the best results using ATAoE. It's not used much nowadays but is very fast for local links (it's non-routable, but ideal for single-rack use).

  • @G00SEISL00SE
    @G00SEISL00SE 1 year ago

    I had been stuck for weeks with an NFS volume not mounting right inside my containers. First I couldn't edit the created files, then I could but couldn't create new ones. This fixed both my issues - thanks, going to set up my shares like this moving forward.

  • @wuggyfoot
    @wuggyfoot 1 year ago

    Wow dude, when you typed root in that box you solved all of my problems.
    Crazy how you're the best source of information I have found on all of the internet

  • @fx_313
    @fx_313 1 year ago +7

    Hey! Thank you very much for this.
    Can you also give an example of how to use an NFS mount in docker-compose / Stacks in Portainer? I've spent the last couple of hours trying and googling but wasn't able to find a real answer or consistent examples of how to do it.

    • @derp0ps
      @derp0ps 4 months ago

      I'm having the same issue lol, did you by chance find a way to do this?

  • @MadMike78
    @MadMike78 10 months ago

    Love your videos. Question: how would I use Portainer to add a new volume to an existing container? I found how to add the volume, but after that I don't know if anything needs to be copied over.

  • @myeniad
    @myeniad 9 months ago

    Great explanation. Thanks!

  • @Spydaw
    @Spydaw 2 years ago

    Awesome video, thank you for explaining this. I am doing the exact same with all my pods in k3s ;)

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh, that is cool! I'm planning that as well in my k3s cluster I'm currently building ;)

    • @Spydaw
      @Spydaw 2 years ago

      @@christianlempa Feel free to ping me if you have any questions ;)

  • @steverhysjenks
    @steverhysjenks 1 year ago +3

    How do you replicate this in docker-compose? I get these steps, and it's great for me to understand them manually, but I'm not convinced my compose is working correctly for NFS. I'd love to see this explanation converted into a docker-compose example.
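    Several commenters ask the same thing. For what it's worth, a hedged sketch of an NFS volume declared in a compose file, using the same `local` driver options as the CLI method (the image, service name, server address, and export path below are placeholders):

```yaml
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/usr/share/nginx/html

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.0.10,rw,nfsvers=4"
      device: ":/mnt/pool/docker/appdata"
```

    Note the leading `:` in `device` - the `local` driver expects the export path in that form. Compose only creates the volume the first time; if you change `driver_opts` later, you may need to remove and recreate the volume.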

  • @ercanyilmaz8108
    @ercanyilmaz8108 1 year ago

    You can also run Docker containers on TrueNAS itself and connect to the NFS share locally. If I'm right, the writes can then be handled synchronously, which guarantees data integrity.

  • @Muhammad_Hannan9963
    @Muhammad_Hannan9963 1 year ago +3

    I read somewhere that NFS is not secure when used with containers, as exposing the server's file system to a Docker container also opens access for container processes to get at the main OS. What are your thoughts on this? Anyone?

  • @Dough296
    @Dough296 2 years ago +2

    What if you need to perform an update on TrueNAS that needs to reboot the NFS service or the TrueNAS system?
    Will the Docker container wait for the NFS service to come back without having trouble with data consistency?
    I have that setup myself, but when I want to restart my NAS it's a real pain to stop everything that depends on it...

  • @marcoroose9973
    @marcoroose9973 2 years ago +2

    When migrating, cp -a (which already implies -r) may be better, as it copies permissions too. Nice video! I need to figure out how to do this in docker-compose.

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh great, thank you ;)

    • @ansred
      @ansred 2 years ago +1

      It would be appreciated if you figured out and shared how to use docker-compose on Portainer. That would be really handy!
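    As a rough sketch of the migration idea discussed above (the container and volume names are hypothetical), you can copy a local volume's contents into a new NFS-backed volume with a throwaway container; `cp -a` preserves permissions, ownership, and timestamps:

```
# Stop the app so nothing writes during the copy.
docker stop myapp

# Mount both volumes into a scratch container and copy everything,
# including dotfiles ("/from/." rather than "/from/*").
docker run --rm \
  -v old_volume:/from \
  -v new_nfs_volume:/to \
  alpine sh -c "cp -a /from/. /to/"

# Point the container at the new volume, then start it again.
docker start myapp
```

    The last step assumes you recreate the container (or edit its stack) to reference new_nfs_volume; `docker start` alone won't switch the mount.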

  • @gunnsheridan2162
    @gunnsheridan2162 6 months ago

    Hi Christian, thanks for the informative video. I have two questions though:
    1. What is the correct way of setting up a user with the same UID and GID on the NFS server and client? I have an RPi with user 1000:1000. Such a user doesn't exist on my Synology. Should I add a new user to the Synology? Or should I pick one of the Synology user IDs and create such a user on the RPi? If so, how do you create a user with specific IDs?
    2. What about file locking through NFS? I had issues with network-stored (Samba/CIFS) data containing an SQLite database, for example Home Assistant and Baikal. I couldn't network-store MariaDB or MySQL either, due to some "file locking issues".
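    On the first question, one hedged approach (assuming the NAS gives you a shell and the standard Linux user tools, which not every DSM version exposes; the names and IDs below are examples) is to find the UID/GID the container writes with and create a matching account on the server:

```
# On the Docker host: find the UID/GID your user (or container) runs as.
id                                # e.g. uid=1000(pi) gid=1000(pi)

# On the NFS server: create a group and a non-login user with the same IDs.
sudo groupadd -g 1000 dockerdata
sudo useradd -u 1000 -g 1000 -M -s /usr/sbin/nologin dockerdata
```

    Classic NFSv3-style exports match on the numeric UID/GID only, so the usernames and passwords on the two sides don't have to agree - only the numbers do.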

  • @vidiokupret
    @vidiokupret 2 years ago

    Thank you, it really helps

  • @gonzaloamadorhernandez7020

    Oh my gosh!!! You are a genius!!! Thank you very much, mate

  • @MarkJay
    @MarkJay 6 months ago

    What happens when you need to reboot or shut off the storage server? Can the Docker containers stay running or do they need to be stopped first?

  • @AJMandourah
    @AJMandourah 2 years ago +1

    I have read around that some people complain of database corruption using NFS as their cluster storage. I haven't tried it personally, and I am currently using CIFS mounts for my Docker Swarm. I was wondering if you have tried GlusterFS, as it seems to be recommended for cluster volumes in general.

    • @christianlempa
      @christianlempa  2 years ago +2

      I've heard that a couple of times, but never found any resources or details on why this should be the case. Could you kindly share some insights? Thanks

  • @scockman
    @scockman 2 years ago

    Another AWESOME video!! But I saw in the video that you have a portainer_data volume on the NFS share - how was that done? I have been trying to get this to work but I'm getting a Docker error while trying to mount the volume.

    • @christianlempa
      @christianlempa  2 years ago

      Thanks! You need to do it outside of the GUI with Docker CLI commands, unfortunately.

  • @Ogorodovd
    @Ogorodovd 9 months ago

    @christianlempa Thanks Christian! Could I install Portainer on a Debian VM within TrueNAS Scale and then communicate with that? Or are you using an entirely separate machine for your Portainer server?

  • @GundamExia88
    @GundamExia88 2 years ago +1

    Great video! I have a question: if I have mounted the NFS share in /etc/fstab, do I still need to create the NFS volume? Couldn't I just point to the mounted NFS path on the host? What's the advantage of creating a new NFS volume in Portainer? Is it just easier to migrate from NFS to NFS? Thx!

    • @christianlempa
      @christianlempa  2 years ago

      It's just for easier management. If you already have NFS mounted on the host, that is totally fine
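    For the host-mount variant, a minimal sketch (the server address and paths are placeholders): mount the export once via /etc/fstab, then use plain bind mounts into containers:

```
# /etc/fstab entry - mounts the export once, at boot, on the Docker host:
#   192.168.0.10:/mnt/pool/docker  /mnt/nfs/docker  nfs4  rw,_netdev  0  0
sudo mkdir -p /mnt/nfs/docker
sudo mount -a    # apply the fstab entry without rebooting

# Containers then bind-mount the host path instead of a named volume:
docker run -d -v /mnt/nfs/docker/myapp:/data alpine sleep infinity
```

    The `_netdev` option tells the host to wait for the network before mounting, which matters when containers start at boot.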

  • @tuttocrafting
    @tuttocrafting 2 years ago

    I have a docker user and group on both the Docker and NAS machines. I use the same UID and GID in the container using env variables.

  • @Billyfelicianojp
    @Billyfelicianojp 1 year ago +1

    I am having issues installing a stack in a volume. The volume is already added in Portainer - I can see it and I tested it - but in my YML I can't figure out how to attach Nextcloud to the volume I want.

  • @ninadpchaudhari
    @ninadpchaudhari 2 years ago +26

    Btw, one point I need to add here: this doesn't mean you don't need backups. Having redundant storage over NFS is nice, but do ensure you still have restorable backups in addition to this.
    There are many things that can go wrong here: your FS might get corrupted if there was a network problem while writing a bit, the RAID fails, etc.

    • @christianlempa
      @christianlempa  2 years ago +4

      Absolutely! Great point, and I might explain this in future videos :)

    • @gullijons9135
      @gullijons9135 2 years ago +3

      Good point, "RAID is not backup" is something that really needs to be hammered home!

  • @ronaldronald8819
    @ronaldronald8819 2 years ago +1

    I am triggered into high-gear learning mode by all of this. The aim is to set up a Home Assistant server. HA runs in Docker and stores its data in local volumes. I am no fan of having my data all over the place, so this video solves that problem. The next step is to get my hands dirty and hope I don't get too many errors that exceed my domain of knowledge. Thanks!!

  • @rodrigocsouza8619
    @rodrigocsouza8619 8 months ago

    @christianlempa Is the activation of NFS4 really that simple?? I've tried exactly what you did and the mount always fails with "permission denied". I tried to dig into the subject, and it looks like NFS4 requires a lot of effort to get working.

  • @manutech156
    @manutech156 1 year ago

    Any plans to do a tutorial on Kubernetes Persistent Volumes backed by TrueNAS NFS?

  • @jmatya
    @jmatya 3 months ago

    Many of the big datacenters also use Fibre Channel-based storage; network-attached storage can be slow and subject to TCP congestion and packet loss, whereas FC is guaranteed delivery.

  • @RobbyPedrica
    @RobbyPedrica 2 years ago

    Great video.

  • @vmdcortes
    @vmdcortes 1 year ago

    Awesome!!
    Is this a good solution for Docker Swarm volume sharing between the different nodes?

    • @christianlempa
      @christianlempa  1 year ago

      Thx! I'm not sure about that, I think NFS is still the easiest for my setup.

  • @desibanjankri5646
    @desibanjankri5646 2 years ago

    LOL, I spent a week figuring this exact thing out - I wanted Photoprism to use pictures from the backup server rather than import files into Docker. Just got it working last night.😂

  • @macenkajan
    @macenkajan 1 year ago

    Hi Christian, great videos. I would love to see how to use this NFS (or maybe iSCSI) setup with a Kubernetes cluster. This is what I am trying to set up right now ;-)

    • @christianlempa
      @christianlempa  1 year ago

      Thanks mate! Great suggestion, maybe I'll do that in the future.

  • @226cenk9
    @226cenk9 1 year ago

    This is nice, but is there a way to use a local directory on the host instead? I have Docker installed on my Ubuntu 22.04 and it would be nice to use local directories.

  • @florianlauer7591
    @florianlauer7591 2 years ago

    Hi!
    What tool are you using for drawing and marking directly on the screen with the mouse?

    • @christianlempa
      @christianlempa  2 years ago +1

      Hi, I'm using EpicPen and my Galaxy Tab as a drawing screen.

  • @RiffyDevine
    @RiffyDevine 8 months ago

    The folder/volume I created inside the container is owned by user 568, so I can't access the /nfs folder in my container. Why did it use that user ID instead of root?

  • @haxwithaxe
    @haxwithaxe 2 years ago +4

    Glusterfs works really well for small files or small file transfers as well.
    Edit: I've since been told databases don't scale well on glusterfs. IDK how much experience that person has with glusterfs but they have enough experience with k8s for me to accept it until I can test it. Works great for the really small stuff I do in my home lab though.

    • @christianlempa
      @christianlempa  2 years ago

      Oh yeah that's an interesting topic

    • @NicoDeclerckBelgium
      @NicoDeclerckBelgium 22 days ago

      True, but you have Galera Cluster for MySQL/Mariadb or just replication in PostgreSQL.
      But I get that the real problem is the 'other' dozens of databases.
      They can still be on a centralised storage, but as you say they don't scale well ...

  • @raylab77
    @raylab77 1 year ago

    Will this work with a backup solution such as pCloud?

  • @kanarie93
    @kanarie93 10 months ago +3

    Isn't it better to map the NFS volume at /mnt/NFS on the host running Docker, so you have one connection open instead of hundreds, with every container opening its own connection? Or is that not possible when you go Docker Swarm?

    • @KilSwitch0
      @KilSwitch0 4 months ago

      I have this exact question. This is the way Unraid handles it. I think I will duplicate Unraid's approach.

  • @solverz4078
    @solverz4078 1 year ago

    What about storing Portainer's volumes on an NFS share too?

  • @kevinhilton8683
    @kevinhilton8683 2 years ago

    Hmm, based on the comments it seems iSCSI might be the way to go - that's block storage, vs NFS, which is file storage. I don't know, however. I do know that when I've had two Linux systems sharing via NFS, the NFS connection has crapped out in the past, causing problems. I'm not sure this is a better option than keeping bind-mounted volumes and just having a backup solution that periodically backs up the volumes to a remote source. Lastly, I'm wondering if you run an LDAP server, since this would synchronize users across the VMs and the NAS. I'm curious whether you would still get NFS errors in that scenario.

    • @christianlempa
      @christianlempa  2 years ago

      Currently I don't have LDAP, but I'm planning to set up an AD at home.

  • @schrank392
    @schrank392 2 months ago

    How do you draw this stuff, like at 2:30?

  • @ettoreatalan8303
    @ettoreatalan8303 6 months ago

    On my NAS, CIFS is enabled for the Windows computers connected to it. NFS is disabled.
    Is there any reason not to use CIFS instead of NFS for storing Docker volumes on my NAS?

  • @pedro_8240
    @pedro_8240 2 months ago

    And how do I use NFS to mount the initial portainer data volume, before configuring portainer?

  • @TheManuforest
    @TheManuforest 1 year ago

    Hello guys... I had a power failure and all my Docker volumes are gone. Is this predictable behavior? Are they still on disk? Thanks

  • @rino19ny
    @rino19ny 1 year ago

    It's the same thing - a storage server can also crash, and you have a more complicated setup with NAS storage. Whichever method you select, a proper backup is best.

  • @macpclinux1
    @macpclinux1 2 years ago

    If I ever go crazy and have to set up a dirty Docker system, I'll try to remember this. It seems really helpful, and IMO any possible improvement is a godsend with Docker (I really hate it, ngl; Kubernetes too - not a big fan).

  • @j4nch
    @j4nch 1 year ago

    I'm far from an expert on Linux and there is something I'm missing about permissions: when you say we need the same user with the same permissions on the NFS server and in the Docker image, how does that work? I thought that just having the same user ID or the same user name isn't enough, no? I mean, they could have different passwords?
    Also, what about the performance implications? I'm thinking of moving my Plex server into a Docker container with its storage on an NFS volume - could this be an issue?

  • @huboz0r
    @huboz0r 1 year ago +1

    Btw, one point I need to add here: why share all of your files as root? You could just make a new group and user(s) specifically for accessing your files and map your NFS shares to them. It's belt and suspenders, since you only expose NFS to a specific IP, but not using root whenever possible is the way forward. Probably why it got removed as a default in the new release.

    • @christianlempa
      @christianlempa  1 year ago +2

      Yep that’s something I need to get fixed in the future

  • @Telmosampaio
    @Telmosampaio 2 years ago

    I usually run a cron job to copy volumes and database exports to AWS S3, and then another cron job to delete files older than one month!

  • @streambarhoum4464
    @streambarhoum4464 1 year ago +1

    Christian, how do you do a disaster recovery of the entire system? Could you simulate an example?

  • @tl1897
    @tl1897 2 years ago

    I tried this some time ago. Sadly my Pi 4 with 3 HDDs in RAID 5 using mdadm was not fast enough.
    So I decided to keep my deployment files on the NFS, but volumes locally.
    And I wrote backup scripts for the rest.

  • @esra_erimez
    @esra_erimez 2 years ago

    Would you please do a video about using Ceph Docker RBD volume plugin?

  • @rw-xf4cb
    @rw-xf4cb 2 years ago

    You could use iSCSI targets, perhaps - they worked well with VMware ESX years ago when I didn't have a SAN.

  • @shinwadone
    @shinwadone 2 years ago

    There was a permission problem when I started the container. The user and group exist on both server and client, but when executing the chown command in the Dockerfile it throws a "no permissions" error - maybe I have to use the root user instead. Are there any other ways to work around using the root user?

  • @ninji4182
    @ninji4182 5 months ago

    How do I do this with WSL2 and a Synology NAS?

  • @denniskluytmans
    @denniskluytmans 2 years ago

    I'm running my Docker inside an LXC on Proxmox, which has an MP mount to the host, which has NFS mounts to the storage server. I'm using bind mounts inside of Portainer - is that wrong?

  • @anthonycoppet8788
    @anthonycoppet8788 6 months ago

    Hi. I created the NFS volume on the server and can connect from the Synology, but in Portainer the NFS volume appears empty. The server volume is nobody:nogroup.

  • @ailton.duarte
    @ailton.duarte 9 months ago

    Is it possible to use ZFS pools?

  • @StephenJames2027
    @StephenJames2027 1 year ago

    7:48 After many hours I finally figured out that I needed NFS4 enabled on TrueNAS to get this to work on my setup. I kept getting Error 500 from Portainer when attempting this with the default NFS / NFS3. 😅

  • @sasab7584
    @sasab7584 1 year ago

    Is the same thing possible using CIFS/SMB mounts originating in Windows, or does it have to be NFS?

  • @D76-de
    @D76-de 6 months ago

    First of all, I don't have much knowledge about infrastructure... T,T
    Can the NFS of a TrueNAS VM be delivered to a container volume without 1 Gb network bottlenecks?
    Both TrueNAS and the container (Ubuntu VM) run on Proxmox.

  • @mrk131324
    @mrk131324 1 year ago

    How about volumes where performance matters? Like tmp or cache folders, or source files in local development?

  • @martinzipfel7843
    @martinzipfel7843 1 year ago

    I've been trying to do this for hours now and always run into permission issues. My user on the Docker host and the NAS are exactly the same (same username, password, UID, GID), and I get "permission denied" when I just try to cd into the NAS folder from an Ubuntu test container. Anyone have an idea?

  • @wstrater
    @wstrater 2 years ago

    How about HACS without OS?

  • @markobrkusanin4745
    @markobrkusanin4745 2 years ago +1

    GlusterFS would be an even better option for managing data inside Docker Swarm.

    • @christianlempa
      @christianlempa  2 years ago

      I'm so interested in these filesystems - once I finish my projects I'll start looking at them.

  • @HelloHelloXD
    @HelloHelloXD 2 years ago +1

    Great topic. One question: what is going to happen to the Docker container if the connection between the NFS server and the Docker server is lost?

    • @christianlempa
      @christianlempa  2 years ago

      The container will fail to start

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@christianlempa what if the container was already running and the connection was lost?

    • @vladduh3164
      @vladduh3164 2 years ago +1

      @@HelloHelloXD It seems that the container just keeps running, though it may not be able to do anything. I just tested this with Sonarr, as I had the /config folder in the NFS volume, and it seemed to work as long as it didn't need anything from that folder. When I clicked on each series it just showed me a loading screen until I reconnected it. I suppose the answer is: it depends entirely on what folders you put in that volume and how gracefully the application handles losing access to those files.

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@vladduh3164 thank you.

  • @FunctionGermany
    @FunctionGermany 2 years ago +4

    Don't you need higher-tier networking to make sure you're not suffering any performance penalties? I can imagine that the additional latencies can make certain applications run slower when all file system access has to go through 2 network stacks and the network itself.

    • @charlescc1000
      @charlescc1000 2 years ago

      I have been running some basic self-hosted containers on my servers in a similar configuration to the one laid out in this video (Ubuntu server with Portainer & Docker connected to TrueNAS for the storage). My TrueNAS is set up with mirrored pairs instead of raidz/raidz2 - but it's still over a 1 GbE LAN.
      It's been fine. Yes, I'm sure 10 GbE would improve it, but it's plenty usable for most containers.
      I originally set it up as a test environment before buying 10 GbE hardware, and then it worked so well that I decided not to bother with 10 GbE (yet).
      It's not great with a VM that has a desktop environment - but it's been fine with server VMs. Not fantastic, but fine.

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks for your experience! I'm running it with a 10Gbe connection, but I highly doubt this would make a huge difference in this case. As for VM-Disks this might be totally different, of course, but for Docker Volumes 1Gbe should be fine.

    • @ElliotWeishaar
      @ElliotWeishaar 2 years ago

      You are correct. The approach outlined in the video is great! And I would recommend this approach to pretty much anyone starting with docker. As you continue your journey you will have to adapt to the requirements of what you're hosting. You are correct in saying that some applications don't play well with storage over the network. Plex is a big one. I tried hosting my plex library (the metadata, not the media) on NFS, and the performance was atrocious. The application was unusable, and I had to switch to storing the data locally. I suspect it has to do with SQLite performing tons of IOps which NFS couldn't handle. This was with a dedicated point to point 10GBe connection as well. I was using bind mounts instead of docker volumes but I don't think that created the issue (could be wrong). I have other applications that have experienced this as well. I've resorted to having all of my data be local on the machine, and then just create backups using autorestic.

    • @nevoyu
      @nevoyu 2 years ago

      You're not serving hundreds of connections at a time, so you really don't need as much performance as you think. I run a 1 Gb connection to my homelab with a 4-disk RAID 10 array; I can't tap the full bandwidth of the connection, but I have no issues with performance watching 1080p (since I don't have a single display that 4K makes sense on).

    • @robmcneill3641
      @robmcneill3641 2 years ago +1

      @@ElliotWeishaar I ran into the same issues with Plex. I did end up storing the media remotely but could never get the library data to work reliably.

  • @Got99Cookies
    @Got99Cookies 2 years ago +1

    Wouldn't raidz2 be safer, especially with a 12-drive array? Good video though! Docker volume management is very important and you made some very good points!

    • @christianlempa
      @christianlempa  2 years ago +1

      Yeah, it would be. You could argue whether that would be a better option; I still think it's unlikely that more than one hard drive fails at the same time, but hey... people have undoubtedly seen this in the wild. That's why offsite backups are important.

    • @devinbuhl
      @devinbuhl 2 years ago +2

      You would be surprised at how easy it is for another drive to die while your pool is resilvering from a 1 disk failure.

  • @jorgegomez374
    @jorgegomez374 2 years ago

    I have 3 RPis in a Docker Swarm. One of them is my NFS server, doing exactly this. But I worry about my Docker drive dying, so any ideas on making backups?

    • @christianlempa
      @christianlempa  2 years ago +1

      Hm, I would try to back up the Raspberry Pis' file systems with rsync or similar backup software for Linux.

    • @jorgegomez374
      @jorgegomez374 2 years ago

      @@christianlempa thanks

  • @yotuberrable
    @yotuberrable 1 year ago +1

    In this case I assume the NAS server must always be started before the Docker server, and shut down in reverse order; otherwise I assume containers will just fail to start. How do you guys handle this?

    • @MarkJay
      @MarkJay 6 months ago

      I also would like to know how to handle this.

  • @realabzhussain
    @realabzhussain 2 years ago

    Could you use CIFS/Samba to do the same thing?

  • @Grand_Alchemist
    @Grand_Alchemist 1 month ago +1

    Even with the "wheel" user added to TrueNAS, NFS refused to work for Deluge/Sonarr/Radarr (CentOS using docker-compose). I ended up making an SMB share (yes, Microsoft, blasphemy!) and it works perfectly. So much less of a headache than NFS, PLUS it's actually secure (authenticated with a password and ACLs). So, yeah. Unexpected, but I would just recommend making a fricking SMB share.

  • @area51xi
    @area51xi 6 months ago

    When I try to deploy I keep getting a "Request failed with status code 500." error message.

  • @leokeuken9425
    @leokeuken9425 2 years ago

    You could just push your builds to a remote server running a private Docker registry and have that server run daily backups.
    Out of the box, Docker on overlay2 doesn't like NFS *at all*. You're better off using the --cache-to and --cache-from flags on BuildKit if you are looking for any advantage of sharing layers across multiple machines, provided you do have the cache target mounted on a shared FS. And even with that, the time it costs to import and export the cache puts question marks on the absolute performance gains for builds.

  • @Photograaf11
    @Photograaf11 2 years ago +1

    Hi!
    Would it also be possible to do something similar for the "stacks" that are created with Portainer???
    Or maybe this is stupid... so that images are not downloaded over and over again when there is an update, for example (like a local cache system).
    Great video, as usual!

    • @christianlempa
      @christianlempa  Před 2 lety +1

      Thanks! I guess that's working as well, but I haven't looked into compose yet
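
For reference on the stacks question above: an NFS volume can be declared directly inside a Portainer stack (a docker-compose file), so the stack carries its own storage definition. A hedged sketch; the IP address, export path, and all names are placeholders:

```yaml
# docker-compose stack with an NFS-backed named volume.
version: "3.8"
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/usr/share/nginx/html
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nfsvers=4"
      device: ":/mnt/pool/docker/appdata"
```

Note the leading colon in `device:`; the `local` driver passes these options through to `mount -t nfs`.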

  • @Weirlive
    @Weirlive Před 2 lety

    Have you seen any issues with DBs, specifically SQLite? I tried to move my containers to an NFS share… some work just fine, but anything using SQL seems to just break.

    • @christianlempa
      @christianlempa  Před 2 lety

      I, personally, haven't. I heard it doesn't work great for databases, that's why I used NFSv4, as it was improved to work better with that. But if you still have problems, I'd say you might just switch your workflow for your databases to something else.

    • @Weirlive
      @Weirlive Před 2 lety

      @@christianlempa yeah, I'm also using v4 and DB's just didn't work. Currently looking for a solution as I don't like having all of my containers using the local storage of the VM.

    • @chris.taylor
      @chris.taylor Před rokem +1

      @@Weirlive Hey, did you find a solution? I am also finding that SQLITE wont play nice with network shares

    • @Weirlive
      @Weirlive Před rokem

      @@chris.taylor I just use local storage.
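
The SQLite failures in the thread above are a known limitation: SQLite relies on POSIX file locking, which is historically unreliable over NFS, and SQLite's own documentation advises against putting databases on network filesystems. If you still want to try, pinning NFSv4 (whose protocol includes mandatory locking support) in the volume options is the usual first step. A hedged sketch; the address and path are placeholders, and this is no guarantee that SQLite will behave:

```yaml
volumes:
  sqlite-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nfsvers=4,hard"
      device: ":/mnt/pool/docker/sqlite-data"
```

For databases, keeping the data on fast local storage and backing it up to the NAS, as several commenters suggest, tends to be the safer pattern.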

  • @Earendur08
    @Earendur08 Před 2 lety

    What's that split console you are using? Is it screen?

    • @christianlempa
      @christianlempa  Před 2 lety +1

      It's Windows Terminal

    • @Earendur08
      @Earendur08 Před 2 lety

      @@christianlempa Is it really? Must be a Windows 11 thing. I've never seen a split window like that other than when I've used screen on Linux.
      Very cool though. I like it.

  • @SebastianSchuhmann
    @SebastianSchuhmann Před rokem

    Did you experience problems with containers using NFS mounts after a reboot?
    Until now I used NFS only by mounting it on the host and bind-mounting Docker volumes to the host.
    Since I switched to the "direct mount" of NFS to the Docker host, specified in the stack code, all these containers fail after rebooting my CoreOS server.
    After restarting them, they start fine.
    It seems like the NFS service is not available at boot time, so the containers try to start but their volumes can't be mounted yet.

    • @christianlempa
      @christianlempa  Před rokem

      I mostly reboot both of my servers, so the NAS server and the Proxmox Server, then it works fine.

  • @Sama_09
    @Sama_09 Před 10 měsíci

    Once I installed nfs-common, things just worked! nfs-common was the missing piece.

  • @rileysalm3108
    @rileysalm3108 Před 2 lety

    I did this a few days ago and it corrupted several of my containers. Please be careful and have backups if you do this.

  • @jonathanprak6563
    @jonathanprak6563 Před 10 měsíci

    I can't seem to create a volume from QNAP to Docker. Can you help?

    • @jonathanprak6563
      @jonathanprak6563 Před 10 měsíci

      this is my export: "/share/CACHEDEV1_DATA/Dockerdata" *(sec=sys,rw,async,wdelay,insecure,no_subtree_check,no_root_squash,fsid=9e50b469aef8f8a22013f16b7d3f69f9)
      "/share/NFSv=4" *(no_subtree_check,no_root_squash,insecure,fsid=0)
      "/share/NFSv=4/Dockerdata"

  • @GSGWillSmith
    @GSGWillSmith Před rokem

    I don't think this is working anymore. It used to work, but now on TrueNAs 13, I keep getting this error with new volumes I create (both via stack editor and in portainer):
    failed to copy file info for /var/lib/docker/volumes/watchyourlan_wyl-data/_data: failed to chown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: lchown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: invalid argument

  • @Jayknightfr
    @Jayknightfr Před 2 lety

    Hey, thanks for the video. Unfortunately I get a "500 request failed" error when trying to deploy the container.
    I have no issues mounting the NFS share on other machines, but with the container it doesn't work, unfortunately.

    • @christianlempa
      @christianlempa  Před 2 lety

      Thats likely a problem with the NFS connection. Check IP, path, user settings and permissions

    • @pWAVE86
      @pWAVE86 Před rokem

      @@christianlempa Same issue ... already checked and entered all IP's possible. Also set "mapall" to root in TrueNAS ... no success. :(
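
Portainer surfaces most NFS mount failures as a generic "500" error, so it helps to verify the export on the server side first. On a generic Linux NFS server, the equivalent of TrueNAS's "Maproot User: root" setting is `no_root_squash` in `/etc/exports`; without it, root inside the container is squashed to an unprivileged user and writes fail. A hedged example entry (the path and the Docker host's IP are placeholders):

```
# /etc/exports -- give the Docker host full root access to the share
/mnt/pool/docker  192.168.1.20(rw,sync,no_subtree_check,no_root_squash)
```

After editing, re-export with `exportfs -ra` and confirm the share is visible with `showmount -e <NAS-IP>` from the Docker host.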

  • @jp_baril
    @jp_baril Před 2 lety

    Could we just have mounted an NFS share on the local Docker volumes directory?
    I suppose that because such a native Docker NFS mechanism exists, the answer would be no, but I'm curious why.

    • @christianlempa
      @christianlempa  Před 2 lety

      I guess that should also work, but in that case the Linux host would be responsible for managing the NFS connection, not Docker.

    • @a.x.w
      @a.x.w Před 2 lety

      That's what I do in my (older) setup. For some reason I couldn't get ACLs to work when mounted through Docker (I also tried docker-volume-netshare).
      I mount my NFS shares to a separate location on the host and symlink the volumes' _data directories to that, though.
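
For the host-mount approach discussed above, an `/etc/fstab` entry with systemd automount options sidesteps boot-ordering problems, because the share is only mounted on first access. A sketch with placeholder IP and paths:

```
# /etc/fstab -- mount the NAS export on the host; _netdev defers the
# mount until networking is up, x-systemd.automount mounts on demand.
192.168.1.10:/mnt/pool/docker  /mnt/docker-data  nfs4  _netdev,x-systemd.automount,noatime  0  0
```

Containers then use plain bind mounts under /mnt/docker-data instead of Docker-managed NFS volumes.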

  • @MalcomJPrince
    @MalcomJPrince Před 2 lety

    Cool danke

  • @ViktorKrejcir
    @ViktorKrejcir Před 2 lety +1

    Next level: Longhorn :)

  • @TheTyphoon365
    @TheTyphoon365 Před rokem

    I'm about to build an Unraid server for hosting my NAS and a lot of Docker containers. I can't use NFS in Unraid on my NAS though, right? I'm watching the video now...

    • @christianlempa
      @christianlempa  Před rokem

      I'm not sure, I haven't used Unraid, but I'm pretty sure it supports NFS.

  • @shetuamin
    @shetuamin Před 2 lety

    I have to reboot the Docker host if the NFS server hangs. Maybe I need a more stable FreeNAS server.

  • @nevoyu
    @nevoyu Před 2 lety

    You don't need a "NAS operating system"; any operating system can act as a NAS as long as it supports some form of network file sharing (SSH, NFS, SMB, iSCSI, etc.).

  • @gjermundification
    @gjermundification Před 2 lety

    I run my local storage via lofs across several zpools. Not sure why anyone would do anything as complicated as Docker when there are OpenSolaris zones on ZFS. In essence I run the server application part on a zpool that is in RAM and NVMe, and storage in RAM and spinning drives. ZIL, L2ARC, and all...
    I use NFS between the Mac and the media servers.

    • @christianlempa
      @christianlempa  Před 2 lety

      There are a couple of reasons why Docker is useful ;)

  • @a5pin
    @a5pin Před rokem

    Can someone help me figure out where I'm going wrong? I've created the volume, but when trying to attach it to the container, I always get a "request failed with status code 500" error when clicking deploy.

    • @christianlempa
      @christianlempa  Před rokem

      Most likely there is a network connection error or permission error.

    • @old-school-cool
      @old-school-cool Před rokem

      Getting the same, and I've gone over everything I can find. I can only imagine something has broken in TrueNAS Core 13.

  • @Dyllon2012
    @Dyllon2012 Před 2 lety +3

    For databases, I feel you'd be better off just taking backups and keeping a read replica or two. You'll almost certainly get better performance plus you'll be able to recover faster with the replica.
    If your app isn't a database, it should probably not be saving important data directly to disk unless you're doing some ad hoc operation (like running tests) where a local volume is fine.
    The NAS is probably more convenient for transferring files, I'll give it that.

    • @christianlempa
      @christianlempa  Před 2 lety +1

      I've heard that a couple of times, but never found any resources or details on why this should be the case. Could you kindly share some insights? Thanks

  • @martinzipfel7843
    @martinzipfel7843 Před rokem

    Every time I try to bind my NAS volume to a container, the container doesn't deploy, with error code 500 (it deploys fine without binding the volume, so I'm sure that is the issue). I tried it with 2 different TrueNAS Scale instances now with the same result. Anyone got an idea what I'm doing wrong?

    • @martinzipfel7843
      @martinzipfel7843 Před rokem

      I figured it out. My Docker hosts are running in Proxmox containers, and those don't allow NFS mounts unless they run privileged.
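
Regarding the Proxmox finding above: unprivileged LXC containers cannot perform NFS mounts (the kernel blocks them), while a privileged container can be allowed to via the `mount` feature flag. A hedged sketch of the relevant lines in the container's config (the VMID in the path is an example):

```
# /etc/pve/lxc/101.conf
# NFS mounts only work in privileged containers with the feature enabled.
unprivileged: 0
features: mount=nfs
```

A common alternative that also works for unprivileged containers is to mount the NFS share on the Proxmox host and pass it in as a bind mount, e.g. `mp0: /mnt/nfs,mp=/mnt/nfs`.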