We DESTROYED Our Production Ceph Cluster

  • Published 5 Sep 2024

Comments • 24

  • @TechJackass88
    @TechJackass88 2 years ago +24

    This is what I miss from tech channels: absolute disregard for office ethics and that “let’s see what happens” attitude, without an overly dramatic work-up. Walk up to the rack, yank several power cables out, stick your head out of the server room to listen for screams, go back, plug the equipment back in, bring it back up, and go on with your day.
    It doesn’t matter if it was scripted or not, I can appreciate a professional in their field doing this just for the funk of it. You folks got yourselves a subscriber. Keep it up.

    • @45Drives
      @45Drives  2 years ago +1

      Alexander, this feedback really helps us, especially as we venture into a new series of videos. Thanks again for the kind words!

    • @swainlach4587
      @swainlach4587 2 years ago

      Yeah, I like submitting my work late because someone is fckg around in the server room.

  • @hz8711
    @hz8711 2 years ago +3

    I am surprised I'm only seeing this channel for the first time. Man, you don't need to compare yourselves with LTT, your videos are way more professional and technical, and you still keep the content at a very high level. Salute to the cameraman who stays in the server room the whole time! Thank you for this video.

  • @joncepet
    @joncepet 2 years ago +4

    I was internally screaming when you just pulled the power cords from the servers! Nice video though!

    • @VincentHermes
      @VincentHermes 2 years ago

      In my datacenter you can pull anything, even full nodes, and you're not going to notice a thing. If you build well, you can sleep well. If anything exists only once, it's not good.

  • @MaxPrehl
    @MaxPrehl 2 years ago

    Got this whole video as an ad before a Tom Lawrence vid. Watched the whole thing and subscribed! Really enjoyed the end-to-end experience here!

  • @wbrace2276
    @wbrace2276 2 years ago

    Am I the only one that had to stop the video, then go back, just to hear him say “shit the bed” again?
    Point is, while I like your tech tips videos, this was a welcome change. Go fast, break shit, see what happens. Love it

  • @redrob2230
    @redrob2230 2 years ago

    In Bubbles’ voice: “What the hell, Ricky? You’re pulling parts out, that can’t be good.”

  • @gorgonbert
    @gorgonbert 2 years ago

    The transfer probably failed because the file handle died and the client wasn’t able to re-establish it on one of the surviving cluster nodes. CIFS/SMB has gained a ton of mechanisms over time for re-establishing file handles, and I’ve lost track of which one is which. Both the server and the client need to support a given mechanism for it to work. Would love to see a video about how your solution solves that problem.
    I would like to host SQL databases via CIFS/SMB (I have reasons ;-) )
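
As a purely illustrative aside (not from the video or from 45Drives): the server-side knobs usually discussed around SMB handle re-establishment on Samba look roughly like the snippet below, written out by a small Python helper. The share name and path are placeholders, and whether an open handle actually survives the loss of a gateway node depends on the Samba version and how the cluster is put together.

```python
# Illustrative only: appends a share section with the Samba parameters
# commonly paired with durable handles. The parameter names are real
# smb.conf options; the share name and path are made up for this sketch.
from pathlib import Path

SHARE_SNIPPET = """
[clusterfs]
    path = /mnt/cephfs/share
    # Durable handles let a client reclaim an open file after a short
    # disconnect; Samba expects the kernel-side locking features below
    # to be disabled so it alone owns the open-file state.
    durable handles = yes
    kernel oplocks = no
    kernel share modes = no
    posix locking = no
"""

def append_share(conf_path: str = "/etc/samba/smb.conf") -> None:
    """Append the sketched share definition to an existing smb.conf."""
    with Path(conf_path).open("a", encoding="utf-8") as conf:
        conf.write(SHARE_SNIPPET)

if __name__ == "__main__":
    append_share()
```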

  • @Darkk6969
    @Darkk6969 1 year ago

    Wow.. 72TB used out of 559TB available. It's gonna take a while for Ceph to check everything after being shut down like that. How is the performance of the VMs on Proxmox while that happens?
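
A small illustrative helper (not from the video) for watching that settling process: it polls `ceph status` in JSON form and reports the placement groups that are not yet active+clean. The JSON field names (pgmap, pgs_by_state) match recent Ceph releases and may differ slightly on older ones.

```python
# Illustrative only: waits until every placement group reports active+clean
# after an unclean shutdown, printing the states still in flight.
import json
import subprocess
import time

def pg_states() -> dict:
    """Return a mapping of PG state name -> PG count from `ceph status`."""
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    pgmap = json.loads(out)["pgmap"]
    return {s["state_name"]: s["count"] for s in pgmap["pgs_by_state"]}

def wait_until_clean(poll_seconds: int = 10) -> None:
    """Print the PGs still peering/degraded/recovering until all are clean."""
    while True:
        pending = {k: v for k, v in pg_states().items() if k != "active+clean"}
        if not pending:
            print("all PGs active+clean")
            return
        print("still settling:", pending)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    wait_until_clean()
```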

  • @ztevozmilloz6133
    @ztevozmilloz6133 2 years ago

    Maybe you should try a Samba cluster, I mean CTDB. By the way, my tests seem to suggest that for file sharing it's better to mount CephFS than to create an RBD volume from the VM. But I'm not sure....
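
To make the comparison in this comment concrete, here is a hedged sketch of the two approaches: mounting CephFS directly versus mapping an RBD image and putting a local filesystem on it. Pool, image, and mount-point names are placeholders, and it assumes a client that already has /etc/ceph/ceph.conf and a keyring in place.

```python
# Illustrative only: the two ways of backing a file share with Ceph that the
# comment contrasts. All names here are placeholders.
import subprocess

def mount_cephfs(mountpoint: str = "/mnt/cephfs") -> None:
    """Mount CephFS directly: every gateway node sees one shared POSIX
    filesystem, which is the natural fit for a clustered Samba/CTDB setup."""
    subprocess.run(
        ["mount", "-t", "ceph", ":/", mountpoint, "-o", "name=admin"],
        check=True,
    )

def map_and_mount_rbd(pool: str = "rbd", image: str = "sharedisk",
                      mountpoint: str = "/mnt/rbd") -> None:
    """Map an RBD image and mount a local filesystem on it. Only one node can
    safely mount a non-clustered filesystem at a time, which is the usual
    argument for CephFS over RBD when the goal is file sharing."""
    device = subprocess.run(
        ["rbd", "map", f"{pool}/{image}"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    subprocess.run(["mount", device, mountpoint], check=True)
```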

  • @toddhumphrey3649
    @toddhumphrey3649 2 years ago

    The new platform is great, keep the content coming. This is like a 45Drives Cribs episode. Lol. Love the product, keep up the great work.

    • @45Drives
      @45Drives  2 years ago

      Thanks, Todd. We really appreciate the feedback.

  • @Exalted6298
    @Exalted6298 1 year ago

    Hi.
    I am trying to build Ceph on a single node using three NVMes (the CRUSH map has been modified, like this: 'step chooseleaf firstn 0 type osd'). But the results of the 4K random read/write test were very poor, and I don't know what the reason is. The FIO results on RBD were RND4K Q32T16: 4179.80 IOPS read / 10368.50 IOPS write. Testing directly on the physical disk gives RND4K Q32T16: 35262.53 IOPS read / 32934.7 IOPS write.

    • @45Drives
      @45Drives  1 year ago +1

      The Ceph BlueStore OSD backend improved OSD performance and latency considerably over Filestore. However, it was not designed specifically for NVMe, since NVMe was not very prominent when it was being developed.
      There are some great optimizations for flash, but there are also limitations. Ceph has been working on a new backend called SeaStore, which you may also find under the name Crimson if you wish to take a look; however, it is still under development.
      With that being said, the best practice for getting as much IOPS as possible out of NVMe-based OSDs is to allocate several OSDs to a single NVMe. Since a single BlueStore OSD cannot come close to saturating an NVMe's IOPS, we partition each NVMe into at least 3 partitions and then create 3 OSDs out of a single NVMe.
      Your mileage may vary, and some people recommend 4 OSDs per NVMe, but 45Drives recommends 3. This should definitely give you additional performance.
      Hope this helps! Thanks for the question.
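
A hedged sketch of how that advice, and the single-node CRUSH change from the question above, is typically carried out (not taken from 45Drives tooling): `ceph-volume lvm batch --osds-per-device` splits each NVMe into several OSDs, and a replicated CRUSH rule with an `osd` failure domain is the command-line equivalent of the `step chooseleaf firstn 0 type osd` edit. Device paths and the rule name are placeholders; the `--report` call previews the layout without touching the disks.

```python
# Illustrative only: device paths and the rule name are placeholders.
import subprocess

NVME_DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"]

def preview_osd_layout(osds_per_device: int = 3) -> None:
    """Dry run: show how ceph-volume would carve each NVMe into OSDs."""
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "--osds-per-device", str(osds_per_device), *NVME_DEVICES],
        check=True,
    )

def create_osds(osds_per_device: int = 3) -> None:
    """Actually create the OSDs (destructive; run the preview first)."""
    subprocess.run(
        ["ceph-volume", "lvm", "batch",
         "--osds-per-device", str(osds_per_device), *NVME_DEVICES],
        check=True,
    )

def single_node_crush_rule(rule_name: str = "replicate-by-osd") -> None:
    """Replicated CRUSH rule with an 'osd' failure domain, the CLI equivalent
    of the 'step chooseleaf firstn 0 type osd' edit mentioned above."""
    subprocess.run(
        ["ceph", "osd", "crush", "rule", "create-replicated",
         rule_name, "default", "osd"],
        check=True,
    )

if __name__ == "__main__":
    preview_osd_layout()
```

Pools would then be pointed at the new rule (for example with `ceph osd pool set <pool> crush_rule replicate-by-osd`) before re-running the benchmark.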

  • @LampJustin
    @LampJustin 2 years ago

    Haha awesome video! ^^ Can't wait for more of these!

  • @alexkuiper1096
    @alexkuiper1096 2 years ago

    Really interesting - many thanks!

  • @blvckblanco2356
    @blvckblanco2356 10 months ago

    I'm going to do this in my office 😂

  • @intheprettypink
    @intheprettypink 2 years ago

    That has got to be the worst lab gore in a server I've seen in a long time.

  • @bryannagorcka1897
    @bryannagorcka1897 2 years ago

    A Proxmox cluster of one, hey? lol. Regarding the Windows file transfer that stalled, it probably would have recovered after a few more minutes.

  • @user-cl3ir8fk4m
    @user-cl3ir8fk4m 2 years ago +3

    Ooh, yay, another video about people who have enough money to set their own shit on fire during a global recession while I’m combing through six-year-old cold storage drives to find a couple extra GB of space! Not tone-deaf in the least!

  • @logananderon9693
    @logananderon9693 2 years ago

    You should nose breathe more.