I Colocated My HomeLab in a Data Center

  • Published Jun 8, 2024
  • After a few months of planning and building, I colocated some of my homelab servers in a data center! There were so many unknowns, like: how much does colocating a server cost? Do you need to bring your own networking? How do you even prepare for this? Join me as we figure this all out! And don't worry, I still have servers at home too!
    - Huge thanks to William for inviting me to his colo rack!
    - Thanks to Ubiquiti for sending a UDM for my rack!
    - Find your UniFi cloud gateway here: l.technotim.live/ubiquiti
    Video Notes: technotim.live/posts/homelab-...
    Support me on Patreon: / technotim
    Sponsor me on GitHub: github.com/sponsors/timothyst...
    Subscribe on Twitch: / technotim
    Become a YouTube member: / @technotim
    Merch Shop 🛍️: l.technotim.live/shop
    Gear Recommendations: l.technotim.live/gear
    Get Help in Our Discord Community: l.technotim.live/discord
    Tinkers channel: / @technotimtinkers
    (Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
    00:00 - Why did I move my servers to a data center?
    00:44 - How much does colocation cost?
    01:49 - Preparing servers for the data center
    02:47 - Taking servers to colocation
    03:26 - William on using a data center as HomeLab space
    03:49 - Physical Security
    04:15 - Racking and networking servers
    04:57 - Testing servers
    05:44 - Testing Site to Site VPN
    06:21 - Next Steps, I need YOUR help!
    Thank you for watching!
  • Science & Technology

Comments • 355

  • @aure_eti · 2 months ago +322

    "it sounded like this inside" didn't hear any difference with my PowerEdge running behind me lol

    • @jonathan.sullivan · 2 months ago +7

      PowerEdge as in singular? lol

    • @xtlmeth · 2 months ago +2

      lol as my full rack is humming away 15ft away from me.

    • @aure_eti · 2 months ago +4

      @@jonathan.sullivan Yes, as only one is currently powered up. And it's an R730xd, it's not that loud usually. Except when it's 25°C in my room

    • @JohnWeland · 2 months ago +3

      dude right, I have three running about 30' from me. I was like dang that's quiet

    • @GabrielFoote · 2 months ago

      Haha, relatable

  • @FinlayDaG33k · 2 months ago +65

    One major downside of the way you've set it up: if your UDM dies, your entire cluster state may be compromised, as nodes are no longer able to see each other.
    I would personally have added a 2-port NIC (I bought some refurbished SFP+ ones for 60 bucks a pop, though I'm from Europe so your market may differ) in that unpopulated PCIe slot, then hooked all nodes directly to each other in a mesh (A->B, B->C, C->A) with some SFP+ DAC cables (they cost like 15 bucks a pop from fs).
    Then use the onboard NICs _just_ for traffic leaving the cluster.
    It would add some extra costs (and some configuration complexity), but the benefits are worth it in my opinion:
    - Ceph can now run over dedicated interfaces (that are also faster when using SFP+), lowering the burden on the other interfaces (less congestion).
    - Your UDM failing only affects your uplink (your cluster state itself will otherwise remain unaffected).
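A minimal sketch of the full-mesh wiring this comment describes, using the common routed-setup pattern on three Proxmox nodes. The interface names, the 10.15.15.0/24 subnet, and the node addresses are assumptions for illustration only:

```ini
# /etc/network/interfaces fragment on node A (10.15.15.1); B and C mirror it.
# Each SFP+ port is a direct DAC link to one neighbor, reached via a /32 route.
auto ens1f0
iface ens1f0 inet static
    address 10.15.15.1/24
    up ip route add 10.15.15.2/32 dev ens1f0
    down ip route del 10.15.15.2/32

auto ens1f1
iface ens1f1 inet static
    address 10.15.15.1/24
    up ip route add 10.15.15.3/32 dev ens1f1
    down ip route del 10.15.15.3/32

# /etc/ceph/ceph.conf fragment: keep Ceph replication on the mesh subnet,
# leaving the onboard NICs for traffic entering/leaving the cluster.
[global]
cluster_network = 10.15.15.0/24
public_network = 10.15.15.0/24
```

With this layout the UDM only ever carries cluster-external traffic, so losing it leaves Ceph and corosync communication between the nodes intact.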

    • @juri14111996 · 2 months ago

      And the 8 LAN ports on the UDM Pro are internally connected over just 1 Gb/s. It's basically a 9-port gigabit switch: 8 ports facing the outside, 1 used internally to connect to the rest of the system.

  • @zeddy893 · 2 months ago +193

    Dude, you made it into the 511 building... that's insane! That's the hub where all of the Midwest backbone is located. I'm so jealous.
    Just a bit of background: when US Bank Stadium was being constructed, there was an idea to demolish the 511 building, since it appears just like any ordinary building. However, they were told that wasn't an option. It was then that they discovered the true significance of that building.

    • @TechnoTim · 2 months ago +37

      Maybe that explains the sweet, sweet ping time! Thanks for the history!

    • @zeddy893 · 2 months ago

      @@TechnoTim Yes, the company I work for utilizes a direct connection to the backbone, connecting all the way back to our main data center. It's not an inexpensive setup, and that location serves as a major hub for all the leading internet providers. Depending on your access level, if you venture down to the basement, you'll come across secure rooms that are off-limits, reserved for major companies like CenturyLink, Xfinity, Spectrum, and others.

    • @stephendennis5969 · 2 months ago +28

      Haha, I would have loved to have been the one who told the developers: "No, you can't tear down the major communications hub for the city and half the country."

    • @TheDillio187 · 2 months ago +9

      The 511 building is legendary. The TW Telecom colo in Minnetonka, Cyxtera in Shakopee, and the 2 Databank colos are good visits, too.

    • @mattaudio · 2 months ago +1

      I choose ISPs based on 511 peering.

  • @izatt82 · 2 months ago +154

    A tip: mount the UDM on the back side of the rack and gain back the rack space you used to run cabling to the back.

    • @TravisNewton1 · 2 months ago +30

      Exactly. Those cables are now consuming an additional U. Even in a shared rack, that extra U costs something, and a U wasted on cables is an expensive waste.

    • @TheSHELMSY · 2 months ago +18

      You need to worry about airflow and where the UDM pulls its air. If it's front-to-back like all the other servers, it would be pulling hot air from the back of the cabinet and dumping it out the front. This is why most enterprise network switches have models with back-to-front airflow.

    • @TechnoTim · 2 months ago +41

      Thank you! Great idea! I proposed this a few times but they said it was fine in front. We'll see if they change their minds once the rack starts to fill up! 😂

    • @jonathan.sullivan · 2 months ago +11

      I'm actually surprised there isn't a top-of-rack switch they all just plug into and get their static IPs from. I rarely had to bring my own networking equipment for my colos.

    • @hw2508 · 2 months ago +3

      It might be tight, but I think there was space to run the cables to the sides, if this 1U ever becomes a problem.

  • @zeddy893 · 2 months ago +95

    Also, regarding your question: given that your ISP is located in the same data center as you (lol), I recommend sticking with the hardware site-to-site VPN. It's hard to find a better or more reliable connection. From my perspective, opting for a service like ZeroTier would only introduce unnecessary overhead to your current setup.

    • @Krushx0 · 2 months ago +9

      I would also stick with the site-to-site VPN; I would never trust others to handle or be part of my private VPN connection in any manner. Tried and tested through the ages. The things you should ask yourself are: why would you replace it? What is it you're not satisfied with in the current site-to-site setup? What benefit would the alternative give you over a site-to-site VPN, and would that benefit actually improve your situation or possibilities?

    • @victorzenteno1166 · 2 months ago +2

      Hardware site to site for sure, nice setup

    • @RogueRonin2501 · 2 months ago +1

      What about the option of a self-hosted ZeroTier controller? I've been using that for quite a while now and got lots of benefits from it, but I'm not keeping my hardware in a data center. ZeroTier can also be a good tool for granular access control.

    • @zeddy893 · 2 months ago

      Underneath ZeroTier and all those other easy configurations, those VPNs run on WireGuard. If you're hosting the solution at home, self-hosting is great as long as your ISP's peering is good. If the ISP doesn't have good peering, your VPN can become unstable. However, self-hosting does give you some privacy if you have privacy concerns.

    • @denton3737 · 2 months ago

      As an ISP network engineer, I second this.
      Although you can do some cool things with Tailscale and ZeroTier, what you want from co-located equipment is reliability. The more complex things get, the more likely they are to have problems.

  • @JeffGeerling · 2 months ago +50

    Hey I like that shirt you're wearing at the end! 😂

    • @TechnoTim · 2 months ago +8

      Thanks for a great design Jeff!

    • @AndyIsHereBoi · 2 months ago

      Funny seeing you here

    • @JeffGeerling · 2 months ago +4

      @@TechnoTim You're welcome! I have your dark mode shirt too, it just hasn't hit the rotation for a day when I've been recording yet. But it'll show up soon enough :)

  • @EricInTheNet · 2 months ago +20

    I went Tailscale after having OpenVPN; the biggest upside was the integration of every device: iPhone, iPad, random laptops, NAS in a tertiary location. Suddenly they were all part of an overlay network. Since then, I have literally forgotten where some devices are located because it has become so seamless. 😂
    100% recommend Tailscale. I just wish the UDM had native support (in the mgmt interface) for a Tailscale exit node.

  • @keyboard_g · 2 months ago +18

    It would be interesting to see ping time over a Tailscale network to those same machines.

  • @ivanmalinovski7807 · 2 months ago +36

    Man, $45/month is so cheap for that service. I wish we had something like this in Denmark.

    • @emanuelpersson3168 · 2 months ago

      I bet there is in Copenhagen?

    • @ivanmalinovski7807 · 2 months ago

      @@emanuelpersson3168 Nothing that I've been able to find. It all targets organisations at much higher costs.

    • @RobinCernyMitSuffix · 2 months ago +4

      Start your own community rack? It's not that common, but some computer clubs and similar groups do it: they rent a rack, or multiple, and you all share the expenses, usually with a little extra for the organization.

    • @karliszemitis3356 · 2 months ago

      Try contacting hackerspaces. For example, when I lived in CPH I went to Labitat. Okay, they don't really have a data center, but you could get rack space with decent internet for cheap. Or they would know a place to colocate cheaply.

    • @kjartannn · 2 months ago

      I very barely know a guy with colocation space in Denmark. His company is called something like Stacket Group (I think?) and he runs some brands and stuff from it. Maybe you can get in contact with them and see if they will rent you space. I believe they're connected via GlobalConnect and TDC.

  • @armedscubasteve · 2 months ago +4

    I've always wanted to colocate, so this is pretty cool from a homelab perspective of how this all works. Yeah, I can look at colocation videos online, but probably none from a homelabber. Thanks Tim!

  • @dhelmick · 2 months ago

    This is awesome! I moved to the Twin Cities a year and a half ago and to know these things are a short drive away is really neat. I am currently working on my RHCSA cert and you have been a good source of motivation and inspiration during that journey. Thank you for doing what you do.

    • @TheDillio187 · 2 months ago

      there are a lot of colo facilities here. Lots of cool stuff to see out there.

  • @matthewlandon1697 · 2 months ago

    I did something similar a few years back and am still continuing to do it! It's great to have it in a DC where the temperature remains the same and you can add/expand where required 🎉

  • @diegoalejandrosarmientomun303

    Amazing video! Thanks for all your advice Tim, it has helped me out a lot during my homelab journey. Now it's time to take it to the next level 🥳

  • @seantellsit1431 · 2 months ago +11

    BTW, you can save 1U of space (above your UDM) by locating your UDM at the back of the rack... this is where all of your eth/SFP ports live for your servers. This is how most people network their servers. Also a reason why enterprise switches have back-to-front airflow.

    • @npham1198 · 2 months ago

      That’s also if depths are within reason

    • @kyrujames · 2 months ago

      The UDM probably doesn't have back-to-front airflow and would just be eating hot air at that point.

  • @ronm6585 · 2 months ago

    Looks good Tim, thanks for sharing.

  • @TylerBundy260 · 2 months ago +3

    Good ol' 511. I'm definitely going to look into getting some stuff moved!

  • @Trains-With-Shane · 2 months ago +6

    It's been a couple of years since I've been in a data center, but it's amazing how really cold air can become really warm air in the very short amount of time it's inside the components of a server rack. I was able to observe when they built out the new air handler for the data center at work, and the ducts were big enough to walk around in... upright!

    • @marcogenovesi8570 · 2 months ago +1

      they watched Die Hard and thought "why crawling through when you can walk"

  • @edb75001 · 2 months ago +1

    Thanks for this. I've been thinking of doing something similar here in the Dallas/Ft. Worth area. Running mine at home is getting loud and expensive, and it dumps so much heat.

  • @kenrobertson8239 · 2 months ago

    Would love to see you cover more colo-type stuff! I had equipment in a colo back in the 2000s and loved it, and just recently set up some stuff in a colo to augment my homelab.
    In colo, power is almost always your biggest expense, so half rack vs. full rack is a small difference for a simple circuit. I've had high setup costs before when they have to set up additional racks. It's odd they make you pay for them to set up the space for you.

  • @CRK1918 · 2 months ago +1

    Hi from Minnesota! I've watched your YouTube channel for a while, but I did not know you live so close to me.

  • @mllarson · 2 months ago +1

    Oohh, I had no idea you also were in Minnesnowta! Hope you're ready for the almost two feet of snow coming for us this weekend ❄

    • @TheDillio187 · 2 months ago

      I want to thumb down this comment but you're not a bad person. lol.

  • @CharlieMartorelli · 2 months ago

    Cool project, can't wait for more videos.

  • @jonathan.sullivan · 2 months ago +1

    Thanks for showing the legwork. I kinda had a feeling you got in on a deal when you agreed to colo; prices are insane these days. Cheaper to rent a dedi server and not worry about hardware failure costs.

  • @mne36 · 2 months ago

    I was thinking about doing this for a while. Excited to watch this video.
    Fellow Minnesota resident 😄

    • @mne36 · 2 months ago

      Thank you for the very educational video! Somehow every time I think of doing something, you make a video a month later explaining how it can be done haha

  • @DavidPerrettGM · 2 months ago

    Hey Tim, cable ties and wire trigger my DC-OCD; velcro is your friend. I would also caution you on the UniFi in the DC: having a single point of failure in front of the cluster could lead to sad times. OPNsense clustering is extremely robust, it's also getting a lot more updates than pfSense, and it runs on lightweight hardware (I repurposed a couple of old Sophos XG-115s about 18 months ago, super stable). Love the vids, thanks for putting them out there.

  • @jlt4219 · 2 months ago

    Cool topic! Wondering what everyone uses for remote connections in their homelabs. Mesh by software vs hardware sounds like a great video idea!

  • @krystophv · 2 months ago +2

    I'd love to see some content around Nebula as an overlay network. Defined Networking has a pretty generous free tier in the hosted space.

  • @thegreyfuzz · 2 months ago +3

    I almost miss having to drive 30 miles to my ISP where my servers were colocated... 25 years ago! Back then it was a real treat to have 100M between servers with a T1 to the internet, and dealing with 2 dialup lines in MLPPP to access them from home. Looking at my full 42U cabinet now... maybe colo is a real option again?

  • @shadowperson9 · 2 months ago

    The 511 building is a pretty cool building if you're into tech. I worked out of it for a short time a few years ago. The tenant list is interesting and it has a storied history as well. My understanding is that it was built as an R & D facility for Control Data Corp. Across 6th St to the SW is the Strutwear Knitting Company building, made historic for other reasons.

  • @jsbaltes · 2 months ago

    Wow! Cool stuff. Pinging your remote units faster than the ones in your house!? That had to feel good.

  • @G4rl0ck · 2 months ago +1

    Been using tailscale and it keeps blowing my mind!

  • @SierraGolfNiner · 2 months ago

    My buddy and I did this a few years ago. Around here there's a company, Hurricane Electric, that basically runs the Costco of data centers. They're mostly a transit provider, but have a few LARGE data centers in the Bay Area. You can get 1 gig, 15 amps, full cab for $400/mo.

  • @mrhidetf2 · 2 months ago

    I use Nebula as an overlay network and am really happy with it so far. Seamless connection between all the server and client devices no matter where they are, as long as there is an internet connection.

  • @youwut8378 · 2 months ago

    Omg, to find out that you're in Minnesota! This video is awesome!

  • @Whiskey7BackRoads · 2 months ago +2

    I vote for Tailscale: for one, I would like to see more of it in videos, and it works great. I have remote repeater sites connected and 2 ranches in different states. It does require very consistent updates, but that seems to be the only drawback besides not hosting it myself. Thanks for the videos, I enjoy them a lot.

  • @Deepfreezing · 2 months ago

    Excellent move! Not having to worry about power issues is a big one.
    Here are my 2¢:
    Moving the switch to the hot side of the rack is something you want to think through. I dealt with this for years, until Cisco finally started offering fans with reverse airflow, so a) you're not obstructing the airflow in general and b) not trying to cool your switch with hot air from the servers.
    It seems there is no side cable management in the racks? I started to use slim cables: less space needed, better airflow. Plus they might fit through the side of the rack.
    If you are using single-PSU servers, you might want to invest in an ATS so you can take advantage of dual power sources. As a bonus they offer environmental monitoring, and some even offer remote access to reboot your equipment (hands up, who has had to run to a Cisco switch and pull the plug ;)

  • @GeekOfAllTrades · 2 months ago

    Sharing a shared space? It's like colo-ception!
    Flipping genius! 🖖

  • @toryelo · 2 months ago +3

    If you have full network administrative privileges, a hardware-based site-to-site VPN is the best choice, rather than mesh. Although a mesh network seems to solve many complex network configurations at first glance, from a site perspective, mesh addresses the complexity of peering between multiple sites. Moreover, you only have two sites here.

  • @martinwashington3152 · 2 months ago

    I went halfway from home DC to colo: I purchased a /28 subnet from Zen Internet while also allowing some clients to utilise the shelves within my home.

  • @smalltimer4370 · 2 months ago

    Excellent pathway, colocation being the natural evolution of homelabbing :)

  • @rlocone · 2 months ago

    Thanks for sharing.

  • @brock2633 · 2 months ago

    Didn’t know Techno Tim was in MN. I’m in the South Metro. Cool video again.

  • @movax20h · 14 days ago

    Nice indeed. I also started moving my homelab to a colo, and managed to snag some 10G connectivity, a 1U spot, good pricing, and a good location. Used modern hardware (CPU, memory, storage, NIC), and it's super speedy. Easily getting 10G to my home (and I'll expand to 25G once the colo owners upgrade their gear to 25 or 100G), and getting 0.54 ms RTT from home. Nice. I already want another server somewhere (maybe another DC), just for fun.

  • @PedroFonseca5 · 2 months ago

    Definitely a nice future video: how to build a hyperconverged dual-site Proxmox cluster using some routing and tunnelling tech.

  • @izproximity · 1 month ago

    Another side note: the 511 building is a carrier hotel as well. It has pretty much every single ISP that is in Minnesota.

  • @zuzezimzulze · 2 months ago

    Awesome idea. What speed do you get from your site-to-site VPN between the UDMs?

  • @blackphidora · 2 months ago +3

    If I were you, I would host all my coordination servers at the colo: it has a static IP, you can set up a NetBird/Tailscale subnet router, and you still have an SSH backdoor if the SDN fails. You can also set up a subnet router at home.
    The benefits will be similar to a site-to-site.

  • @kenny45532 · 2 months ago

    Hi Tim, use the Headscale control plane. I use it myself, and it could double as a good video tutorial.

  • @orsonc.badger7421 · 2 months ago

    Tim that’s so cool! Rock on sir

  • @saulgoodman1390 · 2 months ago

    Hey Tim. Sweet vid. Weird question, though: what glasses do you wear? I can never find any I like, but I like those ones you wear.

  • @Kaidesa · 2 months ago

    I would honestly do both. Having the hardware-based VPN is nice, but if something ever happened and that UDM messed up, then instead of a visit, with something like Twingate or Tailscale you could connect remotely and fix things, so long as the network connection were still somehow intact. Redundancy is never a bad thing.

  • @aflawrence · 2 months ago

    I think that's the old AT&T building. I did grad school in St. Paul and remember passing by that area numerous times when I went across the river.

  • @accik · 2 months ago +1

    Had issues with a self-hosted Tailscale server; I would like to see content around that. I too think that colocation is cool, but too expensive for hobby projects.

  • @jsnfwlr · 2 months ago

    I have a few VPSes on different cloud providers that I wanted to link together over a private network, plus provide access to backup storage on a server in my homelab. Since this doesn't require multiple users or access control lists, Tailscale was overkill, so I just set up my own WireGuard mesh, which has been working really well for almost a year now.
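A WireGuard mesh like this amounts to one [Peer] block per other node in each node's config. A minimal sketch, where the keys, tunnel addresses, and hostnames are placeholders:

```ini
# /etc/wireguard/wg0.conf on one VPS (tunnel IP 10.8.0.1).
# Every node lists every other node as a peer; that is what makes it a mesh.
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <this-node-private-key>

[Peer]
# second VPS
PublicKey = <vps2-public-key>
Endpoint = vps2.example.net:51820
AllowedIPs = 10.8.0.2/32
PersistentKeepalive = 25

[Peer]
# homelab backup server; AllowedIPs can also route its LAN
PublicKey = <homelab-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.8.0.3/32, 192.168.1.0/24
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0` on each node; no coordination server or ACL layer is involved, which is the simplicity being described.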

  • @thadrumr · 2 months ago +3

    FYI, the 2680v4 is a 14-core, 28-thread CPU. I know we are talking cores vs. threads, but just pointing it out.

    • @TechnoTim · 2 months ago

      Thank you, good call! You’re right, threads not cores. Editing Tim should have caught that!

  • @franciscotapia3144 · 2 months ago

    Awesome dude!

  • @JoshDike-lx8gl · 2 months ago

    Personally, friends of mine and I run Tailscale between our houses so we can back up each other's data. We also plan to soon add family to it for their backups as well.

  • @Amwfilms · 2 months ago

    Awesome journey. If what you have is safe and secure, you may just be adding latency and a speed bottleneck by using Tailscale.

  • @ronwatkins5775 · 2 months ago

    Very interesting. I'm in a similar position: I need to find a cheap colo with no frills. I have about 1/4 rack, and a shared space would be perfect. How do you go about finding these? My research so far has only turned up actual data center hosting facilities, while I'm looking for cheap rack space even if it's not 5 nines of uptime.

  • @jimmyscott5144 · 2 months ago

    I don't know the pros and cons of either one, so I'd like you to cover a little bit of both, if possible, in the next one.

  • @TomasJonssons · 2 months ago

    I am running WireGuard (not Tailscale) for my site-to-site, and then I do BGP routing between the sites, and it works just perfectly. I am using Tailscale for "road warrior"-style connectivity from my laptop/phone etc. when I'm not at home and need to connect to my DC/home.
    I think Tailscale is great, but it just didn't fit site-to-site for me, running a similar setup to what you have.
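The WireGuard-plus-BGP pattern described above can be sketched with FRR; the ASNs, tunnel addresses, and LAN subnet below are assumptions for illustration:

```ini
# /etc/frr/frr.conf on the home router (WireGuard tunnel IP 10.8.0.1);
# the colo side mirrors this with its own ASN and its own LAN subnet.
router bgp 64512
 neighbor 10.8.0.2 remote-as 64513
 address-family ipv4 unicast
  network 192.168.10.0/24
 exit-address-family
```

Each side then learns the other's LAN routes automatically over the tunnel, so adding a subnet (or a third site) means adding a `network` line or a neighbor rather than maintaining static routes by hand.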

  • @rallisf1 · 2 months ago

    I haven't used colocation for about a decade. Renting individual VPSes is way cheaper and more maintainable than anything else without sacrificing performance (as long as you pick a good host). That said, I use Tailscale for pretty much everything: my homelab, my office, my commercial servers, my clients... I love how I can manage ACLs easily and quickly give anyone access to exactly what they need. Keep in mind that I also run my own DERP server. It shouldn't make much of a difference (speed/safety), but it was easy enough to self-host.

  • @rexeus · 2 months ago

    I have a 7-node k3s cluster running on a bunch of public VPS nodes, of which 2 nodes are on home servers, all meshed via Tailscale. Tailscale has a Kubernetes operator too.

  • @TrTai · 2 months ago

    I'd stick to the site-to-site VPN; you've basically stumbled into the most ideal setup using that, and I don't see a lot of benefit to going the overlay network route in this scenario, as cool as something like Tailscale is. Awesome seeing something like plover, and I'll have to see if I can find something like that more locally. I've been kind of wanting to move some of my equipment to a colo, but even getting quotes is a bit of a headache locally.

  • @JasonTurner · 2 months ago

    Curious as to why you chose to mount your UDM in the front of the rack instead of the rear? That would have allowed you to put spacers in the front above your servers. Cleaner look, and shorter cables from switch to servers :) As for how I connect to our data center, I use Tailscale. I have a VPN client as a backup in the event that the endpoint running Tailscale in the data center goes down for whatever reason.

  • @Bill_the_Red_Lichtie · 2 months ago

    Rather than being tied to a supplier's hardware/software-dependent solution, I would set up a Tailscale/Headscale solution. There is far more flexibility in a VPN/SDN mesh than in a vendor-specific site-to-site solution.

  • @maxmustermann9858 · 2 months ago

    I would use a mesh network, mainly because I'm a big fan of Zero Trust; you also become more independent of the network at home or in the DC. But I would recommend something like Nebula. It's super fast and lightweight. It won't have a nice UI unless you use the hosted version, but when you use Ansible for everything, like setup and key rotation, it becomes really easy. On top, you can use something like NetBird (it's a German product), also a mesh VPN solution that mainly does the same but with nicer auth integration like SAML etc. That I would use for things like mobile devices or PCs, and Nebula for the backend stuff.

  • @itsthebofh · 2 months ago

    I would keep my main physical infra on the site-to-site, then set up a software-defined network for the virtual systems. That way you get the best of both worlds: the flexibility that comes with software and the reliability that comes with hardware solutions.

  • @LucS0042 · 2 months ago

    Even if you go the 'regular' VPN route, definitely try an overlay network like Tailscale (or Headscale) for the fun of it.

  • @maxmustermann194 · 1 month ago

    IF YOU PLAN FOR MULTI-GIGABIT ROUTING TO THE INTERNET: The integrated switch in the UDM Pro is like a GbE switch connected to the UDM Pro, so it always limits internet and inter-VLAN routing.
    If you don't use the internal switch but instead connect a 10G switch via the UDM Pro's SFP+ port, then you can use 10G towards the internet and for inter-VLAN routing. This drops to around 3.5 with IPS enabled.

  • @Franchyze923 · 2 months ago

    Very cool!

  • @friendlydawusky · 2 months ago

    I personally run Tailscale myself to connect my cloud and homelab stuff together for security, and only expose what I need to, when and where I need to. Though as of recently I've been looking into a self-hosted solution for privacy/security reasons.

  • @donovangregg5 · 2 months ago

    You may want to put your UDM Pro on the back side of the aisle for cable management; the heat shouldn't be too much of a concern for it. If you moved it to the back, you would also save the 1U of space where your cabling runs through.

  • @novistion · 2 months ago +1

    I have a very similar setup to yours, Tim: free colocation space from my employer, and my stuff at home. I messed with this a lot over the last year, and site-to-site, in my opinion, is the way to go (even using Site Magic, as you seem to be); the convenience (and troubleshooting) are worth it. I have Tailscale on a few devices, but that is mainly for an "oh shit" when I break something. I'll post some more in the Discord.

    • @TeslaMaxwell · 2 months ago

      Agreed, it doesn't hurt to run both. For instance, these past few days my TS exit node container was acting super weird, and rebuilding it didn't fix it... it was not until a few hours later that I discovered Snort was blocking part of the traffic. I'd definitely have both implemented for a PROD deployment.

  • @RyanJones26 · 2 months ago +1

    Tailscale with subnet routers for the win. Site-to-site VPNs are cool, but if you add a third site or more, that becomes annoying to manage unless you use something like OSPF or BGP.
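For comparison, the subnet-router approach mentioned above boils down to one node per site advertising its LAN; the subnets here are examples, and the advertised routes still have to be approved in the Tailscale admin console:

```shell
# On a Linux node at the colo: enable forwarding, then advertise the colo LAN
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-routes=192.168.10.0/24

# On a node at home: advertise the home LAN the same way
sudo tailscale up --advertise-routes=192.168.20.0/24
```

Adding a third site is then just one more subnet router rather than another pair of tunnels and a routing protocol.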

  • @jbaenaxd · 2 months ago

    I have self-hosted and cloud servers in different countries, and I connect to them from many different places, so that's why I'm using a mesh VPN. But since you have both sites with the same ISP, it doesn't really matter. If you are trying to join cloud instances to the network, though, I'd go with a mesh VPN. Sometimes I need to test something in the cloud, so I automated the instance deployment so it automatically connects to Tailscale, and after the instance is terminated or powered off for some time, it gets removed from my Tailscale account. That's very handy.

  • @xhacks519 · 2 months ago

    I have recently started to mess with Tailscale and I love it, but I don't know which would be better for this circumstance.

  • @Redd00 · 2 months ago +1

    I had a site-to-site VPN for the longest time just to connect to my permanent address, but it got less and less reliable, to the point where I installed Tailscale as a container (and a second one on another node in the cluster) and haven't looked back since. However, I think because your ISP is located in the same data center, I would just keep the site-to-site VPN.

  • @base64d
    @base64d Před 2 měsíci

    Do the switch ports on the UDM support full duplex gigabit speeds across all of the ports simultaneously? I remember reading that the switch was somehow underpowered in that regard.

  • @stephanj2261
    @stephanj2261 Před 2 měsíci

    Regarding this video, I'm curious about UniFi's Site Magic site-to-site VPN: can you have "regular" firewall rules like you'd have if you just subdivide your home network into several VLANs or use OpenVPN to connect to another UniFi network, or are there limitations that come with Site Magic?

  • @techaddressed
    @techaddressed Před 2 měsíci

    I've got part of my homelab services running in the cloud ... currently using Zerotier but migrating to Nebula.

  • @anthonydefallo9295
    @anthonydefallo9295 Před 2 měsíci

    Pretty jealous. Wish I could get some lab colo-space haha

  • @alldaytherapy2919
    @alldaytherapy2919 Před 2 měsíci

    All I can say is, Tailscale slaps brother. I have been extremely grateful that it is an available option for small homelab users like myself. It may not be a bad idea to at least test it out.

  • @Monsieur2068
    @Monsieur2068 Před 2 měsíci +1

    I am wondering if Cloudflare is in there also? When running their speed test (I am in MPLS too), it pings to MPLS but does not give an exact location.

  • @zjihf
    @zjihf Před 2 měsíci

    What kind of network connection did you get? Was the Unifi box necessary, and how much power can you draw?

  • @drkavngr1911
    @drkavngr1911 Před 2 měsíci

    I'm doing WireGuard site-to-site with my colo, but I'm also running OPNsense virtually in the colo box. I'm able to saturate my 2-gig home connection over the VPN tunnel.
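
    A minimal WireGuard site-to-site peer on the colo side looks roughly like this. The keys, hostnames, and address ranges are placeholders, not the commenter's actual setup; the key names (`AllowedIPs`, `Endpoint`, `PersistentKeepalive`) are standard WireGuard configuration.

    ```ini
    # /etc/wireguard/wg0.conf on the colo side -- placeholder keys/addresses
    [Interface]
    Address = 10.10.10.2/24
    PrivateKey = <colo-private-key>
    ListenPort = 51820

    [Peer]
    # Home router
    PublicKey = <home-public-key>
    # Tunnel IP of the home peer, plus the home LAN routed through it
    AllowedIPs = 10.10.10.1/32, 192.168.1.0/24
    Endpoint = home.example.com:51820
    # Keeps the tunnel open through NAT on the home side
    PersistentKeepalive = 25
    ```

    The home side mirrors this, listing the colo's tunnel IP and subnets in its own `AllowedIPs`.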

  • @shiysabiniano
    @shiysabiniano Před 2 měsíci

    BGP for site-to-site and an overlay network like ZeroTier with a self-hosted controller would be a great setup.

  • @ToucheFarming
    @ToucheFarming Před 2 měsíci

    I ran a few hosts years ago and looked into colocating; the cheapest places I found were in NJ, NY, or TX. Idk why they're so cheap, but they are, which is why lots of hosts have servers in those areas.

  • @marcosoliveira8731
    @marcosoliveira8731 Před 2 měsíci +1

    It's amazing how cheap tech costs are in the USA. I'd have to pay at least 5 times the prices you showed.
    I'd use hardware to connect to the remote data center.

  • @dastiffmeister1
    @dastiffmeister1 Před 2 měsíci

    Either site-to-site WireGuard or an overlay network such as Tailscale or Netbird. Preferably one of the latter.

  • @ryderholland
    @ryderholland Před 2 měsíci

    Not sure what you mean by hardware/software solution, but I would use a WireGuard VPN between the two sites. Tailscale is also a WireGuard VPN, but with some cool NAT traversal logic built in and a clean UI, so I would just stick to a regular VPN unless you can't, as that removes Tailscale's service as a point of failure.

  • @squalazzo
    @squalazzo Před 2 měsíci

    What about the additional security needed in this kind of setup? I mean, did you consider disk and data encryption so no one can access your disks in the event of theft, or remote offsite backups?

  • @nicholasl.4330
    @nicholasl.4330 Před 2 měsíci

    To answer the question at the end - yes! ZeroTier has the option of integrating with OPNsense, so the devices behind OPNsense aren’t relying on software, but you could just as easily install the ZeroTier client on your laptop and reach it just fine. That would (obviously) require you to change the UDM Pro, but that’s (again obviously) a decision for you to make. It’s a super neat solution nonetheless!

  • @annihilatorg
    @annihilatorg Před 2 měsíci

    Also verify that your power supplies are 220v capable. Most real server PSUs are full-range, but I've seen smoke on more than one occasion.

  • @PcaplLite
    @PcaplLite Před 2 měsíci

    I'm sold on overlay networks like Tailscale and ZeroTier. Enjoy the new digs!

  • @sarah1202
    @sarah1202 Před 2 měsíci +1

    Hi, I personally manage multiple colo spaces, and I use hardware: a redundant-path VPN plus a dynamic routing protocol (OSPF).
    Also be sure to have a second way in if your main tunnel goes down (e.g., your IPsec endpoint is down).
    PS: I'm not using UBNT stuff for that setup.
    Also, don't forget to document your IP usage in something like an IPAM solution, and think about a good addressing plan. It helps a lot.
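
    That addressing-plan advice can be made concrete with Python's standard `ipaddress` module: carve one supernet into per-site/per-VLAN subnets up front so home and colo can never overlap. The `10.20.0.0/16` range and the site names here are purely illustrative, not anyone's real plan.

    ```python
    import ipaddress

    # One /16 supernet split into 256 /24s; each site/VLAN gets a slot.
    supernet = ipaddress.ip_network("10.20.0.0/16")
    site_subnets = list(supernet.subnets(new_prefix=24))

    plan = {
        "home-lan":    site_subnets[0],    # 10.20.0.0/24
        "home-mgmt":   site_subnets[1],    # 10.20.1.0/24
        "colo-lan":    site_subnets[16],   # 10.20.16.0/24
        "colo-mgmt":   site_subnets[17],   # 10.20.17.0/24
        "vpn-transit": site_subnets[255],  # 10.20.255.0/24
    }

    # Print a simple IPAM-style table of the plan
    for name, net in plan.items():
        print(f"{name:12} {net}  ({net.num_addresses - 2} usable hosts)")
    ```

    Leaving whole blocks unallocated (here, most of the /16) is what makes adding a third site later painless, VPN or no VPN.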

    • @TechnoTim
      @TechnoTim  Před 2 měsíci +1

      Great tips, thank you! I wish I were better at networking!

  • @GodAtum
    @GodAtum Před 2 měsíci +1

    Why didn't you consider Hetzner servers, or just use DigitalOcean or AWS?

  • @autohmae
    @autohmae Před 2 měsíci

    In the Netherlands we have an organization which does something similar: Coloclue
    (I'm not with them, because they are on the other side of the country)

  • @user-ky1ud6zx7h
    @user-ky1ud6zx7h Před 2 měsíci

    How is this with electricity costs? Are they typically included in the monthly cost, or does one need to pay them additionally depending on how much one uses?

  • @SHUTDOORproduction
    @SHUTDOORproduction Před 2 měsíci

    It's no question: do the site-to-site VPN. It's more secure and easier to configure and manage, the whole 9 yards. Also, as some have already said, you would likely benefit from having the UDM in the back; it typically only goes in front in homelab scenarios or in dedicated all-networking racks. That's pretty cool though, I never knew you were so close; I used to work for an ISP/MSP that owns the fiber into that stadium and colocates at that DC, although I never went there.