Monolith vs Microservices vs Serverless

  • Published 12 May 2023
  • Today, we'll do a comparison between Monoliths, Microservices and Serverless for backend architecture. There are a few factors to consider.
    Follow me on Twitter: / ryancodez

Comments • 231

  • @codewithryan
    @codewithryan  1 year ago +71

    Here are some things worth mentioning:
    - When using a monolith in a production environment, you should definitely scale horizontally for increased reliability/uptime. It's still more reliable than running a single process.
    - For single-threaded runtimes like Node.js, the process can only use a single thread, so you actually *have to* scale horizontally, either across machines or across processes on one machine (see the cluster sketch at the end of this comment). Scaling Node.js vertically only helps with memory availability, but not CPU performance. (thx Shahab Dogar for pointing this out)
    - In microservice architecture, the "pure" approach is for each service to have its own database/store so that the DB doesn't become the single point of failure. If my examples took that approach, there would be 2 databases: one for the Auth service and another for the Products service. However, in real life this approach isn't always taken, because you'd eventually end up with 20+ different databases that need to be secured, backed up, replicated, upgraded, etc., which is a maintainability nightmare. Nonetheless, if you want full decoupling, then you should go with the pure approach.
    I'll periodically update this comment with any other pertinent info/corrections.
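
    To make the horizontal-scaling point above concrete, here is a minimal TypeScript sketch (not from the video) of running one Node.js worker per CPU core on a single machine with the built-in node:cluster module; the port number is arbitrary.

    ```ts
    import cluster from "node:cluster";
    import os from "node:os";
    import http from "node:http";

    if (cluster.isPrimary) {
      // One worker per core; the primary process only supervises.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      // Replace a worker that dies so a single crash doesn't take the whole app down.
      cluster.on("exit", () => cluster.fork());
    } else {
      // Every worker runs the same monolith; incoming connections are distributed by the primary.
      http
        .createServer((_req, res) => res.end(`handled by pid ${process.pid}`))
        .listen(3000);
    }
    ```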

    • @danielgruner
      @danielgruner 1 year ago +4

      Could you pin it, please? Then it's always at the top of the comment section. 🙂

    • @kalmanjudin1336
      @kalmanjudin1336 1 year ago +1

      Thank you for this addition. I almost commented that the microservices were referencing the same DB, which adds coupling that, according to several posts I've read, is not acceptable in the world of microservices. But your video and this comment made me realize that in the real world, architectural paradigms adjust to practical constraints. This video also reminded me that different architectural ideas are like network topologies: each has its pros and cons, but in the end the Internet uses all of them.

    • @jibreelkeddo7030
      @jibreelkeddo7030 1 year ago +1

      Some details to clarify:
      Node.js is single-threaded, but when designed correctly that is rarely a bottleneck outside FAANG scale, because by design you should only be using Node to dispatch asynchronous tasks, like DB queries.
      Single threaded apps DO NOT SCALE HORIZONTALLY!
      Node.js spawns *child tasks* on the event queue that take advantage of additional cores, allowing it to handle a million+ requests per second in ideal cases.
      Yes, Node.js is single-threaded, but because you should use it to spawn “child tasks” onto the event queue, it's not crazy to treat it as a multithreaded program where Node is the main thread and the child tasks (normally written as promises) run on child threads.
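
      To illustrate what "dispatching asynchronous tasks" looks like in practice, here is a minimal TypeScript sketch (the queryDb helper is hypothetical, standing in for any async driver call): the single JavaScript thread only starts the queries; the waiting happens in the database and in libuv, not on the event loop.

      ```ts
      // Hypothetical async helper, e.g. backed by pg, mysql2, or an HTTP call.
      declare function queryDb(sql: string, params?: unknown[]): Promise<unknown[]>;

      async function loadDashboard(userId: string) {
        // All three queries are in flight at the same time on one thread.
        const [user, orders, notifications] = await Promise.all([
          queryDb("SELECT * FROM users WHERE id = $1", [userId]),
          queryDb("SELECT * FROM orders WHERE user_id = $1", [userId]),
          queryDb("SELECT * FROM notifications WHERE user_id = $1", [userId]),
        ]);
        return { user, orders, notifications };
      }
      ```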

    • @vuktodorovic4768
      @vuktodorovic4768 1 year ago

      @@jibreelkeddo7030 Am I right that managing Node.js with pm2 (or doing the same thing manually) to run it in cluster mode, using the machine's CPU count minus 1 (leaving one core for the OS to run smoothly) as the number of spawned processes that share the same network socket, lets you use the machine's CPU practically to the maximum of its capability? And then you can scale that vertically to get more memory/CPU power, or maybe even buy another machine with the exact same setup and put a load balancer in front of both of them?

    • @johnswanson217
      @johnswanson217 1 year ago +1

      NodeJS isn't single threaded. V8's event loop has a single-thread architecture.
      LibUV observes and reacts to multiple file/socket descriptors' status asynchronously, and it uses 4 to 1024 threads as required.
      So the underlying OS networking/filesystem stack will benefit from the increased thread count, avoiding head-of-line blocking.
      Also, to handle large buffers, you can always utilize additional worker threads to handle them on multiple event loops.
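
      A minimal TypeScript sketch of the worker-thread idea mentioned above, assuming Node's built-in node:worker_threads module (the checksum work is just a stand-in for any CPU-heavy job on a large buffer):

      ```ts
      import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

      if (isMainThread) {
        // The main event loop stays free to serve requests while the worker crunches data.
        const worker = new Worker(__filename, { workerData: { size: 10_000_000 } });
        worker.on("message", (sum) => console.log("checksum from worker:", sum));
        worker.on("error", (err) => console.error("worker failed:", err));
      } else {
        // Runs on a separate thread with its own event loop.
        const buf = Buffer.alloc(workerData.size, 1);
        let sum = 0;
        for (const byte of buf) sum += byte;
        parentPort?.postMessage(sum);
      }
      ```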

  • @mr.random8447
    @mr.random8447 1 year ago +152

    Microservices don't always mean strong domain boundaries; you can have a distributed monolith (aka a nightmare): services that depend on each other and aren't decoupled.

    • @96shahab
      @96shahab 1 year ago +12

      From what I've seen, this unfortunately happens a lot

    • @ruyvieira104
      @ruyvieira104 1 year ago +5

      ball of worms pattern

    • @utubetvux5170
      @utubetvux5170 1 year ago +1

      That's what we did for our own service T_____T

    • @dandogamer
      @dandogamer 1 year ago +16

      Tends to happen when companies build microservices first without fully knowing what they're building. I find that if you go the other way and cut a big monolith up into microservices, it works out better.

    • @DevCasey
      @DevCasey 1 year ago +1

      Help, am here

  • @judewestburner
    @judewestburner 11 months ago +29

    We had a CTO who was completely bought into microservices in their truest form, allowing different teams to code in different languages on different backend infrastructure.
    When those teams blow up and the CTO is fired, you're left supporting small pieces of functionality written in unfamiliar languages that you end up rewriting in familiar ones.

    • @fennecbesixdouze1794
      @fennecbesixdouze1794 2 months ago

      The phrase "teams blow up and the CTO is fired" basically just means "everything is going terribly", which means your comment reduces to "when everything is going terribly, then everything is terrible", which is true but not a meaningful contribution to the discussion.

  • @Voidstroyer
    @Voidstroyer 1 year ago +27

    That is why I love the Elixir programming language. It makes writing monoliths easier because the language kind of behaves like microservices. The language uses something called supervisors which spin up processes and monitor them (these are not OS processes, but Erlang VM processes). If a process is killed unexpectedly, then the supervisor will spin up a new process (kind of like how Kubernetes starts/restarts pods if they go down). This removes the single point of failure you mentioned. Elixir also runs on top of the Erlang VM, which is already built to scale both vertically and horizontally. It also has ETS, which is like a built-in Redis. If you use Phoenix (which you should if you are building a backend) you also get PubSub out of the box, and it is set up to automatically connect to your cluster if your app is distributed. You don't get Rust or CPP level performance (even Go is still slightly faster than Elixir), but you do get a lot of other benefits.
    The biggest thing people complain about when it comes to Elixir is that it is dynamically typed.

    • @codewithryan
      @codewithryan  1 year ago +7

      Interesting. I’ve never worked with Elixir but this makes me want to take a look!

    • @aslkdjfzxcv9779
      @aslkdjfzxcv9779 1 year ago

      nice. need to take a look.

    • @lunarmagpie4305
      @lunarmagpie4305 1 year ago

      Have you checked out Gleam? It's a typed BEAM lang. It's one of my favorite languages right now because it's simple and the BEAM is amazing. The biggest issues with it right now are that it's tiny so there aren't many packages, the erlang package doesn't include bindings to ETS tables, timers, etc. so you need to do that yourself, and there are no macros so you can't easily use a lot of Elixir libraries.

    • @Nellak2011
      @Nellak2011 5 months ago +1

      So I actually looked into using Elixir as a full-stack language. While I like its concurrency model and its functional paradigm, I don't like Phoenix.
      I looked into Phoenix as a potential replacement for Next.js, as I am a front-end dev first, back-end dev second.
      At first Phoenix seemed promising, but later I realized that it is completely backend-rendered, even for client-side operations!
      So if a user simply wanted to do a client-side operation like sorting a list (where the order doesn't matter to the back end), they would have huge latency!
      Imagine pressing sort and then it sorts 1 second later. The only solution is to somehow have CDN nodes distributed for every user, but that is complex.
      What I found to be the solution to the JavaScript problem (JS being a shit language to use) is to actually use ClojureScript on the front end and Clojure or Elixir or anything on the backend. ClojureScript with Reagent or Fulcro will give you that functional expressiveness and elegance on the front end while also giving you the option to SSR, CSR, or statically generate sites just like Next.js.
      I would look into Clojure/ClojureScript if you want a good full-stack experience.

    • @Voidstroyer
      @Voidstroyer 5 months ago

      @@Nellak2011 Phoenix LiveView has something called Phoenix hooks where you can have JavaScript that runs purely on the client. So your example of sorting an array purely on the client is still possible using this method. There is even a video of someone using Svelte connected to LiveView via the Phoenix hooks method. In your case it seems you probably didn't look at LiveView (or maybe not as closely), but LiveView does indeed allow you to do client-side stuff, either using the JS interop or with hooks. This way you aren't needlessly sending requests to the backend.
      Edit: When I said "using Svelte connected to LiveView" I meant that the Svelte app still runs entirely inside Phoenix. It is still just 1 app that gets started using mix phx.server
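
      For readers wondering what those hooks look like, here is a minimal client-side sketch, assuming the standard phoenix_live_view JavaScript client; the SortList hook name and the markup it expects (an element with an id and phx-hook="SortList" containing <li> items) are made up for the example. The sort never touches the server.

      ```ts
      import { Socket } from "phoenix";
      import { LiveSocket } from "phoenix_live_view";

      const Hooks = {
        // Attach with e.g. <ul id="products" phx-hook="SortList"> in the LiveView template.
        SortList: {
          mounted(this: { el: HTMLElement }) {
            this.el.addEventListener("click", () => {
              const items = Array.from(this.el.querySelectorAll("li"));
              items
                .sort((a, b) => (a.textContent ?? "").localeCompare(b.textContent ?? ""))
                .forEach((li) => this.el.appendChild(li));
            });
          },
        },
      };

      // Connection params (e.g. the CSRF token) omitted for brevity.
      const liveSocket = new LiveSocket("/live", Socket, { hooks: Hooks });
      liveSocket.connect();
      ```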

  • @Maniac-007
    @Maniac-007 1 year ago +3

    The first video that I've watched from your channel (somehow got recommended to me), very clear explanation (for someone who's only been working as a backend dev for

  • @anasazkoul4899
    @anasazkoul4899 10 months ago

    I have just come across your channel and I have been looking for this type of content for ages, even though it's above my level. It answers a lot of conceptual questions for me. Thank you so much.

  • @96shahab
    @96shahab 1 year ago +78

    For monolith scaling there is another factor to consider, which is language. These days Node, a single-threaded runtime, is usually used for everything. Having multiple CPU cores (vertical scaling) has no impact on the process since it doesn't use those cores (at least not without the devs going out of their way to add workers and fork processes), and in this case horizontal scaling is actually the only available option. Just putting this here for future viewers; it's not always this simple.

    • @codewithryan
      @codewithryan  1 year ago +6

      Good point!

    • @MyNameIsPetch
      @MyNameIsPetch 1 year ago +18

      node is used for everything? in what world?

    • @96shahab
      @96shahab 1 year ago +9

      @@MyNameIsPetch mostly startups, a lot of companies use node for their backend so that the same team that builds the front end can also work on the backend and the team stays tight, while also saving the company money

    • @danlee1996
      @danlee1996 1 year ago +1

      Another thing to add, though, is that there are technologies that allow servers to maximize CPU cores with single-threaded languages.
      Concrete example: most Ruby on Rails apps use Puma, which can fork your Rails app to maximize your CPU threads. At work I have a single EC2 instance able to run 5 instances of my app server.
      However, it seems that most NodeJS projects probably do not use technologies that allow for this type of scaling.

    • @96shahab
      @96shahab 1 year ago +2

      @@danlee1996 Yeah, it's a combination of most people not using this method to scale and the fact that extra setup is required in order to scale this way. Even with Ruby on Rails, for example, using Puma to run multiple process instances will introduce complexity, and a lot of the time developers or management or both are not willing to introduce that complexity, so they end up just deploying to multiple nodes instead.

  • @YoussefElGharbaoui
    @YoussefElGharbaoui 9 months ago

    It always makes me smile when I see your face. I love the way you explain things, where I learn by understanding everything you say. Keep it up!

  • @tashima42
    @tashima42 1 year ago +1

    It’s great to see you back man, love your videos

  • @awksedgreep
    @awksedgreep 1 year ago +10

    It's hilarious to see people finally understanding round trip times and n* stacking round trip times. It's one of the primary things that drove me to Elixir.

  • @michaelutech4786
    @michaelutech4786 1 year ago

    You have an awesome speaking voice; just having you talk in the background creates an air of relaxation and confidence.

  • @diego.almeida
    @diego.almeida 11 months ago

    The best video on the topic I have seen so far, great work man!

  • @gary6lin
    @gary6lin 11 months ago

    This is the best video that compares the differences between each backend architecture in a clear and simple way. Thanks👍

  • @msinaanc
    @msinaanc 10 months ago

    Brilliant, your way of presentation with visuals is very useful for people like me. I'm kinda new in software, trying to learn front-end but I know that I'll need all the back-end knowledge and all of these features in no time. Thanks a lot.

  • @timbrecht2297
    @timbrecht2297 7 months ago

    Your channel and your videos are gems; you really have a talent for explaining.

  • @vishaldinesh
    @vishaldinesh 1 year ago +17

    Hey Ryan, that was an amazing explanation. I am an intern at a startup and I was getting confused by all these terms they were using, and you really made them clearer; now I don't feel stupid, thanks.

  • @tnypxl
    @tnypxl 1 year ago

    These topics have never made more sense than after this video. This is great!

  • @shahbazali878
    @shahbazali878 8 months ago

    The best way to clarify concepts is comparison. Thanks brother!!!

  • @themichaelw
    @themichaelw 1 year ago +8

    11:35 Slight correction: you'd use a protocol buffer like protobuf, cap'n proto, FlatBuffers, etc., which are independent of gRPC. The protocol buffers are the IDL (Interface Description Language) for your binary data for SerDe, typically sent _over_ gRPC, but not necessarily.
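
    To make the IDL-vs-transport distinction concrete, here is a hedged TypeScript sketch using the protobufjs package; the product.proto file, the catalog.Product message, and the endpoint URL are hypothetical. The encoded bytes can travel over gRPC, but plain HTTP (or a queue) works just as well.

    ```ts
    import * as protobuf from "protobufjs";

    async function sendProduct() {
      // Load the schema and look up the message type defined in the (hypothetical) .proto file.
      const root = await protobuf.load("product.proto");
      const Product = root.lookupType("catalog.Product");

      const payload = { id: "p-1", name: "Keyboard", priceCents: 4999 };
      const invalid = Product.verify(payload);
      if (invalid) throw new Error(invalid);

      // Binary serialization happens here, with no gRPC involved.
      const bytes = Product.encode(Product.create(payload)).finish();

      // Ship the bytes over ordinary HTTP; the receiver decodes with the same schema.
      await fetch("https://api.example.com/products", {
        method: "POST",
        headers: { "Content-Type": "application/x-protobuf" },
        body: bytes,
      });
    }
    ```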

  • @coffeeowl86
    @coffeeowl86 1 year ago

    I really enjoyed the explanation - clear and precise.
    Plus, you have a great voice! 👍
    Thank you for making the video!

  • @masmullin
    @masmullin 1 year ago

    Wow, your presentation skills have gotten so good!

  • @GetOffMyyLawn
    @GetOffMyyLawn 11 months ago +1

    This is a great video that covers the main differences between architectures. I work on a project that is almost fully serverless. Integration tests and e2e tests are key to building a stable system that minimizes breakages when introducing changes. Performance testing is also important to get your scaling configuration set to meet your goals while keeping costs minimized. We deploy every branch to an isolated stack to run e2e tests before it gets merged, and run most of the e2e tests when deploying to the production stack. Another important consideration is retry logic where needed. If you are calling an external service, what will you do if it is unavailable... retry or fail? SQS and Lambda make a great pair for implementing a scalable, fault-tolerant system. All of these architectures have their place... picking the right one for the task at hand is the most important first step.
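
    As one illustration of the SQS + Lambda retry pattern described above, here is a minimal TypeScript handler sketch, assuming the @types/aws-lambda typings and an event source mapping configured with ReportBatchItemFailures; processMessage is a hypothetical stand-in for the call to the external service.

    ```ts
    import type { SQSEvent, SQSBatchResponse } from "aws-lambda";

    export const handler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
      const batchItemFailures: SQSBatchResponse["batchItemFailures"] = [];

      for (const record of event.Records) {
        try {
          await processMessage(JSON.parse(record.body));
        } catch {
          // Only the failed message is retried by SQS (and eventually dead-lettered);
          // successfully processed messages in the batch are deleted.
          batchItemFailures.push({ itemIdentifier: record.messageId });
        }
      }
      return { batchItemFailures };
    };

    async function processMessage(message: unknown): Promise<void> {
      // Call the downstream/external service here; throwing triggers the retry path above.
    }
    ```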

  • @dyanosis
    @dyanosis 4 days ago

    One small tweak to thinking about the latency of Microservices - They are communicating (hopefully) over an intranet, not the internet. The problem with 3rd party APIs (like, say, using Google for Oauth) is that you have to traverse the internet to access it.
    To be clear about what I'm saying, here's an example:
    Accessing data from a microservice is like going to your neighbor's house in the same neighborhood (within a specific distance, like 1/4 of a mile, let's say).
    Accessing data from a 3rd party service, like Google Oauth, is like having to get on the highway.
    One is much less busy and a much shorter route while the other is a much longer route and potentially packed with traffic.
    Not saying that there is no latency with an intranet, but it's negligible compared to 3rd party services.

  • @Alex-mu4ue
    @Alex-mu4ue 9 months ago +1

    With Azure Functions, the Azure equivalent of Lambda functions, you can integrate them into your private network and also allocate dedicated infrastructure to run them, which solves the cold start problem you mentioned for Lambda functions in this video.

  • @francogiulianopertile279

    I really liked this video; I could finally understand the key differences between these architectures.

  • @moonsteroid
    @moonsteroid 1 year ago +2

    Cool video! In the case of native dependencies for Lambda functions, you can bootstrap the functions beforehand and install required dependencies before the function gets executed. Also good to mention is that the cold start does not happen on every request, and you have the option to provision the Lambdas to avoid cold starts, but this affects the cost point 😄
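
    A small TypeScript sketch of the "prepare heavy dependencies outside the request path" idea (everything here is hypothetical, including the HeavyClient type): anything initialized in module scope or cached across invocations is paid for once per cold start and reused while the instance stays warm.

    ```ts
    // Hypothetical heavy dependency: a DB client, a native module, an ML model, etc.
    type HeavyClient = { lookup(id: string): Promise<string> };

    async function createHeavyClient(): Promise<HeavyClient> {
      // Imagine an expensive connection handshake or loading a native binary here.
      return { lookup: async (id) => `record:${id}` };
    }

    // Module scope is evaluated once per cold start; warm invocations reuse the cached client.
    let client: HeavyClient | undefined;

    export const handler = async (event: { id: string }) => {
      client ??= await createHeavyClient(); // cost is paid only on the first (cold) invocation
      return client.lookup(event.id);
    };
    ```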

  • @danlee1996
    @danlee1996 1 year ago +9

    I think your final take in the video was spot on. The best back-end architecture is the hybrid architecture. I think once your app reaches a certain point of complexity, there is really no way you can go all in on one architectural paradigm. I have no doubt that Netflix probably has a "monolithic microservice" somewhere in their tech stack. At my current workplace we have a "monolithic API gateway" where we extracted our own internal microservices and use company-wide microservices that are themselves monolithic in scope/complexity.

    • @ruslan_yefimov
      @ruslan_yefimov 6 months ago

      Isn't an API gateway supposed to be a monolith?

  • @caneryldz3632
    @caneryldz3632 1 year ago

    Excellent content keep it coming Ryan

  • @LouisianaNative13
    @LouisianaNative13 8 months ago

    Very good video. I am used to using one big server, or even clusters, but our new platform will use microservices, and that is all very new to me, so this helped a bit.

  • @shoobidyboop8634
    @shoobidyboop8634 1 year ago

    Great rundown.

  • @metheglin1986
    @metheglin1986 1 year ago +1

    You have to understand that most of the "cost" depends on "development experience" after all. Reliability can easily die due to application-level matters (who benefits from only the Auth server running while the Product server is down?). A monolith is also horizontally scalable enough when its functions keep good response times, though long-duration tasks could certainly hurt scalability. Conclusion: think monolith first, then consider separating idiosyncratic features into microservices or serverless according to the requirements, technology, and dev resources.

  • @kisstamas6675
    @kisstamas6675 3 months ago

    Thank you for the video, it's very informative and useful. I started to develop an e-learning system at my company, and I also started with a monolith architecture, but in a modular way.
    I use NestJS for the backend and NextJS for the frontend. With NestJS it's very easy to develop the system in modules, so I hope that if it becomes necessary I can split it up into services.
    With NextJS I can also make serverless functions, but first of all I want to keep the business logic in one place, and later I can separate it out.
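
    A minimal sketch of the modular-monolith layout described above, using the real @nestjs/common decorators; the course/catalog names are invented for the example. Because the rest of the app only depends on what CoursesModule exports, the feature can later be lifted into its own service with relatively little surgery.

    ```ts
    import { Controller, Get, Injectable, Module } from "@nestjs/common";

    @Injectable()
    export class CoursesService {
      findAll() {
        return [{ id: 1, title: "Intro to Backend Architecture" }];
      }
    }

    @Controller("courses")
    export class CoursesController {
      constructor(private readonly courses: CoursesService) {}

      @Get()
      list() {
        return this.courses.findAll();
      }
    }

    // The feature's boundary: everything else talks to it via the exported service.
    @Module({
      controllers: [CoursesController],
      providers: [CoursesService],
      exports: [CoursesService],
    })
    export class CoursesModule {}

    @Module({ imports: [CoursesModule] })
    export class AppModule {}
    ```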

  • @thomas6502
    @thomas6502 1 year ago

    Great summary. Thanks Ryan!

  • @FlippieCoetser
    @FlippieCoetser 1 year ago +6

    Excellent 🎉 love this style of comparing different architectures!

  • @BluParkour
    @BluParkour 1 year ago

    The best video on the topic I've seen! Thank you :)

  • @marcgirard475
    @marcgirard475 1 year ago

    Great video, clear and to the point, I learned something. Thank you! 😀

  • @osoverlord
    @osoverlord 7 months ago

    Ryan, cool videos, thank you!

  • @howardkearney7989
    @howardkearney7989 1 year ago

    Coming from an IBM mainframe background and trying to learn web development: mainframes are often considered monolithic and outdated, but seeing this helps in discovering what works best, including hybrid. Effective IT solutions are rarely just one solution. :) Thanks!

  • @paulgaiduk709
    @paulgaiduk709 1 year ago

    Thank you Ryan! That was a very nice explanation!

  • @terrylennon
    @terrylennon 1 year ago +1

    Top quality content! Thanks for making this often-drab content seem really engaging! Also you have a really top speaking voice! Don't know if you've ever been told that?

    • @codewithryan
      @codewithryan  1 year ago

      Glad you enjoyed the video and I appreciate that!

  • @selinjodhani2634
    @selinjodhani2634 1 year ago

    What a great video, Ryan. Subscribed 👍🏻

  • @Fanmade1b
    @Fanmade1b 1 year ago +6

    I am obviously late to the party here, and my points have probably already been mentioned, but I just saw the video and I need to write this down ^^
    My first point would be that one very important factor should always be mentioned when talking about these approaches, and that is the number of developers.
    One of the main reasons why big companies like Netflix and Amazon implemented microservice architectures is that they had trouble scaling up their development teams when they were all working on the same codebase. So they added complexity by moving to microservices to allow their very large team to split up into smaller teams which could each work on very specific functionalities.
    Of course this also provides benefits like becoming more flexible in what language to use in each service, but it is very important to know that this will always introduce more cost and a lot more complexity first. Not only do those services now need to communicate with each other (including authentication and proper contracts), but also the development teams. And as soon as one service is used by multiple other services (authentication is a good example for that) you not only have a possible single point of failure again, but you also have to consider all the consuming applications when you want to introduce any changes, which in turn can slow the whole development process down.
    Talking about issues: I have never seen that a single service crashing was less problematic than the same problem happening in a monolith.
    First: the error needs to be handled anyway. So if the consuming app doesn't handle a server exception properly, it will crash anyway.
    Second: what is the difference between the outbox of the monolith crashing and the outbox service crashing? Both will prevent the users from using the outbox, but the rest of the application should still work in either case.
    Third: Debugging in a microservice environment can be a bitch. I've had way too many occasions where a crash was first investigated in one application, then moved to another team because it apparently happened in their service, but they delegated it to another team and so on. Especially if you have a chain of services, this can get ugly. And don't even get me started if you need to handle a rollback through those services if one of the steps fails ...
    I could go on and on about this and give a lot of practical examples of what I heard and experienced, but in the end I totally agree with your conclusion.
    Just build your MVP as a monolith, but always keep a proper architecture. Keep to the SOLID principles and always consider YAGNI. Don't over-engineer/over-anticipate/over-complicate things in the beginning and keep in mind that you can always refactor when required.
    When you then reach a point where one specific part of the app keeps slowing the whole application down and you reached the point where switching to another programming language would help way more than refactoring in the current one -> extract that one functionality into a service.
    Or you see that one part of your application is the same as it is in multiple other applications in the company -> Consider extracting that one into a microservice so that this part doesn't have to be implemented and maintained in different applications multiple times.
    Or your sixteen or more devs constantly keep running into each other when working on the application and it is hard to scale their work -> Consider extracting parts of the application into services that can be maintained by one team each.
    Going from monolith to microservice is by the way not always that easy and you should be very careful with that.
    I've seen multiple cases where that was very counterproductive.
    Just check out the AWS blog, for example, where they recently had a post about a case where they rolled back from a microservice architecture to a monolith and discovered that this did not just reduce complexity, but also cut costs by 90%.

  • @suheybabdi2070
    @suheybabdi2070 11 months ago

    Easy to understand and follow. Thanks

  • @yoelczalas
    @yoelczalas 1 year ago

    Awesome! Thank you so much for your explanation

  • @ramonsantiago4573
    @ramonsantiago4573 1 year ago

    Such a great video, I had to sub!

  • @utubetvux5170
    @utubetvux5170 1 year ago

    Clean & informative. Subscribed.

  • @ericsilinda437
    @ericsilinda437 1 year ago

    Great video Ryan, keep them coming, you're doing God's work! :)

  • @HaydonRyan
    @HaydonRyan 1 year ago

    Great video! Would love to see you do a follow-up on cloud-native monoliths.

  • @liquidpebbles
    @liquidpebbles 1 year ago

    Great breakdown of the different backend architectures. Leaving a comment for the algorithm.

  • @GreenSmi1e
    @GreenSmi1e 1 year ago +2

    Thank you for such a great video.
    I wonder what you think about a monolith-serverless approach. You start with a single function that has modules to handle all API requests, and then you only split it if it is really needed (for example if one particular API endpoint is used a lot compared to the others, or if one function requires specific dependencies that are not needed in the others). This way you limit the number of cold starts in case of usage fluctuations, as every instance of the cloud function/Lambda can handle any type of incoming request.
    I've used this approach on a couple of projects so far and it really helps to start fast. On the other hand, all my projects were relatively small.
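
    A minimal TypeScript sketch of that "one function, many routes" setup, assuming an API Gateway proxy integration and the @types/aws-lambda typings; the routes themselves are made up. Any warm instance can serve any endpoint, which is what keeps cold starts down.

    ```ts
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

    // All endpoints live in one deployable function as plain in-process modules.
    const routes: Record<string, (e: APIGatewayProxyEvent) => Promise<unknown>> = {
      "GET /products": async () => [{ id: 1, name: "Keyboard" }],
      "POST /orders": async (e) => ({ created: JSON.parse(e.body ?? "{}") }),
    };

    export const handler = async (
      event: APIGatewayProxyEvent
    ): Promise<APIGatewayProxyResult> => {
      const route = routes[`${event.httpMethod} ${event.path}`];
      if (!route) return { statusCode: 404, body: "Not found" };
      return { statusCode: 200, body: JSON.stringify(await route(event)) };
    };
    ```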

  • @mmkvhornet7522
    @mmkvhornet7522 8 months ago

    Very good explanation, thank you

  • @momenttomoment1007
    @momenttomoment1007 1 year ago +1

    Keep the videos coming!

  • @Davidlavieri
    @Davidlavieri 1 year ago

    Dependencies on serverless can be mitigated with an extra step: using dockerized images to run Lambda. The downside is that AWS forces the use of ECR.

  • @passocadev
    @passocadev 1 year ago

    Awesome content bro, keep doing it

  • @RyanTipps
    @RyanTipps 1 year ago

    great comparison, thank you

  • @SonAyoD
    @SonAyoD 1 year ago

    This video was gold thank you!

  • @bl_int
    @bl_int 1 year ago

    Cool, now I have a better basis for comparing these different architectures. Thanks Ryan.

  • @mgs_4k198
    @mgs_4k198 1 year ago +3

    You gave an example that if the Auth service is down, we can still create products. I assume that to create a product you will need the authenticated user's data to associate it with the created product.
    How would you create a product if the Auth service is down?

    • @izidorizidor-pj7ny
      @izidorizidor-pj7ny 2 months ago

      I understand that it will keep working if auth uses some kind of JWT. If you are already logged in, you just keep sending the JWT and the create-product endpoint uses it to bind the product to the user; it won't call the Auth service. If you are not already logged in, you can't create a product.
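
      A minimal TypeScript sketch of that idea, assuming RS256-signed JWTs and the jsonwebtoken package; the key path and payload shape are made up. The Products service validates the token with the Auth service's public key, so it never has to call the Auth service at request time.

      ```ts
      import * as jwt from "jsonwebtoken";
      import { readFileSync } from "node:fs";

      // Only the public half of the Auth service's signing key is needed here.
      const authPublicKey = readFileSync("./auth-public.pem", "utf8");

      export function createProduct(token: string, product: { name: string }) {
        // Throws if the token is invalid or expired; no network call to Auth.
        const claims = jwt.verify(token, authPublicKey, { algorithms: ["RS256"] }) as jwt.JwtPayload;
        return { ...product, ownerId: claims.sub };
      }
      ```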

  • @user-dz6il2bx5p70
    @user-dz6il2bx5p70 1 year ago

    Very good content. Thanks a bunch.

  • @HenryLoenwind
    @HenryLoenwind 1 year ago

    9:00 One may add that traditional enterprise architectures look exactly the same, just that the different components are not spun out into their own server processes. In some architectures they are grouped into layers (e.g. presentation, processing, database), which may live in their own containers, but in others they live side by side in the same process. This is one of the reasons microservices got adopted quickly: in many cases there was nothing to change in the code, only in how it was packaged.

  • @strutyk.888
    @strutyk.888 10 months ago +6

    Hi dude, you need to make more web development videos, you have very good diction and explanation! 👍

  • @michaelsefeane7707
    @michaelsefeane7707 1 year ago +3

    I've started learning Golang because of one of your videos. It's a really cool language

  • @victorBrapp
    @victorBrapp 11 months ago

    Amazing video.

  • @Stragunafen
    @Stragunafen 10 months ago

    Learnt a lot. Thanks!

  • @Computerix
    @Computerix 1 year ago

    Good job, detailed and beneficial 👏

  • @JoeHacobian
    @JoeHacobian 1 year ago

    Hi Ryan! Nice channel!

  • @ILikeMoewsowo
    @ILikeMoewsowo 1 year ago

    cool presentation!

  • @joelboardgamerpger5393
    @joelboardgamerpger5393 10 months ago

    Oh my gosh!! What a pleasure to listen to. So many lecturers don't do it for me. I listened to the whole thing. Keep up the good work.
    An alternative to Lambda on AWS? Is there something I can grab from somewhere that would work on Docker or EC2s on AWS? Be well, regards, Joel

  • @armynyus9123
    @armynyus9123 1 year ago

    Great one, earned another sub.

  • @user-cp9kg5nm2y
    @user-cp9kg5nm2y 2 months ago

    Explained very well, thanks

  • @alexnezhynsky9707
    @alexnezhynsky9707 1 year ago

    Well done!

  • @calebvear7381
    @calebvear7381 1 year ago +1

    “It’s difficult to write spaghetti code with micro services” 9:03 (a little before that time stamp).
    No it isn’t difficult at all. You just end up with spaghetti spread over multiple processes instead of it being all together.

  • @michaellatta
    @michaellatta 1 year ago

    Every question is an "it depends". Staffing has a lot of impact. A single team is better off with a monolith in most cases, but 20 teams are better off with 20 loosely coupled services. As you said, cost management is constant in any of these solutions. Predictable pricing points to a monolith in many cases.

  • @MarcinCebula
    @MarcinCebula 1 year ago

    awesome video

  • @igordasunddas3377
    @igordasunddas3377 1 year ago +1

    The development process for monoliths can be a nightmare, because you always have to have the whole application open. And if you want to split it, the simplicity goes out the window. Same goes for anything that's not standard.
    I developed applications to run on JBoss and WebLogic as well as microservices (Spring Boot and Node stuff), and I'd never want to switch back to working on a monolith ever again - unless it's for moving to microservices.
    However: if code reviews are not really great, it's easy to mess up microservices. If the developers aren't on the same page about when to create a new microservice and when to include something in an existing one, it can be a nightmare.
    Then again, integration tests can be easy depending on what you use and whether you split the stuff correctly. For example, you can choose to only have E2E integration tests from the user perspective and keep service-wide tests (and unit tests) within each microservice.
    Also: customers barely ever pay for refactorings, unless it's really necessary and you can explain the benefits in $$$ (many customers don't speak any other language). The problem is that sometimes you don't know for sure, and after a few more months or years the stuff becomes too expensive to refactor. At that point you end up patching stuff instead of refactoring it for real. I've seen this happen many times in my 14 years of software development.

  • @specimen-ch7zi
    @specimen-ch7zi 1 year ago

    High quality video, good job! :)

  • @Filaxsan
    @Filaxsan 2 months ago

    Nice vid bro! 💪

  • @fille.imgnry
    @fille.imgnry 8 months ago

    Yowza! I think I just found a new fav channel. Noice! 🎉

  • @biswajitrout4710
    @biswajitrout4710 1 year ago +1

    Bro, can you explain why some small product-based startups build their backend on their own, building JARs and libraries, and don't use pre-built frameworks and libraries like Spring or Spring Boot? Is there any benefit to it, like customisation or something else? ……… btw great video

  • @nameless4014
    @nameless4014 1 year ago

    Suggestion for a light video: a language tier list based on your opinion

  • @brianmorin7022
    @brianmorin7022 1 year ago +10

    You left out a dimension: team size. Past a certain number of developers, a monolith becomes a bottleneck, as the entire team is within the blast radius of each other and tied to the same deployment schedule. This was a driver for Amazon's initial move towards microservices. Microservices provide a level of isolation between teams, letting them work with less friction as long as they keep the contracts between services fulfilled and have a rollback strategy.

    • @codewithryan
      @codewithryan  1 year ago

      Yeah that’s a big one that I wish I mentioned. Microservices are great from an organizational standpoint.

    • @brianmorin7022
      @brianmorin7022 1 year ago +2

      @@codewithryan It depends on team size. I've also seen very small teams go nuts with Microservices when they're not necessary from an organizational perspective.

    • @fieldmojo5304
      @fieldmojo5304 5 months ago

      You can highly encapsulate features, which dramatically increases the size a monolith can reach before the labor becomes an issue

  • @arnavhazra8806
    @arnavhazra8806 10 months ago

    Great vid! The part where you said you "scraped 50k websites/sec using Lambda functions": where could I find that? It would be immensely helpful for R&D for my lil bootstrapped startup!

  • @g.paudra8942
    @g.paudra8942 1 year ago

    My point of view: if you want to build an application and expect a lot of users within a few days of release, with high security, the option is a monolith. If you don't expect a lot of users for a couple of months after release, then microservices are your suitable option. If you want to build a small-to-medium app with minimal security, then serverless is your option.

  • @AungusMacgyver
    @AungusMacgyver 1 year ago

    A minor point, but I think a clearer term for what you call Reliability is Fault Tolerance. Reliability to me would be how often a system fails, whereas Fault Tolerance would be how well it handles problems, which is what you seem to be talking about.

  • @the-antroy
    @the-antroy 1 year ago

    Thanks sir!

  • @krispekla
    @krispekla 1 year ago +1

    Nicely done. You missed SOA architecture, which sits between monolith and microservices, I would say!

  • @martinmusli3044
    @martinmusli3044 11 months ago

    The first video on the Internet that says these things can coexist :D

  • @scotts6264
    @scotts6264 29 days ago

    Epic video

  • @changNoi1337
    @changNoi1337 1 year ago

    With that radio voice, you're gonna get to a mill in a short time ;)

  • @kokizzu
    @kokizzu 1 year ago

    You can use Jelastic; they bill only based on CPU and RAM usage.
    Also, by default my monolith has Redis and a load balancer, so that's not an issue.

  • @debuffer
    @debuffer 1 year ago

    17:43 I would disagree. It can scale up to a certain point, but after that point it just fails to scale, especially on burst traffic where cold starts are an issue (I am talking about having 12k invocations per second).

  • @codeWithNoComments
    @codeWithNoComments 1 year ago

    For serverless, I don't agree with the low rating on development experience and the native dependencies limitation. AWS has Docker images which we can download, add all the libraries and code to, and simply upload to work as the Lambda function.
    Most likely, people will be using one vendor anyway, so the security limitation is not a solid point against serverless either.
    To make response times better we can aim for a more async architecture.
    In the end it's like choosing the correct tool for the correct architecture. Everything works if the choice is right.

  • @Pscribbled
    @Pscribbled 1 year ago +4

    Monoliths don't mean no horizontal scaling. They just mean you don't have individual single-responsibility services, so instead of network hops for functionality, your service has everything it needs in the runtime.
    You'd be nuts not to have multiple hosts running your service from an availability point of view (what happens if you have bad hardware?), or to run your server with your customers accessing the host directly rather than through some sort of load balancer or gateway. Huge security risk.

    • @Steelrat1994
      @Steelrat1994 1 year ago +1

      They also don't mean easy development. Tiny monoliths for your school science fair project sure are easy to develop, test and deploy, but anything at enterprise scale will be a terrible development experience.

    • @codewithryan
      @codewithryan  1 year ago +2

      Those are all good points and I agree.
      My main argument is that scaling a monolith horizontally doesn’t entirely cure the single-point-of-failure problem.
      All instances of the monolith run the same code, and if there’s a fatal uncaught runtime error (e.g. null pointer exception), then during high traffic you *may* find that all instances crash in the same window of time.
      If availability is a concern, then splitting logic into separate services (and scaling those horizontally) makes more sense IMO.
      Nonetheless, if you’re sticking to a pure monolith, then having multiple instances is certainly better, and using a load balancer (or cloudflare to proxy at the DNS level) is a good idea to obfuscate the server IP.

    • @Pscribbled
      @Pscribbled 1 year ago +4

      Sure, valid points, but I was addressing your slide at 5:16. I'm typically not a shill for monoliths but that slide is just wrong. Tbh I didn't finish the video after that point so I don't know if you corrected it or addressed it after that slide.
      With respect to your point about uncaught runtime errors causing crashes in your servers, that argument still applies to poorly handled errors returned by microservices on (or off) the critical path. Like, what happens if your auth service has a bug? What microservices do provide is a lower chance of this scale of event happening, because you aren't necessarily touching the service code of your critical and stable services every day. An exception to this that happens very often, though, is shared libraries between services. Anyways, a lot of things are not cut and dried and depend a lot on how you have set up your processes, development, error handling and testing.

    • @kaypakaipa8559
      @kaypakaipa8559 1 year ago

      @@Pscribbled Yikes, you completely and totally missed his point. I work in a large telecoms company, and bro, without microservices it's bloody impossible to even imagine deploying our architecture as a monolith and scaling it.
      Trust me, if something goes wrong the entire spaghetti goes down, it will take us light years to fix, and the cost of that is unimaginable!
      The developer experience would be 0/100!
      The guy's slide is totally correct; with respect, you're wrong bro!

    • @Pscribbled
      @Pscribbled 11 months ago +2

      @@kaypakaipa8559 lmao telecoms doesn’t tell me anything about your credentials. That’s no flex. I work with micro services as well. My stance is to use whatever architecture makes sense for the scale, cost, performance and requirements of your service. Generally I shy away from monoliths but not because of their inability to scale horizontally… because they can scale horizontally…
      If I’ve completely missed the point on how I believe Cody is wrong on slide 5:16 where he says it’s not feasible to scale a monolith horizontally, please give me your data and not your anecdotes

  • @BloodEyePact
    @BloodEyePact 1 year ago +1

    Good presentation, but a few points from another active practitioner:
    * Typically, each microservice has its own database, and makes up a "vertical slice". Multiple different microservices sharing a database is often seen as an anti-pattern, sometimes informally called "miniservices". Generally, your architecture should have most requests use a single microservice, and "horizontal" movement should be limited, which mitigates latency issues.
    * Docker/kubernetes actually make integration tests a lot easier, since you can use declarative setups like docker compose to spin up a complete copy of your entire app in miniature, assuming you have docker on your workstation/ci server
    * Kubernetes ease of installation has come really far the last few years with tools like KinD, k3s, and rke2, where you can launch an entire cluster with a single command on each node, or even realistically run on a single node (though you don't get the server-level reliability that way).
    * As you mentioned with cold starts, serverless actually tends to be less scalable, and more expensive under heavy load. It really only makes financial sense for extremely infrequently used functionality, and at that point, you might as well just use a cron job or a batch workflow engine. The other use case, as you mentioned, is extremely small projects with a handful of users. I think amazon did an article the other day basically coming to the same conclusion.
    One suggestion: instead of just giving static good/bad grades, I'd recommend considering the grades under different circumstances. For example, monoliths have a great developer experience when you have one or two engineers, because they're so simple, but are absolutely miserable when you have 100 engineers, as you now all have to get in line for a maintenance window to deploy your changes and fixes.

    • @codewithryan
      @codewithryan  1 year ago

      Great points, especially about the monolith dev experience. For large teams, a monolith may become an organizational bottleneck.

    • @almohhannad
      @almohhannad 1 year ago

      Cold starts are not a per-request problem in serverless: if a Lambda instance has already handled a request, it stays up for 15 minutes or more handling subsequent requests, and there are multiple solutions to minimize cold starts to the point where their effect on users is negligible. Microservices, on the other hand, due to their architecture and separate databases as you mentioned, can send a single request through multiple servers, multiple authorizations and three-way handshakes, plus multiple database look-ups, in order to query or mutate some specific data, which is a performance hit that you will get on every request to that specific route. Also, "less scalable and more expensive" is not accurate; it depends on your use case and how you architected it.

  • @maxbarbul
    @maxbarbul 1 year ago

    Well, talking about reliability, it's only fair to compare the systems as a whole. To calculate it you need to multiply the reliability of every block, so the more blocks you have, the lower the reliability goes (you're multiplying numbers between 0 and 1, as with probabilities; e.g. three services at 99.9% each gives 0.999^3, roughly 99.7%, for a request that touches all three). That means even if the reliability of a monolith is lower, having more parts in a microservices setup can drag overall reliability down further. Duplication helps, but with a cost overhead.
    Also, it's worth mentioning primary/replica (write/read) setups for a monolith.
    I agree that you just trade one complexity for another. With a monolith you need better software quality; with microservices you need better infrastructure management. Which high-quality experts are easier to find: devs or devops? :)

  • @Jackson_Zheng
    @Jackson_Zheng 3 months ago

    It seems like all of the problems of microservices can be solved if you just learn Kubernetes and run your own server.
    With the monolith and the serverless functions, there are limitations in the infrastructure that physically prevent you from scaling, so getting smarter and more skilled doesn't solve the problem.
    That's why I like microservices. You front-load the effort and pain of learning in the beginning, but once you get good, everything becomes easy.

  • @fieldmojo5304
    @fieldmojo5304 5 months ago

    Monoliths can also have the best performance. And you can scale monoliths horizontally, which, combined with their lower usage of resources per action, will be more than 99% of servers need.

    • @fieldmojo5304
      @fieldmojo5304 5 months ago

      For strong domain boundaries with monoliths, just use multiple packages. It forces even junior devs to decouple. And you can then require inter-package communication to go through an event pattern to enforce loose coupling.

  • @derelicts9503
      @derelicts9503 11 months ago

    I like it. ANOTHER! :D

  • @user-cu4bk2gm6q
    @user-cu4bk2gm6q 5 months ago

    The problems with horizontal scaling that were mentioned, like caching, load balancers, etc., are not specific to monoliths. They are horizontal scaling issues and apply to microservices too.

  • @remmo123
    @remmo123 1 year ago

    Helpful :)