How internet communication works: Network Coding
- Added 19. 06. 2024
- Information Theory Society presents a brief history of internet communication and packet switched networks leading to the idea of network coding.
Paper featured in this video: Network Information Flow - citeseerx.ist.psu.edu/viewdoc/...
Link to full IEEE playlist: czcams.com/play/PLbg3ZX2pWlgJOTf5YXNq-rdXXuUkJTXHm.html
When he says 2020, my eyes just got bigger!
I freaking love information theory! This is such an elegant explanation of it and I can’t wait to watch and learn more!
Most underrated channel on earth.
Happy to have you as a viewer
@@ArtOfTheProblem my huge thanks for creating very clear, concise content. I highly recommend promoting yourself through subreddits (Reddit), FB groups, and Instagram, and posting your work on boards and forums... 52K is nothing for the great content you produce.
Create a schedule like: 30 min to promote my video on Reddit (Monday),
30 min on FB (Tuesday)... etc.
Don't do it on all platforms at once... it's overwhelming.
and of course keep it up, great work!
An Art of the Problem video in the feed is a good day. And this week has already had 2 good days!
I like the minimalist music!
Wow a new video already!?!? Are you going to make this a habit?
Great video as always, thanks so much!
Just 46 K subscribers for this channel is probably humanity's biggest flaw
That's a really clever idea. Taking advantage of the fact that you can increase information without increasing message size.
Really high quality video, thank you!
That's absolutely brilliant 😍
Great video. Informative and accessible.
Video needs way more views! Great stuff
Excellent video - finally understand how the internet works! :-D
Great videos, great explanations. I also like the music in all of your videos very much.
much appreciated and great to hear
Great videos. Thanks ^^
Great video !! Really appreciate it !!
Can we get a more technical explanation or in depth overview on how this works? As presented it seems to be missing many key details including:
How does the key to decode each mixed packet arrive at the host?
What happens when packets are lost in transit or corrupted?
Isn't this really just a bandaid for poor design? If the internet of old is starting to show signs of being unscalable, perhaps the real solution is to look at distributed networks?
What about security, isn't this really dangerous?
Is this HTTPS compliant? (seems like no?)
imo the biggest flaw seems to be security. In a world where intelligence agencies dominate the lines companies are increasingly moving towards encrypting everything, but this theory seems fundamentally incompatible with the idea of unique discrete packets. The hub would cause the host to throw everything out because it would see this as tampering with the message.
For more detail see the paper itself (linked in description)
@@lynmyd I guess you didn't get it either from the video, or you could have answered all of his points.
Thanks for clarifying. I've never heard of "network encoding" as being "adding packets together and then subtracting them".
It doesn't make sense. You have mixed 2 packets together, and then sent one without mixing? So why not send the other without mixing? What is being "mixed"? Everything or just metadata or packet data? Is it actually mixing?
I'm probably just an idiot, but I expect to learn something from videos that purport to teach.
I love how active you guys are getting, love all your content. Do you happen to have a patreon or something similar?
Thanks for your support! www.patreon.com/artoftheproblem
Great video! Really made the concept clear
thanks for the feedback
nice one!! homie explained it in a very easy manner.
Excellent presentation, I didn't understand network coding until seeing this
great to hear, that was the goal
Another issue is that this method seems to rely on content being multicast, which today it is not. AFAIK multicast is not even allowed on the Internet (your ISP won't route multicast traffic onto the Internet).
True, multicast is typically restricted to local network groups, for protocols like mDNS.
But IPv6 incorporates the concept of multicast streams, for precisely this purpose.
But even IPv6 multicast streams are usually restricted to the LAN, no? When I investigated multicast for a BitTorrent-like protocol, my conclusion was that while theoretically possible, it practically would not work because of the blocking.
So one thing to note is that generally speaking, research involves phrasing things like "if instead of what we do now, we did A, then we would get X, Y, Z." So it's true, we don't do multicasting now but in future systems we might, precisely because you would be able to get the advantages of network coding.
What software do you use for the animations?
Very nice explanation..
Hi
I would like to know how I can implement inter-session network coding with the COPE protocol
I use GNS3 as a simulator on an SDN network
thank you
cordially
And he is so right about 2020- post covid19 6:35
great video
your video is so good, i believe your channel will thrive like 3blue1brown; he worked at Khan Academy as well!
best wishes
you are so unique
thanks so much for the positive feedback
How will the router undo the encoded packets, if the non-encoded packet fails to arrive? Where is this ever used? What type of network?
it's not used.
can you link to the paper? Thanks!
thanks, added to the description
thanks !!!
6:11 there must be more video packets than audio packets, because video has more data than audio. So how would this system solve the problem once all the audio packets have been sent but there are still more video packets to send??
But how will the receiving node understand which packet was used for the addition? I mean, the middle node could sum packets that won't be at the receiving end. For example, if there were a third household that requested packet "7", the middle node would sum it and get 11 (7+4). But the receiving end only has packet "1", so it would decode packet 10. A completely wrong packet.
What is missing from the description of the scheme (which is to make it easier to explain at a first pass but then on later inspection needs explication) is that we think of packets as being large so that there can be a small "header" which can store the coefficients. This already exists in current packet structure so it's not a stretch. So basically the packet header will contain something like "this packet corresponds to 2 * packet 5 + 3 * packet 2" and then the body (the "payload") of the packet would have 2 * packet5 + 3 * packet 2 -- the multiplication and addition is done over a "finite field" so it's not exactly an XOR but if the final system of equations received by the destination can be solved then decoding would be successful.
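A toy sketch of that header-plus-payload idea (hypothetical code, simplified to GF(2) so all coefficients are 0/1 and "addition" is XOR; the paper allows larger finite fields and arbitrary coefficients):

```python
# Coded packets with a coefficient header, over GF(2): the header records
# which source packets were mixed, and the payload is their XOR.
# All names (mix, unmix, packet ids) are illustrative, not from the paper.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mix(packets: dict[int, bytes], ids: list[int]) -> tuple[list[int], bytes]:
    """Combine the listed packets; the header records which ones were mixed."""
    body = packets[ids[0]]
    for i in ids[1:]:
        body = xor_bytes(body, packets[i])
    return ids, body  # (header, payload)

def unmix(coded: tuple[list[int], bytes], known: dict[int, bytes]) -> tuple[int, bytes]:
    """Recover the single unknown packet, given all the others in the header."""
    ids, body = coded
    unknown = [i for i in ids if i not in known]
    assert len(unknown) == 1, "need all but one component to decode"
    for i in ids:
        if i in known:
            body = xor_bytes(body, known[i])  # XOR is its own inverse
    return unknown[0], body

packets = {2: b"audio---", 5: b"video---"}
coded = mix(packets, [2, 5])                  # header says "packet 2 + packet 5"
pid, payload = unmix(coded, {2: packets[2]})  # receiver already holds packet 2
print(pid, payload)                           # 5 b'video---'
```

The header here costs only a few packet IDs, which is small relative to a large payload, matching the point above.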
ok, you can add some information in the "header", but here we're talking about summing unique packets. The second packet will not arrive at the destination node, so you can't refer to it with some small code. So to my mind the "header" cannot be smaller than the original packet. Am I missing something?
I mean that at the end there may be neither packet 5 nor packet 2 in the clear, to decode one based on the other.
So this will depend on the persistence of the session and whether or not the number of unique packets in a session is on the order of the number of potential packets. Suppose a packet payload is 1000 bits: that's 2^1000 possible payloads so would require 1000 bits to encode in the header, naively.
However, in a real network, not all 2^{1000} packets are possible. So if the number of messages *actually* in the network in a certain time window is smaller than the number of messages *potentially* in the network then there is some advantage to be had.
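A quick back-of-the-envelope version of this counting argument (the packet count K is illustrative):

```python
import math

payload_bits = 1000
# Naively, identifying an arbitrary payload out of 2**1000 possibilities
# would need a 1000-bit header -- no savings over resending the packet.
naive_header_bits = payload_bits

# But if only K packets are actually in flight in the relevant time window,
# an identifier needs just ceil(log2(K)) bits.
K = 10_000  # illustrative number of live packets in the network
real_header_bits = math.ceil(math.log2(K))
print(real_header_bits)  # 14
```

So the header overhead scales with how many packets are *actually* circulating, not with how many payloads are *possible*, which is where the advantage comes from.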
Finally, you might raise the issue of coordination between packet originators. In a real system there might be some segmentation: for example, Netflix could multicast more of its content out to ensure greater reliability to individual subscribers.
As a network engineer, I was confused, since this wasn't put out as a new idea, but instead was put forth as how things work today.
I feel like this video doesn't go into enough detail. There ought to be more than this. I imagine that instead of literally adding the two packets together, which could produce a carry, you XOR them. Secondly, does this mean that you have to rely on the fact that other people using the same connections will happen to download the same content as you do, if not all the time, then at least most of the time? I feel like such an assumption would at least need some data to back it up.
So yes, it's an XOR-like operation (actually addition over a finite field). It would rely on users receiving extra content to help with the unmixing. This falls out of the multicast assumption which would have to be a feature of the network you are using.
In general a lot of comments have been of the form "this wouldn't work right now in today's internet." This sort of misses the point of research in general, which is to say "if we build networks somewhat differently, we can do this thing."
Finally, the ideas behind network coding are useful in distributed storage systems -- they become a lot more efficient to repair when drives fail. See
simons.berkeley.edu/sites/default/files/docs/2681/slidesgopalan1.pptx
Has that prediction of 75% video streams by 2020 come to fruition?
InterPlanetary File System (IPFS)
Please make a video on this topic ... very interesting topic...
Or Tahoe LAFS =)
NULL yeah these 2 technologies are the future of the internet unless the Earth explodes...
plz, i want a video about Bell Labs layered space-time with MIMO
we have a mimo video: czcams.com/video/cbD4NsZQKYw/video.html
Security could be a problem when your data is mixed with other routes.
Yup, you can become a MITM without even trying.
Not really. You would have to know all the other data someone else is receiving.
Yeah, so maybe it would only be used for like video streaming on CZcams and not for sensitive data.
Soo... data compression?
We are doing videos on compression next (LZW)
modulated udp
Unfortunately, this feels like a really impractical solution. It assumes there is more than one way to get information to a destination. And how do the routers know the receiver already has part of the mixed information, allowing it to undo the mix? What if one of the packets gets lost?
I believe a more practical approach is distributed networking, like peer-to-peer applications. I actually recently came across an open source project called cjdns, which lets you create a completely encrypted, redundant, peer-to-peer network, using either wired connections or tunneling over existing internet infrastructure. The protocol has a few scaling issues, but I imagine a futuristic internet would look somewhat like that - connect to the two nearest nodes (via some wireless protocol), and be connected to everybody while you forward traffic for other people.
The payload of the packet is quite large compared to the header. The coefficients/information about the mix can be encoded in the packet header.
With regard to packet loss, this is already an issue in regular internet communication: packets are resent or re-requested when they are lost. The same thing can happen here.
No, animowany111 has a point. Information can be placed into the header of the packet at the source (in this example, the video source V or audio source A) or it can be introduced into the mixed packet at the midpoint router M. So the question is, when midpoint router M receives a packet from V to user 1 and a packet from A to user 2, how can M know that some *other* midpoint router is also sending an exact replica of packet A to user 1 as well in order to mix the packets at M and know that user 1 is even going to receive a duplicate of that packet A in order to unmix it again?
Because in reality that *would* never happen unless users 1 and 2 happened to be watching the exact same pair of media streams (with the exact same bitrate!) at the exact same time indices (on both streams) simultaneously.. and with no retransmissions as that would slew the users back out of sync again, and no differences in mtu which ought to become even more pronounced with the adoption of IPv6. But even if they did meet all these criteria, then communicating this confluence to midpoint routers (or the presumably unrelated media sources even realizing the confluences exist to begin with) still doesn't make sense to me at all.
Also bear in mind that a majority of streaming content today is both encrypted on the wire and watermarked per-user due to the draconian concerns of copyright cartels. Users getting duplicated packets of content flies in the face of those requirements, which already serve to undercut a thousand other well-meaning network optimizations, CDN strategies, and client-side caching patterns.
I don't know why, but I love reading people write about stuff they know way better than I could ever hope to understand. It really puts into perspective how much there is to know about every little detail around our lives that we usually just ignore
The background tones at the 3:30 mark are a bit loud and distracting. Great video though.
Lots of issues to contend with here.
Lost/corrupted/dropped packets seem like a big concern with this concept, since they would simultaneously impact multiple data streams negatively. The working scenario seems beneficial, but you have to think about how bad the situation could get when things start to break down.
Additionally, there would be practical limits to combining packets. Say your data traverses 10 routers/hubs. It would be possible that packets could be combined multiple times along the way, requiring multiple excisions to get back to your original packet. This could easily come to take longer than waiting for your data to pass through the bottleneck.
Then there's security; today, if someone wants to obtain my enciphered data, they have to find a way to become a man-in-the-middle to snoop the packets. But with this method, my secure data could become combined with someone else's data, with the resultant packet coming to both myself and that other. After excising his data, he could hold onto my data instead of discarding it, resulting in potential data breach without any need to setup MITM.
I'm sure there are a lot of other issues that have to be thought through, carefully. Just a few off the top of my head.
True, lots of issues between theory and practice. I might contend that spending the time to figure out the issues (better crypto protocols, for example) is worth it in exchange for the benefits in terms of congestion control and throughput.
When packet-based systems were first proposed, congestion control was not well developed either. Lots of issues had to be dealt with incrementally or on the fly. And yet, the world did not end and the status quo seems semi-comfortable. It's only thanks to that experience that we can ask to think through these issues.
I'm not saying the prescription is to move fast and break things, but rather to see if the benefits justify the costs/time to implement. Your points are valid and just go to show that this (or any) future way of thinking about networks/communication has a steeper implementation path than its predecessors. But unless you're of the "stone tools were good enough for my parents so they're good enough for me" mindset, I think there are interesting implications for how we design future networks.
When the robot becomes dangerous, there is a confusion that, by chance, is harmful
This idea won't work. I get that 1+4=5, but how the hell can you combine 2 packets into 1 packet?
LotusEater This idea has been working for almost two decades my dude
Yeah, at first it looks like information is lost, since you transformed two packets into just one. But the truth is that the information isn't lost, since you are still receiving two packets (the other one arrives on a different line and was part of making the mixture). The information does not get compressed into one packet but gets distributed in a way that reaches its destination at the same time. You needed two packets and two packets you receive (only those packets have been changed in a way that is useful for the transportation)
By XORing the packet data.
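A minimal sketch of that XOR mixing, with toy values chosen so it matches the video's "1 + 4 = 5" picture (for these particular bit patterns, XOR and addition happen to agree):

```python
# Two packets share one bottleneck link by being XORed into a single packet.
# Nothing is lost: each receiver also got one of the originals on a side
# path, and XOR is its own inverse, so it can "subtract" what it knows.

a = 0b0001  # packet wanted by household 2, already delivered to household 1
b = 0b0100  # packet wanted by household 1, already delivered to household 2
mixed = a ^ b  # the single packet sent over the shared bottleneck

recovered_by_1 = mixed ^ a  # household 1 holds a, recovers b
recovered_by_2 = mixed ^ b  # household 2 holds b, recovers a
print(mixed, recovered_by_1, recovered_by_2)  # 5 4 1
```

Both households end up with both packets, yet the bottleneck carried only one packet's worth of data.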