"these ideas are not deep, they're just good". My new favourite quote.
"It's time to have some drinks" - Creator of Go
Meanwhile Rust programmers: "Cannot borrow as immutable"
Shame the video doesn't show the slides at the right times. Had planning for such a good talk
2:09 : Concurrency is the composition of independently executing processes. 2:23 : It is about *dealing* with a lot of things at once. It is about structure. The goal is to structure things so that you can possibly employ parallelism to do a better job, but parallelism is not the end goal of concurrency. When you structure things into pieces, you need to coordinate those pieces with some form of communication. Ref: Communicating Sequential Processes (CSP) by Tony Hoare.
2:13 : Parallelism is the simultaneous execution of multiple things that may or may not be related. 2:27 : Parallelism is about *doing* a lot of things at once. It is about execution.
10:20 We don't have to worry about parallelism when we're doing concurrency. If we get concurrency right, parallelism is a free variable that we can decide on later.
12:04 Concurrent decomposition: conceptually, this is how you think about parallelism. You don't just think about running a problem statement in parallel. Break the problem down into independent components (that you can separate, understand, and get right), and then compose them to solve the whole problem together.
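A minimal Go sketch of that idea (the function name and numbers are my own, not from the talk): the work is decomposed into goroutines that communicate over a channel, and how much actual parallelism you get is left as a runtime knob.

```go
package main

import (
	"fmt"
	"runtime"
)

// sumSquares breaks the problem into one goroutine per input (the
// independent pieces), then composes the partial results over a channel.
func sumSquares(nums []int) int {
	out := make(chan int, len(nums)) // buffered: no sender ever blocks
	for _, n := range nums {
		go func(n int) { out <- n * n }(n)
	}
	sum := 0
	for range nums {
		sum += <-out // composition via communication
	}
	return sum
}

func main() {
	// Parallelism is the free variable: the same concurrent structure
	// runs unchanged whether we allow one OS thread or many.
	runtime.GOMAXPROCS(1) // swap in runtime.NumCPU(); the answer is identical
	fmt.Println(sumSquares([]int{1, 2, 3, 4})) // 30
}
```

The decomposition doesn't change when you change GOMAXPROCS; only the execution does, which is the point of the 10:20 remark.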
brilliant! thank you!
"8 gophers on the fly and books being burned at a horrific rate" lol
This guy is good, I think few people may write some code using this new language he's proposing
😂
sarcasm?
@@user-ij9vc1lw9r As an ai model you do not get sarcasm!
22:30 I solved a concurrency issue recently, and part of the solution was to pair up the channel with the request. I'm glad to see that I came up with a solution the designer of Go considers valid.
Excellent explanation of how Go makes concurrency work! Appreciate it!!
"Burning up those C++ books" 🤣
lord praise the commander
amazing talk
At 7:48, the model with 4 gophers runs faster than the one on the previous slide with 3 gophers, given that the gopher bringing the cart loaded with books and the gopher returning an empty cart work in "parallel" (meaning there is more than one cart), right?
Is there any way to tag all programmers here? Every programmer should know this. Honestly, this guy should write dialogue for Hollywood.
Every Gopher should watch this talk.
On his last slide he used a buffered channel whose length is the number of DB connections, but wouldn't a channel of size 1 make more sense, since we only need the first result?
With an unbuffered channel, each worker will block until someone receives its result over the channel. Since only the first result will be received from the channel, the remaining workers will block forever. Blocked goroutines cannot be garbage collected, so we would end up with leaked goroutines. A buffered channel lets every worker send its result whether or not there is a receiver, and finish execution cleanly.
Actually, there is no need to create a buffered channel at all. You can just wrap the send to the channel in a select statement and add a default clause to prevent a goroutine leak.
@ernestnguyen1169, in this case, you are right. But in general, would it be a good idea for a server to swallow its result if it cannot be sent immediately?
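For reference, here is a sketch of the two approaches this thread is discussing (names and the random delay are illustrative, not the slide's code). One caveat I've added: the select/default variant here keeps a one-slot buffer, because with a fully unbuffered channel a sender that finishes before the receiver is waiting would hit the default clause and the result could be dropped entirely.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// queryReplica stands in for one DB connection answering a query.
func queryReplica(id int) string {
	time.Sleep(time.Duration(rand.Intn(30)) * time.Millisecond)
	return fmt.Sprintf("replica %d", id)
}

// firstBuffered is the slide's approach: the buffer has room for every
// sender, so the slower goroutines can still deposit their results and
// exit cleanly instead of blocking forever (a goroutine leak).
func firstBuffered(n int) string {
	c := make(chan string, n)
	for i := 0; i < n; i++ {
		go func(i int) { c <- queryReplica(i) }(i)
	}
	return <-c // first result wins; the rest drain into the buffer
}

// firstSelect is the alternative from the reply: senders that cannot
// deliver immediately drop their result via the default clause. The
// one-slot buffer guarantees the very first send always succeeds even
// if the receiver is not yet waiting on the channel.
func firstSelect(n int) string {
	c := make(chan string, 1)
	for i := 0; i < n; i++ {
		go func(i int) {
			select {
			case c <- queryReplica(i): // first one wins
			default: // a result is already queued; discard ours
			}
		}(i)
	}
	return <-c
}

func main() {
	fmt.Println(firstBuffered(3))
	fmt.Println(firstSelect(3))
}
```

Both return whichever replica answers first; they differ only in whether the losing results are parked in the buffer or silently discarded, which is exactly the "swallow its result" trade-off raised above.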
Lol he's named the "Commander, Google" at 20:57
Where is the video editor? They completely forgot to change the camera when the presenter shows a new slide. Ouch.
This is actually how I do things. It's much easier to code this way. It's like I am leading my code to do something structurally. Each worker has a job. That worker does only one job.
I never tried any other design pattern in an OOP language. Just the concurrency way, because it is easier to understand "which code does this".
I just didn't know about parallelism.
"No locking or synchronization". Indirect attack on Java 🤣
CompletableFuture, ha? Virtual Threads in Java 19 are implemented even better than goroutines. Short disclaimer: I also love Go, but it's not ideal. Java is still superior in many aspects.
@@jackdanyal4329 nice, but how many companies are using Java 19? Almost none.
Like 80% of companies are still using Java 8 or 11, so you will not be able to use the new features at work.
@@rj7250a in the company where I'm working right now (one of the biggest in the Netherlands) the lowest version of Java is 17. I don't care about companies that still use 1.8; it's their problem, not mine. Using virtual threads in production is just a matter of time, and you get a much better implementation of concurrency plus solid Java with all its libraries, community, etc. At least we see that Java does something right and is trying to become more modern. But what does Go do? For several years they said "we don't need generics", and what? Now they have generics, but still no usable collection types, error handling, etc. And as I mentioned, CompletableFuture lets you use concurrency without locks and sync problems. And tbh it's much easier to understand a CompletableFuture than to trace the whole flow of channels in a non-trivial project.
@@jackdanyal4329why specifically do you think that Java virtual threads are implemented better than goroutines?
@@jackdanyal4329 "Virtual Threads in Java 19 are implemented even better than goroutines."
They're not... even... remotely... close. Java and Go have stolen a lot of concepts from one another, but Java has the burden of being a 20-ton gorilla that you need to wrestle, while Go just works. Go is faster, uses fewer resources, and so on. Java is so inferior, it's not even close.
He should have started with the performance disparity between the processor and the other components. The processor is far faster than any other component, e.g. RAM, disk, or network, even before considering wire latency. Because it is so fast, it can switch tasks quickly, and the components it interacts with keep doing their own work while the processor attends to another task. Otherwise it is impossible to understand concurrency without parallelism.
I think that's a given for any Comp Sci student.
@@metabolic_jam that’s true, but not everyone who wants to learn about concurrency and parallelism is a cs student
@@metabolic_jam except not everyone watching this is/was a comp sci student
Need a deeper stack for that
Nice shirt
What does "Commander" mean? Is that his job title or nickname? Google is not helping me with this haha....
I just assumed it somehow was related to Commander Pike in the original Star Trek TV series pilot. We are of that age. . .
Who said that concurrency is parallelism?
While I know he is knowledgeable and I learned things here, I am kind of surprised by the example at the beginning:
As if it were an "unexpected discovery" that solving a problem concurrently rather than sequentially will improve performance... isn't this obvious? Or did I miss something?
(I am not joking in this comment and not belittling the talk)
No, it won't automatically do that.
Isn't this the guy from the Penn & Teller episode of Letterman?
yes
wait what
@@AlbertBalbastreMorte czcams.com/video/fxMKuv0A6z4/video.html
He savagely roasted the folks who can't differentiate between "parallelism" and "concurrency".
What about employing AI to identify the optimal concurrency structure of an algorithm/program?
This talk was from eight years ago, long before AI was invented. If you are now typing a response to this to inform me that AI was not in fact invented in the past few years, you make me sad.
Who's here watching the video for an operating systems assignment?
did he talk over Rust Programming Language at the end? I couldn't handle that.
Beta
Terrible production value, can barely read the code on the slides.
😂
Correct me if I'm wrong; the summary of this video is: if something is concurrent, that doesn't mean it's also parallel, but if something is parallel, it will always be concurrent.
You are wrong; he says about Sawzall: "it's an incredibly parallel language but has absolutely no concurrency"
"concurrency makes parallelism easy"
Those terms are related, but they do not describe disjoint sets of things. The meanings overlap and vary by situation. In the context of programming, concurrency is the ability of your code to be "composed" into bits of logic that could be run at the same time. Parallelism (when combined with concurrency) is taking said code and running it on a 100-core machine.
That is so true. Concurrency is not parallelism; they are two very different things. 99% of "programmers" wouldn't care, since it is irrelevant to them: concurrent programming gives the illusion of parallelism. When "professional" programmers need performance, that is when "concurrency" and "parallelism" need to be understood for exactly what they are. Only by looking down at the physical hardware of a CPU can you understand how much of it is concurrency vs. parallelism. When reading about the whole OS and computer architecture, I instantly questioned today's consumer-grade CPU architecture: having more cores really doesn't help when you need raw performance. This is why Intel and AMD have separate, hella expensive CPU series specifically designed for raw performance in true parallelism; these CPU lines are sold to web servers and research companies. It's sad that consumer-grade stuff doesn't have "true parallelism". The Apple engineers are smart: it makes sense why they used server-grade Intel CPUs in their consumer desktop hardware, and it also makes sense why Apple ditched Intel and started making their own custom CPUs from ARM.
The word "concurrent" literally means "same time".
At 1:42 he literally said "[...] as it is intended to be used in computer science"
idiot
Am I the only one who finds the example of burning books highly misplaced and disrespectful?
Yes, yes you are.
It is C++ joke, it would be distasteful if it was real
[insert that bjarne gigachad quote about people loving to hate the most useful programming languages here]
"highly misplaced and disrespectful"
burn the snowflakes!! or as he said, the "lesser minds" : D
It is provocative, but clearly not an endorsement of actual book-burning
"Dealing with things at the same time, doing things at the same time..." Philosophers call this demagogy. Concurrency and parallelism are the same, man. Don't try to mix definitions to create new ones, because if you can't explain something to a 7-year-old kid using only your hands, you don't understand it well enough yet.
Disciplines usually have precise definitions of words to encapsulate concepts. Math, computer science, etc., all do this. The formal in-discipline definitions usually don't perfectly match the colloquial dictionary definitions you are alluding to. The formal definition of concurrency in computer science has been around for decades.
Артем Арте, dude, go home. You are drunk and/or high.
what the fuck are you talking about
Let's say you're right, and he's somehow wrong for trying to distinguish between parallelism and concurrency: what phrases do you suppose people like Rob Pike and the plethora of library/language designers who are reasoning about these problems should use? Should they invent new phrases? I think you're a better software developer if you're able to think about this talk when you find yourself speaking with someone who is carefully differentiating between "parallelism" and "concurrency". To me this phrasing becomes very important when we're thinking about concurrency in UI design where thread confinement is a factor: we very much want concurrent decomposition, not for a throughput increase but for a latency reduction. That is, we want concurrency without parallelism as a way to control responsiveness, and if you don't understand this, well, it's good that you don't have my job.
Further, as to your use of Einstein's maxim [I've never heard the "hand gestures only" variant]: what do you think Rob's use of gophers is supposed to be, if not an attempt to explain something to a six-year-old? He's trying, I think rather successfully, to explain a very fine point using very elementary components.
LMAO! Right, it's Rob Pike that doesn't understand parallelism. Good one, Ya almost had me. Hilarious!