CppCon 2019: Eric Niebler, Daisy Hollman “A Unifying Abstraction for Async in C++”

  • Uploaded 14 May 2024
  • CppCon.org
    -
    Discussion & Comments: / cpp
    -
    Presentation Slides, PDFs, Source Code and other presenter materials are available at: github.com/CppCon/CppCon2019
    -
    Async in C++ is in a sad state. The standard tools -- promises, futures, threads, locks, and std::async -- are either inefficient, broken, or both. Even worse, there is no standard way to say where work should happen. Parallel algorithms, heterogeneous computing, networking & IO, reactive streams, and more: all critically important foundational technologies that await a standard abstraction for asynchronous computation.
    In this talk, Eric Niebler and David Hollman dig into the Standard Committee's search for the basis operations that underpin all asynchronous computation: the long-sought Executor concept. The latest iteration of Executors is based on the Sender/Receiver programming model, which provides a generalization of many existing paradigms in asynchronous programming, including future/promise, message passing, continuation passing, channels, and the observer pattern from reactive programming. It also has surprising and deep connections to coroutines, which further demonstrates the model’s potential to be a truly unifying abstraction for asynchronous programming in C++20 and beyond.
    Eric and David will present the short-term and long-term directions for Executors in ISO Standard C++, illustrating the design by walking through several implementation examples. They will talk about the direct connection between coroutines and the Sender/Receiver model and discuss what it means for the future of asynchronous APIs in C++. Finally, they will cover how the restrictions imposed by the Executors model should affect the way you write code today so your code is ready for the next big revolution in parallel and concurrent C++ programming.
    -
    Eric Niebler
    Facebook
    Sr. Dev.
    Seattle
    I've been doing C++ professionally for the past 20 years, first for Microsoft, then as an independent consultant. Right now, I'm working on bringing the power of "concepts" and "ranges" to the Standard Library with the generous help of the Standard C++ Foundation. Ask me about the future of the Standard Library, or about range-v3, my C++11 reference implementation of ranges.
    Daisy Hollman
    Sandia National Labs
    Senior Member of Technical Staff
    Livermore, California
    Dr. Daisy Hollman has been involved in the ISO C++ standards committee since 2016 and has contributed to a number of papers in that time, including `mdspan`, `atomic_ref`, and, most prominently, executors and futures. Since finishing a Ph.D. in computational quantum chemistry at the University of Georgia in 2013, she has mostly worked on programming models for computational science, data science, and related fields. She joined Sandia National Labs in 2014 and has worked on several different programming-model projects since then.
    -
    Videos Filmed & Edited by Bash Films: www.BashFilms.com

Comments • 31

  • @BowBeforeTheAlgorithm
    @BowBeforeTheAlgorithm 3 years ago +5

    This one was a bit of a game changer for me. I had to listen to it three times to solidify things. I just wish Eric had kept the figure showing the promise/future structure on more of the slides, as I'm not an expert on that specification. Eventually I drew the figure myself and referred to it as Eric worked through examples. I suspect he was probably talking to a room full of experts who didn't need the visual aid. Regardless, this was a great talk. Much appreciated.

  • @mamahuhu_one
    @mamahuhu_one a month ago

    @16:19 was a good joke: "oops, got ahead (a head) of myself"

  • @SamMason0
    @SamMason0 4 years ago +5

    In case anybody else is interested in the talk that uses coroutines to hide memory latency, it's:
    Gor Nishanov, "Nano-coroutines to the Rescue! (Using Coroutines TS, of Course)", CppCon 2018
    The whole talk is good, but this is where the background stops and he starts showing how coroutines can help: czcams.com/video/j9tlJAqMV7U/video.html
    and the performance change is shown here: czcams.com/video/j9tlJAqMV7U/video.html

  • @think2086
    @think2086 4 years ago

    Can someone comment on the use of std::forward and the rvalue casts used throughout Eric's presentation before things like the call operator, etc.? I'm having trouble wrapping my brain around what they actually achieve/avoid. Thanks!
    For example, @36:45:
    forward(f)( (R&&) r );
    as opposed to simply:
    f( move(r) );
    So here, he's casting r to an rvalue, and forwarding f as an rvalue if it was bound to the incoming argument as an rvalue. But then he's just calling operator() on it anyway, so why was this necessary? operator() isn't doing anything with f itself anyway, right (f presumably becoming the implicit "this" argument in the call)?
    Thanks!

    • @BowBeforeTheAlgorithm
      @BowBeforeTheAlgorithm 3 years ago +4

      The short version is that forwarding r as an rvalue reference ((R&&) r) lets him defer the actual move until he is ready. Nothing is copied or moved while the reference is passed along: with his version he can pass the rvalue reference up several layers and then do just one move operation at the final destination, with no copying between functions. Your move(r) also avoids a copy at that one call site, but it unconditionally treats r as an rvalue, whereas the forwarding cast preserves whatever value category the caller used. Hope that helps.

    • @rinket7779
      @rinket7779 4 months ago

      He said it's just due to slideware; in practice he'd use std::forward or std::move.

  • @alexmopleen7944
    @alexmopleen7944 4 years ago +8

    Great talk, really enjoyed it! Thank you, David and Eric!
    And it's just as interesting seeing Vinnie Falco in the comments finding it complex. Nothing but respect for him too, don't get me wrong.
    Can anyone help me figure out whether this can be done in C++11? I'm completely failing to come up with a way to fake returning a generic lambda from a function.

    • @EricNiebler
      @EricNiebler 4 years ago +7

      Thanks! Generic lambdas are nothing but class types with templated function call operators. You can go that route pre-C++11. The only drag is having to define the class at namespace scope.

  • @xinpingzhang4506
    @xinpingzhang4506 3 years ago

    10:15 If you take the same processes A and B and make them concurrent, you are still not guaranteed to get a 2. I don't get the presenter's point.

  • @YourCRTube
    @YourCRTube 4 years ago +1

    Wonder how much in compile times and code size this will all cost, TANSTAAFL and all.

    • @manuelfehlhammer6424
      @manuelfehlhammer6424 2 years ago +1

      Compile times/code size because of generated template classes? Really? Even the old, much-criticized std::future/promise are templates...
      But, if you follow the presentation, they are far inferior in runtime performance compared to the approach Eric shows here. And if you are doing async/multithreaded programming, runtime performance is the central thing you are doing all of this for; a slightly longer build time hardly matters!

  • @TerminalJack505
    @TerminalJack505 4 years ago +1

    So, at 30:00, you don't actually execute the task until you are ready to wait for it to complete. I don't see how this is any different from simply running the task on the same thread.

    • @EricNiebler
      @EricNiebler 4 years ago +10

      That's true. Now imagine algorithms like when_all or when_any, which encapsulate different fork/join strategies, programmed to the same abstraction, letting you build a whole task graph. Further imagine a spawn algorithm that launches the task graph and returns a future to it. The possibilities are endless.

  • @thevinn
    @thevinn 4 years ago +23

    Does anyone else think this is over-engineered and overly complex?

    • @steamyprogramming666
      @steamyprogramming666 4 years ago

      Yeah, honestly I was under the impression that the standards committee was working toward automatic parallelism where applicable.
      But a lot of this talk is covering futures and promises which is all old hat. The senders and receivers are just an abstraction over futures and promises to eliminate the potential errors you could make leveraging futures and promises.

    • @iddn
      @iddn 4 years ago +9

      No more so than C++'s current async offerings. They're totally right about std::future being crap though

    • @Omnifarious0
      @Omnifarious0 4 years ago +1

      Not me. I think it's just poorly explained. I made something a lot like this as an attempt to create a way of having the compiler automatically handle the inversion of control problem you get with event driven systems.
      I prefer Mercurial and need to find new Mercurial hosting, but for now it can be found on Github. It's called Sparkles - github.com/Omnifarious/Sparkles

    • @jamesjanzen2604
      @jamesjanzen2604 4 years ago +2

      Vinnie Falco in the comments section saving me from using this video to procrastinate on learning Asio/Beast. Thanks to you good sir.

    • @tiagocardoso4702
      @tiagocardoso4702 4 years ago +2

      Dunno... I'm using a lot of laziness and composition to improve my C++ code's parallelism these days, mainly a pool of threads stacked on an io_context's run() and using bind to post() to the io_context or a strand. But post() completely lacks a return (error, value, or cancel) model. I don't like general function callbacks, so I'm stuck passing a pointer back to the caller when posting, which requires code tightly coupled to the caller (though it improves readability, navigability in IDEs, and ease of understanding). IMHO.