CppCon 2019: Hartmut Kaiser “Asynchronous Programming in Modern C++”

  • Published 8 Jun 2024
  • CppCon.org
    -
    Discussion & Comments: / cpp
    -
    Presentation Slides, PDFs, Source Code and other presenter materials are available at: github.com/CppCon/CppCon2019
    -
    With the advent of modern computer architectures characterized by -- amongst other things -- many-core nodes, deep and complex memory hierarchies, heterogeneous subsystems, and power-aware components, it is becoming increasingly difficult to achieve the best possible application scalability and satisfactory parallel efficiency. The community is experimenting with new programming models that rely on finer-grain parallelism, and flexible and lightweight synchronization, combined with work-queue-based, message-driven computation. The recently growing interest in the C++ programming language in industry and in the wider community increases the demand for libraries implementing those programming models for the language.
    In this talk, we present a new asynchronous C++ parallel programming model that is built around lightweight tasks and mechanisms to orchestrate massively parallel (and -- if needed -- distributed) execution. This model uses the concept of (Standard C++) futures to make data dependencies explicit, employs explicit and implicit asynchrony to hide latencies and to improve utilization, and manages finer-grain parallelism with a work-stealing scheduling system enabling automatic load balancing of tasks.
    We have implemented such a model as a C++ library exposing a higher-level parallelism API that fully conforms to the existing C++11/14/17 standards and is aligned with the ongoing standardization work. This API and programming model have been shown to enable writing highly efficient parallel applications for heterogeneous resources with excellent performance and scaling characteristics.
    -
    Hartmut Kaiser
    CCT/LSU
    STE||AR Group
    Hartmut is a member of the faculty at the CS department at Louisiana State University (LSU) and a senior research scientist at LSU's Center for Computation and Technology (CCT). He received his doctorate from the Technical University of Chemnitz (Germany) in 1988. He is probably best known through his involvement in open source software projects, mainly as the author of several C++ libraries he has contributed to Boost, which are in use by thousands of developers worldwide. His current research is focused on leading the STE||AR group at CCT, working on the practical design and implementation of future execution models and programming methods. His research interests center on the complex interaction of compiler technologies, runtime systems, active libraries, and modern systems' architectures. His goal is to enable the creation of a new generation of scientific applications in powerful though complex environments, such as high performance computing, distributed and grid computing, spatial information systems, and compiler technologies.
    -
    Videos Filmed & Edited by Bash Films: www.BashFilms.com

Comments • 10

  • @matrixstuff3512
    @matrixstuff3512 4 years ago +5

    "Parallelization loses all its threadening behavior"

  • @DanielElliott3d
    @DanielElliott3d 4 years ago +3

    I love Hartmut's Talks.

  • @rontman
    @rontman 4 years ago +3

    Nice talk. Going to try HPX now.

  • @headlibrarian1996
    @headlibrarian1996 3 years ago +4

    It isn't clear from the code examples how the co_await steps are executed by different threads. co_await spawns a coroutine, which normally executes in the same thread as the suspended coroutine waiting for the result of the co_await.

    • @llothar68
      @llothar68 1 year ago

      Coroutines have nothing to do with threads. They are just function calls with implicit state variables.

  • @think2086
    @think2086 2 years ago

    @23:48, should be `return v < *p` I think, right?

  • @deanroddey2881
    @deanroddey2881 3 years ago +3

    The first rule of thumb is really wrong. It should be "Parallelize until the payoff is no longer worth the extra complexity." That break point can be wildly different depending on the algorithm and/or type of application. Complexity kills, and no one cares how fast it is if it can't be reliably maintained. There's a huge culture of premature optimization in C++ these days.

    • @n_x1891
      @n_x1891 2 years ago +1

      I don't think maintainability necessarily has to be related to the complexity of code.

    • @llothar68
      @llothar68 1 year ago

      I don't think the problem is premature optimization but premature generalization.

  • @ZapOKill
    @ZapOKill 2 years ago

    I would argue that the magic and power of OMP and MPI is that it is NOT part of the C++ standard. I would not try to make HPX a part of C++.