Trading at light speed: designing low latency systems in C++ - David Gross - Meeting C++ 2022

  • Published 15. 05. 2024
  • Trading at light speed: designing low latency systems in C++ - David Gross - Meeting C++ 2022
    Slides: slides.meetingcpp.com
    Survey: survey.meetingcpp.com
    Making a trading system "fast" cannot be an afterthought. While low latency programming is sometimes seen under the umbrella of "code optimization", the truth is that most of the work needed to achieve such latency is done upfront, at the design phase. How to translate our knowledge about the CPU and hardware into C++? How to use multiple CPU cores, handle concurrency issues and cost, and stay fast?
    In this talk, David Gross, Auto-Trading Tech Lead at global trading firm Optiver will share industry insights on how to design from scratch a low latency trading system.
  • Science & Technology

Comments • 68

  • @pranaypallavtripathi2460 1 year ago +121

    If this man writes a book, something like "Introduction to High Performance Trading", then I am buying it!

    • @payamism 1 year ago +5

      Do you know any material or anyone who publishes regarding the subject?

    • @workingaccount1562 1 year ago +2

      @@payamism Quant Galore

    • @boohoo5419 3 months ago

      this guy is totally clueless and you are even more clueless..

    • @draked8953 2 months ago

      @@boohoo5419 how so?

  • @hhlavacs 1 year ago +2

    Excellent talk, I learned a lot!

  • @edubmf 1 year ago +19

    Interesting and always love speakers who give "further reading".

  • @statebased 1 year ago +50

    Array-oriented designs are at the core of the low-level model of a trading system. And while this array view is much of what this talk is about, it is important enough to reemphasize. Also, template-based objects are handy for gluing your arrays together so as to fully optimize the result.

    • @sui-chan.wa.kyou.mo.chiisai 1 year ago +2

      Is it similar to data-oriented programming in games?

    • @santmat007 8 months ago +4

      @@sui-chan.wa.kyou.mo.chiisai Yes.... DOP rules over all... OOP to the trash 😋
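For readers connecting this to data-oriented design: a minimal sketch (hypothetical field names, not from the talk) of the array-oriented layout being discussed. Scanning one field across many instruments touches far less memory when each field lives in its own contiguous array:

```cpp
#include <vector>

// "Array of structs": scanning theo drags bid/ask/padding through cache too.
struct InstrumentAoS {
    double bid, ask, theo;
    char   padding[40];  // other fields the scan never needs
};

// "Struct of arrays": one contiguous array per field.
struct InstrumentsSoA {
    std::vector<double> bid, ask, theo;
};

double sum_theos(const InstrumentsSoA& s) {
    double total = 0.0;
    for (double t : s.theo)  // contiguous scan of exactly the data needed
        total += t;
    return total;
}
```

The hardware prefetcher rewards the second layout: consecutive loads from one array keep the cache and TLB working on useful bytes only.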

  • @nguonbonnit 1 year ago

    Wow! So great. You helped me a lot.

  • @IonGaztanaga 1 year ago +22

    At 23:00, when stable_vector is explained (built using boost::container::static_vector), just mentioning additional info for viewers: boost::container::deque has an option to configure the chunk size (called block size in Boost).

  • @wolpumba4099 1 year ago +3

    Nice! Some good examples and discussion of queues for few producers and many consumers.

  • @khatdubell 1 year ago +21

    "its hard to crack the S&P 500"
    Explain that to congress.

  • @melodiessim2570 1 year ago +6

    Where is the link to the code for Seqlock and SPMC shared in the talk ?

  • @firstIndian-ez9tt 3 months ago +1

    Love you sir from India Bihar ❤❤❤

    • @aniketbisht2823 2 months ago

    std::memcpy is not data-race safe as per the standard. You could use std::atomic_ref to read/write individual bytes of the object.

  • @kolbstar 1 year ago

    For the SPMC Queue V2 at 45:00, why does he have an mVersion at all? If the block isn't valid until mBlockCounter has been incremented, then readers don't risk reading during a write, no? Or, if you are reading while it's writing, it's because you've lagged so hard that the writer is lapping you.

  • @thisisnotchaotic1988 20 days ago

    I think there is a flaw in this design. Since the SPMC queue supports variable-length messages, if a consumer is lapped by the producer, the mVersion field the consumer thinks it is spinning on is probably not a version counter field at all. It may well be spinning on some random bytes right in the middle of mData. Then if those random bytes happen to match the version the consumer is expecting (although the probability is very low), it could be disastrous. The consumer does not know it was lapped at all, and continues processing the meaningless data.
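One way to see why fixed-size slots avoid the variable-length pitfall described above: if every slot keeps its version counter at a known offset, a reader can always re-check that counter after copying and discard the read when it was lapped mid-copy. A minimal single-producer sketch under that assumption (hypothetical names, simplified far below the talk's real queue; also note the concurrent plain-memory read is formally a data race in standard C++, as other comments here point out):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <optional>

template <typename T, size_t N>
class SpmcRing {
    struct Slot {
        std::atomic<uint64_t> version{0};  // odd = write in progress
        T data{};
    };
    std::array<Slot, N> mSlots;
    uint64_t mWriteIndex = 0;  // touched by the single producer only

public:
    void push(const T& value) {
        Slot& s = mSlots[mWriteIndex % N];
        uint64_t v = s.version.load(std::memory_order_relaxed);
        s.version.store(v + 1, std::memory_order_release);  // mark busy (odd)
        s.data = value;
        s.version.store(v + 2, std::memory_order_release);  // done (even)
        ++mWriteIndex;
    }

    // Returns nullopt if the slot is mid-write, never written,
    // or was overwritten (reader lapped) during the copy.
    std::optional<T> read(size_t index, uint64_t expectedVersion) const {
        const Slot& s = mSlots[index % N];
        if (s.version.load(std::memory_order_acquire) != expectedVersion)
            return std::nullopt;
        T copy = s.data;
        if (s.version.load(std::memory_order_acquire) != expectedVersion)
            return std::nullopt;  // lapped or torn: discard
        return copy;
    }
};
```

Because the version lives at a fixed place per slot, a lapped reader spins on a real counter, never on message payload bytes.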

  • @pouet843 1 year ago +14

    Very nice. I'm curious: how do you log in production without sacrificing performance?

    • @JoJo-fy2vb 1 year ago +3

      Only memcpy the raw args in the main thread and let the logging thread format the string and create the logs.

    • @Michael_19056 1 year ago +13

      Record args in binary form, record format string only once. Use thread local buffers to avoid contention. NEVER rely on delegating work to another thread except for handing off full instrumentation buffers. View logs offline by reconstituting the data back into readable format.
      I've been using a system like this for 10-15 years. Logging overhead, if done wisely, can easily reach single digit nanoseconds per entry. Even lower if you consider concurrency of logging many threads simultaneously.

    • @mnfchen 1 year ago +3

      He mentioned this but all log events are produced to a shared memory queue, which is then consumed by a consumer that then publishes it to, say, TimeseriesDB. Using the SeqLock idea, publisher isn't blockable by consumer, and the consumers are isolated from each other.

    • @_RMSG_ 1 year ago

      @@Michael_19056 Hi, why is using another thread for logging bad? Let's say, theoretically, that we could guarantee the logging thread will never thrash the same cache as the main function; would it still interfere? And if the added instructions required to save that data "in the same breath" are so light that they only cost nanoseconds, is it complicated to implement?

    • @Michael_19056 1 year ago

      @@_RMSG_ Sorry, I only saw your reply just now. In my experience, it would take longer to delegate the data to another thread than to simply record the data with the current thread. Again, the most efficient approach is to use a thread_local buffer to copy the arguments into, so there is no locking or synchronization required for the thread to log its own args.
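The scheme described in the replies above (record raw argument bytes plus a format-string ID into a thread_local buffer, and format offline) might be sketched like this. All names are hypothetical, not Optiver's actual logger:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Hot path: append a format-string ID plus the raw bytes of each argument.
// No string formatting, no locks; an offline tool reconstitutes the text.
struct LogBuffer {
    std::vector<std::byte> bytes;

    template <typename... Args>
    void log(uint32_t formatId, const Args&... args) {
        append(formatId);
        (append(args), ...);  // fold: copy each argument's raw bytes
    }

    template <typename T>
    void append(const T& value) {
        static_assert(std::is_trivially_copyable_v<T>,
                      "log args must be raw-byte copyable");
        const auto* p = reinterpret_cast<const std::byte*>(&value);
        bytes.insert(bytes.end(), p, p + sizeof(T));
    }
};

// One buffer per thread: no contention between logging threads.
thread_local LogBuffer tlsLog;
```

A production version would use a preallocated ring rather than a growable vector and hand off full buffers to a drain thread, but the hot-path cost is essentially a few memcpys.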

  • @yihan4835 8 months ago

    My question is that std::unordered_map is still not very efficient, because the pointer itself lives on the heap and you get at least one indirection, since elements are stored as a pointer to a pointer in the bucket. Am I mistaken somehow?

    • @dinocoder 1 month ago

      I was wondering the same thing. I have three theories. One, most instruments are added to the store at construction time (or in one large chunk) and the memory for the pointers happens to be allocated sequentially/contiguously, which is easier given that the pointer is significantly smaller than the Instrument struct. Two, they know how the allocator they're using works, or have implemented their own (they do say they don't include all the details), and know it will likely allocate contiguous addresses, again made easier by the pointer being smaller than the Instrument struct. Three, they could reserve space for the map at construction time (again, they say they don't include all the details).
      Imo, reserving space seems pretty straightforward and I imagine they could be doing something like this. It would be easier to tell if we knew how dynamic the number of instruments is... but I imagine for a given application it is relatively consistent and is something that would be configurable or deducible.
      Good chance I'm missing something too, but these are just my thoughts.

    • @sidasdf 22 days ago

      Yes, you are right that it is a couple of jumps, but this misses the bigger picture of what the design choice accomplishes.
      Better locality. You want the data in your program to be close together. Everything on your computer wants the data to be close together: your hardware, if it sees you make consecutive memory accesses, WANTS to preload a big chunk of memory; your page-table address translation wants you to stay in the same few pages so you don't have to do an expensive page-table walk; your L2/L3 caches don't want to constantly be cleaning themselves out.
      And so part of the game is the tiny optimizations, the instruction-level battle (such as avoiding the indirection you mention). But individual instructions are so fast anyway; most of the latency in a single-threaded program like this really comes from TLB lookups and trips to RAM.
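One common way to reconcile the two replies above — keep a hash map for lookup but keep the hot path contiguous — is to store the objects in a reference-stable container and let the map hold only indices, so the map is touched only on the cold path. A hypothetical sketch, not the talk's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <unordered_map>

struct Instrument { uint64_t id; double theo; };

class InstrumentStore {
    std::deque<Instrument> mInstruments;          // references stay stable on growth
    std::unordered_map<uint64_t, size_t> mIndex;  // id -> index, cold path only

public:
    Instrument& add(uint64_t id) {
        mInstruments.push_back({id, 0.0});
        mIndex.emplace(id, mInstruments.size() - 1);
        return mInstruments.back();
    }
    Instrument* find(uint64_t id) {               // cold path: one hash lookup
        auto it = mIndex.find(id);
        return it == mIndex.end() ? nullptr : &mInstruments[it->second];
    }
    auto begin() { return mInstruments.begin(); } // hot path: iterate directly,
    auto end()   { return mInstruments.end(); }   // never through the map
};
```

The hot loop then walks mInstruments in memory order and pays the unordered_map's pointer chasing only when resolving an id, not per tick.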

  • @broken_abi6973 1 year ago +4

    At 33:00, why does it use memcpy instead of a copy assignment?

    • @manit77 1 year ago +5

      Copying large blocks of memory or large nested structs is more efficient using memcpy.

    • @_RMSG_ 1 year ago

      @@manit77 Can't someone overload the assignment operator for such structs to ensure the use of memcpy?

    • @shakooosk 1 year ago +1

      Because a copy assignment might have control flow and branches.
      Imagine this: while the copy assignment is executing in the reader, a 'write' operation is taking place on another thread. At first glance that might seem OK, since the value will be discarded when the version check fails in the reader. However, it is dangerous because it might leave the logic in an unpredictable state.
      For example:
      if (member_ptr != nullptr) { use_member(*member_ptr); }
      You can see how the check can pass, and before the body of the if-statement executes, the writer assigns nullptr to member_ptr and boom, you crash.
      So the solution is to either do memcpy and hope for the best (if it goes wrong, it will crash spectacularly most of the time, which should be a good indication you're doing something wrong), or, better, constrain the template parameter to be trivially_copyable.

    • @shakooosk 1 year ago

      @@manit77 No, this has nothing to do with efficiency. It's about correctness; check my reply to the OP.
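The trivially_copyable constraint suggested above can be enforced at compile time, so a SeqLock-style snapshot never runs user code (and its branches) mid-copy. A small sketch with a hypothetical name:

```cpp
#include <cstring>
#include <type_traits>

// Snapshot src into dst as raw bytes. The static_assert rejects any type
// whose copy would run user code (virtual members, owning pointers, etc.),
// which is exactly what makes a torn read dangerous.
template <typename T>
void unsafe_snapshot(T& dst, const T& src) {
    static_assert(std::is_trivially_copyable_v<T>,
                  "SeqLock payloads must be memcpy-safe");
    std::memcpy(&dst, &src, sizeof(T));  // no branches, no user code
}
```

With the constraint in place, a torn copy can only produce garbage field values, which the reader's version check will discard, never a crash inside a copy constructor.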

  • @user-qh2le5dz3s 1 year ago

    I want to know your tick-to-order latency and jitter.

  • @myhouse-yourhouse 1 year ago +1

    Optiver's competitors beware!

  • @sb-zn4um 1 year ago +3

    Can anyone explain how the write is setting the lowest bit to 1? Is this a design feature of std::atomic? 34:23

    • @Alex-zq1yy 1 year ago +2

      Note that the write increments a counter by one, copies, then increments by one again. So if a consumer reads in the middle of a write, the counter is odd (i.e. the lowest bit is 1). Only when writing is done is it even again.

    • @kolbstar 1 year ago +1

      Remember his logic is that if the mVersion is odd, then it's currently being written. (int & 1)==0 is just an ugly version of an "is even" function.

    • @gabrielsegatti8017 6 months ago

      @@Alex-zq1yy What happens in the scenario where we have 2 writers: Writer A increments a counter by one, and is now writing. Then, while the writing is in progress, Writer B increments value by one as well (to then start writing). Now, before Writer A increments the counter again, consumer reads and counter is even, despite none of the writes being completed.
      Wouldn't that be possible to happen? Perhaps the full implementation also checks preemptively if the lowest bit is 1. Then this problem wouldn't exist.

    • @dareback 6 months ago

      @@gabrielsegatti8017 The code comment says one producer multiple consumers, so there can't be two or more writers.
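The odd/even scheme discussed in this thread might be sketched as a minimal single-writer SeqLock. This is a hypothetical simplification, not the talk's exact code, and as other comments note, the concurrent non-atomic access is technically undefined behavior per the standard (std::atomic_ref is the conforming alternative):

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <type_traits>

template <typename T>
class SeqLock {
    static_assert(std::is_trivially_copyable_v<T>);
    std::atomic<uint64_t> mVersion{0};
    T mValue{};

public:
    void store(const T& value) {  // single writer only
        uint64_t v = mVersion.load(std::memory_order_relaxed);
        mVersion.store(v + 1, std::memory_order_release);  // odd: writing
        std::memcpy(&mValue, &value, sizeof(T));
        mVersion.store(v + 2, std::memory_order_release);  // even: done
    }

    T load() const {  // any number of readers
        T copy;
        uint64_t before, after;
        do {
            before = mVersion.load(std::memory_order_acquire);
            std::memcpy(&copy, &mValue, sizeof(T));
            after = mVersion.load(std::memory_order_acquire);
        } while (before != after || (before & 1) != 0);  // retry on torn/odd
        return copy;
    }
};
```

The single-writer restriction @dareback mentions is what makes the scheme sound: two writers could interleave their increments and present an even counter over a half-written value.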

  • @guangleifu5384 1 year ago +4

    Which exchange can give you a trigger-to-trade of 10ns? You probably don't mean the exchange timestamp but rather your capture timestamp on your wire.

    • @BlueCyy 1 year ago

      Haha, I see you are here as well.

    • @BadgerStyler 1 year ago +2

      I was wondering about that too. If the wire between the exchange server and the clients' machines is more than 1.5m long then it's not even physically possible. He has to mean the wire-to-wire latency

    • @andrewcampbell9926 1 year ago +12

      I work at a similar trading firm to Optiver and when we measure trigger to trade the trigger is the time at which we see the exchange's packet on our switch. I think it's standard in the business to refer to it like that as no client of the exchange can see the packet before it reaches the client's switch.

    • @davejensen5443 1 year ago +3

      The secret to low network latency is to be co-located in the exchange's data center. Even ten years ago it was worth it.

    • @Lorendrawn 6 months ago

      @@davejensen5443 Occam's razor

  • @var3180 1 year ago +6

    How does Rust compare to this?

    • @joelwillis2043 1 year ago

      trash

    • @isodoublet 4 months ago

      I imagine it would be tricky to write the instrument container in (safe) Rust since it must hold a bunch of stable references. The concurrent data structure would probably be challenging as well since the same borrowing rules prevent the kind of "optimistic" lock-free operation (though keep note that, as written, the SeqLock & friends code is UB in C++).

  • @AndrewPletta 1 year ago +1

    What advantage does stable_vector provide that std::array does not?

    • @BenGalehouse 1 year ago +4

      The ability to add additional elements. (without starting over and invalidating existing references)

    • @JG-mx7xf 10 months ago

      @@BenGalehouse Just allocate an array large enough. If you know you have 100 instruments and 100 new ones created intraday on average... just use a normal vector preallocated to a size of 1k. That way you are sure you don't invalidate anything.

    • @thomasziereis330 8 months ago

      The stable vector shown here has constant lookup time, if I'm not mistaken, so that's a big advantage.
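The chunked design this thread is discussing can be sketched as follows: growable unlike std::array, reference-stable unlike std::vector, and still O(1) indexed lookup (chunk = i / N, offset = i % N). A hypothetical simplified version, not the talk's actual stable_vector:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

template <typename T, size_t ChunkSize = 256>
class StableVector {
    std::vector<std::unique_ptr<T[]>> mChunks;  // only this vector reallocates
    size_t mSize = 0;

public:
    T& push_back(const T& value) {
        if (mSize % ChunkSize == 0)  // current chunk is full: add one
            mChunks.push_back(std::make_unique<T[]>(ChunkSize));
        T& slot = mChunks[mSize / ChunkSize][mSize % ChunkSize];
        slot = value;
        ++mSize;
        return slot;  // chunk memory never moves, so this stays valid
    }
    T& operator[](size_t i) { return mChunks[i / ChunkSize][i % ChunkSize]; }
    size_t size() const { return mSize; }
};
```

Growth reallocates only the small vector of chunk pointers, never the elements themselves, which is why references handed out earlier survive.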

  • @gastropodahimsa 1 year ago +4

    Undamped systems ALWAYS devolve to chaos...

  • @stavb9400 3 months ago

    Optiver is a market maker, so the requirements are a bit different, but generally speaking, trading at these time scales is just noise.

  • @JamieVegas 1 year ago

    The slides don't exist at the URL.

    • @MeetingCPP 1 year ago

      Seems like the speaker didn't share them. :/

  • @sisrood 4 months ago +1

    I really didn't understand the 10 nanosecond latency.
    Anyone here could help?

    • @dinocoder 1 month ago

      It says on the diagram that they have a trigger price at the FPGA... so I'm assuming they have something ready to send back to the exchange as soon as they receive a message as long as the incoming message fits certain criteria. So, most of the 10 nanoseconds is probably just physical time it takes for the message to get to the FPGA, compare bits, and send something back.

    • @dinocoder 1 month ago

      Either that, or a commenter below is correct and the 10ns just represents the time at the FPGA.

  • @doctorshadow2482 1 year ago

    What is this "auto _" at czcams.com/video/8uAW5FQtcvE/video.html? Is this underscore just a way to say "unneeded variable", or is there something new in C++ syntax?

    • @MeetingCPP 1 year ago

      '_' is simply the variable's name in this case, likely chosen because the variable isn't even used.

    • @doctorshadow2482 1 year ago

      @@MeetingCPP Thanks for the clarification. I remember that some years ago even the use of a '_' prefix in variable names in C/C++ was reserved for implementation needs; now even '_' alone is used. Funny usage, though.

    • @MeetingCPP 1 year ago

      @@doctorshadow2482 Well, it's not a C++ invention; I've seen it used as a popular placeholder variable name (because the variable needs a name) in code snippets in other programming languages.
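A minimal example of the usage being discussed: here '_' is an ordinary identifier naming a variable kept only for its RAII side effects. (C++26 later adopts '_' as a true placeholder with special rules, but at the time of this talk it was just a name.)

```cpp
#include <mutex>

std::mutex m;

void critical() {
    // The lock_guard exists only for its constructor/destructor side
    // effects; naming it '_' signals "deliberately unused".
    std::lock_guard<std::mutex> _(m);
    // ... critical section ...
}   // '_' destroyed here, releasing the mutex
```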

  • @BunmJyo 9 months ago

    ❤😂🎉😅 Judging by the amount of hair, he must be an expert 👍

  • @dallasrieck7753 6 months ago

    Who can print money the fastest? Same thing 😉

  • @mohammadghasemi2402 9 months ago +2

    He was very knowledgeable but his presentation was not very good. He should have slowed down his thought process for people like me who are not familiar with the subject matter, so that we could follow him. But I should thank him anyway for the things I picked up from his talk, like the stable_vector data structure.