
Prolog Antipatterns: How not to do it

  • added 19. 08. 2024

Comments • 16

  • @musicarroll
    @musicarroll 3 years ago +5

    Excellent! Everything you say, Markus, is spot on! Sadly, I think I am one of those Prolog programmers who may have regressed. But I am inspired and encouraged by your talk to take heart and try harder to become a better one. Are you doing live Zoom discussions these days?

  • @jorgschuster773
    @jorgschuster773 3 years ago +6

    Great talk. It reminded me of several things. I had to think of Mercury, which enforces a more declarative style of programming. What is your take on it? I also had to think of Heidegger (I am not a believer in Heidegger), who likes to talk about "Lichtung", "clearing", a spot in the woods with fewer trees, where you suddenly are able to see a bit more. He constantly tells us: "Keep yourself in the Lichtung (to see more ...)", which is a lot like: "Do not fence in the horizon." This is very abstract advice, but nevertheless the most important kind. However, sometimes you just need to get out of the woods TODAY, no matter how. And then you will have to use violence, and that can include things as ugly as using the cut or even (yes, I know, this is horrible) doing things outside of Emacs.

  • @SylvainSoliman
    @SylvainSoliman 3 years ago +5

    Even if I mostly agree with what you say (as usual 😉) I wouldn't call all these examples "antipatterns".
    Take the factorial example: by removing the cut, you forfeit another (though completely procedural) aspect of (depth-first search) Prolog: tail recursion.
    Indeed, even if you change the order in the non-base case as when you study termination, because of the choice point you'll never get proper tail-recursion optimization.
    While this is "declaratively" non significant, it may well be *very significant* in practice. And teaching Prolog ignoring practical considerations is, in my book, a pretty bad antipattern.
    [and yes, I know about zcompare 😉]
    So, as much as I try to avoid teaching non-logical (incomplete as you call them) predicates when I teach Prolog, I also talk about them because they do have specific use cases, mostly related to optimization topics like indexing for instance (and WAM-related stuff)…
    In other words, worrying about performance is not what I'd call an anti-pattern, even if it may cost you monotonicity. The point is that it should be in full awareness (and I agree that it's not often the case…).
    Similarly, having side-effects should be a choice (not a default) but is not what I'd call an anti-pattern (and it's the case in most programming languages… [Haskell fans?]).
    Moving the state around is ok… until once again you worry about performance 😉 or even readability!
    Finally, as much as I like to teach about them, the analyses (or different execution strategies, etc.) made possible by pure monotonic Prolog actually (and unfortunately) are very difficult to demonstrate…
    Still I wholeheartedly agree that writing efficient and elegant code should go hand-in-hand and be easy, so…
    BTW I consider ISO Prolog to be a really bad standard (even if only because it is impossible to read!)…
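As a concrete reference for the trade-off discussed in this comment, here is a minimal sketch of the two factorial variants (assuming a system with `library(clpz)` or `library(clpfd)`; the predicate names are illustrative):

```prolog
% Pure version: works in all directions, and the multiplication
% can be posted *before* the recursive call, making it a tail call
% in principle. However, the choice point between the two clauses
% still prevents last-call optimization.
n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).

% Cut version: deterministic and last-call optimizable, but
% incomplete in some usage modes.
n_factorial_cut(0, F) :- !, F = 1.
n_factorial_cut(N, F) :-
    N1 #= N - 1,
    F #= N * F1,
    n_factorial_cut(N1, F1).
```

The cut version, by contrast, silently loses all answers but the first for the most general query `n_factorial_cut(N, F)`, since the first clause commits to N = 0.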

    • @ThePowerOfProlog
      @ThePowerOfProlog 3 years ago +6

      These are very good points, thank you a lot for your thoughtful feedback!
      Regarding the choice point: Is this an indication that argument indexing should also "look into" the clause body a bit? For instance, a good "argument indexing" could easily recognize that X #= 0 and X #> 0 are mutually exclusive. If the Prolog system were to take such simple arithmetic constraints that appear in the clause body into account, then we would still get "indexing" in a sense. The only drawback I have heard about this so far is that it then may not be very clear which clauses are recognized as mutually exclusive. Maybe that's a good thing in itself, to encourage a discussion about what kind of "indexing" or code analysis is actually needed for most applications? There are Prolog systems that already "index" disjunctions that follow a certain pattern, such as ( X = a ; X = b ).
      Apart from that, the specific case of n_factorial/2 is probably not the right place to worry about the additional space overhead of missing tail recursion optimization, because it already involves a number that grows super-exponentially in the number of recursive invocations.
      Impure output is definitely less problematic than incompleteness, there is no doubt in general that some of these constructs are less problematic than others. One drawback of using impure output in applications is that it will delay improvements in Prolog systems: If we are using DCGs to describe output, then the system will run out of memory if the output becomes too large, unless the Prolog system provides phrase_to_stream/2. Therefore, declaratively describing the output will add an incentive for implementors to add or improve constructs that make it more performant. As I see it, to push implementors, we need to actually write code in the way we want to do it, otherwise there is little incentive for implementors.
      Regarding different execution strategies, I think we are only at the very beginning, with SLG resolution (tabling) in various forms now provided by a few Prolog systems as one of the first alternative execution strategies.
      Thank you again, I greatly appreciate your feedback!
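The zcompare/3 variant mentioned in the parent comment makes the mutual exclusion explicit by reifying the comparison, so first-argument indexing on the auxiliary predicate avoids the choice point without a cut (a sketch, assuming `library(clpz)` or `library(clpfd)`):

```prolog
n_factorial(N, F) :-
    N #>= 0,
    zcompare(Ord, N, 0),
    n_factorial_(Ord, N, F).

% Ord is (=) or (>); indexing on it distinguishes the clauses,
% so no choice point remains once the comparison is decided.
n_factorial_(=, 0, 1).
n_factorial_(>, N, F) :-
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).
```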

    • @SylvainSoliman
      @SylvainSoliman 3 years ago +2

      Well, you're very welcome, I'm only reacting to your very nice videos ;)

  • @AleksyGrabovski
    @AleksyGrabovski 3 years ago

    Thanks for an insightful video; I'm probably going to re-watch it.
    I think the usefulness of (->)/2 lies in the fact that it can suggest to the reader that no backtracking is possible at a particular point, so it can make a program easier to comprehend. Like in this example: `p :- a -> b -> c.` This clearly tells us that *only* c/0 can succeed multiple times, without the need to read the implementations of all the predicates. This approach may also become useful if you experiment with different execution strategies and you still need to guarantee sequencing of predicates. The problem is that there is no way of enforcing this, so if a programmer mistakenly modifies a/0 to have a choice point, the Prolog system will just silently give an incomplete answer.
    Also, I think that (\+)/1 can be safely used in a pure logic program when the programmer has full control over the parameters and can guarantee that they will always be sufficiently instantiated: `q :- \+ r(a).`. Actually, I find this operator indispensable, e.g. when I'm trying to find a counter-example.
    My claim is that one can use non-pure predicates in pure Prolog as long as the impurity doesn't interfere with the rest of the program.
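As a concrete contrast for the point about (->)/2 silently losing answers, here is a sketch using if_/3 from `library(reif)` (available in several systems; `(=)/3` is the reified unification it exports):

```prolog
:- use_module(library(reif)).

% With (->)/2, committing to the condition's first answer loses
% solutions when arguments are insufficiently instantiated:
%
%   ?- ( X = a -> T = yes ; T = no ), X = b.
%   false.               % although T = no, X = b is a solution
%
% With if_/3 and a reified condition, both branches remain
% reachable, so the analogous conjunction succeeds:
%
%   ?- if_(X = a, T = yes, T = no), X = b.
%   X = b, T = no.
branch(X, T) :- if_(X = a, T = yes, T = no).
```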

  • @DaveRoberts308
    @DaveRoberts308 2 years ago

    You had me at grue cuts.

  • @remyclarke4020
    @remyclarke4020 3 years ago +2

    I agree with much of what you are presenting here; the more consistent the experience one can have with a programming language, the better.
    I think that pushing for consistent and powerful defaults (like monotonicity) is best. It makes learning Prolog much smoother, particularly when you can focus on the logic, and not the programming language itself.
    Have you already made a video about scripting in Prolog? I found that while working through a problem I have to reload and rewrite my queries often, and it would make more sense to just have the queries run on importing the code.

  • @wobey83
    @wobey83 4 months ago

    Supplement to my former comment: the same n_factorial program executes in GNU Prolog 1.5.0.

  • @DougRansom1
    @DougRansom1 7 months ago

    Do you have this in written form somewhere for reference? Could it be an additional chapter of The Power of Prolog?

  • @w-mwijnja8919
    @w-mwijnja8919 3 years ago

    What you name 'separability' is closely related to the notion of 'referential transparency'. They might be synonyms.

  • @MarioWenzel
    @MarioWenzel 3 years ago

    Are there actual useful programs that work without negation?
    In comparison, monotonic SQL queries are usually not that interesting, and we would not set a database exam without non-monotonic constructs.
    The same is true for Datalog. I have yet to see any interesting programs that work without negation.
    But of course, those models (SQL and Datalog) mostly agree, and all values are atomic values (with a bit of eye-squinting), not nested terms (including variables) as in Prolog. So there might be a bit of the interestingness coming from there.

    • @ThePowerOfProlog
      @ThePowerOfProlog 3 years ago +5

      Yes, absolutely: There are many useful Prolog programs that do not need negation. As one recent example, take David Norris's implementation of cumulative-cohort designs from clinical oncology:
      github.com/dcnorris/precautionary/blob/main/exec/prolog/ccd.pl
      It uses recent constructs like if_/3 and arithmetic constraints in the monotonic execution mode to preserve desirable logical properties.
      Many combinatorial tasks and puzzles, such as describing timetables and schedules etc. can also be expressed without negation. The most frequent misuse of negation is in cases where dif/2 can and should actually be used instead: To describe disequality of terms, we use dif/2, and we use constraints like (#\=)/2 if we know more about the domains of the involved variables. These predicates work reliably in all usage modes.
      Sometimes, we need an impure layer "over" the pure core, for example, to support a defaulty syntax that is easier to type for users that work with the application. This is the case for example with Prolog code itself, which uses a defaulty representation in clauses (conjunctions cannot be symbolically distinguished from the set of all atomic goals). In such cases, it helps to provide a simple wrapper that then allows us to work, as far as possible, in the pure core, so that non-logical predicates are restricted to a few simple interfaces.
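The dif/2 point can be made concrete with a small sketch (the predicate names are mine):

```prolog
% Negation as failure: only safe once X is instantiated.
not_b_impure(X) :- \+ X = b.

% dif/2: a constraint that is correct in all usage modes.
not_b_pure(X) :- dif(X, b).

% ?- not_b_impure(X), X = a.
% false.                     % a solution was missed: \+ X = b
%                            % fails while X is still a variable
%
% ?- not_b_pure(X), X = a.
% X = a.                     % dif(X, b) is consistent with X = a
```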

  • @audunstolpe7408
    @audunstolpe7408 1 year ago

    `var(X) :- findall(t, (X = a ; X = b), [_,_])` is mysterious code to me. Why does this work? Why is the disjunction needed? Is it equivalent to `var(X) :- findall(t, X = a, [_])`? Anybody? It is driving me crazy right now.

    • @audunstolpe7408
      @audunstolpe7408 1 year ago

      Because the latter makes more sense to me: var(X) returns true if X can be unified with a nonvar. But why have two of them?
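For readers puzzling over this: tracing findall/3, which copies each solution of its goal and undoes bindings between solutions, shows why both disjuncts are needed (queries should behave this way in any ISO system):

```prolog
% With X unbound, each disjunct succeeds once, so two solutions:
%   ?- findall(t, (X = a ; X = b), Ts).
%   Ts = [t,t].
%
% With X bound, at most one disjunct can succeed:
%   ?- X = a, findall(t, (X = a ; X = b), Ts).
%   Ts = [t].
%   ?- X = c, findall(t, (X = a ; X = b), Ts).
%   Ts = [].
%
% So the pattern [_,_] matches exactly when X is a variable.
% The one-disjunct version is not equivalent:
%   ?- X = a, findall(t, X = a, [_]).   % also succeeds
% i.e., it cannot distinguish a variable from the atom a.
```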