MathZero, The Classification Problem, and Set-Theoretic Type Theory - David McAllester

  • Published 5 Sep 2024

Comments • 3

  • @wdacademia2329 • 2 years ago

    Good talk. I'm currently working through Bourbaki Vol. I and just want to mention the correspondence between the language used in this talk and the language used by Bourbaki (translated to English):
    • A (boolean) formula in a formal language is a "relation" to Bourbaki.
    • A structure type is a "species of structure".
    • A signature (a structure type without its axioms) is a "typification".
    • A formula that satisfies isomorphism congruence over structures of a given type is a "transportable relation" (no practical criterion for transportability is given).
    • A cryptomorphism between structures on the same base sets is "equivalence by means of procedures of deduction", which is shown to be isomorphism congruent in the following sense: f is an isomorphism between s, s' : S iff it is an isomorphism between the equivalent structures t, t' : T.
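
    To make the isomorphism-congruence condition in the last two items explicit (this is my paraphrase, not Bourbaki's own notation): a relation R over structures of type S is transportable when

        \forall s, s' : S .\; (s \cong s') \rightarrow (R(s) \leftrightarrow R(s'))

    i.e. R depends only on the isomorphism class of the structure, which is why isomorphisms carry over between cryptomorphic presentations.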

  • @XetXetable • 3 years ago

    I would say that, from a constructivist perspective, the problem with your semantics isn't that they're too simple; it's that they don't make sense in concrete terms. e1 = e2 isn't a boolean if equality isn't decidable on the type of e1 and e2. If they are real numbers, then there is, in general, no method that can render that expression into a boolean. If it's a heterogeneous equality working on raw expressions, then any syntactically distinct but semantically identical e1 and e2 will have e1 = e2 evaluate to false. ∀x : s . P(x) also cannot be a boolean in general if s is an infinite domain. There are some technicalities: if P is already decidable and s has a compact synthetic topology, then ∀x : s . P(x) will be decidable even if s is infinite, but the algorithmic content of such things needs to be spelled out to make them sensible. The problem with these sorts of non-constructive semantics is that they implicitly assume that questions that can never be reasonably answered can be trivially answered, and that's just not true. Until you build a phone to contact God (or Plato) to answer such questions, the "semantics" is fundamentally disconnected from the real world. And if you do build such a thing, it's unclear why we'd want AIs anymore.
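
    To illustrate that compactness technicality concretely: the classic example is Escardó-style exhaustive search over the Cantor space of infinite binary sequences. Here is a minimal Haskell sketch of it (standard folklore code, not anything from the talk):

        type Cantor = Integer -> Bool        -- infinite binary sequences

        -- prepend one bit to a sequence
        (#) :: Bool -> Cantor -> Cantor
        (x # a) i = if i == 0 then x else a (i - 1)

        -- search for a witness of p, following Escardó's construction
        find :: (Cantor -> Bool) -> Cantor
        find p = if p (False # find (\a -> p (False # a)))
                 then False # find (\a -> p (False # a))
                 else True  # find (\a -> p (True # a))

        forsome, forevery :: (Cantor -> Bool) -> Bool
        forsome p  = p (find p)
        forevery p = not (forsome (not . p))

    forevery decides a universally quantified statement over an infinite domain, and it terminates for every total p because, by continuity, p can only ever inspect finitely many bits of its argument; that finiteness is exactly the compactness being used. For example, forevery (\a -> a 2 || not (a 2)) evaluates to True, while forevery (\a -> a 0) evaluates to False.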
    With all that being said, the general project seems misguided. Given a program p and an encoding of it, denoted [p], all type-checking algorithms can be rendered as simple syntactic tests. We might have a test "isSortingFunction" which may, at worst, shove [p] into a search procedure trying to prove that the specification of list sorting is met by [p]. If this search procedure halts, then that observation of halting has implications for many other possible tests. For example, from that one observation, we may come to believe "∀x . isSortableList([x]) → isSorted(p(x))", corresponding to an infinite number of tests that we can't run in reality. Different type theories will often decorate programs with annotations, leading to different encodings of programs which may correspond to the same program through appropriate unquoting functions. This idea is a common way to formulate types out of an untyped substrate. Usually, it's put in terms of a notion of "orthogonality", where a term t is orthogonal to a context with a hole, E(-), whenever E(t) is strongly normalizing. Contexts are interpreted as types. See "Strong normalization as safe interaction" by Colin Riba, for example. This is the true™ constructive semantics of type theory. Unlike game semantics or many other things, this is what machines (including hypothetical AIs) would actually be doing when interpreting the semantics of types.
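
    A toy version of that orthogonality test can be written down directly (my sketch, not Riba's formulation: fuel-bounded normal-order reduction only approximates the strong normalization the real definition demands):

        import Data.Maybe (isJust)

        -- untyped lambda terms with de Bruijn indices
        data Term = Var Int | Lam Term | App Term Term

        -- shift free variables >= c by d
        shift :: Int -> Int -> Term -> Term
        shift d c (Var k)   = Var (if k >= c then k + d else k)
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App f a) = App (shift d c f) (shift d c a)

        -- substitute s for variable j
        subst :: Int -> Term -> Term -> Term
        subst j s (Var k)   = if k == j then s else Var k
        subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
        subst j s (App f a) = App (subst j s f) (subst j s a)

        -- one normal-order reduction step, if a redex remains
        step :: Term -> Maybe Term
        step (App (Lam t) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) t))
        step (App f a)       = case step f of
                                 Just f' -> Just (App f' a)
                                 Nothing -> fmap (App f) (step a)
        step (Lam t)         = fmap Lam (step t)
        step (Var _)         = Nothing

        -- reduce with fuel; Nothing means we gave up (possible divergence)
        normalize :: Int -> Term -> Maybe Term
        normalize 0 _ = Nothing
        normalize n t = maybe (Just t) (normalize (n - 1)) (step t)

        -- t is (observably) orthogonal to context e when e(t) normalizes
        orthogonal :: Int -> (Term -> Term) -> Term -> Bool
        orthogonal fuel e t = isJust (normalize fuel (e t))

    With omega = App (Lam (App (Var 0) (Var 0))) (Lam (App (Var 0) (Var 0))), orthogonal 100 id (Lam (Var 0)) is True while orthogonal 100 id omega is False; interpreting a set of contexts as a type, a term inhabits that type when it passes the orthogonality test against every context in the set.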
    Of course, this doesn't only apply to type theory. We can define a similar search procedure for the axioms of classical logic, or define an algorithm to enumerate truth tables, and we may come to relate such tests in a reasonable way. This is, on some level, how all of mathematics operates in concrete terms.
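
    The truth-table case is small enough to spell out; here is a brute-force tautology checker in the same spirit (again just an illustrative sketch):

        import Data.List (nub)

        data Prop = Atom String | Not Prop | And Prop Prop | Or Prop Prop

        atoms :: Prop -> [String]
        atoms (Atom s)  = [s]
        atoms (Not p)   = atoms p
        atoms (And p q) = atoms p ++ atoms q
        atoms (Or p q)  = atoms p ++ atoms q

        eval :: [(String, Bool)] -> Prop -> Bool
        eval env (Atom s)  = maybe False id (lookup s env)
        eval env (Not p)   = not (eval env p)
        eval env (And p q) = eval env p && eval env q
        eval env (Or p q)  = eval env p || eval env q

        -- enumerate all 2^n rows of the truth table and test each one
        tautology :: Prop -> Bool
        tautology p = all (\env -> eval env p) envs
          where envs = foldr row [[]] (nub (atoms p))
                row v es = [(v, b) : e | b <- [False, True], e <- es]

    For instance, tautology (Or (Atom "a") (Not (Atom "a"))) evaluates to True. The observation that this enumerator and a proof-search procedure for the classical axioms agree on shared inputs is exactly the kind of relationship between tests described above.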
    There is the next question of where these relationships exist. One could define a metalogic for such reasoning to take place in, but, really, shouldn't the AI be able to figure that out anyway? If we just had an AI that learned how to solve higher-order unification problems and to statistically correlate the outputs of programs with the outputs of other programs, then that would cover all of logic and mathematics without needing to ally oneself to a foundation at all. That seems a lot more reasonable to me. It also completely sidesteps foundational questions like size issues. The real world doesn't know what a universe level is; such things are mostly tooling issues rather than metaphysical mysteries, and I don't see why such decisions have to be made for an AI.

  • @merlepatterson • 4 years ago

    47:38, 1:06:28