DS theory has never been popular. Even Bayesianism (cf. "Bayesian nonparametrics" two decades ago) took a back seat after LLMs came out. DS theory, with its greater computational complexity, is therefore even less attractive.
Yes, and IMHO we are worse off for it: without a proper theory of reasoning under uncertainty, it's difficult to understand what LLMs are doing (or even to define what they should be doing).
I agree that DS is computationally prohibitive, but another way out (aside from probability, which I don't like either) is via the various systems of fuzzy logic (or you can just go with the most expressive one, under the lovely name ŁΠ½).
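For a taste of what these systems compute, here is a minimal sketch of the connectives ŁΠ½ combines (the Łukasiewicz and product t-norms with their residuated implications, plus the truth constant ½ the name refers to). The function names are mine, not from any library:

```python
# Truth functions of the two base logics combined in LPi1/2.
# Truth values live in [0, 1]; names are illustrative only.

def luk_and(a: float, b: float) -> float:
    """Lukasiewicz strong conjunction."""
    return max(0.0, a + b - 1.0)

def luk_implies(a: float, b: float) -> float:
    """Lukasiewicz implication (residuum of luk_and)."""
    return min(1.0, 1.0 - a + b)

def prod_and(a: float, b: float) -> float:
    """Product conjunction."""
    return a * b

def prod_implies(a: float, b: float) -> float:
    """Product (Goguen) implication (residuum of prod_and)."""
    return 1.0 if a <= b else b / a

HALF = 0.5  # the truth constant that the "1/2" in the name refers to

a, b = 0.7, 0.4
print(luk_and(a, b), luk_implies(a, b))    # 0.1 0.7
print(prod_and(a, b), prod_implies(a, b))  # 0.28 0.571...
```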
(BTW, I am also exploring an approach to uncertainty based on the untyped lambda calculus, where each term is interpreted as a kind of "model of the world". The degree of uncertainty is given by whether the term has a normal form. If it has none, the model is certain, while if it has a normal form, additional assumptions/arguments need to be supplied to specify the model further.)
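A toy sketch of that idea, with my own representation choices (de Bruijn indices, normal-order reduction). Since "has a normal form" is undecidable, the check below just gives up after a step budget:

```python
# Toy untyped lambda calculus with de Bruijn indices.
# "Has a normal form?" is undecidable, so we approximate with a step bound.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Var:
    i: int

@dataclass(frozen=True)
class Lam:
    body: "Term"

@dataclass(frozen=True)
class App:
    f: "Term"
    a: "Term"

Term = Union[Var, Lam, App]

def shift(t: Term, d: int, c: int = 0) -> Term:
    """Shift free indices >= c by d."""
    if isinstance(t, Var):
        return Var(t.i + d) if t.i >= c else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, c + 1))
    return App(shift(t.f, d, c), shift(t.a, d, c))

def subst(t: Term, j: int, s: Term) -> Term:
    """Capture-avoiding substitution of s for index j in t."""
    if isinstance(t, Var):
        return s if t.i == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.f, j, s), subst(t.a, j, s))

def step(t: Term) -> Optional[Term]:
    """One normal-order reduction step, or None if t is in normal form."""
    if isinstance(t, App):
        if isinstance(t.f, Lam):  # beta-redex
            return shift(subst(t.f.body, 0, shift(t.a, 1)), -1)
        r = step(t.f)
        if r is not None:
            return App(r, t.a)
        r = step(t.a)
        if r is not None:
            return App(t.f, r)
    if isinstance(t, Lam):
        r = step(t.body)
        if r is not None:
            return Lam(r)
    return None

def has_normal_form(t: Term, max_steps: int = 1000) -> Optional[bool]:
    """True if a normal form is reached; None if the budget runs out."""
    for _ in range(max_steps):
        r = step(t)
        if r is None:
            return True
        t = r
    return None  # undecided: maybe diverges, maybe just slow

I = Lam(Var(0))
OMEGA = App(Lam(App(Var(0), Var(0))), Lam(App(Var(0), Var(0))))
print(has_normal_form(App(I, I)))  # True  -> "needs more assumptions"
print(has_normal_form(OMEGA))      # None  -> possibly certain (diverges)
```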
I think that's overly reductive. In the general case DS operates on up to 2^M focal sets, where M is the cardinality of the hypothesis space, but that's the worst case. It doesn't arise if the hypotheses are hierarchical, if evidence repeatedly bears on the same sets, or if there simply isn't enough evidence to fuse to reach 2^M.
And for the true worst case, there are efficient approximation methods that can be used.
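To make the cost structure concrete, here is a minimal sketch of Dempster's rule of combination, with mass functions as plain dicts keyed by frozensets (my own representation, not any library's). Note that in the example, combination produces only three focal sets, nowhere near the 2^M = 8 subsets of the frame:

```python
# Minimal Dempster's rule of combination; a mass function is a dict
# mapping frozenset (focal set) -> mass. Illustrative sketch only.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    out, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            out[a] = out.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    # renormalize by 1 - K, where K is the total conflict
    return {a: m / (1.0 - conflict) for a, m in out.items()}

def belief(m: dict, a: frozenset) -> float:
    """Bel(A) = sum of masses of focal sets contained in A."""
    return sum(v for s, v in m.items() if s <= a)

FRAME = frozenset({"rain", "sun", "snow"})
m1 = {frozenset({"rain"}): 0.6, FRAME: 0.4}
m2 = {frozenset({"rain", "sun"}): 0.7, FRAME: 0.3}
m12 = combine(m1, m2)
print(m12)  # {rain}: 0.6, {rain, sun}: 0.28, FRAME: 0.12
print(belief(m12, frozenset({"rain"})))  # 0.6
```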
This reminds me of `PrSAT`, a satisfiability solver for probabilistic statements ("Does a distribution exist that satisfies the following constraints?").
See: https://fitelson.org/PrSAT/, and the linked paper: https://fitelson.org/pm.pdf
The paper starts off slow, but have patience and read up to Section 4, Applications, which is rather surprising.
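PrSAT itself is a Mathematica package, but the core question (does any distribution over the atomic states satisfy the given linear constraints on event probabilities?) can be sketched as a linear feasibility check. The following toy version is my own, not PrSAT's interface:

```python
# Toy probabilistic-satisfiability check in the spirit of PrSAT:
# does any distribution over the 2^n truth assignments satisfy the
# given constraints of the form P(event) == target?
from itertools import product
import numpy as np
from scipy.optimize import linprog

def prob_sat(n_vars, constraints):
    """constraints: list of (event, target), where event(world) -> bool
    and world is a tuple of n_vars booleans."""
    worlds = list(product([False, True], repeat=n_vars))
    A_eq = [[1.0] * len(worlds)]  # masses must sum to 1
    b_eq = [1.0]
    for event, target in constraints:
        A_eq.append([1.0 if event(w) else 0.0 for w in worlds])
        b_eq.append(target)
    res = linprog(c=np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(worlds))
    return res.success  # feasible <=> a satisfying distribution exists

# P(A)=0.6, P(B)=0.7, P(A and B)=0.1 is impossible, since
# P(A and B) >= P(A) + P(B) - 1 = 0.3, so this prints False:
print(prob_sat(2, [(lambda w: w[0], 0.6),
                   (lambda w: w[1], 0.7),
                   (lambda w: w[0] and w[1], 0.1)]))
# Raising the conjunction to 0.3 makes it satisfiable: True
print(prob_sat(2, [(lambda w: w[0], 0.6),
                   (lambda w: w[1], 0.7),
                   (lambda w: w[0] and w[1], 0.3)]))
```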
Thanks for this reference; I found the paper interesting, but it is a satisfiability solver. Inherently it cannot quantify the probability of a subset of events, but it can find a probability assignment given a set of constraints, i.e., prove possibility. More usefully, it can show that no such assignment is possible.
I don't understand what the notation means.
For example, when the author says:
P(Q ⊆ X | ∀ x ∈ Q (x = 1))
This is equivalent to P(Q ⊆ X | Q = {1}), which further simplifies to P(1∈X).
This seems to be a type error (isn't X supposed to be a set of binary variables?), and also an awfully cumbersome way to write P(1∈X).
Anyone have some idea what the article is trying to say?
It's a typo. It's supposed to be a comma, not a pipe, and read P(Q ⊆ X, ∀x ∈ Q (x = 1)). I.e., Q is some subset of X, and for all x in Q, x = 1.
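A quick enumeration makes the corrected reading concrete. Purely for illustration, assume X is a set of independent fair-coin binary variables and Q indexes a fixed subset of them, so the event is just "every variable in Q came up 1" and the probability is (1/2)^|Q|:

```python
# Concrete check of the corrected reading P(Q ⊆ X, ∀x ∈ Q (x = 1)),
# assuming X = {x1, x2, x3} are independent fair coins and Q picks
# out a fixed subset of them (assumptions mine, for illustration).
from itertools import product

def prob_all_ones(n, q_indices):
    outcomes = list(product([0, 1], repeat=n))
    hits = [o for o in outcomes if all(o[i] == 1 for i in q_indices)]
    return len(hits) / len(outcomes)

print(prob_all_ones(3, [0, 2]))  # 0.25 == (1/2)**2 == (1/2)**|Q|
```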
What a dempster fire