gusl: (Default)
I've just improved my TrustOPedia vision:
I have an idea for a system for collaborative knowledge construction by skeptics, meant to avoid biases, and to allow the layman to make good decisions in the face of scientific controversies. Our beliefs rest on proverbial towers, which over time tend to either get stronger or collapse. The motivation behind this project is, for better or worse, to make these towers clear & explicit, and thereby easy to scrutinize.
... read more

Please comment, contribute your ideas, etc., either here or on the wiki. The Talk Page is here.
gusl: (Default)
I am all about working with multiple representations, combining induction with deduction, merging probability with logic, etc.

When we write software, we desire it to be correct, and demonstrably so ("demonstrably" in the sense of being able to convince someone: perhaps "demoably" is a better term). There are many ways of doing this:
* empirical testing: we see that it does the right thing.
* agreement: testing, not against the world or our belief of what it should do, but against an "independent" implementation. We could unify these two by defining the concept of an "agent", which could encompass a human test-judge, another program, etc. (a minimal sketch follows this list)
* formal proving: more definite than the above two, but we have to rely on the system in which the proof is built.
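
To make the "agreement" item concrete, here is a minimal sketch (the function names and the sorting example are mine, purely illustrative): a fast implementation is checked against a slow but obviously-correct independent one on random inputs, and any disagreement is returned as a concrete counterexample.

```python
import random

def sort_reference(xs):
    # Slow but obviously-correct "independent" implementation: selection sort.
    xs = list(xs)
    out = []
    while xs:
        m = min(xs)
        xs.remove(m)
        out.append(m)
    return out

def sort_fast(xs):
    # The implementation we actually want to trust (here just the builtin).
    return sorted(xs)

def agree(n_trials=1000):
    # "Agreement" testing: compare the two agents on random inputs,
    # rather than against hand-written expected outputs.
    for _ in range(n_trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if sort_fast(xs) != sort_reference(xs):
            return False, xs  # a disagreement is a concrete counterexample
    return True, None

if __name__ == "__main__":
    ok, counterexample = agree()
    print("agreed on 1000 random inputs" if ok else "disagreed on %r" % (counterexample,))
```

Note that "agreement" only certifies the two implementations relative to each other; if both share a misconception, the test passes anyway.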

Software development routinely involves all of the above, except that the "proving" is informal. But (as would be typical of me to say) we do make real deductions when writing or judging a piece of code.

The mainstream of software development follows an engineering / instrumentalist epistemology: developers know that the program has bugs, but they don't mind as long as the program is still useful. As a consequence, they forgo formal proofs and are satisfied if the program passes a particular class of empirical test cases.

The purpose of test cases is often to convince one that the program would work on a broader class of test cases (otherwise all you would be proving is that the program will work on a non-interactive demo). This is induction. Perhaps the most important epistemological question for software is: how far and how confidently can one generalize from a set of test cases?
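
One crude, standard way to put a number on this: if the program passes n test cases drawn independently at random from the input distribution we actually care about, the "rule of three" gives roughly 3/n as a 95% upper bound on its failure probability on that distribution. It says nothing about inputs from outside that distribution, which is exactly where intuitive generalization tends to break down. A toy calculation:

```python
def rule_of_three_bound(n_passed):
    # Approximate 95% upper confidence bound on the failure probability,
    # given n_passed independent random tests with zero failures.
    return 3.0 / n_passed

for n in (10, 100, 1000, 10000):
    print("%6d passing tests -> failure rate likely below %.4f" % (n, rule_of_three_bound(n)))
```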

Another question has to do with the dynamic development process: when and how should we test? Common-sense testing methodology tells us to write simple test cases first. This is Occam's razor.

Btw, this is similar to mathematical epistemology: how do we combine experimental mathematics with normal deductive math? How do we combine different pieces of intuitive knowledge in a consistent, logical framework?

--

If you ask a doctor (or any specialist) to write down probabilities about a particular domain that he/she knows about, these numbers will almost certainly be susceptible to a Dutch book. Bayesians consider this to be a bad thing.

I believe that, by playing Dutch book with him/herself, our doctor would achieve better estimates. I would like to see experiments in which this is done. Actually, I should probably write some of this software myself. The input is a set of initial beliefs (probabilities), and the output is a Dutch-book strategy. This Dutch-book strategy corresponds to an argument against the set of beliefs. This forces our specialist to reevaluate his beliefs, and choose which one(s) to revise. This is like a probabilistic version of Socratic dialogue.
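
Here is a minimal sketch of that input/output behaviour, under a big simplifying assumption: the beliefs are probabilities for outcomes the expert regards as mutually exclusive and exhaustive, so the only incoherence detected is a set of probabilities that fails to sum to 1. The diagnoses and numbers are made up.

```python
def dutch_book(beliefs):
    """beliefs: the expert's probabilities for outcomes he/she regards as
    mutually exclusive and exhaustive.  Returns a guaranteed-loss set of bets,
    or None if the numbers are coherent (i.e. they sum to 1)."""
    total = sum(beliefs.values())
    if abs(total - 1.0) < 1e-9:
        return None
    # If the probabilities sum to more than 1, sell the expert a $1 ticket on
    # every outcome at his/her own stated price: the expert pays `total`, and
    # exactly one ticket pays out $1, for a sure loss of (total - 1).
    # If they sum to less than 1, buy the tickets from the expert instead.
    side = "expert buys each ticket" if total > 1 else "expert sells each ticket"
    return {"strategy": side,
            "ticket_prices": dict(beliefs),
            "guaranteed_loss": round(abs(total - 1.0), 6)}

# A hypothetical doctor's probabilities for one patient's exclusive diagnoses:
print(dutch_book({"flu": 0.6, "strep": 0.3, "neither": 0.2}))
```

The printed strategy is exactly the "argument" the doctor has to answer: which of the three numbers gets revised? A real version would also have to handle conditional probabilities and logically related events.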

--

Do you see the connection between the above two? Please engage me in dialog!
gusl: (Default)
It has been said that being formal forces one to be honest. But I think it does more than that: by leaving nothing implicit, formal expositions also force the author to really understand what he/she is saying, and to iron out contradictions and "semi-contradictions" (e.g. the Nixon diamond).

Gerald Sussman and Jack Wisdom, about their computational exploration of classical mechanics:
When we started, we expected that using this approach to formulate mechanics would be easy. We quickly learned that many things we thought we understood we did not in fact understand (1). Our requirement that our mathematical notations be explicit and precise enough that they can be interpreted automatically, as by a computer, is very effective in uncovering puns and flaws in reasoning (2). The resulting struggle to make the mathematics precise, yet clear and computationally effective, lasted far longer than we anticipated.
We learned a great deal about both mechanics and computation by this process. We hope others, especially our competitors, will adopt these methods, which enhance understanding while slowing research.


A few comments:

(1) It's probably safe to say that most physics professors don't understand those things either, unless they have undergone similar formalization efforts. Scientists' expertise is usually correct with regard to the particular exemplars that they study and similar systems, but, unless their understanding is really thorough, there will be examples in which their expertise will not generalize correctly.
There are many cases in which physicists' intuitions get tangled up (see, e.g., Feynman's inverted underwater sprinkler, or even the infamous Monty Hall problem), but these are the lucky cases, since a problem got detected and subsequently debated. In general, if we just rely on intuitions, we risk plain oversights: we might never even suspect that we're wrong.
The solution to such shameful persistent disagreements is to have a formal framework for reasoning about such problems: the answers are logical consequences of widely agreed-upon laws, but disagreement persists because there is no standard formal framework in use, and a new framework would present too steep a learning curve for most scientists. My guess is that we could settle many interesting questions just by checking whether they are in the deductive closure of different combinations of theories.
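
As a toy illustration of "checking the deductive closure" (for real physics one would want an actual theorem prover; this is just propositional forward chaining over made-up Horn rules):

```python
def deductive_closure(facts, rules):
    # facts: a set of atomic propositions.
    # rules: (premises, conclusion) pairs, read as "premises jointly imply conclusion".
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in closure and all(p in closure for p in premises):
                closure.add(conclusion)
                changed = True
    return closure

# Combining two toy "theories" (the atoms are placeholders, not real physics):
theory_1 = [(("A",), "B")]                     # A implies B
theory_2 = [(("B",), "C"), (("C", "D"), "E")]  # B implies C; C and D together imply E
closure = deductive_closure({"A"}, theory_1 + theory_2)
print(closure)         # {'A', 'B', 'C'}
print("E" in closure)  # False: the question "E?" is not settled by this combination
```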

(2) I like the expression "puns in reasoning", because it's a very good analogy: in fact, I suspect that many errors in scientific reasoning come from a certain sloppiness in using words to denote concepts, which is like using the same name for different variables in one's mind. The cognitive practices of being careful not to confuse concepts are very similar to good programming practice.

On the costs of being formal: Formality might be an unnecessary burden for those who have a good intuitive understanding, but I think it is generally worth the trouble. It is certainly a much greater burden on those who don't understand.



... and how this relates to me:

Anyway, I think this tendency towards thoroughness makes me a rather slow researcher. Being eternally skeptical, interested in foundational questions, and unwilling to proceed without a solid understanding, I have always tended towards the "enhance understanding" side of the trade-off. It's hard to make fast progress in one's research while questioning one's teachers' assumptions, reformulating their ideas in different terms, or seeking analogies with other ideas (all of which are ways of making one's understanding more solid).

I'm reminded of my high school physics teacher saying in my college recommendation letter that I was good at deriving things from first principles... I also think it's unusual for professors to advise a student to "just memorize the formula" or "just learn the cookbook". My grades have certainly suffered from ignoring such advice.

---

Quotes

~ "In general, I feel if you can't say it clearly you don't understand it yourself." - John Searle

... I'm looking for another relevant quote about programming being "universal". Possibly a Dijkstra quote. Anyone?
gusl: (Default)
Kowalski, Toni (1996) - Abstract Argumentation

We outline an abstract approach to defeasible reasoning and argumentation which includes many existing formalisms, including default logic, extended logic programming, non-monotonic modal logic and auto-epistemic logic, as special cases. We show, in particular, that the "admissibility" semantics for all these formalisms has a natural argumentation theoretic interpretation and proof procedure, which seem to correspond well with informal argumentation.


Dung, Kowalski, Toni (2005) - Dialectic proof procedures for assumption-based, admissible argumentation
We present a family of dialectic proof procedures for the admissibility semantics of assumption-based argumentation. These proof procedures are defined for any conventional logic formulated as a collection of inference rules and show how any such logic can be extended to a dialectic argumentation system. The proof procedures find a set of assumptions, to defend a given belief, by starting from an initial set of assumptions that supports an argument for the belief and adding defending assumptions incrementally to counter-attack all attacks.
...
The novelty of our approach lies mainly in its use of backward reasoning to construct arguments and potential arguments, and the fact that the proponent and opponent can attack one another before an argument is completed. The definition of winning strategy can be implemented directly as a non-deterministic program, whose search strategy implements the search for defences.

In conventional logic, beliefs are derived from axioms, which are held to be beyond dispute. In everyday argumentation, however, beliefs are based on assumptions, which can be questioned and disputed...




Phan Minh Dung - On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games
The purpose of this paper is to study the fundamental mechanism, humans use in argumentation, and to explore ways to implement this mechanism on computers. Roughly, the idea of argumentational reasoning is that a statement is believable if it can be argued successfully against attacking arguments.
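
A minimal sketch of Dung-style acceptability (my own toy encoding, not the paper's proof procedure), using the Nixon diamond from a few posts up as the attack graph: a set of arguments is admissible if it is conflict-free and counter-attacks every attack on its members.

```python
from itertools import combinations

def is_admissible(S, attacks):
    # S: a candidate set of arguments; attacks: a set of (attacker, target) pairs.
    S = set(S)
    conflict_free = not any((a, b) in attacks for a in S for b in S)
    defends_members = all(
        any((defender, attacker) in attacks for defender in S)
        for (attacker, target) in attacks if target in S
    )
    return conflict_free and defends_members

# Nixon diamond: "pacifist (he's a Quaker)" and "not pacifist (he's a Republican)"
# attack each other.
arguments = ["pacifist", "not_pacifist"]
attacks = {("pacifist", "not_pacifist"), ("not_pacifist", "pacifist")}

for r in range(len(arguments) + 1):
    for S in combinations(arguments, r):
        if is_admissible(S, attacks):
            print(set(S) or "{}")
# {}, {'pacifist'} and {'not_pacifist'} are each admissible, but no admissible
# set contains both: the formalism keeps the diamond visibly unresolved.
```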



Panzarasa, Jennings, Norman - Formalizing Collaborative Decision-Making and Practical Reasoning in Multi-agent Systems

Kenneth Forbus - Exploring analogy in the large
Read more... )
gusl: (Default)
Wikipedia hate: the problem is that no one is accountable for the content of the articles. There have been cases of people being slandered who could do nothing about it.

The Wikipedia badly needs a reputation system (which writers/editors do you trust?) or an endorsement system (which reviewers do you trust?), but preferably several parallel systems.

Once people become accustomed to the notion of different accrediting agencies, the world will become a much better place. I believe that much evil comes from monopolies in rating, which lead to a greyscale view of things: e.g. you only get one grade for each class, and lots of relevant information is kept off the record. When raters compete, they will have to become smarter and less biased, and their ratings will be more meaningful and will better reflect the bottom line.
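
A minimal sketch of what "several parallel systems" could look like; the rater names and numbers are hypothetical. Each reader supplies his/her own trust weights, so there is no monopoly score, only scores relative to someone's choice of whom to trust.

```python
# Hypothetical endorsements of one article by independent raters, on a 0-1 scale:
ratings = {
    "skeptics_guild": 0.9,
    "tabloid_review": 0.2,
    "stats_checkers": 0.8,
}

def score_for(reader_trust):
    # Trust-weighted score of the article, using only the raters this particular
    # reader trusts; the weights are the reader's own, not a global authority's.
    weighted = [(reader_trust[r] * ratings[r], reader_trust[r])
                for r in ratings if r in reader_trust]
    total_weight = sum(w for _, w in weighted)
    return sum(v for v, _ in weighted) / total_weight if total_weight else None

# Two readers with different trust profiles get different, but accountable, answers:
print(score_for({"skeptics_guild": 1.0, "stats_checkers": 0.5}))  # ~0.87
print(score_for({"tabloid_review": 1.0}))                         # 0.2
```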

My hope is that this will lead people to make smarter decisions about politics, hiring, etc. You can say I'm a dreamer, but I'm not the only one.
gusl: (Default)
If experts are rational, then they will assign equal utility to all the worlds in which they are dead. When betting about questions that are related to their death, these experts should therefore be biased towards optimism.

For example, betting that the human race will *not* be extinct by 2020 seems wiser than betting that it will, regardless of the actual odds and the agent's probability estimate. If a doomsayer is right, then his prize money will be no use to him (in fact, he will never be able to collect it).

While this scenario seems extreme and unrealistic, gambles about issues relevant to the probability of death of the agents (e.g. the Avian flu) should be similarly biased towards optimism.
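
A back-of-the-envelope version of the argument, using the simplification above that every world in which the bettor is dead has the same utility (normalized to 0 here), for an even-odds $100 bet:

```python
def expected_utility(p_doom, bet_on_doom, stake=100.0):
    # All "dead" worlds carry the same utility (0), so only the survival branch matters.
    u_if_doom = 0.0
    u_if_survival = -stake if bet_on_doom else +stake
    return p_doom * u_if_doom + (1 - p_doom) * u_if_survival

for p in (0.1, 0.5, 0.9):
    print(p, expected_utility(p, bet_on_doom=True), expected_utility(p, bet_on_doom=False))
# Whatever probability the expert actually assigns to doom, betting on survival
# dominates: the doom branch pays nothing either way.
```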

Does anyone want to steal this idea? I'm surely not being original, am I?

---

Another possible bias: experts with a strong time preference (i.e. agents who want money *now*, whether because they are currently making investments with big expected returns, or because they expect to die soon, or because they are plainly short-sighted) will be reluctant to make long bets.
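
For illustration (the discount rate and payoff are made up): under standard exponential discounting, a payoff decades away is worth little today to such an agent, so even a bet he/she confidently expects to win may not be worth placing.

```python
def present_value(payoff, annual_discount_rate, years):
    # Standard exponential discounting: what a future payoff is worth now.
    return payoff / (1 + annual_discount_rate) ** years

# $1000 won on a 30-year "long bet", to an agent discounting at 15% per year:
print(round(present_value(1000, 0.15, 30), 2))  # ~15.1 -- hardly worth the bother
```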

It seems possible that such people have knowledge that the rest of us could use, but we will never get it, because they will not give their input in the form of bets.

---

Here's George Carlin's 2 cents on death-related biases.
gusl: (Default)
Does anyone actually evaluate weather forecasts? It's not a very hard thing to keep track of.
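
It really isn't hard. A minimal sketch using the Brier score (0 is perfect; forecasting a constant 50% scores 0.25), with made-up forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    # forecasts: predicted probabilities of rain; outcomes: 1 if it rained, else 0.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical week of probability-of-rain forecasts vs. what happened:
forecasts = [0.9, 0.8, 0.2, 0.1, 0.7, 0.4, 0.6]
outcomes  = [1,   1,   0,   0,   0,   1,   1  ]
print(round(brier_score(forecasts, outcomes), 3))  # ~0.159
print(round(brier_score([0.5] * 7, outcomes), 3))  # the "always 50%" baseline: 0.25
```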

But, AFAIK, no forecasters publicly archive their data...

When are they going to put their money where their mouths are? When are we going to check the price of weather futures instead of listening to unaccountable forecasters in the news?
gusl: (Default)
Highly recommended: Michael Huemer - Why People Are Irrational about Politics (even if I disagree with his moral objectivism at first sight)
My highlights:
Read more... )

---

I have an idea for a system for collaborative knowledge construction by skeptics, meant to avoid bias.
Read more... )

---

Here's a scarier link about techniques of persuasion, manipulation, hypnosis, etc. Its author characterizes the US Marines and revivalist churches as "brainwashing cults": Persuasion and Brainwashing Techniques Being Used On The Public Today

---

Finally, via Google Ads:
The Theseus Learning System. Maybe I can make some money this way: selling software for critical-thinking education / idea refinement / writing. But my real interest is to create systems to enlighten real debates.

Let me be almost original and invent the phrase "epistemic hygiene".
