gusl: (Default)
One argument against doing something like 23andMe on yourself is that there is very little potential for a placebo effect, but significant potential for a nocebo effect, along with the stress that comes with knowing you have slightly higher odds of a deadly condition. This is because medical information is framed in negative terms: diseases, rather than e.g. healthy lifespan.

Humans, even the most educated, are bad at understanding small differences in probabilities. I wonder how the health gains that come from the increased knowledge compare to the losses due to increased stress and hypochondria. Is one better off not getting exams at all? Or leaving it all to the doctors?
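To make the "small differences" point concrete, here is a toy calculation with made-up numbers (not from 23andMe or any real study): a relative risk that sounds alarming can correspond to a tiny absolute difference.

```python
# Hypothetical illustration: "30% higher odds" on a rare condition.
baseline = 0.001            # assumed 0.1% lifetime risk in the general population
relative_risk = 1.3         # the scary-sounding multiplier from a genotype report
yours = baseline * relative_risk

print(f"baseline: {baseline:.4%}")                      # 0.1000%
print(f"yours: {yours:.4%}")                            # 0.1300%
print(f"absolute increase: {yours - baseline:.4%}")     # 0.0300%
```

The relative framing ("30% higher!") and the absolute framing ("three extra cases per ten thousand people") describe the same numbers, but they don't feel the same.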

See also: cyberchondria
gusl: (Default)
Marvin Minsky talks about cognition as "society of mind"... which replaces the homunculus with a society of homunculi. The point is to think of the brain as a collection of agents working in parallel, each with a "mind" of its own. Cognition arises from the parts plus their communications with each other.

Some people identify agents with modules, and there are many philosophical-ish debates about what this means (Fodor's "Modularity of Mind", Pinker, Spivey's "Continuous Mind"). One can also wonder how agents/modules relate to functional and anatomical connectivity between brain regions.

Anyway, it looks like multiple levels of the brain do reinforcement learning (by which I mean decision-theoretic reasoning and planning towards the goal of achieving delayed rewards). Although the planning capacity may be shared by different agents, I would suspect that each agent has its own reward function, and thus different goals... which is why "conflict resolution" is a necessary component of cognitive systems.

For example, imagine an addicted person who wants to quit: some agent disagrees with the higher level consciousness. This may boil down to different discount rates: the craver wants to "feel" good right now, whereas the conscious person wants to feel good over the next year. But regardless, it's not clear how one can control the lower levels of the mind.
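The discount-rate story above can be sketched numerically. This is my own toy model, not anything from the literature: two "agents" value the same two options under exponential discounting, differing only in their per-day discount factor.

```python
# Toy sketch (assumed numbers): conflicting agents as different discount factors.
def discounted_value(reward, delay_days, gamma):
    """Present value of a reward received after delay_days, with a per-day
    exponential discount factor gamma (0 < gamma <= 1)."""
    return reward * (gamma ** delay_days)

# (reward, delay in days) for each option -- values are arbitrary
options = {"feel good right now": (10, 0),
           "feel good over the next year": (1000, 365)}

# The "craver" discounts the future steeply; the "planner" barely does.
agents = {"craver": 0.9, "planner": 0.999}

for name, gamma in agents.items():
    values = {opt: discounted_value(r, d, gamma)
              for opt, (r, d) in options.items()}
    best = max(values, key=values.get)
    print(f"{name} (gamma={gamma}) prefers: {best}")
```

With these numbers the craver picks the immediate reward and the planner picks the delayed one, even though both are maximizing the same kind of discounted sum: the "conflict" is entirely in the discount factor.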

Achieving self-control: can one align the goals of the higher levels with those of the lower level agents? Are there schools of self-help based on these ideas?

Tangential: Many agents, including many instinctive ones (fear of snakes, sexual attraction), depend on learned high-level percepts (newborns learn to recognize snakes over some time). The fact that we can't turn off these instincts means that we don't have direct control over the higher levels of the visual cortex.
gusl: (Default)
Suppose you (B) and your friend (A) are expecting to meet a friend of hers (D) whom you've never met before. You're running a bit late, and just as you get there, you see someone (C) walking away. You must decide ASAP whether this is the person you're supposed to meet, so you can decide whether to yell. Are C and D the same person?

A can't see C.

If humans were digital creatures, B could instantly communicate C's image to A. Better yet, A could have instructed B on how to recognize D as effectively as A herself.

However, as humans, A and B must use English to tell each other these things. This is an awful solution! It doesn't bother me so much when machines are dumber than humans. It bothers me more when humans are dumber than machines.
gusl: (Default)
Dan Sullivan - Greens and Libertarians: The yin and yang of our political future

... basic differences between the approaches of the two parties and their members. Libertarians tend to be logical and analytical. They are confident that their principles will create an ideal society, even though they have no consensus of what that society would be like. Greens, on the other hand, tend to be more intuitive and imaginative. They have clear images of what kind of society they want, but are fuzzy about the principles on which that society would be based.

Ironically, Libertarians tend to be more utopian and uncompromising about their political positions, and are often unable to focus on politically winnable proposals to make the system more consistent with their overall goals. Greens, on the other hand, embrace immediate proposals with ease, but are often unable to show how those proposals fit into their ultimate goals.
It is said that Libertarians have a conservative philosophy and Greens have a liberal philosophy. In reality, conservatism and liberalism are mere proclivities, and do not deserve to have the name "philosophy" attached to them. People who have more power than others are inclined to conserve it, and people who have less are inclined to liberate it. In Russia, as in feudal England, conservatives wanted more government control, as government was at the root of their power. Liberals wanted more private discretion.

In the United States today, where power has been vested in private institutions, conservatives want less government and liberals want more. What passes for conservative and liberal "philosophies" is merely a set of rationalizations that power-mongers hide behind.

Libertarians tend to focus on means; Greens tend to focus on ends.
They are committed to different sets of beliefs, but these are not incompatible, and they could form powerful political alliances if only they could get over their differences. And if they got rid of the radicals on both sides, I'd be quite happy to join them.

I find it very interesting that political ideology reflects a difference in psychoepistemology. I remember seeing an article about conservatives having fearful personalities:

Is there a conservative gene?
The psychological variables that the study claims might contribute to the adoption of a conservative ideology include anxiety regarding death, intolerance of ambiguity, resistance to change, avoidance of uncertainty, need for order, structure and closure, fear of loss or threat, aggression, and lower-than-normal levels of self-esteem.

Correlation between MBTI and political affiliation
Comments on "Relations among Political Attitudes, Personality, and Psychopathology Assessed with New Measures of Libertarianism and Conservatism"
gusl: (Default)
Robin Hanson - Is Fairness About Clear Fitness Signals?

I've often struggled with the concept of "fairness" (distinct from "justice" in the sense of "being wronged", which is relatively unproblematic, and can be formalized in terms of (social) contracts). "Fairness" here is about what feels right, absent any agreements.

The concept of "blame"/"responsibility" is similarly problematic (e.g. "when can you blame someone? what about the environment where he grew up? his genes?"), but at least it has a reason to exist: assigning blame can prevent future problems.

"Fairness", OTOH, seems like a quirky concept with no clear purpose. Why is it unfair to compete against a disabled person, while it's fair to compete with (and mercilessly beat) someone who is chronically lazy? We say this is the case because the lazy person chooses to be lazy. So humans have a folk theory of free will. (Never mind the free-will/determinism debates.)

Anyway, Robin Hanson provides an interesting answer to the above, without going into this question of attributing "choice": instead he focuses on the hypothesis that the outcomes of unfair dealings are poor signals of genetic fitness... although this doesn't explain why unfair games can be repulsive. Is this because they waste people's time, not satisfying their immense curiosity about others' genetic fitness? or because unfair games could give deceptive signals? Could it be that people are repulsed by the sight/knowledge of others struggling helplessly?

What about people who say that sweatshops are "unfair", even when they are a good deal for everyone involved? Are they framing the situation as a competition between boss and worker? Does it bother them that rich and poor are interacting in a capitalistic way?
(I'm reminded of a link about this: a theory of how things get "morally contaminated" by association. Maybe it was a response to Amitai Etzioni.)

