gusl: (Default)
How can we achieve direct, efficient communication?

Communicating about preferences is often problematic, because language does not provide us with reliable ways of communicating fine degrees of preference.

Imagine, for instance, that the sound of your neighbour's TV is annoying you. It's a small annoyance that you'd rather not have to deal with, and you're not sure they are even watching... and even if they are, maybe they are indifferent between watching TV and doing something else. But maybe they do want to watch it, so you would like to find out whether that is the case. You could say:

"Would you mind turning down the TV?"

Just the fact that you bothered to mention it is evidence to them that it is a big enough deal... because there is a convention whereby you don't complain about small annoyances: you're only supposed to complain if it's a big enough deal. Even if you explain to them that it's actually a small annoyance, they may not believe you, because a standard politeness strategy is to downplay how big a deal something is.

This is likely the consequence of an important principle of politeness (which took me a while to grasp): don't force people to make difficult choices. i.e. make it clear to them that they have the option of declining without fearing the consequences. When you don't give people that option, you risk having them frame the situation as being one of your interests against their interests, and this could damage your relationship.

The problem here is that the linguistic community eventually adapts to the polite language: "would you mind doing X?" eventually acquires the meaning that "please do X" originally had. To be polite now, you need to make the sentence even longer: "would you mind doing X, if it's not too much trouble?". This is linguistic inflation, and the real losers are clarity and efficiency of communication.

Suppose you run a business, and you'd like an employee to work during the weekend, but NOT if this means they are going to hate you, i.e. you only want them to work if they don't mind it too much. How do you find out how much they mind? How do you know that they feel the freedom to say no?

Why don't we use numbers to communicate how much we want something?
Why don't we use dollars to communicate how much we want something? One answer is that this creates perverse incentives: if you customarily pay off your neighbour to turn down his TV, he now has the incentive to turn it on exactly when it's likely to annoy you.
gusl: (Default)
I'm currently in love with the idea of memory-foam mattresses.

Apparently a memory-foam mattress costs more than a regular mattress plus a memory-foam overlay bought separately. This is the opposite of what one would expect from a bundled product, which is somewhat similar to buying in bulk (and there are fewer fixed costs). I'd like to find out more about this kind of economic phenomenon.
gusl: (Default)
Can anyone explain the identification problem (very short article) in a non-confusing way?

Is this stuff used in the context of estimating supply separately from demand from time-series data? How can one separate the two if we can only measure one figure, namely sales?

I doubt there are any big conceptual hurdles here, but the presentation is confusing. Why do they use Q for both supply and demand?


Aug. 5th, 2006 02:39 am
gusl: (Default)
If a drought uniformly destroys part of a crop whose demand is inelastic, then all the producers benefit. This is slightly paradoxical, but it's true: cartels form precisely to create such scarcity artificially: if producing 1% less makes prices 10% higher, then it makes sense to limit production.
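The 1%-less / 10%-higher arithmetic can be checked with a quick sketch (the constant-elasticity demand curve and the elasticity value of -0.1 are my assumptions, chosen to match the numbers above):

```python
# Hypothetical sketch: constant-elasticity demand Q = P**eps with eps = -0.1,
# so a 1% cut in quantity raises the price by roughly 10%.

def revenue_after_cut(cut, elasticity):
    """Revenue (baseline normalized to 1) after cutting quantity by `cut`."""
    q = 1.0 - cut
    p = q ** (1.0 / elasticity)   # invert Q = P**eps
    return p * q

base = revenue_after_cut(0.0, -0.1)
after = revenue_after_cut(0.01, -0.1)
print(f"revenue change: {(after / base - 1) * 100:.1f}%")  # → revenue change: 9.5%
```

So every producer's revenue rises by about 9.5% even though each one sells less.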

But a natural disaster is still the best coordination device: it prevents cheating by cartel members, without discouraging producers from working hard.

Do game theorists have a simple, standard formal game for modeling cartels?
gusl: (Default)
Imagine a game in which you:

* win $2 with probability 1/2
* win $4 with probability 1/4
* win $8 with probability 1/8
* win $16 with probability 1/16
* win $32 with probability 1/32
and so on, ad infinitum.

Assume also that your payoff will be determined and paid up instantly.

How much would you pay to play this game?

The expected value of the game is
1/2 * 2 + 1/4 * 4 + 1/8 * 8 + ... =
1 + 1 + 1 + ... = infinity
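For what it's worth, a quick Monte Carlo run (my own sketch, not part of the original puzzle) shows why nobody pays much to play: the sample average over many plays stays modest even though the expected value is infinite:

```python
import random

def play_once(rng):
    """Flip until tails; payoff doubles each round: $2, $4, $8, ..."""
    payoff = 2
    while rng.random() < 0.5:
        payoff *= 2
    return payoff

rng = random.Random(0)
n = 100_000
mean = sum(play_once(rng) for _ in range(n)) / n
print(f"average payoff over {n} plays: ${mean:.2f}")
```

The sample mean grows only logarithmically with the number of plays, which is why the infinite expectation never shows up in practice.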

St. Petersburg paradox
A naive decision theory using only this expected value would therefore suggest that any fee, no matter how high, would be worth paying for this opportunity. In practice, no reasonable person would pay more than a few dollars to enter. This seemingly paradoxical difference led to the name St. Petersburg paradox. [huh? why this name? -GL]

And I think this behavior is perfectly rational.

Your answer to the question above is a measure of your risk aversion. But we should also investigate the effects of a lower-payoff vs. a higher-payoff game.

A game paying twice as much as the above game:
* win $4 with probability 1/2
* win $8 with probability 1/4
* ...
is obviously a better game to play (almost twice as good), and yet, the expected value of the game is also infinity.

My solution would be to make a transformation on the probabilities that makes the sum converge. The idea is that we have a "horizon", and very remote probabilities should count for less. But a 1/8 probability should count almost the same as a 1/2 probability. Any concrete solutions?
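One concrete (and entirely hypothetical) transform in this spirit: weight each probability p by w(p) = p**alpha with alpha slightly above 1. A 1/8 probability then counts almost the same as 1/2, but the remote tail is discounted enough for the sum to converge:

```python
# Hypothetical probability-weighting: w(p) = p**alpha, alpha slightly above 1.
# The k-th term becomes (2**-k)**alpha * 2**k = 2**(k*(1-alpha)),
# a convergent geometric series for alpha > 1.

def weighted_value(alpha, terms=200):
    total = 0.0
    for k in range(1, terms + 1):
        p = 0.5 ** k          # probability of winning 2**k dollars
        total += (p ** alpha) * 2 ** k
    return total

print(f"{weighted_value(1.1):.2f}")  # → 13.93
```

With alpha = 1.1 the game is "worth" about $14, which is at least in the ballpark of what people say they'd pay; larger alpha discounts the tail more aggressively.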

One problem is that you can always make a game with payoffs that make this sum diverge: just make the payoffs proportional to 1/p. This is not too problematic however: IMHO, the real problem is unbounded utility. Founding utility on human happiness is a good way out (I think).
gusl: (Default)
Should you ever buy rental car insurance?, via MR

My own take:

Most insurance is overpriced. This is because most people are irrationally risk-averse. On the one hand, you could say that such risk aversion is rational because of the diminishing marginal utility of money. But on the other hand, that only applies when we're talking about large amounts, or losses that would affect your long-term plans. Back on the first hand, a small loss can in practice become bigger if it affects your liquidity, taking away your "working space" money, since getting a loan isn't always quick. In both large and small liquidity-draining losses, there are replanning costs that we shouldn't ignore.

The other defense is "rational irrationality": losing a $90 cell phone hurts me more than it should, and since I know this, I will buy insurance.

On this particular question, though, there is another twist. If you don't have insurance, the dealership could try to rip you off by making you pay for tiny scratches that were there already (it happened to me once, back when I was naive and trusting). In this case, you can be sure that either they won't fix the scratch or that they're friends with the mechanic who will charge $300 to paint over the almost-invisible mark.
gusl: (Default)
My Albert Heijn is frequently out of soap. It can be really annoying when I don't have any left. I can't understand why they don't fix this.

I would love to see a small entrepreneur setting up a booth right in front of the store, providing consumers with all the goods that the corporate giant failed to. How long would it take before the cops busted him/her? In Brazil, one might be able to get away with this, thanks to the semi-anarchical state of things. I understand that there are issues of control (health inspection) and fair competition (tax collection), but on the whole I think that this form of spontaneous & chaotic free trade is a good thing.

Sure, there might be some negative externalities (pollution, overcrowding of public spaces, disorderly traffic), but surely much of this piece is heavily biased in favour of the status quo: the incumbents' (successful) agenda is to use the state's power to oppress their small competitors. An example that gets me really incensed is Recife's ban on kombi-taxis (the same thing happened in South Africa): when some creative people decide to do something about the sucky public-transportation monopoly, their initiative gets suppressed. But sometimes the force comes from a non-government group that is averse to progress: e.g. bus drivers in Colombia.


Shopping in Germany is apparently even more inconvenient than here. Maybe I should move in next to the Hauptbahnhof.
gusl: (Default)
Shopping is not patriotic, by Cafe Hayek
What are the roots of this strange belief that for our economy to be healthy, we need people to buy stuff?

Keynesian, probably. Something to do with the idea of the multiplier. That somehow, the more we buy, the more the money races around and the richer we all get. The biggest error in that way of thinking is thinking that the economy is separate from all of us.
gusl: (Default)
If experts are rational, then they will assign equal utility to all the worlds in which they are dead. When betting about questions that are related to their death, these experts should therefore be biased towards optimism.

For example, betting that the human race will *not* be extinct by 2020 seems wiser than betting that it will, regardless of the actual odds and the agent's probability estimate. If a doomsayer is right, then his prize money will be no use to him (in fact, he will never be able to collect it).

While this scenario seems extreme and unrealistic, gambles about issues relevant to the probability of death of the agents (e.g. the Avian flu) should be similarly biased towards optimism.
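A toy formalization of this bias (my own sketch, not from any source): consider an even-odds $1 bet on "humanity survives", with utility held constant across all the worlds where the bettor is dead. Betting on survival then dominates at every probability:

```python
# Toy model: the bettor's utility in any extinction world is a constant
# U_DEAD, so only the survival branch of the bet's payoff matters.

U_DEAD = 0.0  # assumed constant utility across all worlds where you are dead

def expected_utility(bet_on_survival, p_survive, u_money=lambda m: m):
    """Expected utility of an even-odds $1 bet, given P(survive)."""
    if bet_on_survival:
        alive_payoff = +1   # you survived and collect your winnings
    else:
        alive_payoff = -1   # you survived, so you are around to pay up
    return p_survive * u_money(alive_payoff) + (1 - p_survive) * U_DEAD

for p in (0.1, 0.5, 0.9):
    assert expected_utility(True, p) > expected_utility(False, p)
print("betting on survival dominates at every probability")
```

Note that the dominance holds regardless of the bettor's actual probability estimate, which is exactly the bias described above.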

Does anyone want to steal this idea? I'm surely not being original, am I?


Another possible bias is that experts with a strong time preference (i.e. agents who want money *now*, whether because they are currently making investments with big expected returns, because they expect to die soon, or because they are plainly short-sighted) will be reluctant to make long bets.

It seems possible that such people would have knowledge that the rest of us could use, but never will because they will not give their input in the form of bets.


Here's George Carlin's 2 cents on death-related biases.
gusl: (Default)
Hypothesis: the kind of people who like cybernetics are similar to the kind of people who like category theory. Both fields are about abstract structures that can be applied to several different fields. I am one such person.

My housemate is going to teach a series titled "Baby Category Theory" to the logic students. I intend to go, but I'm a bit afraid that the math will be too fascinating, causing me to become a mathematician and never spend another day of my life as a productive human being.

Cybernetics has an image problem, unfortunately. Its name is frequently abused by the likes of spamferences and crackpots. I hope that respectable scientists don't dismiss its ideas, many of which are common sense.

When teaching us about the common-ion effect (about the solubility of pairs of salts), my high school chemistry teacher used to say "equilibria retaliate" (I used to think he was speaking Latin, but this was just his way of remembering Le Chatelier's Principle). This is reminiscent of the principle of diminishing returns from economics (how far can we push the analogy?). Are we applying chemistry to economics or vice-versa? Neither! That's why we need a more general framework: both of these results are special cases of more or less "universal" structures. This may not be saying much, but it gives me something to think with: when I see an analogous situation, I will predict that adding twice as much of the stuff will give less than twice the return.

What about homeostasis? You see it in economics as well as biology. (keyword for later reference: qualitative reasoning)

Did anyone see Art De Vany - Our Body is Not Communist, arguing that the human body is kept living through an invisible hand? I think he would say that cancer is a market failure, caused by irrational agents.
gusl: (Default)
The Idea Trap, by Bryan Caplan, via MR

Caplan argues that there is a positive feedback loop: populations tend to support sensible ideas when things are going well. Just like things need to be really bad for a communist revolution to happen, free-market ideas will only come when populations are happy enough to think clearly. According to Caplan, countries break out of bad loops more or less by luck.
gusl: (Default)
If an individual relocates from a high-paying environment (musicians in the NL) to a low-paying one (musicians in Brazil), it is conceivable that they:
* work more, in order to make (almost) the same amount of money.
* work less, because the small earnings are not worth the effort

You will work more if:
* you are desperate, and don't want to cut back on your expenses.
* you don't mind working very much.

You will work less if:
* you are financially comfortable, or don't mind going into debt.
* you don't mind cutting back your expenses.
* you are "proud".

The perception of the value of money should depend only on projected future income vs expenses. But I imagine that in practice, past experience will have a strong effect, because of the way things are framed.

Assume Income is proportional to Work.
Assume Utility is Work Utility (UW) + Income Utility (UI), where UI > 0, UI' > 0, UI'' < 0.
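A numerical sketch of this model (the CRRA functional form and every parameter value below are my assumptions, not from the post): whether a wage cut makes you work more or less turns out to depend on the curvature of the income utility.

```python
# U(W) = -c*W + (wage*W)**(1-g) / (1-g): linear disutility of work plus a
# concave CRRA income utility with curvature g. We maximize over a grid.

def optimal_hours(wage, g, c=1.0, grid=10_000):
    def utility(W):
        income = wage * W
        return -c * W + income ** (1 - g) / (1 - g)
    hours = [0.01 + 10 * i / grid for i in range(grid)]
    return max(hours, key=utility)

for g in (0.5, 2.0):
    hi = optimal_hours(wage=2.0, g=g)   # before the pay cut
    lo = optimal_hours(wage=1.0, g=g)   # after the pay cut
    direction = "less" if lo < hi else "more"
    print(f"g={g}: after a wage cut you work {direction}")
```

With mild curvature (g = 0.5) the substitution effect wins and the musician works less at the lower wage; with strong curvature (g = 2) the income effect wins and they work more, matching the two scenarios listed above.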

hm... Can anyone recommend a nice data_set -> graphics generator for Linux?
gusl: (Default)
Robert Aumann, of "you can't agree to disagree" fame, shares the Nobel this year with Thomas Schelling. via Marginal Revolution

from Tyler Cowen, Robin Hanson - Are Disagreements Honest?
... according to well-known theory, such honest disagreement is impossible. Robert Aumann (1976) first developed general results about the irrationality of “agreeing to disagree.” He showed that if two or more Bayesians would believe the same thing given the same information (i.e., have “common priors”), and if they are mutually aware of each other's opinions (i.e., have “common knowledge”), then those individuals cannot knowingly disagree. Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information.
gusl: (Default)
The other day, I wrote the following on my PDA:

Why I am no longer a mathematician:
· Tired of working hard just to be clever. Life is short. The real world is more interesting.
· Phenomenology, introspection drove me towards cogsci.
· it's more productive to do meta work: computers will eventually do math much more cheaply than me. (see Zeilberger)


Here's something of an academic autobiography, of my time at Bucknell. It says nothing about my ideas, or what I read. I tell the story of how undergraduate curricula shaped my choice of majors:


The last time I did serious mathematical research was my junior year of college... and even that was very much empirically-aided: it was about counting the number of roots of polynomials over finite fields... my discoveries were made with the aid of a C++ compiler.
Since then, I have proven things about cute games (Nim, thanks to agnosticessence), toy theorems (prove that number_of_divisors_of(n) is always even except when n is a perfect square), and created neat correspondences (e.g. if you represent natural numbers as multisets of prime factors, GCD is intersection and LCM is union), but nothing that could count as serious mathematics.
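Both toy results are easy to machine-check (a quick sketch; `factorize` and `unfactorize` are helper names I made up):

```python
from collections import Counter
from math import gcd

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def factorize(n):
    """Prime factorization as a multiset (Counter of prime -> exponent)."""
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def unfactorize(f):
    n = 1
    for p, e in f.items():
        n *= p ** e
    return n

# Divisor count is odd exactly when n is a perfect square
# (divisors pair up as (d, n/d) unless d == n/d).
for n in range(1, 200):
    is_square = int(n ** 0.5) ** 2 == n
    assert (num_divisors(n) % 2 == 1) == is_square

# GCD is multiset intersection (min exponents), LCM is union (max exponents).
a, b = 360, 84
fa, fb = factorize(a), factorize(b)
assert unfactorize(fa & fb) == gcd(a, b)
assert unfactorize(fa | fb) == a * b // gcd(a, b)
print("verified for", a, "and", b)
```

Counter's `&` and `|` operators take the min and max of the counts, which is exactly multiset intersection and union.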

Already in my senior year, in topology class, I no longer saw the point of doing pure math. The only way I could interpret infinite products of topological spaces was as a game of symbols: they had no real meaning to me.

Not only was I starting to get a formalistic view of mathematics, but I was increasingly bothered by the normal approach to mathematics, the standard mathematical language and the paper medium. This was made much worse by the fact that I had grown intolerant of confusing notation/language and informal proofs. Thankfully, I didn't stay in mathematics. Advanced mathematics requires a lot of effort and things are not always beautiful. The real world has many more interesting things to understand. During this time, I considered going for a PhD in Applied Math, but became disappointed with that idea too. It was still too much like other math.

By my senior year, mathematics was no longer fun. It still wasn't "hard", but I had no motivation left. I had become enthusiastic about statistical modelling... even if I got labelled a Bayesian by our frequentist department (I think it was meant as a compliment). And it was my interest in AI, by far, that dominated my intellect.


The reason I had liked mathematics before that was that it had been, for me, easy and fun. And its formal structures were much more satisfactory and easier for me to understand than the things people did in physics, my original major. My physics teachers never seemed to explain things clearly, and never gave me good logical reasons for why they were doing what they were doing. It was often unclear which model and assumptions were being used. And even after pressing them, I still had foundational questions that went unanswered. Quantum Mechanics class was extremely frustrating: while "nobody understands quantum mechanics", the theory still has a reason to be, but they didn't give us a chance to try to make sense of the experimental results that motivated the theory, or convince me that the theory was the best we could do.

Although I started out with bad grades in physics, they were steadily improving. Still, my professors saw promise in me, and wanted me to stay. Despite liking and doing well on my last class on Thermodynamics & Statistical Mechanics, I decided that I was going to focus on math: I was just too different from the physicists, and talking to them took too much effort. Now I want Patrick Suppes to be my next physics teacher. Among the physicists, I was definitely a philosopher.

Computer Science

I had to overcome my initial prejudice against CS. I only started it because of my father's argument that it would be a good idea if I wanted to make money. As a freshman, I had thought that it was just going to be about programming techniques and similar boring-sounding things. The sort of person who did CS at my school was not far from the "typical management major": financially ambitious, if not particularly mathematically talented. When I joined the group, I learned that there were exceptions... I realized that there were also "computer geeks", as well as the former type. I was never a "computer geek". Programming geek, yes, for a long time... but one who couldn't get Linux installed, and who would call a technician to troubleshoot my network. Among them, I was solidly seen as a math geek. It bothered me that their AI class assumed neither knowledge of basic probability nor of basic logic, and that the computer graphics class couldn't do a simple linear projection.
But I really liked ProgLan. Also, designing algorithms was fun. Algorithmic reductions even more. And I learned some useful programming techniques.


I've always been a philosopher. But I did not like the prospect of reading shelves full of philosophy books, learning the ins and outs of useless arguments (for instance, about metaphysics), and rereading & struggling to understand what exactly writers mean. Philosophy is great for breaking people out of their epistemological vices: questioning their prejudices, intuitions, etc., but some things are just overanalyzed. I think this is because philosophers talk past each other. Case in point: the Monty Hall problem. Why are they still writing papers about it?? I think that philosophers would benefit the most from computational aids to reasoning, argumentation maps and such. At least they already know logic.


It was fascinating. But it wasn't rigorous enough for me. If they had offered cognitive science, I probably would have taken lots of it.

Economics & Linguistics

I also flirted with economics, although never for credit. It was interesting, but they were too slow on the math. Like CS, only worse. I also took a class in linguistics (the only one offered!), but as I wasn't about to start doing NLP, it remained a mere curiosity.

