my non-interests
Jun. 22nd, 2005 11:01 pm
As a way of forging an identity within a group, individuals try to be different from others (while at the same time trying to conform; see Judith Harris).
This seems like a good way to see in what ways I differ from like-minded people:
Based on the lj interests lists of those who share my more unusual interests, the interests suggestion meme thinks I might be interested in
1. game theory score: 19
I am not interested in game theory itself, but in its applications.
2. extropy score: 18
I'm not convinced that I should be optimistic, so I prefer "transhumanism".
3. wittgenstein score: 17
I've read some things about him, but found nothing appealing, although I like his view of philosophy as merely a language game.
4. ontology score: 15
I hope most of you had the CS sense of "ontology" in mind. In my view, the philosopher's sense of "ontology", i.e. "what really exists", is misguided.
5. free software score: 15
Who isn't for free software?
6. machine learning score: 14
OK, I actually like machine learning. I used to be prejudiced against it as a "fuzzy"/statistical/boring field.
7. genetic algorithms score: 14
AFAIK, GAs don't do very much without an "intelligent designer" behind them (see the sketch after this list).
8. genetic programming score: 14
What is this?
9. memetics score: 14
I like the idea of memetics. But I already have "memes" as an interest.
10. extropianism score: 13
See #2
11. austrian economics score: 13
Seems like libertarians' favorite "economics", since it justifies their prior beliefs. I don't see why economists belong to these different religions. Hasn't philosophy of science advanced far enough?
12. laissez faire score: 12
Hm... libertarians.
13. utilitarianism score: 11
I'm a "utilitarian".
14. life extension score: 11
I like this, actually.
15. cypherpunks score: 11
I know nothing about them.
16. social engineering score: 11
Ok. This is cool.
17. natural language processing score: 11
Ok. I should use this one and avoid the ambiguous "nlp".
18. biotechnology score: 10
Don't know much about it.
19. singularity score: 10
I'm afraid of it.
20. cryonics score: 10
Hopefully it won't be necessary.
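To illustrate what I mean by the "intelligent designer" behind GAs (#7), here is a minimal toy sketch in Python. This is my own illustrative code, with made-up names and parameters, nothing canonical: the point is that a human hand-writes the fitness function, and the evolution merely optimizes against it.

# Toy genetic algorithm (illustrative sketch only): evolve bitstrings
# toward a target. The fitness function below is entirely hand-designed;
# that human choice is the "intelligent designer" doing the real work,
# while the mutation/crossover machinery is generic.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Hand-written objective: number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection: keep the best fifth
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
print(best, fitness(best))

All the "knowledge" of what counts as a good solution sits in fitness(); nothing in the rest of the loop knows anything about the problem.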
changed by ouwiyaru, based on code by ixwin
(no subject)
Date: 2005-06-22 09:45 pm (UTC)
And I think he raises a lot of interesting points about the use of language.
Genetic Programming
Date: 2005-06-22 11:17 pm (UTC)
I'm not sure if there are any real-world genetic programming success stories like there are for genetic algorithms, but it has come up with surprisingly non-intuitive yet effective ways to write programs. The main problem with genetic programming is that it's very slow. Even with a cap on maximum program size, the number of possible programs that can be built from so many atoms is huge, and most of them do nothing interesting. So there are very large search spaces to explore.
John R. Koza of Stanford University is the biggest name in the field.
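To make the search-space point concrete, here is a minimal toy sketch in Python of the GP idea. This is my own illustrative code, not Koza's system, and all names and parameters are made up: it evolves small arithmetic expression trees toward a target function, with a depth cap standing in for the cap on program size.

# Toy genetic programming sketch (illustrative only): evolve arithmetic
# expression trees to approximate f(x) = x^2 + x + 1.
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over {x, small constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate a tree at the point x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Lower is better: squared error against the target on sample points."""
    target = lambda x: x * x + x + 1
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=3):
    """Replace a random subtree with a fresh one; depth caps program size."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    # Keep the best quarter, refill with mutated copies of survivors.
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))

Even in this toy, nearly all randomly generated trees fit terribly, and the population spends most of its time wading through uninteresting programs: the slowness complaint in miniature.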
Re: Genetic Programming
Date: 2005-06-22 11:43 pm (UTC)
The goals of "genetic programming" are best accomplished by AI that mimics human programmers' problem-solving. Human programmers don't search all possible programs... I think we need meta-level guidance (reasoning about specifications) if we're going to have decent automated programming (artificial programmers).
Re: Genetic Programming
Date: 2005-06-23 12:40 am (UTC)
The nice thing about GP is that it doesn't need us to have much knowledge about how humans solve problems. The impression I have from what psychology I've read is that we're a long way off from getting at the mechanisms behind human ingenuity.
Re: Genetic Programming
Date: 2005-06-23 12:59 am (UTC)
I don't know how better automated programming is going to work. It's a hard problem.
Re: Genetic Programming
Date: 2005-06-23 01:39 am (UTC)
If you give evolving algorithms a problem that has elegant solutions, I think they are likely to find dirty solutions, even though those are longer than the elegant program we want (almost by definition).
I don't know how better automated programming is going to work. It's a hard problem.
It's AI-hard. As soon as you have something that can automatically solve programming problems, you'll quickly reach full AI. I'd like to make a convincing argument, but I'm a bit confused and tired right now.
Re: Genetic Programming
Date: 2005-06-23 09:14 am (UTC)
I agree that satisfying vague wants with well-engineered code is AI-complete; OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess. (Though the history of the field doesn't really support this claim so far.)
Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI? Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Re: Genetic Programming
Date: 2005-06-23 09:37 am (UTC)
I wouldn't even say it is "articulated knowledge" in humans: even though there may be a very simple theory of aesthetics, we are hacks, and we wouldn't know this theory.
I agree that satisfying vague wants with well-engineered code is AI-complete;
How do you define a "vague want"? "not expressed in a formal logic"?
I think the interesting and productive challenges are in the middle of the formality spectrum: reverse-engineering what people do (this makes me a cognitive AIer), since knowing logic (formal) is not enough, and understanding NL (informal) is hopeless because it's still too far from logic. My approach is to formalize how people reason. I'd like to hear criticism from someone who thinks there are better approaches to AI.
OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess.
How does he play?
Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI?
Not in particular his ideas. I'm just thinking of bootstrapping AI.
Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Does he put it that way? Or is that your extrapolation of what superintelligence entails?
It's very scary... But if the good people don't do it first, then the bad people will. I have no idea if Yudkowsky's dream would be a good enough "lesser evil".
Re: Genetic Programming
Date: 2005-06-26 10:41 am (UTC)
I want to know where the bottleneck is in the development of AI, and I think it's in formalizing cognitive processes: we make real progress in increasing *intelligence* when we put effort into this, and AFAIK this is not the case with other endeavours that go by the name of "AI".
(no subject)
Date: 2005-06-23 12:43 am (UTC)
Definitely not. I know what you're getting at here, the Bryan Caplan/Bayesian argument that we should be in agreement on what economics is and how it should be done.
But I think that the correct priors either have not been established, or have been deliberately mis-established.
Part of the problem with treating Economics as a science is that it has a use-value (or maybe, a mis-use-value) to people who have the power to obfuscate it. Another problem is that human behavior is still fairly irreducible.
In its most basic definition, Economics is the study of how we deal with scarcity. It doesn't apply to things which are not scarce.
Beyond that, how do we study it? Can we isolate variables? If not, a wholly empirical (or rather, statistical) approach will be highly misleading, as it is in most "social sciences".
If we can establish enough priors that we agree on, we can extrapolate the rest through logic. And that's pretty much all Austrian Economics is. Yes, libertarians like it because its conclusions jibe with what they already are inclined to believe, but that's no reason to dismiss it either. Perhaps the conclusions are a reason for libertarians to believe what they do in the first place. (unless you think that ALL libertarians believe in liberty for ulterior psychological reasons...)
(no subject)
Date: 2005-06-23 01:25 am (UTC)
Would you have any references to this?
I don't think many disagreements in economics are due to differing priors.
Although the schools have disagreements on their philosophy of probability (e.g. "radical ignorance"; which, btw, should not be difficult to resolve for any Machine Learning specialist, who can test methods of induction), I think they mostly disagree on basic assumptions, e.g. whether people are rational, and whether people's behavior can be modelled by a numeric notion of utility.
Which I think are all silly things to disagree about, especially since we can do meta-science to test which "economics" gets it right.
Part of the problem with treating Economics as a science is that it has a use-value (or maybe, a mis-use-value) to people who have the power to obfuscate it.
i.e. bias.
Another problem is that human behavior is still fairly irreducible.
(1) what would it be reducible to?
(2) what does this have to do with whether there are reasonable disagreements?