my non-interests
Jun. 22nd, 2005 11:01 pm
As a way of forging one's identity within a group, individuals try to be different from others (while at the same time trying to conform; see Judith Harris).
This seems like a good way of seeing in what ways I differ from like-minded people:
Based on the lj interests lists of those who share my more unusual interests, the interests suggestion meme thinks I might be interested in
1. game theory score: 19
I am not interested in game theory, but in its applications.
2. extropy score: 18
I'm not convinced I should be optimistic; I prefer "transhumanism".
3. wittgenstein score: 17
I've read some things about him, but found nothing appealing, although I like his view of philosophy as merely a language game.
4. ontology score: 15
I hope most of you had the CS sense of "ontology" in mind. In my view, the philosopher's sense of "ontology", i.e. "what really exists", is misguided.
5. free software score: 15
who isn't for free software??
6. machine learning score: 14
ok. I actually like machine learning. I used to be prejudiced against it, as a "fuzzy"/statistical/boring field.
7. genetic algorithms score: 14
AFAIK, GAs don't do very much without an "intelligent designer" behind them.
8. genetic programming score: 14
what is this??
9. memetics score: 14
I like the idea of memetics. But I already have "memes" as an interest.
10. extropianism score: 13
See #2
11. austrian economics score: 13
Seems like libertarians' favorite "economics", since it justifies their prior beliefs. I don't see why economists should belong to these different religions. Hasn't philosophy of science advanced far enough?
12. laissez faire score: 12
hm... libertarians.
13. utilitarianism score: 11
I'm a "utilitarian".
14. life extension score: 11
I like this, actually.
15. cypherpunks score: 11
I know nothing about them.
16. social engineering score: 11
Ok. This is cool.
17. natural language processing score: 11
Ok. I should use this one and avoid the ambiguous "nlp".
18. biotechnology score: 10
Don't know much about it.
19. singularity score: 10
I'm afraid of it.
20. cryonics score: 10
Hopefully it won't be necessary.
changed by ouwiyaru, based on code by ixwin
Re: Genetic Programming
Date: 2005-06-23 01:39 am (UTC)
If you give evolving algorithms a problem that has elegant solutions, I think they are likely to find dirty solutions, even though those are longer than the elegant program we want (almost by definition).
I don't know how better automated programming is going to work. It's a hard problem.
It's AI-hard. As soon as you have something to automatically solve programming problems, you'll quickly reach full AI. I'd like to make a convincing argument, but I'm a bit confused and tired right now.
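The point above about evolving algorithms can be illustrated with a toy sketch (all names and parameters here are illustrative, not from any real GP system): a minimal genetic algorithm on bit strings. Note that all the "direction" in the search comes from the hand-written fitness function; selection rewards whatever scores well, with no pressure toward elegance.

```python
import random

TARGET_LEN = 20  # evolve a bit string toward all ones ("OneMax")

def fitness(bits):
    """Number of 1-bits -- the human-supplied objective."""
    return sum(bits)

def mutate(bits, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    """One-point crossover of two parents."""
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        # truncation selection: keep the top half, breed the rest from it
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)
```

On a toy objective like this the GA converges quickly; on richer program spaces the same mechanism happily accumulates whatever "dirty" structure the fitness function fails to penalize.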
Re: Genetic Programming
Date: 2005-06-23 09:14 am (UTC)
I agree that satisfying vague wants with well-engineered code is AI-complete; OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess. (Though the history of the field doesn't really support this claim so far.)
Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI? Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Re: Genetic Programming
Date: 2005-06-23 09:37 am (UTC)
I wouldn't even say it is "articulated knowledge" in humans: even though there may be a very simple theory of aesthetics, we are hacks, and we wouldn't know this theory.
I agree that satisfying vague wants with well-engineered code is AI-complete;
How do you define a "vague want"? "not expressed in a formal logic"?
I think the interesting and productive challenges are in the middle of the formality spectrum: reverse-engineering what people do (this makes me a cognitive AIer), since knowing logic (formal) is not enough, and understanding NL (informal) is still hopeless because it's still too far from logic. My approach is to formalize how people reason. I'd like to hear a criticism from someone who thinks there are better approaches to AI.
OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess.
How does he play?
Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI?
Not in particular his ideas. I'm just thinking of bootstrapping AI.
Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Does he put it that way? Or is that your extrapolation for what superintelligence entails?
It's very scary... But if the good people don't do it first, then the bad people will. I have no idea whether Yudkowsky's dream would be a good enough "lesser evil".
Re: Genetic Programming
Date: 2005-06-26 10:41 am (UTC)
I want to know where the bottleneck is in the development of AI, and I think it's in formalizing cognitive processes: we make real progress in increasing *intelligence* when we put effort into this, and AFAIK this is not the case with other endeavours that go by the name of "AI".