gusl: (Default)
[personal profile] gusl
As a way of forging one's identity within the group, individuals try to be different from others (while at the same time trying to conform; see Judith Harris).

This seems like a good way of seeing in what ways I differ from like-minded people:


Based on the lj interests lists of those who share my more unusual interests, the interests suggestion meme thinks I might be interested in
1. game theory score: 19
I am not interested in game theory, but in its applications.

2. extropy score: 18
I'm not convinced that I should be optimistic, so I prefer "transhumanism".

3. wittgenstein score: 17
I've read some things about him, but found nothing appealing, although I like his view of philosophy as merely a language game.

4. ontology score: 15
I hope most of you had the CS sense of "ontology" in mind. In my view, the philosopher's sense of "ontology", i.e. "what really exists", is misguided.

5. free software score: 15
who isn't for free software??

6. machine learning score: 14
ok. I actually like machine learning. I used to be prejudiced against it, as a "fuzzy"/statistical/boring field.

7. genetic algorithms score: 14
AFAIK, GAs don't do very much without an "intelligent designer" behind them.

8. genetic programming score: 14
what is this??

9. memetics score: 14
I like the idea of memetics. But I already have "memes" as an interest.

10. extropianism score: 13
See #2

11. austrian economics score: 13
seems like libertarians' favorite "economics", since it justifies their prior beliefs. I don't see why economists belong to these different religions. Hasn't philosophy of science advanced far enough?

12. laissez faire score: 12
hm... libertarians.

13. utilitarianism score: 11
I'm a "utilitarian".

14. life extension score: 11
I like this, actually.

15. cypherpunks score: 11
I know nothing about them.

16. social engineering score: 11
Ok. This is cool.

17. natural language processing score: 11
Ok. I should use this one and avoid the ambiguous "nlp".

18. biotechnology score: 10
Don't know much about it.

19. singularity score: 10
I'm afraid of it.

20. cryonics score: 10
Hopefully it won't be necessary.
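The point in item 7, that a GA does little without a human "intelligent designer" behind it, can be sketched concretely. Here is a minimal genetic algorithm (my own toy example, not from the post; all names are mine) for the standard "one-max" problem of evolving an all-ones bitstring. Note that the fitness function, the selection scheme, and the mutation rate are all hand-designed: that is exactly the part the GA cannot supply for itself.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TARGET_LEN = 20  # evolve a bitstring of all 1s ("one-max")

def fitness(bits):
    # The fitness function is the hand-crafted part: the "intelligent
    # designer" telling the GA what counts as good.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover of two parent bitstrings.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swap in a different fitness function and the same machinery chases a different target, which is the sense in which the designer, not the GA, decides what gets built.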



Re: Genetic Programming

Date: 2005-06-23 01:39 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Oh, nice pictures. I'd like to see if one can clean up these evolved programs into simpler, more understandable programs, from which to read off a theory of aesthetics. The fact that cleaning them up is hard is the other reason I don't like the genetic approach.

If you give evolving algorithms a problem that has elegant solutions, I think they are likely to find dirty solutions, even though those are longer than the elegant program we want (almost by definition).
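A toy illustration of that kind of "dirty solution" (my own example, not from the thread): evolved programs tend to accumulate introns, code that contributes nothing to the result, so an evolved-style expression can be much longer than the elegant program it is semantically equal to.

```python
def evolved(x):
    # An "evolved"-looking expression padded with introns:
    # (x + 0), (x * 1), and (x - x) * (...) all do nothing.
    return ((x + 0) * (x * 1)) + (x - x) * (x + 7)

def elegant(x):
    # The elegant program the expression above reduces to.
    return x * x

# The two agree everywhere, yet reading a theory off `evolved` is
# much harder than reading it off `elegant`.
assert all(evolved(x) == elegant(x) for x in range(-10, 11))
```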


I don't know how better automated programming is going to work. It's a hard problem.

It's AI-hard: as soon as you have something that can automatically solve programming problems, you'll quickly reach full AI. I'd like to make a convincing argument, but I'm a bit confused and tired right now.

Re: Genetic Programming

Date: 2005-06-23 09:14 am (UTC)
From: [identity profile] darius.livejournal.com
There are lots of problems that may have no elegant solution (I guess you could make a counting argument for this, except it'd need a crisp definition of elegance) -- and GP can be good for them. When you're more interested in articulated knowledge, then yeah, GP pretty much sucks.

I agree that satisfying vague wants with well-engineered code is AI-complete; OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess. (Though the history of the field doesn't really support this claim so far.)

Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI? Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.

Re: Genetic Programming

Date: 2005-06-23 09:37 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
When you're more interested in articulated knowledge, then yeah, GP pretty much sucks.
I wouldn't even say it is "articulated knowledge" in humans: even though there may be a very simple theory of aesthetics, we are hacks, and we wouldn't know this theory.

I agree that satisfying vague wants with well-engineered code is AI-complete;
How do you define a "vague want"? "not expressed in a formal logic"?
I think the interesting and productive challenges are in the middle of the formality spectrum: reverse-engineering what people do (this makes me a cognitive AIer), since knowing logic (formal) is not enough, and understanding NL (informal) is still hopeless because it's still too far from logic. My approach is to formalize how people reason. I'd like to hear a criticism from someone who thinks there are better approaches to AI.


OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess.
How does he play?

Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI?
Not in particular his ideas. I'm just thinking of bootstrapping AI.

Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Does he put it that way? Or is that your extrapolation for what superintelligence entails?

It's very scary... But if the good people don't do it first, then the bad people will. I have no idea if Yudkowsky's dream would be a good enough "lesser evil".

Re: Genetic Programming

Date: 2005-06-26 10:41 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
If you mean improving automated programming, I don't know what the state of the art is.

I want to know where the bottleneck is in the development of AI, and I think it's in formalizing cognitive processes: we make real progress in increasing *intelligence* when we put effort into this, and AFAIK this is not the case with other endeavours that go by the name of "AI".
