[personal profile] gusl
As a way of forging one's identity within the group, individuals try to be different from others (while at the same time trying to conform; see Judith Harris).

This seems like a good way of seeing in what ways I differ from like-minded people:


Based on the lj interests lists of those who share my more unusual interests, the interests suggestion meme thinks I might be interested in
1. game theory score: 19
I am not interested in game theory, but in its applications.

2. extropy score: 18
I'm not convinced that I should be optimistic; I prefer "transhumanism".

3. wittgenstein score: 17
I've read some stuff about him, but nothing that appealed to me, although I like his view of philosophy as merely a language game.

4. ontology score: 15
I hope most of you had the CS sense of "ontology" in mind. In my view, the philosopher's sense of "ontology", i.e. "what really exists", is misguided.

5. free software score: 15
who isn't for free software??

6. machine learning score: 14
ok. I actually like machine learning. I used to be prejudiced against it, as a "fuzzy"/statistical/boring field.

7. genetic algorithms score: 14
AFAIK, GAs don't do very much without an "intelligent designer" behind them.

8. genetic programming score: 14
what is this??

9. memetics score: 14
I like the idea of memetics. But I already have "memes" as an interest.

10. extropianism score: 13
See #2

11. austrian economics score: 13
seems like libertarians' favorite "economics", since it justifies their prior beliefs. I don't see why economists belong to these different religions. Hasn't philosophy of science advanced far enough?

12. laissez faire score: 12
hm... libertarians.

13. utilitarianism score: 11
I'm a "utilitarian".

14. life extension score: 11
I like this, actually.

15. cypherpunks score: 11
I know nothing about them.

16. social engineering score: 11
Ok. This is cool.

17. natural language processing score: 11
Ok. I should use this one and avoid the ambiguous "nlp".

18. biotechnology score: 10
Don't know much about it.

19. singularity score: 10
I'm afraid of it.

20. cryonics score: 10
Hopefully it won't be necessary.



changed by [livejournal.com profile] ouwiyaru based on code by [livejournal.com profile] ixwin

(no subject)

Date: 2005-06-22 09:36 pm (UTC)
From: [identity profile] duckierose.livejournal.com
Wittgenstein is the MAN.

(no subject)

Date: 2005-06-22 09:45 pm (UTC)
From: [identity profile] duckierose.livejournal.com
I really like the way he writes. Well, the way he's translated, which, in part, speaks to the way he writes.

And I think he raises a lot of interesting points of use of language.

Genetic Programming

Date: 2005-06-22 11:17 pm (UTC)
From: [identity profile] xuande.livejournal.com
Genetic programming is kind of like a genetic algorithm, except the individuals it evolves are programs instead of parameters. The programs are in a simple LISP-like language, represented internally as a tree. "Crossover" between individuals usually consists in splicing part of one tree onto another, and mutation can involve things like growing a branch or replacing a branch with a randomly-selected leaf (atom).
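The tree representation and operators described above can be sketched in a few lines of Python. This is an illustrative toy, not any particular GP system: the function set, leaves, and tree encoding (tuples of the form `(op, left, right)`) are all made up for the example.

```python
import random

# Toy genetic-programming operators on LISP-like expression trees.
# A tree is either a leaf (a variable name or a constant) or a tuple
# (op, left, right).

OPS = ['+', '*']
LEAVES = ['x', 'y', 1, 2]

def random_tree(depth):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every node in the tree."""
    yield path, tree
    if isinstance(tree, tuple):
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def replace(tree, path, new):
    """Return a copy of tree with the node at `path` swapped for `new`."""
    if not path:
        return new
    op, left, right = tree
    if path[0] == 1:
        return (op, replace(left, path[1:], new), right)
    return (op, left, replace(right, path[1:], new))

def crossover(a, b):
    """Splice a random subtree of b onto a random point of a."""
    pa, _ = random.choice(list(subtrees(a)))
    _, sb = random.choice(list(subtrees(b)))
    return replace(a, pa, sb)

def mutate(tree):
    """Replace a random branch with a randomly selected leaf (atom)."""
    p, _ = random.choice(list(subtrees(tree)))
    return replace(tree, p, random.choice(LEAVES))

def evaluate(tree, env):
    """Interpret a tree, looking variables up in env."""
    if isinstance(tree, tuple):
        op, l, r = tree
        lv, rv = evaluate(l, env), evaluate(r, env)
        return lv + rv if op == '+' else lv * rv
    return env.get(tree, tree)
```

A real GP system would add a fitness function and selection loop on top of these operators; the mutation shown here is the "shrink" kind (branch replaced by a leaf), and a grow-a-branch mutation would just splice in `random_tree(d)` instead of a leaf.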

I'm not sure if there are any real-world genetic programming success stories like there are for genetic algorithms, but it has come up with surprisingly non-intuitive yet effective ways to write programs. The main problem with genetic programming is that it's very slow. Even with a cap on maximum size, the number of possible programs that can be written with so many atoms is huge, and most of them do nothing interesting. So they have very large search spaces to explore.

John R. Koza of Stanford University is the biggest name in the field.

Re: Genetic Programming

Date: 2005-06-22 11:43 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
So "genetic algorithms" do a sort of hill-climbing?

The goals of "genetic programming" are best accomplished by AI that mimics human programmers' problem-solving. Human programmers don't search all possible programs... I think we need meta-level guidance (reasoning about specifications) if we're going to have decent automated programming (artificial programmers).

Re: Genetic Programming

Date: 2005-06-23 12:40 am (UTC)
From: [identity profile] xuande.livejournal.com
Genetic algorithms do the same sort of thing that hill climbers do (optimizing functions) but they're usually much better at it. Hill climbers tend to get stuck on small, local hills in the fitness landscape, while genetic algorithms tend to find the highest peaks. They're closer to simulated annealing (doubly so because they were both inspired by natural processes).
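The population-plus-crossover search being contrasted with hill climbing can be sketched minimally. The OneMax fitness function (count the 1s) and all parameters below are illustrative, chosen only to make the mechanics visible:

```python
import random

# Minimal genetic algorithm on bit-strings. Unlike a hill climber,
# which moves a single point uphill, a GA keeps a whole population and
# lets crossover combine good partial solutions from different parents.

def fitness(bits):
    return sum(bits)  # "OneMax": global peak at the all-ones string

def evolve(n_bits=20, pop_size=30, generations=60, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < mut_rate) for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On a deceptive, multi-peaked landscape (unlike OneMax) the population and crossover are what let the GA hop between hills where a single-point climber gets stuck.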

The nice thing about GP is that it doesn't need us to have much knowledge about how humans solve problems. The impression I have from what psychology I've read is that we're a long way off from getting at the mechanisms behind human ingenuity.

Re: Genetic Programming

Date: 2005-06-23 12:59 am (UTC)
From: [identity profile] darius.livejournal.com
My art program uses a homegrown type of genetic programming (with human judgement as the fitness function): here and here.

I don't know how better automated programming is going to work. It's a hard problem.

Re: Genetic Programming

Date: 2005-06-23 01:39 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
oh, nice pictures. I'd like to see if one can clean up these evolved programs into simpler, more understandable ones, from which to read off a theory of aesthetics. The fact that cleaning them up is hard is the other reason I don't like the genetic approach.

If you give a problem with elegant solutions to evolving algorithms, I think they are likely to find dirty solutions, even though those are longer than the elegant program we want (almost by definition).


I don't know how better automated programming is going to work. It's a hard problem.

It's AI-hard. As soon as you have something to automatically solve programming problems, you'll quickly reach full AI. I'd like to make a convincing argument, but I'm a bit confused and tired right now.

Re: Genetic Programming

Date: 2005-06-23 09:14 am (UTC)
From: [identity profile] darius.livejournal.com
There are lots of problems that may have no elegant solution (I guess you could make a counting argument for this, except it'd need a crisp definition of elegance) -- and GP can be good for them. When you're more interested in articulated knowledge, then yeah, GP pretty much sucks.

I agree that satisfying vague wants with well-engineered code is AI-complete; OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess. (Though the history of the field doesn't really support this claim so far.)

Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI? Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.

Re: Genetic Programming

Date: 2005-06-23 09:37 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
When you're more interested in articulated knowledge, then yeah, GP pretty much sucks.
I wouldn't even say it is "articulated knowledge" in humans: even though there may be a very simple theory of aesthetics, we are hacks, and we wouldn't know this theory.

I agree that satisfying vague wants with well-engineered code is AI-complete;
How do you define a "vague want"? "not expressed in a formal logic"?
I think the interesting and productive challenges are in the middle of the formality spectrum: reverse-engineering what people do (this makes me a cognitive AIer), since knowing logic (formal) is not enough, and understanding NL (informal) is still hopeless because it's too far from logic. My approach is to formalize how people reason. I'd like to hear a criticism from someone who thinks there are better approaches to AI.


OTOH I think a whole lot of what even good programmers do is more like Kasparov playing chess.
How does he play?

Are you thinking of Eliezer Yudkowsky's ideas about self-improving AI?
Not in particular his ideas. I'm just thinking of bootstrapping AI.

Even though the Singularity makes sense, I'm very skeptical of his program in particular -- just as well since I hate hate hate his goal of creating a sysop with sole root permissions to the universe.
Does he put it that way? Or is that your extrapolation for what superintelligence entails?

It's very scary... But if the good people don't do it first, then the bad people will. I have no idea if Yudkowsky's dream would be a good enough "lesser evil".

Re: Genetic Programming

Date: 2005-06-26 10:41 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
if you mean improving automated programming, I don't know what the state-of-the-art is.

I want to know where the bottleneck is in the development of AI, and I think it's in formalizing cognitive processes: we make real progress in increasing *intelligence* when we put effort into this, and AFAIK this is not the case with other endeavours that go by the name of "AI".

(no subject)

Date: 2005-06-23 12:43 am (UTC)
From: [identity profile] daoistraver.livejournal.com
"I don't see why economists belong to these different religions. Hasn't philosophy of science advanced far enough?"

Definitely not. I know what you're getting at here, the Bryan Caplan/Bayesian argument that we should be in agreement on what economics is and how it should be done.

But I think that the correct priors either have not been established, or have been deliberately mis-established.
Part of the problem with treating Economics as a science is that it has a use-value (or maybe, a mis-use-value) to people who have the power to obfuscate it. Another problem is that human behavior is still fairly irreducible.

In its most basic definition, Economics is the study of how we deal with scarcity. It doesn't apply to things which are not scarce.

Beyond that, how do we study it? Can we isolate variables? If not, a wholly empirical (or rather, statistical) approach will be highly misleading, as it is in most "social sciences".
If we can establish enough priors that we agree on, we can extrapolate the rest through logic. And that's pretty much all Austrian Economics is. Yes, libertarians like it because its conclusions jibe with what they already are inclined to believe, but that's no reason to dismiss it either. Perhaps the conclusions are a reason for libertarians to believe what they do in the first place. (unless you think that ALL libertarians believe in liberty for ulterior psychological reasons...)

(no subject)

Date: 2005-06-23 01:25 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Definitely not. I know what you're getting at here, the Bryan Caplan/Bayesian argument
Would you have any references to this?

I don't think many disagreements in economics are due to differing priors.

Although the schools have disagreements on their philosophy of probability (e.g. "radical ignorance") (which, btw, should not be difficult to resolve for any Machine Learning specialist, who can test methods of induction), I think they mostly disagree on basic assumptions, e.g. whether people are rational, and whether people's behavior can be modelled by a numeric notion of utility.

Which I think are all silly things to disagree about, especially since we can do meta-science to test which "economics" gets it right.


Part of the problem with treating Economics as a science is that it has a use-value (or maybe, a mis-use-value) to people who have the power to obfuscate it.
i.e. bias.

Another problem is that human behavior is still fairly irreducible.
(1) what would it be reducible to?
(2) what does this have to do with whether there are reasonable disagreements?

(no subject)

Date: 2005-06-23 02:51 pm (UTC)
From: [identity profile] selfishgene.livejournal.com
Good analysis.
