[personal profile] gusl
I think that one of the roles of (a certain type of) friends is to extend your mind (in Engelbart's sense, i.e. roughly the same sense that a computer does). For example: argumentative interactions, in which mental labor is naturally and profitably divided between the For and Against sides, or the Creative and the Skeptic.1

Why does cultural background matter? Because a shared set of basic concepts and dogmas is required for two people to understand each other's ideas. My dogmas are rather positivistic, and include: "mathematics never lies", "there is no problem that humans can solve and computers inherently can't", and "informal reasoning can always be formalized (digitized+parsed) losslessly, with the right knowledge representations". Scientists, and AI folks in particular, will usually share these.

I'm just beginning to talk to biologists, and it is... effortful. Although books like William Cohen's "A Computer Scientist's Guide to Cell Biology" help bridge this gap.

Knowledge of popular culture seems to be very important among geeks in North America. I haven't seen people enthusiastically spout movie quotes anywhere else. Europeans, for example, have to deal with a Tower of Babel among themselves, and generally don't seem to expect anyone to have read the same books. It's just not a thing they do there.

Another lazy man's view of like-minded intellectual friends is that they infect you with their best memes, already filtered, interpreted, and critiqued into your conceptual system, so you don't have to do that work yourself. Also, they probably have (or have had) some of the same questions as you, and have made some progress on them (or solved them), so you can share notes.



1 - If you're sufficiently detached and flexible of mind, you can always have the argument with yourself by switching sides, but this comes with a certain overhead. As an example of this, Paul Graham talks about writing as a tool for thinking in his essay "Persuade XOR Discover", and I couldn't agree more. Paul Graham is precisely the kind of geek with whom I'd enjoy a speculative, discovery-oriented conversation of the kind I have with my geek friends.

(no subject)

Date: 2009-09-25 12:14 am (UTC)
From: [identity profile] jcreed.livejournal.com
That essay is interesting.

I rather like the line "I think the goal of an essay should be to discover surprising things," but I don't agree with "And most surprising means most different from what people currently believe."
There are also surprises to be had in learning a new justification for something you already believe. When I learned category theory, I discovered that a lot of the things I already knew fit into certain patterns; I never had a strong prior belief, later contradicted, that they didn't fit into patterns. I think it's also possible for an essay on a much less formal topic to uncover a good, novel argument that happens, by coincidence, to support what you already believe; it's just very risky to set out looking for one, because it's easy to be biased by the fact that you already hold those beliefs when evaluating its goodness and novelty.
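
A minimal Haskell sketch of that kind of pattern (an illustrative example, not from the comment above): mapping over a list and mapping over an optional value were both familiar operations before category theory, and the surprise is only that they are instances of the single Functor abstraction.

-- Two operations that were familiar before category theory:
-- mapping a function over a list, and over an optional value.
mapList :: (a -> b) -> [a] -> [b]
mapList _ []     = []
mapList f (x:xs) = f x : mapList f xs

mapOpt :: (a -> b) -> Maybe a -> Maybe b
mapOpt _ Nothing  = Nothing
mapOpt f (Just x) = Just (f x)

-- The categorical observation: both are the same Functor pattern,
-- already provided by fmap, differing only in the container.
main :: IO ()
main = do
  print (fmap (+1) [1, 2, 3])              -- [2,3,4]
  print (fmap (+1) (Just 41))              -- Just 42
  print (fmap (+1) (Nothing :: Maybe Int)) -- Nothing

The prior beliefs ("you can map over a list", "you can map over a Maybe") aren't contradicted; the surprise is the shared structure.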

(no subject)

Date: 2009-09-25 12:35 am (UTC)
From: [identity profile] damion.livejournal.com
As a note to your footnote, a large amount of my thought process consists of internal dialog.

(no subject)

Date: 2009-09-25 06:35 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
ah, insights from an intuitionist!

(no subject)

Date: 2009-09-25 07:51 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I have two types of dialogs in my head.

Truth-finding dialogs, and dialogs that simulate the arguments between lawyers in court. The difference is whether the agents in the dialog accept other people's norms about what is acceptable/unthinkable.

The former is easier for me. The latter requires simulating other people's values and taboos.

(no subject)

Date: 2009-09-25 07:24 pm (UTC)
From: [identity profile] cwarner.livejournal.com
I'm not sure I agree with "there is no problem that humans can solve and computers inherently can't" or "informal reasoning can always be formalized (digitized+parsed) losslessly, with the right knowledge representations" but maybe that's because I did a philosophy degree, and not a technical one. I'm curious why you think there are no inherent limits on computers' ability to solve the problems human brains can. I guess I'd need to know exactly how you define "computer" and whether the human brain falls under that definition. And, one of my favorite professors at UBC, John Woods, straddled CS and philosophy and studied representing human reasoning with logic, and...well I don't know, he certainly doesn't have a complete finished product yet.

(no subject)

Date: 2009-09-25 07:27 pm (UTC)
From: [identity profile] cwarner.livejournal.com
I think I'm also somewhat greedy and one-sided with friends, in that I like to be an intellectual leech. I seem to seek out intelligent and informed people and ask them questions. I know there are so many people who are so much smarter than me that it just seems inefficient for me to do any real mental work.

(no subject)

Date: 2009-09-25 07:36 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I call them dogmas because I have no proof.


<< I'm curious why you think there are no inherent limits on computers' ability to solve the problems human brains can. >>

I'll believe in an inherent limit when you show me some problem that humans can solve and a proof that computers inherently can't do it. (I don't think it can be done)


When I talk about capturing informal reasoning, I'm including fallacies too. The point is that anything you call "reasoning" will be a symbolic process. Intuitions can be represented symbolically. (Though this isn't so meaningful, since I also believe that everything can be represented symbolically; my dogma is specifically about a certain level of description that pertains to logical arguments)
