gusl: (Default)
Stolen from this comment:

What an awesome idea to think about!

Constrained languages make it easier to standardize communication (think semantic web vs. the free-form web), minimizing errors of interpretation. A familiar movie-plot structure or song rhythm tends to put the viewer at ease and inspire confidence. At that point, it's easy to switch into a "flow"-like automatic mode, focusing on the higher-level structure (i.e. the meaning rather than the words). By constantly demanding your attention (though not necessarily your focus), the task puts you in a trance-like state of consciousness.

This is kinda like how driving on a highway can be relaxing.

When the medium is free-form (at least in the time dimension), one's attention is free to shift around, and one is free to spend time on complex planning. It is precisely this freedom that makes anxiety possible.

I would like to look at frontal lobe activation in structured vs unstructured tasks. If my hypothesis is correct (more frontal activation in unstructured tasks), this would explain autistic impairment in the latter.

---

Bluegrass seems like a very constrained form. Maybe this is my bias, since it's a style I know very well.

To test this hypothesis using information theory, I would try to show that the relevant features can be compressed quite efficiently.

If we had an MDL program that could generate any tune in the space of bluegrass tunes (generating only the relevant features, say the kind of information that is in a MIDI file), then the input needed to specify any given tune would be rather small.
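A crude way to probe this without building a real generative model (everything below is a hypothetical sketch of mine, with made-up data): use an off-the-shelf compressor as a stand-in for the MDL model, with a corpus of bluegrass tunes as a preset dictionary. If the genre is as constrained as I suspect, a new tune should need far fewer bits when encoded against the bluegrass dictionary than when encoded on its own.

```python
import zlib

def description_length_bits(tune: bytes, corpus: bytes) -> int:
    # Encode the tune relative to a "genre model": here just zlib with
    # the corpus as a preset dictionary (zlib caps dictionaries at 32 KB).
    compressor = zlib.compressobj(level=9, zdict=corpus[-32768:])
    encoded = compressor.compress(tune) + compressor.flush()
    return 8 * len(encoded)

# Hypothetical data: tunes serialized as MIDI-style note/duration events.
bluegrass_corpus = b"D4:8 F#4:8 A4:4 B4:4 A4:8 F#4:8 D4:4 E4:8 F#4:8 D4:2"
new_tune = b"D4:8 F#4:8 A4:4 B4:4"

# The hypothesis predicts the conditional encoding is much shorter
# than encoding the same tune with no genre model at all.
print(description_length_bits(new_tune, bluegrass_corpus))
print(8 * len(zlib.compress(new_tune, 9)))  # unconditioned, for comparison
```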
gusl: (Default)
Much of my motivation for trying to understand "semantic information theory" comes from classical information theory's shortcomings in distinguishing signal from noise.

How Could Information Equal Entropy?
If someone says that information = uncertainty = entropy, then they are confused, or something that should have been stated was left out. Those equalities lead to a contradiction, since the entropy of a system increases as the system becomes more disordered; on this reading, information would correspond to disorder.


Also, "Information is not uncertainty" shows related errors in published papers.
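To make the distinction concrete (a toy example of my own): the information carried by a message is the drop in entropy, not the entropy itself:

$$I(\text{message}) = H(X) - H(X \mid \text{message}).$$

If $X$ is uniform on $\{1,\dots,8\}$, then $H(X) = \log_2 8 = 3$ bits; after learning "$X$ is even", the posterior is uniform on $\{2,4,6,8\}$, so $H(X \mid \text{message}) = 2$ bits and the message carried exactly 1 bit. Equating information with entropy conflates the reduction with the quantity being reduced.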

If I get the time, I should read Dretske's "On Semantic Information" and Luciano Floridi's "Is Information Meaningful Data?" (2005).
gusl: (Default)
The Mathematical Theory of Information (which is based on the "Law of Diminishing Information", a.k.a. the GIGO principle), a book on "semantic information theory", seems interesting.

It smells a bit crackpottish though: the concepts seem overly general (such fields, e.g. "systems theory", tend to attract crackpots). Furthermore, the author hasn't published anything outside of Wolfram's conference, and the only reviewers I recognize are Chaitin and Calude.
gusl: (Default)
from The Tao of Programming:

There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: "Which is easier to design: an accounting package or an operating system?"

"An operating system," replied the programmer.

The warlord uttered an exclamation of disbelief. "Surely an accounting package is trivial next to the complexity of an operating system," he said.

"Not so," said the programmer, "when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to the tax laws. By contrast, an operating system is not limited by outside appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design."

The warlord of Wu nodded and smiled. "That is all good and well, but which is easier to debug?"

The programmer made no reply.


I like the argument-pattern in the programmer's second reply. It's an information-theoretic argument (the meta-specification of an OS is very small), and it reminds me of incompressibility proofs using Kolmogorov Complexity.
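For reference, the incompressibility proofs I have in mind rest on a counting argument:

$$\#\{x : |x| = n\} = 2^n, \qquad \#\{p : |p| < n\} \le 2^n - 1, \qquad \text{so } \exists x \text{ with } K(x) \ge n.$$

There are more strings of length $n$ than programs shorter than $n$, so some string of every length is incompressible. The programmer's reply has the same shape: the OS has a short description ("the simplest harmony between machine and ideas"), while the accounting package must encode the incompressible details of tax law and of many stakeholders' preferences.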
gusl: (Default)
Human communication is really lossy... even when the two people use the same logic and the same architecture. Maybe the goal of my formalization dreams could be couched in terms of bridging this gap: even if we both understand and use the same "logic of common sense", neither of us speaks a language that can express it easily. Hopefully, one day, natural language will seamlessly use metaphors from mathematics & programming languages (see Sussman's "The Legacy of Computer Science"). Not the mathematics & programming languages of today, mind you, but formal structures matching the common sense logic that we use in everyday life (I think that planning formalisms come close to what I want).

Why is it so hard to express oneself musically? I can hear beautiful music in my head, but it takes lots of training to communicate it to others, and even then there's a bottleneck. I can easily "see" a picture in my mind's eye that I can't paint. I can automatically recognize a familiar person's face, but I can't easily give this information to someone else.

I believe that this bottleneck lies in the brain itself: it's what happens when we convert information from parallel to serial. Since our communication channels are serial, communicating such "parallel" information with others requires us to first serialize it.

New media can do a lot to relieve many of these constraints, but I think that some of these constraints are fundamental.

Could one make a business out of creating software to let people express themselves and/or communicate better? What about software for people who have communication disorders?

Btw, has anyone modeled the tip-of-the-tongue effect? It seems exactly like the kind of thing that would not exist if our brains were purely serial. While some recognition-is-easier-than-production tasks (see also: one-way functions) would accept several possible false matches, this does not seem to be the case with the tip-of-the-tongue effect: only the right word will satisfy the person.
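To make the one-way-function analogy concrete, here's a toy sketch (the function names and word list are mine, purely illustrative): recognition is a cheap equality check against a digest, while production, lacking an inverse, degenerates into search.

```python
import hashlib

def recognize(candidate: str, digest: str) -> bool:
    # Recognition: hashing the candidate and comparing is cheap.
    return hashlib.sha256(candidate.encode()).hexdigest() == digest

def produce(digest: str, vocabulary: list[str]) -> str | None:
    # Production: with no way to invert the hash, all we can do is
    # search the vocabulary for the unique word that matches.
    for word in vocabulary:
        if recognize(word, digest):
            return word
    return None

target = hashlib.sha256(b"serendipity").hexdigest()
print(recognize("serendipity", target))                     # cheap check
print(produce(target, ["entropy", "flow", "serendipity"]))  # brute search
```

The tip-of-the-tongue case corresponds to the strict version: the "digest" has a unique acceptable preimage, so near-misses get rejected.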

See also: Thinking the Unthinkable

---

Related to this issue of self-expression, I will soon become an emacs wiz.
gusl: (Default)
Gustavo - "Kolmogorov Complexity"! "information distance"! "case-based reasoning"!
Google - no major websites or papers connecting the two


Isn't the connection obvious??
Doesn't CBR require a similarity measure? Isn't information distance the most general similarity measure?
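The standard way to make information distance computable is the normalized compression distance of Cilibrasi & Vitányi, which replaces the Kolmogorov complexity $K(\cdot)$ with the output length $C(\cdot)$ of a real compressor. A minimal sketch (the case texts are made-up placeholders):

```python
import zlib

def C(data: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance (Cilibrasi & Vitanyi):
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    # Near 0 for almost-identical objects, approaching 1 for unrelated ones.
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

# A CBR retrieval step: rank stored cases by distance to the query.
cases = [b"engine stalls when cold",
         b"engine overheats in traffic",
         b"radio loses signal in tunnels"]
query = b"engine stalls on cold mornings"
print(min(cases, key=lambda case: ncd(query, case)))
```

That final `min` over `ncd` is exactly the retrieval step of CBR, which is why the absence of literature connecting them surprises me.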
gusl: (Default)
It seems that much of my attraction to "Information Flow Theory" (due to Barwise and Seligman, and developed further by Keith Devlin) has been misguided. After all, what use is their theory? What does it model? I wanted to interpret it causally, but the instructor said it wasn't meant to be read that way.

The only interesting use I could see was representation systems (Shimojima): the math developed from the theory can explain why certain representations are better than others. For example, the natural constraints of 2D maps model the constraints of locations on the Earth: "north of" is necessarily a transitive relation both on the Earth and on 2D maps.

But what attracted me to the "theory of information flow" was formalizing common-sense ideas like the "Law of Diminishing Information". I love this kind of fundamental constraint: we use such constraints in arguments, and yet the concept of "information" they involve is not formalized.
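The closest existing formalization I know of (my gloss, not necessarily what Barwise–Seligman intend) is the data processing inequality: if $X \to Y \to Z$ is a Markov chain, i.e. $Z$ is computed from $Y$ alone, then

$$I(X; Z) \le I(X; Y),$$

so no amount of processing of $Y$ can create information about $X$ that wasn't already there; GIGO is the limiting case where $I(X;Y) \approx 0$.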

You can also see this Law at work in mathematical proofs: sometimes you lose information (irreversible steps), sometimes you don't (you can go back). If a proof of an implication consists only of the latter kind of step, then the implication holds in both directions.
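A micro-example (mine): starting from $x = 2$, the step $x = 2 \Rightarrow x^2 = 4$ is irreversible, since the sign is lost ($x = -2$ also satisfies the conclusion), while $x = 2 \Leftrightarrow x + 1 = 3$ is reversible. A proof built entirely out of reversible steps is a chain of equivalences, so the converse implication comes for free.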
