Learning How to Learn

by F.

I’ve been reading Learning How to Learn, which mostly discusses concept maps and V-diagrams—two pedagogical aids that the authors suggest are easy to use and effective in getting information encoded in the brain. Initially, I was pretty excited about the contents of the book, but then I started to get confused and stopped reading.

Diagrams have a dangerous habit of getting overly complicated, and the goal of any information-conveying device, it seems to me, is to convey the most with the least—the least mental processing, most importantly. A complicated diagram, while interesting, often is just…too complicated. I remember diagrams of “Our System of Government” and stuff like that. I never got anything out of them. Too much information. Faced with that, my little brain shuts down, or moves up a level of generalization: “Big colorful shape. Mmm.” At that point, I’m not getting what the creator of the diagram wanted me to get.

Further, human beings have this amazing means of conveying information. It’s called language. It’s far, far more powerful for conveying information in many domains, particularly conceptual ones, than pictures. The two means—visual and linguistic—can complement each other, of course. And at some things language sucks. Descriptions of scenery, for instance. I tried for a long time to find great writing about nature—just nature. Not stories about naturalists. Not stories about people in nature. Just stories about nature.

They’re out there. But they are deadly dull, because without people in them, you just get a bunch of “The rolling hills of the Pizkwetechna Mountains fold onto themselves at Snurgville, below which a broad plain” blah blah blah. At that point, just take a picture. It’s faster. I’ll get it. And these days, we have the means to convey pictures rapidly.

Language is best about agents doing stuff. Nouns + verbs. The nouns can be people (“Bob ate the Twinkie”) or inanimate things that we can treat as if they were (“The contract prevents Bob from eating the Twinkie”). Comprehensible technical writing uses the mental machinery we have for agents (Bob, the cat, the baby) on concepts (the war in Iraq, inflation, terrorism).

A problem with diagrams as a preparation for writing is that writing is serial. It moves in a line from left to right (in English). It’s like music. At every point along the line, the reader is making predictions about what comes fish-stick.

See?

“Fish-stick” was highly unlikely to occur there, so you were (I would think) surprised.

As you move along the line of information being streamed at you via words, you make various predictions and notice patterns and concepts. These concepts and other stuff seem to hover in the background of the main information stream. At least that’s the way it feels to me. Attention seems sort of divided: 90% on the main line, 10% on the background, which is sort of blurry—like in music. You hear the melody, and in the background is all that other stuff. You can focus on it, but that’s not the focal point.

The problem comes when the non-serial map—the diagram—starts to interfere with the serial presentation of the information via language. Have you ever heard someone say, “I can visualize it but can’t describe it”? That’s not very interesting if the goal is to tell me something. Go back to the drawing board and convert that diagram into a stream of phonemes, please. Then we can talk.

However, I do think concept maps—or something like them—can be useful for various things, provided one knows what they are for. For instance, I read a lot of books and I need to see the patterns in them so I can write about them. Here’s a concept map of some of the patterns I see so far. This is pure speculation.

[Image: concept_map.png]

What the fuck does all this mean, if anything? Good question. Let’s see if it means anything. Let’s start with the baby blue cluster in the upper right corner.

This is data. Books often present data: counts of things, such as the number of tigers in India; definitions based on research, such as what the word “dog” means to English speakers; guesses about facts; stories about people trying to do something in the face of adversity; and—the most boring kind—events, such as “The dog died.”

Below that, still on the right side, we have “model.” What do I mean? Mathematical or other conceptual models. Models are…model-like. Abstract. General. Vague. Incomplete. Consonant or dissonant with other models. Stuff like that.

In the lower right corner, the peach-colored bubbles are about “imperatives.” These are things like, “You should brush your teeth.” Books often tell you what you should do. They may give you principles—“Generally, brushing the teeth is a good idea”—or definitions—“We should call pot-bellied pigs ‘dogs’ because they are dog-like.” Or a book will make moral claims about what you should do. Be nice. Wash the cat. Pet the baby. Stuff like that. Processes are also often imperative: “You should change your company’s structure to implement Six Sigma.”

Now over to the left side, on the bottom—the green bubbles. Patterns. Well, almost all these bubbles have “patterns,” so that’s a little vague. But what I mean is this: often books lay out a pattern, which is sort of like a model. If the pattern has moving parts and behaves on analogy to a machine, I think of it as a mechanism. The Krebs Cycle. Photosynthesis. Data access on a hard drive. Explanation—or “abduction”—is making up a story, often a mechanistic one, that, were it true, would yield the data you are looking at. Often these are highly speculative, as with things like evolutionary psychology.

Moving up on the left side, we have predictions and guesses. Often books just predict shit. “In the future, Billy, man will walk on Mars!” Or: “By 2050, most of the United States will be Spanish speaking.” Or whatever. These are often guesses. Predictions are usually not worth a whole lot. But they can have effects, as when the herd follows a prediction. See, e.g., the stock market bubble of the 1990s. Everyone listened to others’ predictions about what would happen. That’s the bad kind of “wisdom” of crowds. The good kind is when many independent minds work on the problem.

Then we have, in the upper left hand corner, entertainment. Myths, religion, spectacle such as sports or freak shows or monster truck events. Stuff like that. It’s not factual. It is related to these other areas, but is not subsumed by any one of them, or all of them jointly. This is stuff that, basically, distracts us or otherwise makes us feel good. We go to church to be with others, listen to moving music, and forget that we are finite creatures no different in many respects from slugs or bacteria. We go to movies to forget about the stress of our job. We read novels in which protagonists overcome great odds and find, e.g., the Holy Grail, because we want to think that could happen to us, and believing that can be useful.
