MC Bateson on getting reacquainted with Cybernetics

Pointer from: searching "Macy conferences"; obviously "Systems Thinker" caught my eye :)
Place digested: ? but it was resonant
Time digested: Oct 6, 2018



How To Be a Systems Thinker
A Conversation With Mary Catherine Bateson [4.17.18]

The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

MARY CATHERINE BATESON is a writer and cultural anthropologist. In 2004 she retired from her position as Clarence J. Robinson Professor in Anthropology and English at George Mason University, and is now Professor Emerita.

EXCERPTS: HOW TO BE A SYSTEMS THINKER
At the moment, I’m asking myself how people think about complex wholes like the ecology of the planet, or the climate, or large populations of human beings that have evolved for many years in separate locations and are now re-integrating. To think about these things, I find that you need something like systems theory. So, I went back to thinking about systems theory two or three years ago, which I hadn’t for quite a long time.

What prompted it was concern about the state of the world.

Two or three years ago, I started getting invited to do things with the American Society for Cybernetics. I kept saying that I hadn't done anything or thought about that for years, but they persisted. I was invited to write a chapter for a huge handbook called The Handbook of Human Computation. Basically, what they meant by human computation is human-computer collaboration of various sorts. I told them I didn't know anything about that, and they said, "Since you don’t have time to write a chapter, please write the preface." I asked how I would do that if I couldn't write a chapter, and they said, "We'll send you all the abstracts." I became quite cranky and told them to get someone else to do it, but they kept sending me things to read. First, I searched Google for what human computation was, and I found that I did know about some corners of the field. So, I wrote everything I knew about human computation, sent it in, and said to them, "See, I don’t know anything about it." They published it.

Then I went to the conference and I started to get back in the conversation with people working on AI. I had realized that I’d learned an awful lot as quite a young person, even as a child, from my parents, who were involved in the Cybernetic Macy Conferences right through the ’50s. They and other figures who were involved, like Warren McCulloch, were drifting through the house and having conversations all the time, and I was listening.

I didn’t go straight to AI; I was nibbling at edges of it. I had realized that our capacity to think about complex interactive systems seemed to be falling apart, that a great many efforts towards international cooperation were falling apart; states that involved multiple ethnic systems or dialects were breaking up; and, indeed, societies like the United States, with many ethnic groups and racial groups, were having a progressively harder time trying to cooperate.

We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

One of the problems when you bring technology into a new area is that it forces you to oversimplify. That is, the possibilities of AI have been there from the very beginning of thinking about computers, but there's always this feeling of disappointment that there are limitations to what you can do. We keep attempting to do more complex things.

How do you deal with ignorance? I don’t mean how do you shut ignorance out. Rather, how do you deal with an awareness of what you don’t know, and you don’t know how to know, in dealing with a particular problem? When Gregory Bateson was arguing about human purposes, that was where he got involved in environmentalism. We were doing all sorts of things to the planet we live on without recognizing what the side effects would be and the interactions. Although, at that point we were thinking more about side effects than about interactions between multiple processes. Once you begin to understand the nature of side effects, you ask a different set of questions before you make decisions and projections and analyze what’s going to happen.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. That doesn’t mean you can’t bring those things into your interactions with your devices, particularly in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.

As a child, I had the early conversations of the cybernetic revolution going on around me. I can look at examples and realize that when one of my parents was trying to teach me something, it was directly connected with what they were doing and thinking about in the context of cybernetics.  

One of my favorite memories of my childhood was my father helping me set up an aquarium. In retrospect, I understand that he was teaching me to think about a community of organisms and their interactions, interdependence, and the issue of keeping them in balance so that it would be a healthy community. That was just at the beginning of our looking at the natural world in terms of ecology and balance. Rather than itemizing what was there, I was learning to look at the relationships and not just separate things.

Bless his heart, he didn’t tell me he was teaching me about cybernetics. I think I would have walked out on him. Another way to say it is that he was teaching me to think about systems. Gregory coined the term "schismogenesis" in 1936, from observing the culture of a New Guinea tribe, the Iatmul, in which there was a lot of what he called schismogenesis. Schismogenesis is now called "positive feedback"; it’s what happens in an arms race. You have a point of friction, where you feel threatened by, say, another nation. So, you get a few more tanks. They look at that and say, "They’re arming against us," and they get a lot more tanks. Then you get more tanks. And they get more tanks or airplanes or bombs, or whatever it is. That’s positive feedback.
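
A toy sketch of that runaway loop (my own illustration, not anything from the conversation): each side adds tanks in proportion to the other's stockpile, so the deviation amplifies instead of settling. The reaction rate k and the starting counts are invented parameters.

    # Positive feedback (schismogenesis) as a toy arms race -- my own sketch.
    # k is an invented reaction rate, not a measured quantity.
    def arms_race(a, b, k=0.5, steps=6):
        for step in range(1, steps + 1):
            a, b = a + k * b, b + k * a   # each side reacts to the other's stockpile
            print(f"step {step}: a={a:.0f} tanks, b={b:.0f} tanks")

    arms_race(a=10, b=10)  # both stockpiles grow without bound: runaway escalation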
               
I would say that the great majority of Americans still believe that "positive feedback" is when someone pats you on the back and says you did a good job. What positive feedback is saying is, do more of the same. So, if what you’re doing is taking heroin or quarreling with your neighbor, this is just going to lead to trouble. Negative feedback corrects what you’re doing. It’s not somebody saying, "That was a lousy speech." It’s somebody saying, "Reverse course. Stop building more bombs. Stop taking in more alcohol faster. Slow down." Negative feedback is corrective feedback.
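
For contrast, here is the corrective loop in the same toy style (again my own illustration, with invented parameters): negative feedback measures the deviation from a set point and pushes back against it, so the deviation shrinks each step instead of growing.

    # Negative feedback as correction toward a set point -- a thermostat-style
    # sketch of my own; target and gain are invented parameters.
    def corrective_loop(temp, target=20.0, gain=0.5, steps=6):
        for step in range(1, steps + 1):
            error = target - temp     # how far off course the system is
            temp += gain * error      # "reverse course" in proportion to the error
            print(f"step {step}: temp={temp:.2f}")

    corrective_loop(temp=30.0)  # the deviation halves each step and dies out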

Gregory then wrote a paper about an arms race and made the move from thinking about the New Guinea tribe to the nature of arms races in the modern world, which we still have plenty of.

At the beginning of World War II, my parents, Margaret Mead and Gregory Bateson, had very recently met and married. They met Lawrence K. Frank, who was an executive of the Macy Foundation. As a result of that, both of them were involved in the Macy Conferences on Cybernetics, which continued then for twenty years. They still quote my mother constantly in talking about second-order cybernetics: the cybernetics of cybernetics. They refer to Gregory as well, though he was more interested in cybernetics as abstract analytical techniques. My mother was more interested in how we could apply this to human relations.

My parents looked at the cybernetics conferences rather differently. My mother, who initially posed the concept of the cybernetics of cybernetics, second-order cybernetics, came out of the anthropological approach to participant observation: How can you do something and observe yourself doing it? She was saying, "Okay, you’re inventing a science of cybernetics, but are you looking at your process of inventing it, your process of publishing, and explaining, and interpreting?" One of the problems in the United States has been that pieces of cybernetics have exploded into tremendous economic activity in all of computer science, but much of the systems theory side of cybernetics has been sort of a stepchild. I firmly believe that it is the systems thinking that is critical.

At the point where she said, "You guys need to look at what you’re doing. What is the cybernetics of cybernetics?" what she was saying was, "Stop and look at your own process and understand it." Eventually, I suppose you do run into the infinite recursion problem, but I guess you get used to that.

How do you know that you know what you know? When I think about the excitement of those early years of the cybernetic conferences, I see that there have been several losses. One is that the explosion of devices and manufacturing and the huge economic effect of computer technology has overshadowed the epistemological curiosity on which it was built, of how we know what we know, and how that affects decision making.

If you use the word "cyber" in our society now, people think that it means a device. It does not evoke the whole mystery of what maintains balance, or how a system is kept from going off kilter, which was the kind of thing that motivated the question in the first place.

People are not using cybernetic models as much as they should be. In thinking about medicine, for instance, we are thinking more than we used to about what happens when, fifty years ago, you had chicken pox and now you have shingles. What happened? How did the virus survive? It went into hiding. It took a different form. We’re finding examples of problems that we thought we’d solved but may have made worse.

Religion is an adaptive tool, among other things. It is a form of analogic thinking.