
Wednesday, September 10, 2014

Religion as an interface: The Strangeness of being human cont'd.

by John MacBeath Watkins

One of the most popular posts on this blog explores the roots of religion, and the need we have for a mythopoetic understanding of the world. Scott Adams, the blogger and cartoonist behind the Dilbert strip, says that religion is not a bad interface with reality.

And it strikes me that as we've made our machines more compatible with us, we've made them more artistic and poetic. I do not speak machine language, but I am able to communicate with my computer through my simple faith that when I reverently click an icon, the file will open.

On rare occasions, I have to use the command line to communicate in a more concrete way with my computer, and sometimes I even have to open the back and stick in more memory. But I don't really understand the machine in the way my nephew Atom Ray Powers, a network administrator, does, nor do I understand the software the way his brother, Jeremy, a programmer, does. And neither has studied assembly code, which my uncle Paul learned after an injury ended his career as a logger.

It's as if we are replicating the way people perceive the world. The graphical user interface gives us a visual, metaphorical understanding of how to face the reality of the computer, just as religion gave us a metaphorical, poetic, and often visual way of interacting with the reality of the world. The command line gives us greater control of the computer, just as technology gives us control over nature. Science attempts to learn how the world really works, at deeper and deeper levels, much as an engineer knows how the transistors work and how to read machine language.
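These layers can be seen in miniature in a few lines of code. Here is a small sketch in Python (the filename is invented for illustration): the same file is reached first through a friendly, metaphorical abstraction, then through raw file descriptors, one step nearer the machine.

```python
import os

# High level, the "icon click" of programming: state the intent,
# let the runtime handle the mechanism.
with open("example.txt", "w") as f:
    f.write("hello")

with open("example.txt") as f:   # a metaphorical "file object"
    text = f.read()

# Lower level, closer to the command-line spirit: explicit file
# descriptors and raw bytes.
fd = os.open("example.txt", os.O_RDONLY)
raw = os.read(fd, 1024)
os.close(fd)

print(text)  # hello
print(raw)   # b'hello'
```

Both paths reach the same data; the difference is how much of the machinery the user is asked to hold in mind.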

The fact that computer scientists, who started at the scientific end of things, felt a need to make the interface more metaphorical and even artistic tells us something about how humanity interacts with the world. The intuitive approximation is vital if we are not to be overwhelmed with detail. It is sometimes said that ontogeny recapitulates phylogeny, because every fetus goes through phases of looking like a primitive fish, then a salamander, and eventually takes on human form. It would appear that the same thing happens cognitively.

Those of us, like myself, who follow the methods of the metaphorical interface in our daily lives often seek guidance from computer gurus. And those gurus, when they are not repairing malfunctioning machines or recalcitrant code, operate their computers in the symbolic realm made possible by the GUI.

We seem to have some difficulty doing this in our world of faith and science. This is usually because each side insists that its way of understanding the world is truth, therefore the other cannot be truth. But a model of an atom isn't what an atom really looks like, because an atom is smaller than a visible light wave. All of our understanding is metaphor and artistic license at some level. In my view, we have understandings at different levels.

Now, perhaps I've offended some religious people by saying religion is metaphor. But all sacred texts were written to be understood by people, not by gods. All of our understanding is metaphor. "For now we see through a glass, darkly," a biblical passage says. We understand the world by telling stories about it, and deciding which best describe it. Sometimes, as with math, the stories can be very precise, and the grammar quite rigorous, but they are stories nonetheless.

Tuesday, August 26, 2014

On the spell of the spiritual and the mechanism of philosophy

by John MacBeath Watkins

The Guardian has an interesting article on the failure of anglophone philosophy here. In it, Roger Scruton argues that the analytic philosophy of English-speaking philosophers has taken philosophy out of realms where it might be relevant to people's lives.

Scruton says:
Academic philosophers in the English-speaking world still regard philosophy as Locke defined it in the 17th century, as “the handmaiden of the sciences”: it doesn’t explore the world beyond science but the limits of science, with the result that philosophy doesn’t really intrude into the public world. In the early 20th century we were caught up by the movement to form analytical philosophy, based in the study of logic, the foundations of mathematics, the syntax of ordinary language, the validity of arguments, something very formal. So when people have a big question, especially now since the decline of the orthodox religions, they don’t turn to philosophy for the answer but try to formulate it in whatever technical words have been bequeathed to them, and when a scientist comes along and says “I have the answer”, or even “there is no question”, they think “this guy knows what he’s talking about, I’d better lean on him”.
The French, he notes, did not fall into this trap. Sartre was willing to address the great moral questions, even if the morality of his actions in World War II might be a little questionable (he gained his teaching position during the war because Vichy law eliminated a Jew from that position, and he chose not to be active in the resistance).

But Scruton fails to note that many people don't look to science for their answers. Some turn to religion, some turn to New Age gurus. Both reflect a backlash against the Enlightenment ideas reflected in modern philosophy. Most modern philosophy (yes, even the French) is unwilling to deal with the spiritual feelings people have.

Part of the problem is that people tend to believe in the spiritual in an a priori manner, and will interpret any attempt to analyze it as an attempt to destroy it, to reduce it to the physical world. Any logical and analytical approach to the spiritual that does not treat the existence of the spiritual as an accepted fact and a realm not readily explained by the physical world will be seen as the reductive destruction of the spiritual, equivalent to trying to understand the Mona Lisa by turning it to powder and doing chemical analysis of the molecules.

Any attempt to find the part of the brain that needs to believe in god will receive this reception. My own attempts to understand the spiritual in terms of the ethereal parallel world of symbolic thought have been received this way. As an agnostic, I am open to the possibility of the existence of a spiritual world but not convinced of it. And I have to wonder, if we could understand the spiritual world, would that be tantamount to its reductive destruction?

In my series of posts on the strangeness of being human, I have stuck with trying to explain what I can, which has restricted me to the physical and analytical. I remain skeptical of those who claim a special knowledge of the spiritual world, because so many have been shown to be frauds, but I respect the impulses and the work of sincere ministers of many faiths. For many people, faith has been a support spiritually, psychologically, morally, and socially. Scott Adams, long a vocal atheist, said on his blog recently:
In recent years I've come to see religion as a valid user interface to reality. The so-called "truth" of the universe is irrelevant because our tiny brains aren't equipped to understand it anyway. 
As a pragmatist, I find this appealing. Were I a Christian, I might find it appalling, for the same reason the Catholic Church found Pascal's Wager appalling: it does not accept the truth of religion as its reason for practicing religion.

Yet in many ways, worrying about the truth of religion is a modern luxury. If you lived in most societies for most of the history of religion, the penalty for failing to believe in the God or gods of your people was death, ostracism, or incomprehension by your fellows. The notion that religion should have to justify itself was uncommon until recently. Socrates was charged with undermining the young's faith in the gods, and condemned to death. Society was punishing him not for proving the gods did not exist, but for raising the question of how we might logically confront religion.

Thomas Aikenhead was executed in Scotland in 1697 for the same thing. Thomas Hobbes might have lost his life on a charge of blasphemy for claiming God exists, but is a material being, had he not had protection from the king, whom he had tutored.

Although Aikenhead was the last person in the United Kingdom executed for blasphemy, the last successful prosecution in the UK for blasphemy was in 1977. The law has since been repealed.

There are parts of the world where the law says you can still lose your life for leaving the established religion, although in the best-known cases governments have backed off.

But even for the unchurched, the spell of the spiritual has an appeal that the logical mechanisms of philosophy cannot address. This is an interesting problem, because for centuries, philosophy was taught in Europe at Christian institutions. In fact, if you wanted to be educated in Europe after the rise of Christianity, for centuries you had to take orders.

This led to exactly the sort of reductive logic chopping we now see in our more materialistic philosophy. Schoolmasters were ridiculed for arguing how many angels could dance on the head of a pin (my view is, all of them or none of them, depending on whether angels have a sense of rhythm -- after all, they are as immaterial as the question.)

So the problem of the relevance of academic philosophy is not a new one. One aspect of the academic environment is that to be wise, you must specialize, so that you may know more about something than anyone else. That specialization takes you away from the big questions. Another is that the trap of irrelevance is not always obvious. The question of whether angels had a material presence interested some philosophers, and the image of them dancing on the head of a pin was a thought experiment intended to illustrate it.

The real trap was in failing to understand that in the grand sweep of things, whether angels had a material presence was irrelevant to the important questions of how we should live. The conversation became attenuated because those involved did not realize that they had lost the plot.

And if philosophy leaves the questions of how we should live our lives to the soft science of psychology or the realm of new-age gurus, it will be irrelevant to the questions they attempt to answer. Perhaps these questions are not the ones modern philosophy wishes to deal with, but if so, people will continue to ask: what is it for?

Scruton thinks the notion that philosophy is the queen of the sciences makes it beholden to the sciences, but that is wrong. Philosophy is the mother of the sciences, having spun them off. There was a time when naturalists called themselves "natural philosophers." It was philosophers who first examined the basic questions of physics, math, and astronomy.

Philosophy should not now turn its back on its children, but should integrate them, and show how they affect the way we live. But it seems to me that philosophy is the child of the spiritual rather than its queen or mother. We first tried to understand the world in a poetic and mythic way, and only later brought our problem-solving logic to bear on those understandings. It is much harder for the spiritual's logical child to understand its parent, because its business has been to supplant mythic understanding with logical understanding.

But it can talk about the questions the spiritual attempts to answer. After all, the Buddha had little to say about the gods, nor did Confucius. The question is, will academic philosophy reward such efforts, or view it as an enterprise left to some other field of study?

Friday, August 22, 2014

On the illusion of the self: The Strangeness of being human #27

By John MacBeath Watkins

As we discussed in an earlier post, Julian Jaynes introduced the intriguing concept of the origins of consciousness in the bicameral mind. He supposed that brains worked differently until about 1200 BC, and that the part of the brain that produces hallucinations was speaking to us with the irresistible compulsion of the voices of the gods.

This represented a different sort of mind than we now experience, a mind without the metaphorical self-narrating person in our heads.

This brings up several questions. Jaynes claims that only the mentally ill still hear voices from that part of the brain, which is not much used by modern humans. But surely the part of the brain responsible for these hallucinations existed prior to human culture. What role did it play before that, and what role does it play in the style of perception used by animals other than man? Is it part of a system of perception for a spiritual world that is real, or the source of the invention of the spiritual? 

I propose that the supposition of the breakdown of the bicameral mind is unnecessary. Psychologists refer to a healthy psyche as a well-integrated personality. This recognizes that a personality is made up of many motivations, often conflicting – the self who wants sweets and the self who wants to be slender, the self who wants children and the self who is selfish, the self who aspires to goodness and the self who cheats on its spouse. Some of us avoid conflicts by compartmentalizing. Some actually fragment into different personalities.

There was a case a few years ago in which a man was accused of raping a woman with multiple personality syndrome. What had happened was that the accused had started having sex with the woman's adult personality, then asked to speak to her little girl personality. The woman had consented to have sex in one personality, but not in the other – in fact, that personality was incapable of consenting to sex. The man was convicted, but the conviction was overturned.

That the woman had shattered into several personalities is considered pathological, but what if a single, well-integrated personality is as much an hallucination as the gods were? Does that mean that neither is real, or that both are real, or something in between?

I propose that both are ways of constructing reality. Scott Adams says that religion is a pretty good interface with the world, and I suspect that for many people it is. Think of it as a graphical user interface. The real world of computers is a world of 1s and 0s, but this is not a way of thinking about computers that enables us to work smoothly with them.

Similarly, the world we perceive is one of differing amplitudes and frequencies of light and sound, of the atoms we are composed of interacting with the atoms of other objects. Who knows, it may even be one of our spirit interacting with other spirits, though I see no particular need to suppose this. We have several levels of perception, memory, and constructing all the evidence of our senses into a narrative that “makes sense” of our lives. The product of all this is a useful interface, a sort of useful illusion of the world.

When societies became larger and needed coordination beyond the clan level, we developed institutions and patterns of behavior that made that possible, resulting in the great age of religion, which gave societies a sort of group mind.

This group mind gave us a structure that allowed stable societies of great size to develop, but it was not adaptable. As Jaynes pointed out, in the Iliad there are almost no references to individuals having motivations that were not the gods dictating their actions. The later Odyssey is all about one clever, adaptable individual making his way through changing circumstances that his gods did not issue instructions for.

About the same time, the great age of prophecy began, and for about a thousand years, new religions told people how to act as individuals. And those religions focused on human prophets more than on ethereal gods. Mohammed gave the word of God to Muslims, Jesus gave the word of God to Christians, and while Siddhartha had no brief against the Hindu gods, his followers focus on his teaching more than on worshiping those gods.

Each, in his own way, taught people not to be selfish. It may have been literally unthinkable in the age of myth to be selfish, but in a world where adaptable individuals made their way, it was an ever-present danger.

And it is a danger. Any society that relies for its survival on people having and raising children requires some level of self-sacrifice. Any society that needs to defend itself from aggressive neighbors requires it as well.

We live in a transitional era, when adherents of the prophets are worried about the relentless rise of unbelief, when prophets of the Singularity are trying to invent an entirely material god, when atheism is no longer the creed that dare not speak its name. Reason rules our world more than myth, although often, it is motivated reasoning that seeks out desired conclusions.

But what role does reason really play? Often, our reason justifies things we already want to do, but have not consciously acknowledged. What if, when we spoke to the gods to get our guidance, the same thing was happening there as happens when we talk to ourselves?

If Jaynes was right about the literary evidence pointing to a different sort of mind prior to 1200 BCE, it may be that it was a different way of integrating a personality than our current mode, rather than a completely different way of using our brains.

The strangeness of being human is a series of posts about the way language makes us human, giving us abstract categories we use to think and memes that make up much of what we are.

Night of the unread: Why do we flee from meaning?
The conspiracy of god, the well-intentioned lie, and the strangeness of being human
Spiritual pluralism and the fall of those who would be angels
Judging a book by its author: "Fiction is part confession, part lie."
What to do when the gods fall silent, or, the axis of ethics
Why do we need myths?  
Love, belief, and the truth we know alone
"Bohemians"-- The Journey of a Word
On being a ghost in a soft machine
On the illusion of the self

Thursday, August 21, 2014

A 4th helping of notes on a novel in 1940s noir

by John MacBeath Watkins

I woke with an aching head, and found there was a heavily-built character sitting on the bed with me. A stitched-up scar ran up his swarthy face to the missing eye, and his remaining eye was dead, completely devoid of human emotion.

But he was my teddy bear, and I loved him.

"You're undercover?" she whispered.

"Yes," I said.

"Well, you look about as inconspicuous as Herman Cain at a Republican convention."


"The streets were dark with something more than night," Chandler said.

"Yes, you've got some of it on your shoe, and tracked it on the rug."


"Alcohol is like love," Chandler said. "The first kiss is magic, the second is intimate, the third is routine."

"And after the third kiss, I start to puke," I finished for him. "Just like our first date."


She told me every time we said goodbye, she died a little. She must have said goodbye once too often, causing blood to leak from a massive head wound.

I made a note to go with "smell you later" in the future.

Friday, August 15, 2014

Demonic males: Failure of a narrative

by John MacBeath Watkins

On July 25, 2014, ESPN host Stephen A. Smith brought an uproar down on his head with the comment that women should not "provoke" men to anger, shifting the blame for domestic violence.

And he was quite properly pilloried for the comment. There is no excuse for beating your mate. Shifting the blame from men to women is wrong, not just because it's blaming the victim, but because blame is not a useful framework for solving the problem.

Because there is another issue here. The consistent narrative about domestic violence is that the problem is demonic males, and the solution is controlling those demons. This seems obvious from the fact that most people hurt in domestic violence incidents are women.

Logically, the real hell should be two men living together.

And, in fact, according to a Centers for Disease Control study, gay men report that 26% have, in their lifetimes, been subject to violence from a domestic partner. That's a shockingly high number. But it's not the highest number. For lesbians, the figure is 44%. For straight couples the figure was 29% for men, 35% for women.

This is the opposite of the expected result. The more women are involved in a relationship, the more violent that relationship becomes. It is an astounding, disturbing result that has received far too little attention.

Now, there are several possible explanations for this. It could be that women are more likely to report having been hit. This is possible, but I submit that this is not about police reports, these people participated in a survey that allowed them anonymity. I believe the numbers. In any case, why would straight men be more eager to report domestic violence than gay men? And why would lesbians be more eager to report than straight women?

Another possibility is that men are beating up lesbians. After all, not all lesbians start out in lesbian relationships. I'm sure that happens, but a psychologist I know told me years ago that a deplorable amount of domestic violence happens in lesbian couples.

A 1949 comic, for sale here.
There are other possibilities. Some people fight with their mate as a prelude to sex. I've never understood that one, but I know it exists.

Maybe there is something wrong with the CDC's sampling or the wording of their questions, but I doubt it. I do think the survey opens a window on a deeply emotional issue, and may even point a way to making peoples' lives less violent.

There is a more disquieting possibility, that women are more subject to violence because they are seen as more vulnerable, even by other women. That would be a more intractable problem. It would also fail to explain why straight men report being hit more than gay men.

If this is the problem, the solution would be to decrease the perception of female vulnerability, a rather difficult bit of cultural engineering.

It is possible that what we are teaching women in our culture about conflict resolution is working badly, especially when dealing with other women.

That is an intriguing possibility, because if conflict resolution style is the problem, teaching better techniques could benefit any couple facing it. You'd have to teach both parties, and not everyone would be willing, but lesbian, straight, or gay, you'd be better off.

There are a number of stereotypes about this. The woman who enforces her will with a rolling pin. The woman who won't tell the guy what he's done wrong, but expects him to know, for example. I know nothing about the validity of the stereotypes, and I doubt that's the sort of thing that leads to most domestic violence, but having never been involved in domestic violence, I don't know what problems lead to it from personal experience. Is it score settling? Naked competition for power within the relationship?

I don't know, but someone must find out.

We'd have to open our minds to a new approach. I would suggest teaching kids conflict resolution, before their patterns are set. You could ask them what they would do in certain circumstances, and what would likely result, and explore alternatives. If 26% of those in all-male relationships are subject to domestic violence, it's clear men need this. If 44% of women in all-female relationships are subject to domestic violence, women could use it even more.

Part of the problem with the notion of demonic males is that it focused on who was to blame, just as the problem with what Mr. Smith said was its focus on shifting blame. If we shift the focus to how to resolve domestic conflicts without violence, everyone could benefit. After all, most couples of all types manage to avoid violence.

Saturday, August 9, 2014

The proper strategy for selling ebooks (publishing in the twilight of the printed word continued)

by John MacBeath Watkins

When Amazon sells an ebook published by Hachette, the proceeds are divided as follows: 30% to Amazon, 52.5% to the publisher, and 17.5% to the author. The two companies are now at odds over the fact that Amazon wishes to discount books more heavily.

Amazon has also proposed that authors should get more -- 35% instead of 17.5%. Only, that wouldn't come out of Amazon's share; it would come out of the publisher's share. So far, the company under attack, Hachette, has had very vocal backing from its authors, who are deprived of part of their income because Amazon is refusing to sell their books. The tactic of suggesting that publishers give authors a bigger share is an attempt to drive a wedge between authors and their publisher -- let's you and him fight. It's a free lunch for Amazon, which would not dream of giving authors more money out of its own share.
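A quick back-of-the-envelope calculation, using a hypothetical $10.00 ebook, shows why the proposal costs Amazon nothing:

```python
price = 10.00  # hypothetical ebook price

# The split described above: 30% Amazon, 52.5% publisher, 17.5% author.
amazon    = 0.30  * price   # $3.00
publisher = 0.525 * price   # $5.25
author    = 0.175 * price   # $1.75

# Amazon's proposal: the author's share rises to 35%, but the increase
# comes entirely out of the publisher's share; Amazon still keeps 30%.
author_proposed    = 0.35 * price                      # $3.50
publisher_proposed = price - amazon - author_proposed  # $3.50
```

Under the proposal, the author's take doubles and the publisher's is cut by a third, while Amazon's $3.00 per copy is untouched.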

And in Germany, Amazon is trying to get a 50% share of the ebook price.

My question is, why should Amazon be getting even 30%? The cost of delivering ebooks is minimal, while many of the marketing costs are borne by the publishers.

Suppose you could plug the title of a book into a search engine and pull up a variety of booksellers offering the book at a lower price than Amazon's. The publishers would have greater influence over a large group of independent booksellers than they do over Amazon. They might find themselves paying as little as 15% or even less to such competing sellers.

The reason this hasn't happened is that publishers worry about losing control over the perception of value of their products. What is needed is the agency model -- they wholesale books to an agent who then sells them.

Ah, you say, but that has been tried. Not, I answer, in the way that I propose. The publishers tried to ally themselves with Apple and set a higher price than Amazon wanted to charge.

I say they should fully commit to ebooks, and under-price Amazon. They were tripped up by the fact that they colluded with Apple to have high prices. Well, don't collude. Set prices that cover the cost of finding, editing, and promoting the book, plus a reasonable markup, and try to sell a lot of copies. Don't negotiate what margin the seller gets, just sell them the book and let them set the retail price. The company that can keep its overhead low while effectively promoting itself and the books can make money with a lower percentage of the price. With competing companies selling the books, the one who can make money on the smallest margin will have the lowest price.
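The margin argument above can be sketched numerically. Assuming a hypothetical wholesale price of $7.00 that covers the publisher's costs plus markup, a seller able to live on a thinner margin undercuts one that needs Amazon's 30%:

```python
WHOLESALE = 7.00  # hypothetical publisher price: costs plus a reasonable markup

def retail_price(margin: float) -> float:
    """Retail price a seller must charge to keep `margin` of each sale."""
    return WHOLESALE / (1.0 - margin)

fat  = retail_price(0.30)   # a seller needing a 30% cut: $10.00
lean = retail_price(0.15)   # a leaner seller at 15%: about $8.24
```

With the publisher's revenue per copy fixed, retail price competition becomes purely a contest of seller overhead, which is the point of the argument.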

No doubt a company like Google could build such a marketplace quickly that would be highly automated and have minimal costs. Or maybe someone wearing bunny slippers and working in their basement will find the key. The big problem is overcoming Amazon's marketing muscle, so I would expect either a well-funded startup or a fairly large existing company to take this on.

Amazon has a large and increasing overhead connected with delivering physical objects. A company with lower overhead could charge less for ebooks.

It has now become evident that not everyone wants an ebook. They seem to be best for leisure reading. For absorbing information, print books still have an edge. There is still, therefore, a place for bookstores and experts on the physical delivery of books such as Amazon.

This is not too different from the mass-market paperback revolution of the 1940s and '50s. Suddenly, news agents who had never sold books before were selling paperbacks with lurid covers. More people read more books, and publishers found that what had been a carriage trade became a mass market. The process was very well documented in one of my favorite books, Two-Bit Culture: The Paperbacking of America.

But even during the paperback revolution, the business was one of distributing books through centralized organizations. Most publishers did not own printing plants, let alone warehouses and trucks to take the books to the many independent bookstores that peppered the land, which meant more middlemen were needed. With ebooks, that lack doesn't matter, and in fact becomes an advantage, because it means lower overhead.

Tuesday, August 5, 2014

Still more notes for a novel in 1940s noir

by John MacBeath Watkins

A ricochet zinged off the rock we hid behind.

"They mean business, doll," I said.

"By asking for your help, I've put you in danger," she said in that husky voice that drove me wild.

"Don't worry, babe," I said, firing back -- Bam! Bam! Bam!. "I won't let them take you alive."

"You told me your uncle was a humanitarian," I whispered, "but it looks more like he's a cannibal."

"Well, dear," she said,  "if a vegetarian eats vegetables..."

"You're a dick, aren't you?"

 "That's right, I'm a dick, a shamus, a private eye," I responded. :"In polite society, which I never meet, they call me a detective."

"That's not the kind of dick I was calling you."

The crime was monstrous, so I added Mothra to my list of suspects.

:"My gun is quick," Jack Hammer said.

"Next time, try thinking about baseball while we do it," she replied.

"Farewell, my lovely," I said, my voice choking with emotion.

"It's just a dental appointment," she replied. "you'll be done in half an hour, and I'll buy you some ice cream as a reward."

"You can't pin that on me!" he shouted.

"Want to bet?" I reached out to thrust the sharp point through his jacket over the heart. The pin was pink with black type, and said "world's greatest grandad."


Sunday, August 3, 2014

Superstition and the singularity

by John MacBeath Watkins

I always figured California would be a place where religions could arise, but I had no idea smart people could come up with one so lame.

I'm talking about the Singularity. Vernor Vinge invented the term, which describes the future advent of a super-intelligent, conscious being as a result of computers getting smarter. Consciousness is supposed to emerge, but Vinge used the term "singularity" as a metaphor from black holes, from which no information can escape. His view was that we could not predict the capabilities or motives of such a being.

Which has not kept people from speculating.

Some believe a benevolent super-intelligence will be effectively all-knowing and omnipresent. Some believe they will be taken up into the cloud and given eternal life. And some believe in the devil.

I'm talking here about Roko's Basilisk. From RationalWiki:

Roko's basilisk is a proposition that says an all-powerful artificial intelligence from the future may retroactively punish those who did not assist in bringing about its existence. It resembles a futurist version of Pascal's wager; an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward. Furthermore, the proposition says that merely knowing about it incurs the risk of punishment. It is named after the member of the rationalist community LessWrong who most clearly described it (though he did not originate it). Despite widespread incredulity, this entire saga is about things that are actually believed by some groups of people. Though it must be noted that LessWrong itself does not, as a policy, believe in or advocate the basilisk — just in almost all of the premises that add up to it.

One of those premises is that an exact copy of you is you. It would feel what you would feel, suffer as you would suffer, and react as you would react. To a materialistic atheist, it would be no different from you.

I am a bookseller. I have recently seen a first edition of Hemingway's For Whom the Bell Tolls. I have in my store a rather nice facsimile of the same book, the only detectable difference being an entry on the copyright page. If I were to sell the facsimile as a first edition and were found out, it would ruin my reputation -- and if the publisher had not included that entry on the copyright page, the book would be not merely a facsimile, but a counterfeit.

An exact copy of you would be a counterfeit you. In fact, the super-intelligence could make endless copies of you if it were so inclined. Differences in experience would start to occur almost at once, and each copy would become a different person as time went on. If so, which one would be you? All of them? None of them?

The notion that an exact copy of you would be you is atheist theology, based on the idea that you are no more than a physical being. I consider it a claim to know more than can be known, so one might call it a superstition or a religious belief.

And short of creating a new body for you, some of those who are doing the theology of the singularity speculate that you could do a mind upload, which would give you a bodiless existence in the cloud. But would that be you? Again, once the copy of you is in digital form, it can be copied endlessly. None of the copies would be you. They might act like you, or they might not, depending on how badly the copy gets corrupted and how different the urges of an expert system "living" in a machine are from those of a person living in a body.

What you could create would not be you. It would be a sort of software monument to you. Theoretically, a super intelligent machine could more easily create a software version of your mind than an entirely new you, but in either case, what motivation would it have to build monuments to inferior beings?

The next problem is the assumption that the singularity will evolve differently than machines have to date. Up to now, machines have evolved the way ideas evolve: designed and built by humans. If a machine started designing better versions of itself, its motivations would have to be those designed into it. Yes, you could even program it to be motivated to build software monuments to internet billionaires, but that seems like a vainglorious use of a powerful machine. At the point where we have "conscious" machines, they will be designed to simulate consciousness, which will be the signal for an endless controversy about what consciousness is.

But part of the theology of the singularity is that consciousness is an emergent property, which will appear when the conditions are right, such as sufficient intelligence, sense data and memory. I see no reason to assume that this is the case, and I posit that any conscious machine that we create will be designed to be conscious, with its motivations in its software.

Which brings us back to Roko's Basilisk. It can only be created if we create it, and do so in a way intended to harm ourselves. I wish I could be certain that fearful, superstitious people would not do that.

Wednesday, July 30, 2014

Productivity, corporate ideology, and getting your "share": Rethinking liberalism continued

by John MacBeath Watkins

ADP, the company that does my payrolls, includes in its promotional material a common bromide -- that automation "can cut costs dramatically and free up time for higher-value work."

And that's generally been the argument for productivity growth. It means more wealth, therefore it will make you wealthier.

Except, of course, that whether this is true depends very much on who you are, and what your prospects for participating in the new wealth are. Andrew Carnegie became one of the richest men in America by always making sure his steel mills had the best technology. He'd seen the effects of falling behind firsthand. His father was a weaver in Dunfermline, Scotland, who lost his profession when the handweavers were put out of business by the big weaving mills. The family had to borrow money to move to America, where Andrew's first job was as a "bobbin boy" at a textile mill in Pennsylvania at age 13.

He worked 12 hour days, six days a week, changing spools of thread for $1.20 a week. But he was a man of great ability. Fortunately, through a family connection, he was able to get a job as a telegraph messenger boy, and his energy, ability to learn, and hard work brought him to the attention of his superiors. He was more a self-made man than any other I can think of, but he never lost sight of the things that helped him. And he never forgot that his father had been a skilled man and a hard worker, yet had been ruined.

That's the trouble with disruptive technologies. Our society gives us time in our youth to learn a profession, and expects us to make our way based on those skills for the rest of our lives. But when skills become obsolete, it tosses people aside, with little chance to ride the new wave.

And as to the higher-value work, was the elder Carnegie doing work that called on his human abilities to a greater extent as a weaver in Scotland or as a textile mill worker in Pennsylvania? Was the work less routine, more challenging, requiring more of his judgement?

Perhaps for the term "higher-value work" we should substitute "harder to automate work." Higher-value is a term that makes us think of getting a promotion, of using our judgement more. Yet the jobs created when others are destroyed are not necessarily of that nature.

Janitorial work is hard to automate. So is sex work, the ultimate "high-touch" profession. We've seen a decline in workforce participation as productivity has soared. And it has soared. Look at this graph, from my favorite magazine, The Economist (from an excellent article here, which you should read):

The other problem is one unique to capitalism. The distinguishing characteristic of a capitalist system is that the major source of wealth is the investment of capital in the means of production, rather than, say, conquering more land or enslaving more people.

As a result, there is a tendency for wealth to concentrate in the hands of those who own a lot of capital. And with wealth comes influence, and with influence comes the power to shape the rules in one's own favor.

Rising inequity creates unrest, seen in the late 19th and early 20th centuries in the form of labor strife and extremist movements.

Part of the problem here is that the distribution of wealth depends in part on politics. In the U.S., decisions at the federal level have moved the tax burden from those who make their money by owning things to those who make their money working for wages. Wages were already declining as a percentage of the GDP, as you can see in the chart from this site, which has several other charts that may interest you:

 The decline dates from about the time corporate raiders started changing the way companies do business. In part, this reflected changes in the banking business that made it possible to raise money for a takeover. In part, it reflected a new orientation in business, the belief that companies should be managed for the highest possible stock prices for shareholders. Historically, public corporations prior to this had been managed on the basis that shareholders were one of the groups served by the company, along with bondholders, creditors, employees and customers.

The new orientation justified making war on a company's own employees to produce higher profits to benefit shareholders, stripping assets to pay off the debt contracted in a takeover, and other tactics that would in an earlier age have been considered bad for the company. Private equity companies, such as Mitt Romney's old company, Bain Capital, raised money from investors to do similar work.

I have a book to recommend on this subject, The Shareholder Value Myth, by Lynn Stout, Distinguished Professor of Corporate and Business Law at Cornell Law School. Prof. Stout makes a compelling argument that the pursuit of "shareholder value" -- a term with difficulties of its own -- has been bad for investors, corporations, and the public.

The problem is that we've seen this movie before. It was a bit more direct when federal troops killed 30
strikers during the Pullman Strike of 1894, but the basic idea of making war on the workers for the benefit of owners is a time-honored one in American history.

Now, it's done through legal maneuvering, outsourcing, or moving work to right-to-work states (where a worker has a right to not belong to a union in a workplace where unions have won the right to represent the workers.)

But consider the effect on the companies involved. Consider, for example, Boeing, the local giant in Seattle's economy.

In 1966, Boeing got a launch customer for the largest passenger aircraft built at that time. The delivery schedule required them to design the aircraft in 2/3 the time usually allowed. Engineers were so committed to the project that they worked longer hours than management thought advisable. I've had people describe to me how managers would insist an engineer go home, walk him to his car and watch him drive away. And the engineer would drive around the block and go back to work.

That kind of dedication earned them the name "the incredibles." It was the kind of dedication I don't expect Boeing to see again, at least not while the CEO is Jim "the employees are still cowering" McNerney. The company's culture has changed too much, and not for the better. And work is moving from the Seattle-area plants where the expertise that made the company great has historically been located to South Carolina, a right-to-work state.

We've already tackled the issue of how to fix inequity in my earlier essay, A plan to reduce inequity: Working versus owning. But there's another issue in an unequal society, which is that as wealth goes from being widely distributed to being held mainly by a smaller and smaller group, the kind of work available changes. What is happening as companies have reoriented from serving a variety of stakeholders to serving mainly the shareholders has been a bit like the Inclosure Acts that drove many people off the land during the British industrial revolution.

There used to be something called the Commons in many British communities: land on which anyone could graze their livestock, and which was sometimes farmed by landless peasants. The Inclosure Acts privatized that land, transferring what had been a source of income for a large number of people to the local lord, who now gained ownership of it. Property is not objects or land, after all; it is the system of rights governing how people use them, and when those rights change, ownership changes.

The result was that people who had worked the land were now "free labor," that is, they had been freed from their previous source of income and were now free to alienate their labor in any way they wished, as the outlaw John Locke noted in his Second Treatise of Government.

The rhetoric of freedom is again being employed, along with a change in the nature of property rights, to redistribute property to the most fortunate. As Stout noted in The Shareholder Value Myth, a shareholder has never been an owner in the sense that a partner is. A partner can direct that the company sell assets to buy out that partner's equity in the company, while a shareholder can only sell whatever shares of stock they own. This, in fact, is one of the major reasons for starting a public stock company. Such demands have ruined many a business started as a partnership, while public companies have been able to take the long view.

No more. As shareholders have acted more like owners, they have forced companies to take the short view, which is why more and more companies are being taken private. The number of public corporations declined 39% between 1997 and 2013.

This trend has accompanied another, the trend to cutting taxes on inherited wealth and capital gains. Payroll taxes, which are charged only on income below the income level of the 1%, were raised in the 1980s.

It seems to me that shareholder value ideology, supply-side economics, and a shift from Keynesian economic modeling have one thing in common. They abandoned empiricism (such as how well the Phillips Curve was actually working) and a reliance on knowing history (such as the legal history of the purpose of corporations) in favor of plausible-sounding logic that appealed to moneyed interests.

And those interests are not always about the money. Sometimes, they are about positional status, and about making sure "the employees are still cowering."

This interest fit very well with shareholder value ideology, and the war on companies' own workers. It also fit well with the snake oil of supply-side economics, which promised wealth for all if we'd just let the rich keep more of their money.

Keynesian economics did nothing for the positional status of the rich. It argued the government could create full employment, and while a full-employment economy may make everyone richer, it gives workers more leverage when it comes to negotiating wages -- they can stop cowering.

So the new classical economics promoted by Robert Lucas, Jr., and Thomas Sargent, which claimed that the government can't do much about employment, was bound to attract wealthy sponsors. For different reasons, it had a certain appeal to academic economists, as Simon Wren-Lewis notes:
If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!

In short, Wren-Lewis argues, it allowed economists to leave behind a history of the dismal science as a messy social science and act more "scientific" -- even though the new method did not provide better empirical results.

Shareholder value ideology had a similar appeal for the purity of its logic. The "managerialist" view of corporations held that, just as you can buy a "share" in a prizefighter without owning him, a shareholder was but one of the stakeholders in a corporation. Economists, in particular, preferred the purity of the owner-agent model to the messy business of the traditional legal status and purpose of corporations.

But I cannot imagine this having as much impact as it did, if it had not suited the purposes of the corporate raiders and private equity companies that were becoming prominent at the time. If you say things that give rich people justifications for what might otherwise be viewed as pretty dodgy behavior, you won't lack for people willing to promote your views.

Sunday, July 20, 2014

The first unplanned words from the moon

by John MacBeath Watkins

When Neil Armstrong stepped on the moon, everyone was eager to hear man's first words from another
world. We all know what he said, and we all know about the controversy -- he muffed his lines and failed to say "a man."

But what interested me that day in 1969 was the first unscripted words from the moon. And they told me more about the moon, and less about mankind. Here's what Armstrong said after his famous line:

And the—the surface is fine and powdery. I can—I can pick it up loosely with my toe. It does adhere in fine layers like powdered charcoal to the sole and sides of my boots. I only go in a small fraction of an inch, maybe an eighth of an inch, but I can see the footprints of my boots and the treads in the fine, sandy particles. 

Now, that's an authentic astronaut talking the way those guys talked.

Friday, July 11, 2014

Free-lunch Conservatives

by John MacBeath Watkins

Our political taxonomy puts "fiscally conservative" voters mostly in the Republican voting bloc, but this seems indefensible. The last Republican president to act in a fiscally responsible manner was George H.W. Bush, who realized that his party had no taste for real cuts in the budget and raised taxes to deal with the deficit.

Republicans hated him for that. Merely failing to deal with the budget deficit would probably have allowed him to be re-elected, but raising taxes was not acceptable.

Yet one source informs us that "...the Republican Party is most often credited with creating the fiscal conservative ideal, despite the big-spending tendencies of the most recent GOP administrations."

Substitute "fiscal conservative rhetoric" for "fiscal conservative ideal" and you'll have it about right. The Republican Party from St. Ronald of Reagan onward has been all about lowering taxes. Reagan claimed that the effect of his lower taxes would be such an economic boost that revenues would increase rather than decrease. When this did not turn out to be the case, Republicans chose to stick with tax cuts and invent a series of justifications.

Reagan vastly increased the size of the government and nearly tripled the national debt. While there was some budget cutting early in his administration, it soon became evident that Republicans do not, in practice, want smaller government. They want government that spends less on Democratic priorities and more on Republican priorities.

In short, they want more goodies for their side, and they want to pay less in taxes. This is not fiscal conservatism. It is free-lunch conservatism. It is the reason Republicans are the party of "borrow and spend."

The "fiscal conservative" label has been a bit of marketing genius, but at some point, our country is going to have to face the truth. The tax revolt and the anti-tax movement have never been about cutting government, they've always been about getting a free lunch. Oh, sure, Republicans have talked a good game about cutting the sorts of programs Democrats support, but since they've wanted to spend more on Republican constituencies, there's always been an element of "we cheat the other guy and pass the savings on to you!" in their rhetoric.

If you want a tax cut, and you want it paid for out of someone else's pocket, how fiscally conservative are you?

The concept of "the other" has an enduring appeal to Republicans of a nativist bent. About 13% of the people living in America at present are foreign born, a percentage last seen in the 1920, which were about the peak for the Klu Klux Klan, then preaching "One Hundred Percent Americanism"

Republicans have clearly campaigned against those who are not 100 percent American by the standards applied by the Klan back in the 1920s -- White and native-born. Only what might be called the "Bundy fringe" have violated the law, as the KKK liked to do, but the nativists have this time allied themselves with the free-lunch conservatives. One group wants to cut a certain kind of spending that they think benefits "those people," the other wants to cut taxes regardless of the cost to later generations or society as a whole.

It's a marriage made in one of the inner circles of the Inferno.

Saturday, June 28, 2014

More notes for a novel in 1940s noir

by John MacBeath Watkins

"Get your mitts off me," I said.

"Those are your mitts," the bouncer answered. "See, they're connected with a string that goes through your sleeves."

It was a nice, quiet joint. There hadn't been a knifing in a month, and they'd hired librarians to shush the patrons.

The job sounded easy. Too easy. But what if those kindergarteners were tougher than I thought?

"Babe, I could go for a girl like you," I said, drinking her in with my eyes.

She had long legs, slender hips, lots of blond hair. Some men might not have liked the size of her Adam's apple, but I like that in my women.

The door to her bedroom was ajar.

"Let me go first," I said, pulling my snub-nosed .38 out of the shoulder holster. I slammed the door open and moved in fast, scanning the room with my eyes and my gun. Most of the bedding was on the floor, every drawer was pulled out and there was clothing strewn everywhere.

"Notice anything?" I asked her.

"It's just like I left it this morning," she said. "Don't mind the mess, it's always like this."


Wednesday, June 25, 2014

Former professions of famous writers

John MacBeath Watkins

Most writers did something else before they became famous writers. I've long been fascinated by this, because the experiences they bring to bear on their writing shape the narrative.

Herman Melville was a merchant mariner who later became a customs inspector when he found his writing wouldn't support him.

Mark Twain was a printer's devil, then a riverboat pilot before the Civil War and a journalist after that, before becoming a successful novelist, essayist and lecturer.

Dante Alighieri was a cavalry soldier and later joined the physicians' and apothecaries' guild before writing The Divine Comedy.

Miguel de Cervantes Saavedra, better known simply as Cervantes, was also a military man, serving as a marine in the Spanish Navy at the Battle of Lepanto, where he was wounded three times, leaving his left arm limp. Returning to Spain, his ship was captured by an Algerian corsair, and he worked as a slave for five years and made four unsuccessful escape attempts before his parents ransomed him and he could begin his literary career.

Solomon Northup, born free in New York, was kidnapped in Washington D.C. and worked for 12 years as a slave before he was rescued. New York had in 1840 established funding for rescuing its citizens who were kidnapped and sold into slavery, so apparently this was a problem for quite a few free New York blacks.

Frederick Douglass was born into slavery, and escaped at about the age of 18, later writing his autobiography and becoming an influential abolitionist and reformer.

Chester Himes got busted for armed robbery when he was 19, and began writing in prison. If you haven't read If He Hollers Let Him Go, do so immediately.

Nathaniel Hawthorne began writing while working at the Boston Customs House. He was also a magazine editor at one point, but earned most of his money in the customs service.

Captain Frederick Marryat, who set the pattern of square-rigged adventure stories, served in the British Navy as a midshipman under the infamous Lord Cochrane, later invented a lifeboat (earning the name "Lifeboat" Marryat) and developed a flag signalling system known as Marryat's Code. After the Napoleonic wars ended, he held the rank of captain and could still get commands, but he wrote a novel, Frank Mildmay, or, The Naval Officer, and sent it off to a publisher. When he returned from a two-year voyage, his book had been published and he was a best-selling author. He gave up his commission and devoted himself to writing.

Like Twain, Ernest Hemingway was a journalist before he became a novelist, but not until after he served as an ambulance driver in World War I. George Orwell was a journalist as well, but not until after he'd served as a policeman in Burma. Orwell also fought with a POUM militia unit in the Spanish Civil War.

Joseph Conrad ran away from his home in Poland at the age of 17, and became a merchant mariner. He became a captain in the British merchant marine, and worked at that until his health forced him to return to land and become a writer. Another merchant mariner was Jack Vance, a science fiction writer. He was nearly blind, but memorized the eye chart to become an able-bodied seaman.

Vance also studied physics and engineering. Robert Heinlein, another science fiction writer, studied engineering at the Naval Academy and had a career as a naval officer until he was forced by his health to retire and become a writer. Isaac Asimov, famous for inventing the laws of robotics, was a biochemistry professor. Arthur C. Clarke was a pensions auditor before World War II, became a radar operator during the war, and studied physics and mathematics after the war.

Aphra Behn, one of the first famous northern European women writers, was a spy until poverty and debt drove her to writing. Ian Fleming, Graham Greene, John le Carré, Muriel Spark, and Compton Mackenzie (author of Whisky Galore and of some early fiction with gay themes) also served in intelligence. Christopher Marlowe, who bought jokes for his plays from Shakespeare, was also a spy.

Mary Wollstonecraft worked as a lady's companion and a governess before becoming pregnant out of wedlock, not once, but twice. The second time she married British author William Godwin and began her career writing and campaigning for women's rights.

Jane Austen, born to the landed gentry, lived at home and seems not to have worked outside of it before beginning her literary career.

Baroness Emma Magdolna Rozália Mária Jozefa Borbála "Emmuska" Orczy de Orczi, aka Baroness Orczy, despite her noble birth, had little money and worked as a translator before writing Gothic novels which are still read.

Charlotte and Anne Brontë were governesses, and Emily Brontë worked as a teacher until the 17-hour days broke her health and she returned home.

W. Somerset Maugham was a medical student when he started writing, but he was so successful as a writer he had no need to practice medicine.

Dorothy Sayers is another who gained literary success without a prior career. She was also one of the first women to receive an MA from Somerville College in Oxford when those degrees became available to women.

Josephine Tey was the pen name of Elizabeth Mackintosh, a physical education teacher.

Ursula Le Guin did an MA in French and Italian literature, but worked as a secretary before she became one of the most respected living writers of science fiction and fantasy.

George Eliot was a magazine editor named Mary Evans before she was published as a writer under her pen name. George Sand was an often-straying housewife named Amantine Dupin before being published under hers.

Louisa May Alcott worked as a teacher, seamstress, governess, and domestic helper before success as a writer allowed her to focus on that craft.

Jack London escaped long hours working in a cannery to become an oyster pirate. After his oyster sloop got damaged beyond repair, he worked for the Fish Patrol, hunting poachers such as he had been. He signed on with a sealing schooner, and on finishing the voyage, fell on hard times and became a tramp. At this point in his life, he was still only 17, and became a high school student. A saloon keeper lent him money to go to college when he was admitted, but finances forced him to drop out.

He was 21 when he left for the gold fields of Alaska, and suffered scurvy there. He decided that the only way to get out of poverty was writing, and early on even when published, he was paid badly and late. By 1900, his fortunes had turned, and he made $2,500 writing that year. Keep in mind, that's about what a modest house cost in 1900.

One might have expected a man with such a heroic career to write the ultimate hero stories, but that fell to Robert E. Howard, now remembered for the Conan stories. He did a little journalism and worked as a stenographer for an oil company.

Flannery O'Connor was interested in birds, and raised peacocks, emus and ostriches before gaining her literary fame.

Isabel Allende worked for the U.N. and later translated romances into Spanish before launching her literary career.

Maya Angelou worked as a street-car conductor, night club dancer, prostitute, madame, and actor before gaining success as a writer. Robert Ludlum, after serving in the Marines, became an actor and theatrical producer before writing thrillers.

William Faulkner, rejected by the U.S. Army's air service in World War I, changed the spelling of his name and lied about his birthplace to join the RAF in Canada. He was still training when the war ended. He later worked as postmaster at the University of Mississippi before being asked to resign for "moral reasons." Faulkner was, of course, a drunk, and likely was drunk on duty. He often elaborated on his RAF experiences, fabricating war wounds, including a metal plate in his head.

F. Scott Fitzgerald, born with every advantage, was doing badly in college when he dropped out in 1917 to join the army. He worked for an advertising agency before gaining a reputation as a writer, then drank himself to an early death.

More later.

Thursday, June 12, 2014

A nation founded on debt (rethinking liberalism)

by John MacBeath Watkins

There is an email making the rounds among elderly white conservatives that quotes the founding fathers on "economics, capitalism, and banking."

A sampling:

#1 "A wise and frugal government… shall restrain men from injuring one another, shall leave them otherwise free to regulate their own pursuits of industry and improvement, and shall not take from the mouth of labor the bread it has earned. This is the sum of good government." — Thomas Jefferson, First Inaugural Address, March 4, 1801

#2 "A people... who are possessed of the spirit of commerce, who see and who will pursue their advantages may achieve almost anything." - George Washington

#3 "Government is instituted to protect property of every sort; as well that which lies in the various rights of individuals, as that which the term particularly expresses. This being the end of government, that alone is a just government which impartially secures to every man whatever is his own." – James Madison, Essay on Property, 1792

#4 "Banks have done more injury to the religion, morality, tranquility, prosperity, and even wealth of the nation than they can have done or ever will do good." - John Adams
And 11 more along those lines. One problem is that in practice, the founding fathers borrowed a great deal of money, much of it from the Dutch, to finance the war. After we'd won our independence, they followed Alexander Hamilton's advice and funded the debt, creating a stable market for securities which private companies could tap when they needed money.

Hamilton argued that public debt would be a blessing if it didn't become too large, and so it has been. We've eliminated the national debt once in our history, during Andrew Jackson's administration, and this was followed by an economic disaster, the Panic of 1837.

Another problem with the chain email is the headline. Capitalism hadn't been invented yet, which is why the founding fathers never used the word; it didn't come into common use until the mid-nineteenth century. They were talking about the economy, but not about capitalism.

Yes, markets and private property had existed since antiquity, but there is a great deal more than that to capitalism. In pre-capitalist societies, wealth was viewed as pretty much a zero-sum game -- I get my wealth by taking it from you. There might be resources not being used, fields not tilled, mines not dug, but the wealth was there waiting to be taken. The means of production, such as plows and spinning wheels, were usually owned by those who used them. The rich did not invest in looms or spinning wheels; those were for the peasants. You might do well as a merchant, but wealth was closely tied to power, because in a zero-sum game, wealth flows from the ability to decide who gets it.

Capitalism broke the relationship between the artisan and their tools. The capitalist was not an artisan who owned his tools; he was an investor in the means of production, and investment in constantly improving capital stock was the major source of growth. He hired artisans and laborers to use the means of production he owned (and in the early days, property laws ensured this would be a "he").

In Confucian thought, all wealth came from the land; merchants just moved it around, and artisans just rearranged it. The physiocrats, a school of thought that started in France, believed the same. At the time of the American Revolution, the main alternative system of economic thought was mercantilism.

The idea that capital could lead to growth was still being invented when this country gained its independence. The Wealth of Nations was published in 1776, and the modern theory of value was invented about a century later, in the marginal revolution.

The founding fathers fell mainly into two groups, the merchants and the planters. The planters, such as Jefferson, tended to be physiocrats. They believed in laissez-faire, which was rather convenient in that the abolitionists were already proposing that the government should interfere in their slave markets (Jefferson made a lot of his money selling slaves: "I consider a woman who brings a child every two years as more profitable than the best man of the farm," he remarked in 1820). Physiocrats also believed that all real value came from the land, another convenient thing for a planter to believe.

The merchants, such as Alexander Hamilton, tended to be mercantilists. They believed in trying to capture as much wealth as possible for their country. This meant high tariffs on imported goods, substantial government projects to develop the country, like the Erie Canal and the post roads, and if possible, colonies from which wealth could be extracted.

Both of these systems of thought were built on the notion that the wealth of the world is a zero-sum game. Capitalism is not a zero-sum game. David Ricardo wrote about the theory of comparative advantage in 1817, suggesting that if each country focused on doing what it does best and purchased from other countries what they could produce more cheaply, everyone would be better off. The mercantilists, though they were focused on developing the wealth of the nation, had at least some notion that investment could increase wealth. They came up with the American System, a program of public and public-private investments for the development of the nation, but had not realized what a mechanism of growth private capital could be.
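Ricardo's point is ultimately arithmetic, and a minimal sketch can make it concrete. The labor-hour figures below are Ricardo's own illustrative numbers for Portugal and England (wine and cloth); the fixed labor budgets, set to just enough for one unit of each good without trade, are an assumption of this sketch, not something from Ricardo's text. Note that Portugal is better at producing both goods, yet both countries still end up with more:

```python
# Labor-hours needed to produce one unit of each good
# (Ricardo's illustrative numbers from 1817).
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

# Assumed labor budgets: enough for one unit of each good in autarky.
labor = {"Portugal": 170, "England": 220}

def output_no_trade():
    """Each country splits its labor between both goods."""
    total = {"wine": 0.0, "cloth": 0.0}
    for country, h in hours.items():
        for good in total:
            total[good] += (labor[country] / 2) / h[good]
    return total

def output_specialized():
    """Portugal's opportunity cost of wine (80/90 of a unit of cloth)
    is lower than England's (120/100), so Portugal makes all the wine
    and England makes all the cloth."""
    return {
        "wine": labor["Portugal"] / hours["Portugal"]["wine"],
        "cloth": labor["England"] / hours["England"]["cloth"],
    }

before = output_no_trade()   # wine ~1.98, cloth ~2.04
after = output_specialized() # wine 2.125, cloth 2.2
```

With the same total labor, specialization raises world output of both goods, which is why the game is not zero-sum: the gain comes from reallocating effort, not from taking wealth from anyone.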

It is all too easy to impose our modern framework of thought on people long dead, but they had a different set of tools to work with. I don't think capitalism really existed until free trade and investment in the means of production were seen as essential to the creation of wealth. The physiocrats, such as Jefferson, were in favor of the former; the mercantilists, such as Hamilton, were in favor of the latter; but almost no one at the time the Constitution was written was in favor of both.

More on physiocrats and mercantilists here, on the nature of capitalism here.

Now you can see that the founding fathers' quotes presented here were, in fact, not about capitalism, but about a physiocrat's view of government. However, in practical terms, what they actually did about government debt was based on a mercantilist's point of view, and it's a very good thing that happened.

This country was deep in debt by the time it had won its independence. The physiocrats in Congress were generally in favor of screwing the investors, but Alexander Hamilton realized that the national debt could be a tremendous asset.

From this source:

His 1790 Report on the Public Credit proposed funding the debt, thereby creating a stable market in bonds in this country that enabled businesses to borrow more cheaply than they could have otherwise.

It is all very well to quote the physiocrats among the founding fathers on the subject of public debt, but their understanding of banking and debt generally was fairly primitive. They were wise enough to follow the advice of a mercantilist on the actual handling of the debt -- a debt that, "if it is not excessive, will be to us a national blessing."

Now, you might think mercantilists were really capitalists, but you'd be wrong. Mercantilists advocated the development of the nation, and wanted to get as much of the world's wealth in their country as possible. They were empire builders. Cotton grown in India would be shipped to England to be made into cloth, then shipped back to India, even though the shipping costs and the cost of English labor made the cloth more expensive.

Capitalism broke that bond as well, dooming multinational empires. Once global capital was able to move production to undeveloped countries where the labor was cheap, and avoid paying the taxes that had supported the empire, the feedback loop that supported empires was gone. It is in the nature of capitalism that empires don't pay, and the new world order is one built on alliances within trading blocs.

Rethinking liberal theory 1: Thomas Hobbes, blasphemer and patriot
Rethinking liberal theory 2: The outlaw John Locke, terrorist, liberal, and advocate of freedom
Rethinking liberal theory 3: A compact to protect property, or a conspiracy to create meaning?
Rethinking Liberal Theory 4: John Milton and the many shapes of truth
Rethinking Liberal Theory 5: Adam Smith, moral philosopher of the marketplace
Rethinking Liberal Theory 6: Mythmaking and manufacturing
Rethinking liberal theory 7: Hegel, the end of history, and the triumph of the liberal idea
Rethinking liberal theory 8: Liberalism and individualism: The invention of the Util and the way west
Rethinking liberal theory 9: Property and freedom: Why language is the basis for the social contract
Rethinking Liberal theory 10: Physiocrats & mercantilists: The economic philosophies of the founding fathers
Rethinking Liberal Theory 11: Stateless income, global capital, and the death of empires
Rethinking Liberal Theory 12: Capitalism: So much more than market
Rethinking liberalism 13: What is money?
Rethinking Liberalism 14: Tribalism and the emerging new world order
Rethinking liberalism 15: The poverty of neoconservative philosophy
Rethinking Liberalism 16: More on the poverty of neoconservative philosophy

Monday, June 9, 2014

More on the poverty of neoconservative philosophy (rethinking liberalism)

by John MacBeath Watkins

Thinking further on my earlier post on the poverty of neoconservative philosophy, it seems to me that I should explore Leo Strauss's idea that totalitarianism is a result of the modern nihilism found in Thomas Hobbes's work. Strauss claimed that what opposes this nihilism is the effort to build the just society.

First of all, we should note that Marxists are all about building a "just" society, by their own standards. They are great believers in the idea that a just society can be achieved by overthrowing capitalism and building a communist society. The fact that they in practice failed to build a just society reveals defects in their thinking. For Strauss, Marxism represented the negation of any need for the political and economic institutions of society. This is what is known as political nihilism.

And it's true that Marx made the error of thinking that institutions that cause great harm in society, such as private property and religion, could be eliminated and the harm they caused would stop. He then imagined that the state would wither away, not realizing that when you take away major organizing institutions in society, the gap will be filled by the remaining institutions. In this case the state filled the gap, which is what has happened wherever Marxism has been tried.

But Marx was not the last Marxist. Lenin stressed the need for a vanguard of intellectuals to push for the revolution and head up the revolutionary government. Totalitarianism could not have come from political nihilism, which denied the need for the state, but only from people who firmly believed that they needed to be in control. To claim that Stalin was a political nihilist who didn't believe in the need for political or economic institutions is utter nonsense. Stalin clearly, based on his actions, believed in a strong, centralized state, firmly in control of the economy, the political life, and even the beliefs of its citizens.

Anyone actually wanting to practice political nihilism in a Stalinist state would have been killed. Marx may have preached a sort of political nihilism, but the lacunae in his own philosophy meant that in practice, all Marxist rulers have been firm believers in a powerful central state.

And what of the fascists? Did they believe in abolishing the political, economic, and social institutions of society?

Hardly. They were big believers in the ideology of nationalism, a strong central state, and strong cooperation between the state and industry. They were authoritarian not because they believed political institutions should be abolished, but because they believed, as Benito Mussolini put it, in "All within the state, nothing outside the state, nothing against the state."

In short, Marxists were totalitarian in practice because they were not believers in the basic tenets of political nihilism.

But were they moral nihilists, a more familiar sort of nihilist?

Strauss deemed Hobbes a nihilist because his philosophy was based on "mere preservation." Hobbes, after all, said that we needed a ruler to enforce laws, so that we would not meet a violent death.

But in saying the ruler has value because he (and the ruler Hobbes had in mind was his pupil, Charles II) does a job of work for the citizen, he was laying the groundwork for democracy. After all, what if the ruler sucks at his job? Shouldn't you be able to fire him? And why should the ruler pass the job on to his first born son? Shouldn't the citizens be able to hire the rulers they want?

Hobbes had invented a new basis for the legitimacy of rulers, which was needed because the Thirty Years' War and the Reformation had destroyed the traditional bases for that legitimacy. But his new basis didn't really support the outcome he wanted, which was the absolute monarchy of his friend and pupil.

Was he a moral nihilist? It's a bit hard for me to see him that way. He believed that the injustice of violent death, of theft and banditry, could only be avoided by having a ruler with the authority to enforce the laws we want enforced. In fact, he wanted a society more just than the chaos of the Thirty Years' War would permit. He had seen what the breakdown of political and social institutions could do, and charted a path away from that.

In Strauss's eyes, this focus on the material matter of remaining alive made Hobbes a nihilist. Since he was arguing in favor of political institutions, he cannot have been a political nihilist, so he must have been indicting Hobbes as a moral nihilist.

If we were to accept that Hobbes was a moral nihilist, should we also accept that this was the sort of modernist approach to ethics that led to the totalitarian philosophies of fascist and Marxist states?

This seems dubious. Marx clearly was motivated by an effort to build a just society, the same goal Strauss admired; he just disagreed about what constitutes a just society. Lenin agreed with Strauss that society would inevitably be ruled by a small group of the "best" people; he just disagreed with Strauss about the nature of the group that should rule.

Were fascists moral nihilists? I believe that rather, they had a perverted sense of justice. Keep in mind, worse things are done in the name of justice than have ever been contemplated in the name of crime. Mussolini even referred to fascism as a religion.

The Holocaust, the Inquisition, and the killing fields of Cambodia were not carried out by people who believed in the evil of what they were doing. They were carried out by people with a deep conviction in the justice of purifying the world of bad people. They were following the tenets of their beliefs in building a just world to the logical conclusion. People without such an ideology, such a central myth, people merely concerned with the preservation of their own lives, would not have acted in these ways.

Which may explain why the Nocturnal Council in Plato's The Laws bears more than a passing resemblance to the Inquisition. Plato considered central truths, that is, a central myth of society, to be necessary for building a just society, and a body to enforce that belief to be essential. But when it was put into practice, this notion produced a system that was notoriously unjust.

This makes it rather odd that Strauss would place such emphasis on belief in a central myth as being necessary for building a just society. Straussians tend to call that central myth "American Exceptionalism," by which they appear to mean something different than the Marxists who coined the term. Those Marxists claimed that America didn't have the kind of stratified class structure that made Marxism attractive to European workers.

Neoconservatives seem to mean it more in the sense of John Winthrop's 1630 sermon, "A Model of Christian Charity," which referred to "A City Upon a Hill," indicating the notion that America is a model of what the world should be. This is really a claim of national greatness, not so very different, in fact, from the claims of national greatness made by fascists in Germany and Italy in the 1930s.

Certainly I prefer Winthrop's vision of Christian love and charity to Hitler's vision of a triumphant Aryan race. But it is not the central idea of America. Winthrop wrote his sermon in 1630 for a Puritan audience. Most American settlers were not Puritan. Many were Anglican, or Baptist, or Methodist, or Quaker, or Catholic or Jewish. To say that the Puritan project was the project of America is to vastly overstate the Puritans' importance. Many of my ancestors were Quaker, and became so after one of them was kicked out of the Puritan church for giving aid and comfort to Quakers. Being the descendant of those who were kicked out of the church Winthrop belonged to, for being too inclusive in their associations, makes me skeptical of the notion that a Puritan in 1630 defined the essence of America.

The essential nature of the American experiment has much more to do with the thought of John Locke than John Winthrop. Winthrop did not believe in religious tolerance or democracy. He presided over the trial of Anne Hutchinson, who did not agree with the Puritan credo that it took both faith and good works to get into heaven. Like many modern-day Christians, she believed that faith alone was enough. For this she was labeled a heretic and banished from the colony, and Winthrop called her an "American Jezebel."

Nor was Winthrop an admirer of democracy, saying:
"If we should change from a mixed aristocracy to mere democracy, first we should have no warrant in scripture for it: for there was no such government in Israel ... A democracy is, amongst civil nations, accounted the meanest and worst of all forms of government.  [To allow it would be] a manifest breach of the 5th Commandment."
To my Catholic and Lutheran readers I should explain that Winthrop was using the Calvinist system for numbering the commandments, so he was referring to "honor thy father and mother," not "thou shalt not kill."

Thomas Jefferson, author of the Declaration of Independence, thought very highly of John Locke and did not approve of the intolerance of the Puritans.

James Madison, who had as much to do with the framing of the Constitution as anyone, argued in Federalist Paper #10 that religion was one of the major sources of faction in a country, and that this tendency to faction could only be controlled in a large and diverse republic. Madison was also a major supporter of the Bill of Rights, in which the rule against the establishment of religion ensures that no one can be punished for not believing what people like John Winthrop might think they should.

The vision of America that Jefferson and Madison proposed was one that differs greatly from the vision neoconservatives insist on. The open society they wanted did not rely on perverse readings of ancient texts, but on easily understood concepts embodied in the Constitution. Straussian readings tend to seek the hidden meaning of texts, but Jefferson and Madison and the other founding fathers were doing their best to make their meaning clear and persuasive. I don't think they would have seen the point of hiding their meaning, and I suspect any hidden meaning found is one made up by the reader.

In practice, the neoconservative notion of American exceptionalism amounts to an assertion of national greatness. In policy terms, the neoconservative idea seems to be that America can lick any man in the house, and should fight anyone who looks at us funny. Their American exceptionalism amounts to nothing more than Amerika über alles, hardly a slogan for a democratic country.
