
Wednesday, November 19, 2014

Is America really so violent?

by John MacBeath Watkins

People who compare the United States to European countries say we have an extraordinarily high murder rate. But is that the appropriate comparison?

For a country located in the Americas, the United States has a relatively low murder rate. Canada and Chile are the exceptions. I suspect the issue is cultural. One thing that has happened with colonization is that some cultural aspects of the mother country are preserved from the time of colonization. I would look to the murder rate in the mother country at the time the country was colonized to explain a high murder rate in that culture today.

The murder rate in Europe in the middle ages was extremely high, and dropped quite a bit during the time the Americas were being settled. Steven Pinker, in his book The Better Angels of Our Nature, states that murder rates were about 30 times higher in the middle ages than they are now. If my theory is correct, the earlier a country was settled, the more likely it should be to have a high murder rate.

(Map: murder rate per 100,000 inhabitants in 2012.)

This seems to go against the fact that Chile has a low murder rate, even though the conquest of Chile started in 1540. One answer to this is that the low murder rate in Chile reflects the relatively strong state there. A strong state tends to reduce the murder rate because it's not good for the state to have taxpayers killing each other, any more than it helps a farmer to have his livestock fighting.

The early Chilean state was small and homogeneous, prevented from expanding northward by the desert and southward by the unconquered Mapuche Indians. The conquest of Chile was gradual, and as a consequence of failing to conquer the Mapuche, Chile relied more than most Spanish colonies on European settlers. In fact, parts of the country attracted German settlers in the mid-19th century. Much of the country's expansion occurred after it declared independence from Spain in 1818, and with many immigrants arriving after that, the country could be expected to be culturally closer to modern Europe than nations settled earlier.

One of the uses Britain made of its American colonies was as a place to transport criminals. Once transportation to America as a punishment became impossible, Australia and Canada began to absorb Britain's malcontents. And whereas the French had chosen mainly to trade with the Indians and send only people they could trust to the new world, the British sent people pushed off the land by the Inclosure Acts, criminals and pretty much anyone they felt they were well shed of. As a consequence, the British culture imported to Canada was that of the 19th century, while the British culture imported to America was that of the 17th century.

Contrast this to Venezuela, a country where Columbus actually landed, which was colonized to a great extent in the 16th century. It has an intentional homicide rate of 53.7 per 100,000 annually, in contrast to the 3.1 of Chile or the 1.6 of Canada, and the United States of America turns out to be one of the least dangerous countries in the new world, with a murder rate of 4.7 per 100,000 (all figures are for 2012).

So, if the culture of violence in new world countries reflects the timing of their formative European colonization, what made European murder rates fall so much?

For one thing, violence became harder to get away with. As European states became more centralized, policing got better, and it became harder to walk away from a murder and start over elsewhere. In addition, as states became more centralized, warfare within a country became less practical -- dukes who might have tried to expand their duchy found that they were restrained by the increasing power of kings.

Another factor was the decline of subsistence farming and the increase in trade and industry. The key to wealth and power became less how many farms you could subjugate by the sword, and more the trade and industry you could dominate. Power moved from men with horses and armor to men with ledgers and gold.

While many a duke had risen to his post by violence (a duke was originally a war leader), few merchant princes found violence the path to influence and wealth. Because commerce is not a zero-sum game, cooperation was a better path.

The shift from agrarian empires to mercantilist empires was a shift from warring tribes to warring nations, in which the violent domination of resources and trade routes led to greater national wealth. This was the great era of colonization. The shift from mercantilist empires to capitalism put further emphasis on cooperation, and undermined the colonial empires. Modern global capital creates stateless income that makes wars less rewarding. Because that capital doesn't enrich the state that spends money on wars, but goes where it won't be taxed, much of the feedback mechanism that made empires possible is gone.

So, it's easy enough to see why violence has become less common in Europe. From the top down, it has become less rewarding and harder to get away with. The question remains, why did their colonies preserve the barbaric attitudes of an earlier age, and what can be done to move them beyond that?


Friday, November 14, 2014

Anomie and the search for meaning

by John MacBeath Watkins

The French have a word for it: Anomie. No norms. It is a condition in which people find themselves so disconnected from social norms that they cannot find their place in the world. Émile Durkheim used the term in his book, Suicide, published in 1897.

His theory was that a rapid change in the values and standards of society would lead to a feeling of alienation and purposelessness. Picture the situation: society is changing rapidly, and while it may try to prepare you for your place in it, that place is no longer there by the time you are trained for it. Your entire life plan, the existence you have spent your childhood and adolescence preparing for, is nowhere to be found.

Are you a failure? No, worse. There was no path to a life of honorable labor, no place for you in the world.

You cannot even fail, because all that you have prepared for is simply not there. You were groomed to play a part in a pantomime that has been cancelled. And here you are, alone on the stage in a parody of makeup for a part no one cares to see you play. How meaningful is your life, then? If society were a dictionary, you would not even be a word, just an indecipherable squiggle in the margin.

That is anomie, diagnosed at the end of the 19th century, discussed to death in the 20th century, a wallflower at the party in the early 21st century.

As you might expect from the title of Durkheim's book, suicide was one common response to this condition. Perhaps it still is. We don't talk about anomie much anymore. People still kill themselves, people still feel disconnected from social norms, but that 19th century term is less common than it once was. It's a shame, because the term explains a lot.

Much of what makes us human is in our interaction with others. It is in the social realm that we display our sanity or madness, and our very humanity. That is why solitary confinement is such a severe punishment, one that can even produce psychological effects such as hallucinations, paranoia and obsessive thoughts. We are meant to be social creatures, incomplete without interaction with others.

Once, society changed slowly, and when we spoke of the Old Kingdom, the Middle Kingdom, and the New Kingdom, we meant social orders that differed little and lasted a thousand years each. Then it was possible for generation following generation to fall easily into their social roles, and we can suppose anomie was not a problem. Those days ended in the Axial age, which we discussed in this post.

When the world started changing too rapidly for an entire society's structure to adapt new places for its members, individuals had to find their own places. That may seem hard enough, but when they invented their new positions, they had no norms established for the new ways of life they were inventing. They needed guidance, and they got it in a great age of prophecy. Across Europe, the Middle East, and Asia, prophets told people that they should be compassionate, that they should do unto others as they would have done unto them. And that was enough, for two or three millennia. People could think for themselves, and still think about others, with the guidance of the prophets.

And then, the world started changing faster, and faster, and faster. The feeling of disconnection from social norms, social roles, spread wider and wider. Some felt the change, and said “God is dead.” Some felt the change, and said, “God, save me!” and started churches dedicated to preventing change. Some felt the change, and the loneliness, and the pain, and became angry, and said, “God, I will kill those who caused this!” and became terrorists. And some, strangely enough, said, “God is dead. I bet we can build a better one,” and started dreaming of an all-knowing computer.

Do you want to know how they felt? Do you know who's to blame? Look in a mirror. No, seriously, that's one way to study the problem. Psychologists have people look in a mirror in order to get them to focus on themselves, in order to study one of the central problems of psychotherapy.

People come to see a psychologist very often because they are depressed. The psychologist needs to assess the problem, so has the client talk about themselves.

This self-focus causes people talking about themselves to become sadder if they do it in private, or to experience social anxiety if they do it in public. In essence, they experience a heightened sense of anomie, of dissociation from the warmth and comfort of human contact, because they are focused on themselves.

There is the problem, then. To be human requires participation in human society, and rapid social change can cast us adrift, maroon us on an island of the self. And as we try to understand ourselves, we focus on ourselves, and feel more isolated and alone as a consequence.

The shared hallucinations of our social constructs are meaningless if we are alone. If we are only animals, eating, sleeping, reproducing, we are only the appetites our genes have programmed us to have. If we are human, we live in a world invisible to most animals, a world of language and symbol, in which what we pass on to others may not even be physical matter, such as genes. It may be our ideas, ideals, songs and gods. It may be the world of meaning, the most human world of all.

However out of place we may feel, however useless our social skills and unattainable our aspirations, what makes us human is the people who have shaped us. We are never alone, because they are a part of us, and we are a part of those whose lives we've touched. Even the worst families teach their children to be human. What those children reject from those who have shaped them sets the boundaries of their souls; what they accept gives those souls their content.

Unlike most animals, we can cooperate with one another even without family ties. This is because in that ethereal world of symbolic thought, we can pass on a part of who we are to people genetically unrelated to us. Our thoughts are at least as fecund as our bodies, and we lust for the sort of social intercourse that will allow us to transmit our wisdom to each other and build up something greater than ourselves.

Anomie is a symptom of the failure to do this, a sign that we must find a way to reach one another and find comfortable niches for ourselves in the great body of civilization.

Friday, November 7, 2014

How democracy ends: The Sejm-Weimar problem

by John MacBeath Watkins

The Polish-Lithuanian Commonwealth was once a force to be reckoned with, a country more powerful than Russia and far bigger than most of the countries of Europe. What happened to that empire?

Well, the commonwealth was one of the few countries in Europe that had a really influential parliament. It was called the Sejm, and it operated as a legislative body starting in 1493 and became the legislative body of the Polish-Lithuanian Commonwealth when that state was founded in 1569. It was, like many republics prior to the modern era, not particularly democratic. Its members were elected indirectly (by regional bodies) by the nobility, which amounted to about 10% of the population.

For much of its existence, any member could nullify legislation that had just passed and end the session by shouting "Nie pozwalam!" (I do not allow.) This is known as a liberum veto.

Harvard political scientist Grzegorz Ekiert argued that:
The principle of the liberum veto preserved the feudal features of Poland's political system, weakened the role of the monarchy, led to anarchy in political life, and contributed to the economic and political decline of the Polish state. Such a situation made the country vulnerable to foreign invasions and ultimately led to its collapse.
For one thing, foreign regimes discovered they could bribe legislators to use their veto, thereby paralyzing the government. This led to the partition of the empire and foreign occupation.

In Germany, the Weimar Republic had a rough start, but after the hyperinflation got tamped down, there were some very good years -- until the crash of 1929. The American banks that were helping Germany pay its reparations for WW I had to call their loans in, unemployment went up just as it did in other countries, and the people responded by throwing the bums out. Unfortunately, the bums they threw in tended to be people who didn't believe in democracy, like the German National People's Party, the Communists, and the Nazis.

Unable to form a majority coalition, Heinrich Brüning formed a minority coalition, but was often forced to rule by emergency decree, because the Reichstag could not pass legislation. Unfortunately, his policies for dealing with the Depression were exactly wrong -- he tightened credit and rolled back wage increases, making him unpopular with the electorate and the Reichstag.

Since his decrees were actually ruining the country, Brüning opened the door for the election of populists like the Nationalist Party and the Nazis. Even business interests turned against him, though it must be admitted that some started financing Hitler long before Brüning became chancellor.

In each case, democracy failed because it could not govern. Francis Fukuyama, in Political Order and Political Decay, argues that American political order is decaying because it has become too easy for special interests to veto decisions. This, he claims, leads to a government unable to function well enough to address the nation's challenges, which undermines the people's faith in its ability to address their problems, which leads them to deny it the resources to address their problems, which leads to...well, you get the idea.

The destruction of the Polish Commonwealth and the descent of Germany into the totalitarian hell of Nazi dictatorship had this in common -- democratic, representative government ceased to function. When democracy can't address the people's problems, they will turn to a strongman or watch things get worse and worse.

So it is with real dread that I read this:
To prevent Obama from becoming the hero who fixed Washington, McConnell decided to break it. And it worked. Six years into the affair, we now take it for granted that nothing will pass on a bipartisan basis, no appointment will go through smoothly, and everything the administration tries to get done will take the form of controversial use of executive power.
Sound familiar? This is the way democracy is destroyed. As long as politicians find they can increase their clout by making sure government does not address people's problems, and can avoid taking the blame for how things turn out as a result, our democratic system is in danger.





Wednesday, October 29, 2014

How to start a dark age and what myths should do for you

by John MacBeath Watkins

The term "dark ages" is not much used anymore, but it still conjures up notions of an age of ignorance following the fall of a great civilization.

It was first applied to the entire Middle Ages in about 1330 by Petrarch. Light and darkness had symbolized good and evil, but Petrarch made them symbols of knowledge and ignorance. He saw his own time as one of darkness, and aspired to a time of greater light.

That time of light arrived as the Renaissance some time later, the dawning of a time when people admired knowledge and it became more widespread. Then came a time when archaeology started digging up the "dark ages" and found a great deal had been known and accomplished in the middle ages, so now we seldom use the term for anything but the early middle ages.

It's easy to put a starting date to the dark ages. Emperor Justinian closed the pagan and Jewish schools in 529 AD, and the dark ages began.

The decree, as translated by James Hannam, reads as follows:
We wish to widen the law once made by us and by our father of blessed memory against all remaining heresies (we call heresies those faiths which hold and believe things otherwise than the catholic and apostolic orthodox church), so that it ought to apply not only to them but also to Samaritans [Jews] and pagans. Thus, since they have had such an ill effect, they should have no influence nor enjoy any dignity, nor acting as teachers of any subjects, should they drag the minds of the simple to their errors and, in this way, turn the more ignorant of them against the pure and true orthodox faith; so we permit only those who are of the orthodox faith to teach and accept a public stipend. 
Justinian seems mainly to have aimed this at the Athenian Academy, which traced its (sometimes interrupted) existence back to its founding by Plato in the early 4th century BCE, but he also closed Jewish schools and schools run by those judged to be heretics.

In so doing, he centralized power over what was deemed to be true. The decree made it illegal to teach things that were contrary to the teachings of the "catholic and apostolic orthodox church."

There were Greek philosophers who had figured out not only that the earth was round, but had calculated pretty accurately its circumference. They knew that the rotation of the earth explained the sequence of day and night. Justinian didn't make it a crime for the great pagan scholars of his age to write and publish -- that came later -- but he shut down the Academy, leaving the scholars to make their own way.

Hannam is a skeptic about the impact of this action. Many pagan documents survived, and were even taught in Christian academies.

But the schools in the Eastern Roman Empire were survivors after the fall of the Western Roman Empire in 476. Justinian was the last of the Latin-speaking emperors of the Eastern Roman Empire, and he sought to reconquer the territory that had been the Western Roman Empire, but failed. As the empire's grip on Europe slipped, the political institutions that had united it failed, and the only pan-European institution remaining was the Church. It became the dominant force in the preservation of knowledge and the maintenance of teaching institutions and traditions. And it demanded allegiance to what the Church believed.

Some scholars and some texts made their way to Persia, and with the rise of the Muslim religion, the schools that remained in Alexandria and Cairo fell into Muslim hands. Thus began the golden age of Muslim science and philosophy, early in the 7th century AD.

The golden age of Muslim science and philosophy spanned from 750 AD to about 1100 AD. What happened then?

The Incoherence of the Philosophers, that's what. The second-most influential Muslim cleric (after Muhammad) was a scholar named Abu Hamid al-Ghazali, who wrote a book of that title published in the late 11th century. He argued that those Muslim scholars who had based their works on Plato and Aristotle were wrong -- essentially, heretical. The spread of his thought led to religious institutions that taught that human reason by itself cannot establish truth. Although al-Ghazali himself had nothing against science, this in effect meant that if you really wanted to establish truth, you didn't go to a scientist or a philosopher who had devoted his life and efforts to learning about the thing in question. Instead, the final arbiter of truth would be a cleric who specialized in the Koran.

This led to a decay of Muslim science and philosophy. Some would say it led to a dark age for their civilization.

This seems to be the way to cause a dark age: You simply give religion authority over establishing what is true of the physical world.

Religion is in the business of delivering eternal verities, not of discovering new things. In fact, in such celebrated cases of the discovery of new things as Galileo's astronomy or Darwin's Origin of Species, religion has fought against new knowledge of how the universe works.

Joseph Campbell, in Myths to Live By, wrote that religion or myth (the difference seems to be that myths are religious beliefs no longer in use) serves four functions:

One, "to waken and maintain in the individual a sense of awe and gratitude in relation to the mystery dimension of the universe..."

Two, "to offer an image of the universe that will be in accord with the knowledge of the time..."

Three, "to validate, support, and imprint the norms of a given, specific moral order, that, namely, of the society in which the individual is to live."

Four, "to guide him, stage by stage, in health, strength, and harmony of spirit, through the whole foreseeable course of a useful life."

Can a religion that fails in the second function succeed in the other three? I doubt very much it can, because a failure in one area undermines faith in the truth of sacred knowledge in all the others. How could a church that taught the earth was flat have any authority after we had photographed the earth from the moon?

But the Catholic Church did not remove Galileo's books teaching heliocentrism from its Index of Forbidden Books until 1758, and in 1992 the Pope announced that the church accepted that the earth moves around the sun. I can find no indication, however, of the verdict of the Inquisition against Galileo being rescinded. The committee Pope John Paul II appointed in 1979 had, by 1992, concluded that the Inquisition had acted properly by the standards of its day, although Galileo was right about the sun and earth.

So, that's all right. Retard intellectual progress by a century or so, and it's all in good fun. In 2008, Pope Benedict XVI cancelled an appearance at La Sapienza University because some students and professors sent him a letter protesting the Pope's expressed views on Galileo. He was probably thinking, "why you talkin' 'bout old stuff?"

It was the notion that there had been a dark age that led people to call the blossoming of knowledge and science the Enlightenment.

The Counter-Enlightenment, which started not long after the Church took Galileo's books off the Index of Forbidden Books, has argued that the Enlightenment undermines religion and the political and social order. This is, in fact, the basic stance of conservatism since at least Edmund Burke. The term "Counter-Enlightenment," as I'm using it here, does not refer to a single coherent movement with identifiable leaders, but rather to a wide span of groups and individuals who have argued against the goal of constant progress toward new knowledge and a more rational society espoused by the great Enlightenment thinkers.

They are probably right in arguing that the Enlightenment has undermined religion and the existing social order. After all, the Inquisition is a shadow of its former self, the church has had to repeatedly retreat on who is listed on the Index of Forbidden Books, and the most recent Pope has finally said that the beliefs of the Church do not conflict with the big bang theory about the origins of the universe or Darwin's ideas about the origin of species. It would be better if the church had not involved itself in such matters in the first place, but if it must make pronouncements about the nature of the physical world, it will have to change its tune when our knowledge changes or be undermined by new knowledge.

We are still fighting this battle. Zealots want their religion's version of the origin of species taught in public schools (that species originated as God made them), and moral notions, such as whether it is better to condemn homosexuals or accept them, are being fought out as the culture changes. A church that has failed to distinguish between its core beliefs and issues that seem less religious than social must change or fail the test of providing a world view in harmony with the knowledge of the society to which it offers spiritual guidance.

The Catholic Church is a handy way to talk about this, precisely because it is so well organized. But it is accompanied in its problems with the Enlightenment by people of many faiths. The easy way to deal with such problems used to be the one used on Galileo: tell the inconvenient person to shut up or die. But at this point in history, the world is changing too fast and the knowledge base outside the church is too big to be controlled.


Saturday, October 25, 2014

Market power, monopsony and the porn industry

by John MacBeath Watkins

In a previous post, we discussed how changes in the music industry explain a bit of the Solow paradox, the fact that new technology is being adopted but productivity hasn't seen much increase. Now we have another example of a way in which technology is suppressing, rather than increasing, productivity growth.

It also shows how market power -- in this case monopsony, the dominance of a buyer in the marketplace -- can transfer wealth from one group to another in ways a free market wouldn't allow.

The porn industry, once an economically vibrant part of the economy, has been devastated by changes in the business even as it adopts new technology. Porn stars once had a decent income from their performances, but now many have to work as prostitutes on the side to support themselves. It's a bit like the musicians who used to make most of their money from recordings, and now find they must get their living from live performances.

As with the musicians, part of their problem is piracy. Computer technology allows the rapid and almost perfect copying of music and videos. As a result, many viewings of porn have been taken entirely out of the economic sphere.

But in the case of porn, there's another problem: the market power of the main distributor. The industry is dominated by Mindgeek, formerly Manwin. The company describes itself as being founded in 2013, but that's just when it took the Mindgeek name after a period of being known as Manwin. Each name change came after its owners ran into legal trouble, resulting in the sale of the business.

Mindgeek has something like monopsony power over the porn studios. They own an array of "tubes," the YouTube-like online distribution channels for porn. They also own a lot of porn producers, and are essential for the distribution of the works of other porn producers. According to a recent Slate article, Mindgeek doesn't always pay the porn producers when they put up a video on one of their sites:
Even content producers that MindGeek owns have trouble getting their movies off MindGeek’s tube sites. The result has been a vampiric ecosystem: MindGeek’s producers make porn films mostly for the sake of being uploaded on to MindGeek’s free tube sites, with lower returns for the producers but higher returns for MindGeek, which makes money off of the tube ads that does not go to anyone involved in the production side.
The result is that performers have to have sex more times to support themselves, performing for the videos and doing their "live" performances as prostitutes. But isn't more work for less money lower productivity, as we account for such things?

There was a time when one company in an industry owning most of the production and distribution would have set off alarms in the Justice Department and resulted in anti-trust action. That changed in 1980 with the election of Ronald Reagan. Word soon went out that the Justice Department would not be worrying about practices such as predatory pricing, and in fact was really only worried about monopoly power if it resulted in higher prices to consumers, essentially meaning that the Justice Department was now mainly interested in price fixing in its anti-trust enforcement. This was a legal theory advanced by Robert Bork in a book titled The Antitrust Paradox.

This radically changed the incentives for American businesses. Predatory pricing, a practice that got Safeway in trouble with the Justice Department in the 1960s, became a notorious tactic of WalMart. The key was not to use this power to raise prices, but to dominate its markets and use its market power to squeeze producers.

Mindgeek is using a similar tactic. It is distributing the product for free on ad-supported sites, while squeezing porn production companies and performers to lower its costs. It routinely violates the intellectual property rights to sexual performances, but is so essential to production companies and porn performers for distribution that many say they can't speak out about the problem.

So, why don't the production companies get together and refuse to sell to Mindgeek unless they get paid? Well, if they demand a given price for their goods, that would be price fixing, one of the few aspects of the anti-trust act that the government is still enforcing.

Production of porn films is down 75 percent from the year before Mindgeek was founded. DVD sales of porn are down 50 percent over the same time span, because who wants to pay for porn they can watch for free if they tolerate some ads?

Netflix and Amazon are starting to produce their own content. We can expect more ethical behavior from them than we see from Mindgeek, but the incentives will be the same. We need to re-examine how our legislation regarding market power affects people selling their wares to distributors or working for them.

The paradox referred to in Bork's book was that antitrust action to increase competition could increase, rather than decrease, prices. What he either failed to realize or didn't care about was that monopsony power, the market power of a dominant buyer, interferes with the business arrangements of people who contract to sell their wares or labor to that buyer. This represents a transfer of wealth from one group to another based on power rather than the workings of a free market, just as much as price fixing does.

Wednesday, October 15, 2014

Are we prisoners of language or the authors of our lives?

by John MacBeath Watkins

The Sapir-Whorf hypothesis tells us that language, because it gives us the categories we use to think, affects how we perceive the world. Some researchers have gone so far as to propose that people who have different color lexicons actually see colors differently.

Color me skeptical. I think it highly likely that the Sapir-Whorf hypothesis is correct on more culturally conditioned matters like our sense of fairness, but find it unlikely that it has much, if any, effect on how we see color, as opposed to how we talk about what we perceive.

But this basic insight, which has really been with us since Ferdinand de Saussure's book, A Course in General Linguistics, was published in 1916, gets at a deeper question. Are we prisoners of the languages that give our minds the categories we think with? Do we have individual agency, or are we prisoners of the structure of meaning?

Is language a prison that restricts us, or a prism through which we see new things?

Marxist political theory has insisted that the structure of meaning is a prison, that those who initiate us into it are enforcing capitalist cultural norms. Structuralist thinkers like Roland Barthes argued against what Barthes called the cult of the author, and in general, structuralists argued against the relevance of human agency and the autonomous individual.

(Image: Is this what language looks like?)

Structuralism has lost ground in its original field of linguistics. Noam Chomsky, for example, proposed that while structuralism was all right for describing phonology and morphology, it was inadequate for syntax. It could not explain the generation of the infinite variety of possible sentences or deal with the ambiguity of language.

When Saussure developed structuralism, the previous movement in linguistics had been philology, which studied texts through their history, and the meanings of words as they have changed. This is a necessary process when examining classical texts, and philology has sort of calved off from the glacier of linguistics.

Saussure proposed studying language synchronically, that is, as it exists at one time, which was perhaps a good corrective to the habits of his profession. But it did mean that the method was never intended to examine where the structure came from or how it changed. I doubt Saussure anticipated his method completely displacing the earlier methods of studying language. He simply felt it would be helpful to look at language as it exists, as well.

As the understanding of the power of language spread, however, it did tend to obscure the role of the individual. Its proposal to study language as it is, rather than try to attach it to its past, fit with the modernist movement's desire to shed tradition and make the world new and rational, sweeping away the dust and sentiment of the centuries and plunging into the future. At the same time, the concept of the structure of language and thought was frightening. How could we leave the past behind when all we could think was already in the structure?

Some tried to escape the structure of meaning, by making art that represented nothing, writing that tried to trick the brain into a space not already subsumed into the structure. But in the end, you cannot escape from meaning except into meaninglessness, and why do any work that is meaningless?

We are not words in a dictionary that can never be revised. We define ourselves, in fact, we are the source of meaning. The web of meaning we call language would disappear if there were no minds to know it, no people to speak and hear. We learn by play, and it is through creative play that we expand the realm of meaning. A web without connections is just a tangle of fibers. We are the connections, and our relationships to each other are the fibers.

Barthes was wrong. Authors are important, and authorship is pervasive. We are all the authors of our acts, writing the stories of our lives. Learning language and the other structures of society enable us to do this, to create new meanings, affirm or modify traditional meanings, and to influence others.

We need not choose between being ourselves and being part of humanity, because we cannot help being both. Yes, we are in large part made up of those we've known, the books we've read, the traditions we've learned, but we are the vessels in which those things are stored and remade and passed on with our own essence included.





Saturday, October 11, 2014

The Solow paradox, public goods, and the replicator economy.

by John MacBeath Watkins

Robert Solow, a Nobel-prize-winning economist, remarked way back in 1987 that "what everyone feels to have been a technological revolution...has been accompanied everywhere...by a slowdown in productivity growth.”

This has become known as the Solow paradox.

The golden age of productivity growth in the U.S. was between 1939 and 2000, with a slowdown in the 1980s, an increase in the Clinton Administration, and a slowdown again since.

What happened in 1939? Well, we began preparing for war. We didn't just build tanks, guns, ships, and aircraft, we also built roads and airports, and we dredged harbors and improved port facilities. Prior to World War II, flying boats were popular for serving areas that didn't have airports. After the war, there were plenty of airports.

The infrastructure binge continued after the war, and Dwight Eisenhower thought his greatest accomplishment was the Interstate Highway Act, which knit the country together with ribbons of road. Eisenhower understood logistics. He also understood that training was important if you wished to mobilize a large enterprise, and he elevated education to a cabinet-level office.

The federal investment in roads and education set loose the potential of the people and the land. And what have we done with this legacy of supply-side investment in public goods?

We've disinvested. Our public goods are getting old, and we've pushed onto students the cost of financing their education, so that someone can very easily come out of college $100,000 in debt. Higher education keeps getting cut while more is spent on other things, like prisons and welfare. Yet providing better education is one way we should be able to spend less on prisons and welfare.

Our bridges are getting old, some of our roads are getting rough.

But why didn't our technology give us the added productivity our disinvestment in public goods was taking away?

Maybe it did. Or maybe, sometimes technology is not necessarily useful for increasing measured productivity.

You measure productivity by seeing how many widgets are produced over a period of time by a given number of people. For example, in the cottage industry of music that existed before recorded music came along, you had to either make your own or hire a musician to make the music for you. Every song required a person making music to happen.

When recorded music came along, you no longer had to have a musician present to have a song. This meant fewer people would be employed as musicians, but also that people at the top of the profession could provide music for a larger number of people. A musician could sing a song once, and millions of people could buy that song and play it repeatedly. There was more music in our lives, it was made by the best musicians, and the cost was lower. Productivity increased.

But we don't know how much, because we weren't calculating the productivity of musicians. A few musicians at the top were more productive, but once a record had been sold, it could be played many times. Those repeat performances were taken out of the economic sphere, and not counted as performances in any accounting sense. The metric became the sale of the record, rather than the performance of the song.

But what happened with the digital revolution in music? Well, this:

http://www.theatlantic.com/business/archive/2013/02/think-artists-dont-make-anything-off-music-sales-these-graphs-prove-you-wrong/273571/

Unless there was a dramatic decrease in the number of musicians, this represents a huge decrease in productivity. Far fewer songs are being sold, and if the number of musicians remains constant, their productivity, measured by the usual economic methods, has decreased dramatically.

But we know that this has not been accompanied by an increase in the cost of a song. What has happened instead is that much of the music produced has been taken out of the economic sphere altogether. People are pirating the songs, and getting music for free. There is a cost to this; it's not really as easy to steal a song as to buy it, but those who wish to sell a song are competing with the free copy that can be pirated by acquiring some skill and jettisoning some scruples.

In the realm of classified ads, most of those are now free on Craigslist. Until recently, most newspapers have made their digital product free. As a result, whole swaths of the economy have dropped out of the economic sphere. When you produce something for a lower price, you increase productivity. When you produce it for free, in economic terms you aren't producing anything.

Thus, we have a different paradox, that of the replicator economy. On Star Trek, replicators can make anything you want for free. But if everything you need is free, how does anyone get paid? Musicians are already facing the replicator economy. Writers may face it soon.

This shows that not all technology produces increases in economic productivity, because some of it takes things out of the economic sphere.

In addition, highly-skilled artisans who were more productive than the average person found it impossible to keep making money at their craft. Take the example of the weavers and what William Blake called the "dark, satanic mills" that replaced them.

They increased the number of yards of fabric per worker, and reduced the level of skill required by the worker. Weavers, who had made a good living because they were more productive than average, were put out of work. Some became Luddites, smashing the machinery that was eclipsing their way of life, but in the end, they lost.

They were replaced by low-skilled, low-paid workers, including in many cases children. The price of fabric went down, but the way of life of the people working to make the fabric became worse. And while productivity was increased in the making of fabric, the skilled artisans found their skill no longer required.

A skilled artisan who ends up working as a laborer or a waiter is going to become less productive. And every disruptive technology must have the effect of obsoleting some skills. It takes time for people to adjust, and some never will. Society as a whole may benefit, but in the disrupted industry, there is some immiseration, and among the displaced workers, there will be a decline in productivity. In fact, the immiseration of the obsolete workers removes the incentive for other industries to become more productive, because it drives down the price of labor.

So, what does increase productivity?

Full employment. I know, I know, productivity actually climbs in a recession because you lay off your least productive workers, but in the long run, only a shortage of workers convinces companies to make capital investments to reduce the number of workers needed. If you have to bid up the price of workers to attract employees, it makes sense to increase productivity.

Right now, we have the spectacle of cash-rich companies buying back their own stock, which is great for managers who have stock options, but not great for productivity.

Disinvestment in infrastructure has been bad for productivity, and we could kill two birds with one stone by catching up on that, which would increase employment, and build improvements that would unleash some productivity. Investment in public capital goods could increase employment enough to stimulate investment in private capital goods.

But what are the chances of that? We have an entire political party dedicated to the proposition that government spending can't produce jobs. Until we get better lawmakers, we won't have better policy.



Tuesday, October 7, 2014

Undead persons, born at the crossroads of law and money

by John MacBeath Watkins

We argue about what a person is, in terms of the biology of the individual, but what if we were to apply the same standards to those undead things we call persons, the corporations?

The Citizens United decision determined that corporations are people for the purpose of free speech, in particular in spending money to influence political races. The Hobby Lobby decision granted corporations an exemption from a law because the corporation was considered to have religious views. And legislators in several states want to give a zygote the legal status of a person at the moment the sperm enters the egg.

I think these legal maneuvers reflect confusion about what a person is. A corporation has long been a person in terms of being able to sign contracts. But corporations are composite beings, made up of many biological persons. It is difficult to imagine them as persons in the sense of having faith, when they are likely made up of people of differing faiths, or of being politically engaged as citizens when they are made up of citizens with differing views. It is difficult to imagine a zygote having faith or political views as well.

This used to be a matter of religion, when philosophers argued about at what point a baby is ensouled. Aristotle argued that the baby did not have a soul until it laughed, which he said would happen about three months after birth. This allowed space for the Greek custom of exposing a child who was deformed, illegitimate, or otherwise found wanting, so that it died if it was not rescued by the gods or a passer-by. This possibility of rescue cleared the parents of the charge of murder.

When I saw Abbie Hoffman debate Jerry Rubin, he claimed his views on abortion were shaped by his religion:

"The Jewish mother does not consider the fetus a person until it finishes graduate school," he joked.

But he did have a sort of point. We may consider a newborn a person, but we don't allow it to sign a contract until it reaches its majority at 18 years of age. And yet, we allow newborn corporations to sign contracts and dodge taxes with the best of their human competitors.

This is because the corporation is not a human person; it is a gestalt being made up of human persons who are of age to sign contracts. We think it is owned by shareholders, but as a person, it cannot be owned. Shareholders buy a right to some of the corporation's future earnings, just as gangsters used to buy a piece of a fighter hoping to gain part of any purse he won (and then made sure of it by paying the other guy to go into the tank).

If you owned a piece of a fighter, you couldn't say, "I'm a bit peckish, cut off a leg for me and I'll eat it," because you can't own a person the way you can own a chicken. Nor can a shareholder demand the corporation sell off part of itself to buy out said shareholder. The shareholder must find a greater fool to buy the shares.

But what is a human person? We certainly grant them greater rights for being human, and increase their rights as they become more mature in their judgement. In short, we regard them, as Abbie Hoffman's mother did, as more of a person when they have more age and experience.

One way to explore when a person begins is to ask, at what point does personhood end? In general, our medical experts agree that human life ends when brain activity ends. Why, then, would we consider a zygote, which has no brain, to be a person?

While some who oppose abortion have claimed there is brain activity at 40 days, this does not seem to be the case. Certainly anyone with a heartbeat has some brain activity, but they would not be considered alive if they had no higher-level cognitive brain activity. One traditional notion was that the child was alive at its quickening. That would be when the mother first feels it kick, at about 16 or 17 weeks from conception.

But many things kick and are not human. Brain activity that includes higher-level cognition happens at about 26-27 weeks. But that doesn't mean the baby is ready to sign its first contract. Becoming human involves having a human brain, and while a baby is beginning to develop one at six months, it doesn't have one yet. More important, it hasn't yet been programmed.

The real distinction between human and non-human life is the strange sort of virtual reality of the world of symbolic thought. This is part of the reason we delay responsibilities of citizenship such as being able to sign a contract or vote -- it takes a while to gain wisdom. Another reason is simple biology. Our brains mature and with changes in our brains, our judgement matures.

All of this biology is lost in discussions of what sort of person a corporation is. When does brain activity begin in the corporation? Never. Servants of the corporation do the thinking. When does the life of the corporation end?

The corporation cannot be killed by driving a wooden stake through its heart, like a vampire, or with a silver bullet. It can theoretically go on forever, never living, but undead, a creature born at the crossroads of law and money, able to corrupt its servants with rewards and punishments and make them do things they would never do as individuals. The corporation is never ensouled.

A corporation can only die if certain words are inscribed on certain papers and placed in the hands of properly sanctified public servants, perhaps with a sacrifice of money.

They are a locus of power that has its own logic, but not its own soul or conscience, or in any way its own mind. Sometimes their servants manage to gain control of them and use them to increase their own power and wealth while sucking strength out of the corporation, like a demon chained to serve a mage, who is in turn warped by the pull of the soulless thing they have exploited.

Is it any wonder that corporations, these strange and powerful persons, continue to expand their reach and their power, even in the halls of law? They are like an alien hand in the market, a part of the body politic that can act in ways we don't associate with ourselves.

And yet, our Supreme Court has ruled that these undead things are persons who act as citizens, with the same rights of free speech as someone with a mind, and the same rights of religious conscience as someone with a conscience. The alien hand has extended its reach, and gripped our most precious institutions.

Can we find the words to limit their reach, or make the sacred documents that can confine them? Or can we find a way to ensoul them, so that they will be worthy of the responsibilities the court has thrust upon them?




Friday, October 3, 2014

Don't let your babies grow up to be booksellers

Mamas, don't let your babies
(to the tune of Mamas, don't let your babies grow up to be cowboys, with apologies to the late Waylon Jennings.)



by John MacBeath Watkins

Booksellers ain't easy to love and they're harder to hold.
They'd rather give you a book than diamonds or gold.
Thick glasses and old faded Levis,
And each book begins a new day.
If you don't understand him, an' he don't die young,
He'll prob'ly just get fat and turn gray.

Mamas, don't let your babies grow up to be booksellers.
Don't let 'em quote Dickens or drive them old trucks.
Let 'em be doctors and lawyers and such.
'Cos they'll never leave home and they'll recite obscure poems.
Even to someone they love.

Booksellers like reference rooms and gray rainy mornings,
Not little puppies and children and girls on the stairs.
Them that don't know him won't like him and them that do,
Sometimes won't know how to take him.
He ain't wrong, he's just different but his obliviousness won't let him,
Do things to make you think that he cares.

Mamas, don't let your babies grow up to be booksellers.
Don't let 'em quote Dickens or drive them old trucks.
Let 'em be doctors and lawyers and such.
Mamas don't let your babies grow up to be booksellers.
'Cos they'll never leave home and they'll recite obscure poems.
Even to someone they love.

Saturday, September 27, 2014

A friend to entropy and an anarchist at heart

by John MacBeath Watkins

S. was a tall woman, in her private life a sort of den mother for anarchists with whom she shared a house. Some time after she started working for me, she began dating a cousin of mine who I'd never previously met, and eventually she married him.

So, I suppose whatever forces shape our fate must have Intended that she be part of my cohort. I thought of her recently, when I asked my business partner where something was.

"Why do men always ask women where things are?" she replied.

That was an easy one.

"Because you move them."

She had, in fact, tidied away the object in question, and knew exactly where it was in precisely the way I did not. And that is one of the many great things about Jamie. She generally knows where she puts things.

Not so with S. And this was a problem, because of the way I tend to organize things.

If I want to be able to find something, I do the obvious thing: I leave it out in plain sight. This tends to lead to a bit of clutter, with the most often-used items on top.

S. wanted a neat work environment. To her, this meant less clutter. The way she achieved less clutter was in the obvious way: She put things out of view. Unfortunately, once things were out of view, she seemed to think the problem was solved, and actually finding the object next time it was needed was not a high priority for her unless it was something she used.

I came to view this in terms of entropy. Entropy isn't just a good idea, it's the law, and it clearly states that the universe is going from a higher state of organization to a lower state of organization.

My system of organization acknowledges this. My environment is in a state of apparently increasing disorder, and yet, for the most part, I can find things. The system S. used involved the expenditure of energy, which is entropy itself, to bring the environment to a state of greater disorder, in which information about where things were was destroyed, which is entropy again.

Now, it is possible for a system of putting things out of sight to preserve this information, even for it to preserve information better than my somewhat sedimentary system of piles. You would, for example, put stuff under "S" for "stuff," and other stuff under "O" for "other stuff."

This was not the method S. employed. Her method was to expend energy to destroy information, and I cannot help but think that on some level, she did so as a friend to entropy, an anarchist at heart.



Wednesday, September 24, 2014

The Self-conscious mythology of literature (The Strangeness of being human, cont'd)

by John MacBeath Watkins

There was an age of myth, when we explained the world to each other by telling stories about the gods. There was an age of fable, when we explained morality to each other by telling folk stories that belonged to the culture.

And there is the age of literature, when we know who wrote the story, and make it their property.

In the age of myth, we told each other stories that were supposed to be true, and didn't know where they came from. During the age of fable we understood them as parables. In our age of literature, we understand them as personal insight.

We regard all as contributing to our understanding of the nature of human nature, but by stages, they have become more tenuously connected with socially constructed truth, and more subject to our self-conscious understanding. We ask ourselves, is this a story we can accept as telling a truth about humanity, or do we reject it? Rejecting the myths was not optional during the time those religions were active. People lived in societies where the truth of the history of the gods was too socially accepted to reject.

To reject the story of a fable, we would have to say that we disagree with the culture, not with the gods. To disagree with an author, we have only to disagree with one individual. The judgments of the author and the reader are those of individuals, with the social acceptance mediated by markets -- which books people talk about, and buy, or feel left out because they haven't read.

We have other ways of understanding human nature, such as the more rigorous storytelling of science, the unreliable narrators of our families and friends explaining themselves as best they understand themselves, or the frantic efforts of our news sources trying to attract our attention to fragments or figments of information or gossip they think we might like to know.

But it is literature which works the most like mythology, transporting us into stories and allowing us to experience things that have not happened in our own lives. It instructs us or subverts us in ways mere facts do not, influencing the emotional armature on which we hang our facts and shape them into our beliefs.

As our culture has changed, we've become more self-conscious of the process. We may choose to judge a book by its author. We might decide that if Ayn Rand could live off Social Security in her old age, perhaps the philosophy she pushed, which would claim only the morally inferior "takers" would need a safety net, was not even something she could live by.

Or we may say to ourselves, "J.D. Salinger seemed so deep when I was so shallow, such a sallow youth, but now that I'm in the working world I have put aside that juvenile cynicism and taken up the more useful and manipulative cynicism of Dale Carnegie."

The ability to do this makes our emotional structure more malleable than we would be if the stories we based our lives on were eternal verities handed to us by the gods, as if the clay of our feet never hardens. This gives us an adaptability our ancestors never knew or needed, but what is the cost? Do we become chameleons, taking on the coloration of our social surroundings to better camouflage our true selves, or do we change our true selves at a pace never before seen in human history?

I suspect the latter. We are bombarded with stories, on television, in games, in books, even, for the dwindling few, in magazines. We grow by accepting them into ourselves, or set boundaries by rejecting them, and we are constantly reshaped, little by little, meme by meme.

Tuesday, September 16, 2014

On the persistence of print and absorbing information (publishing in the twilight of the printed word and the strangeness of being human)

by John MacBeath Watkins

I've been reading The Shallows, a 2011 book by Nicholas Carr about how the internet is rewiring our brains, and in the midst of this alarmist text on how much shallower we shall become because of the internet, I've found a cause for hope.

You see, the Pew Research Internet Project has found that younger people are more keenly aware of the limitations of the internet than their elders.

I am not a digital native. You might call me an internet immigrant, or even a digital alien. I've come to use the internet quite a lot, but I'm keenly aware that much of what we know isn't there. It's in books, or in people's heads.

But on this issue, as on so many others, I find that young folks today are in better agreement with me than my own generational cohort. From the report:

Despite their embrace of technology, 62% of Americans under age 30 agree there is “a lot of useful, important information that is not available on the internet,” compared with 53% of older Americans who believe that. At the same time, 79% of Millennials believe that people without internet access are at a real disadvantage.
I think that's a very realistic assessment. The internet makes it easy to find the information on it, but there's a lot that just isn't there.

And there is also the issue of what you want to read on the screen. In At Random, Bennett Cerf's memoir of his life in the publishing business, he noted that prior to the introduction of television, fiction outsold non-fiction about three to one. After its introduction and subsequent ubiquity, that reversed.

But when I looked at Amazon's list of top-selling e-books recently, there wasn't a non-fiction book in the top 40. The Barnes & Noble list of the top-selling hardcover and paperback books shows five of the top 10 being non-fiction.

It would appear that people's reading habits are adapting to the reality of reading things on a screen that can also be used to go on the internet, buy things, or roam around the infosphere. It is Carr's contention that silent reading, which invites us into private contemplation of the information and thinking of the author better than the public performance of reading aloud does, has been with us for about a thousand years. Printed books invited us into this quiet, private world, while reading on a device connected to the internet invites constant interruption. A text littered with links, GIFs, and videos invites cursory and distracted reading.

But stories were performed by a storyteller or a cast of actors long before silent reading came about. We can immerse ourselves in stories without thinking deeply, let them wash over us and sweep us away without trying to interpret or challenge their thinking. That seems to be the sort of thing we are willing to read on the screen, partly because the experience of being transported into the story makes lower demands on our intellect.

It seems odd that the newest technology is best for the sort of mythopoetic storytelling where we don't consciously absorb information, while books that demand our use of instrumental logic are best read on paper. Mr. Carr has himself noted that while e-books are now about a third of the market for new books, they are only about 12 percent of the sales of his own, somewhat intellectually demanding books.

Perhaps this is only a pause in the twilight of the printed word, until e-publishers work out the interface a little better. But I found when I was reading A Course in General Linguistics online, I wasn't getting as much out of it as I did when I got a paper copy. The online text was not interspersed with links, and the copy I got in book form had the distraction of marginalia, but I found it easier to immerse myself in a text I needed to read critically and contemplatively when it lay before me on paper.

Now, you might think that the young, more adapted to reading on the screen, would simply read the sort of short, punchy stories about who was showing side boob at the Oscars, and watch cat videos and porn, but according to the Pew study, they are more likely than their elders to have read a book in the past year.

Some 43% (of millennials) report reading a book—in any format—on a daily basis, a rate similar to older adults. Overall, 88% of Americans under 30 read a book in the past year, compared with 79% of those age 30 and older. Young adults have caught up to those in their thirties and forties in e-reading, with 37% of adults ages 18-29 reporting that they have read an e-book in the past year.
Interestingly, e-books seem to have caught on with older adults first, perhaps because you can adjust the type size in an e-book.

But for now, e-books seem to have unexpectedly plateaued, and printed books persist.

Wednesday, September 10, 2014

Religion as an interface: The Strangeness of being human cont'd.

by John MacBeath Watkins

One of the most popular posts on this blog explores the roots of religion, and the need we have for a mythopoetic understanding of the world. Scott Adams, blogger and creator of the Dilbert comic strip, says that religion is not a bad interface with reality.

And it strikes me that as we've made our machines more compatible with us, we've made them more artistic and poetic. I do not speak machine language, but I am able to communicate with my computer through my simple faith that when I reverently click an icon, the file will open.

On rare occasions, I have to use the command line to communicate in a more concrete way with my computer, and sometimes I even have to open the back and stick in more memory. But I don't really understand the machine in the way my nephew Atom Ray Powers, a network administrator, does, nor do I understand the software the way his brother, Jeremy, a programmer does. And neither has studied assembler code, which my uncle Paul learned after he was injured out of the woods as a logger.

It's as if we are replicating the way people perceive the world. The graphical user interface gives us a visual, metaphorical understanding of how to face the reality of the computer, just as religion gave us a metaphorical, poetic, and often visual way of interacting with the reality of the world. The command line gives us greater control of the computer, just as technology gives us control of nature. Science attempts to learn how the world really works, at deeper and deeper levels, similar to knowing how the transistors work and how to read machine language.

The fact that computer scientists, who started at the scientific end of things, felt a need to make the interface more metaphorical and even artistic tells us something about how humanity interacts with the world. The intuitive approximation is vital if we are not to be overwhelmed with detail. It is sometimes said that ontogeny recapitulates phylogeny, because every fetus goes through phases of looking like a primitive fish, then a salamander, and eventually takes on human form. It would appear that the same thing happens cognitively.

Those of us, like myself, who follow the methods of the metaphorical interface in our daily lives often seek guidance from computer gurus. And those gurus, when they are not repairing malfunctioning machines or recalcitrant code, operate their computers in the symbolic realm made possible by the GUI.

We seem to have some difficulty doing this in our world of faith and science. This is usually because each side insists that its way of understanding the world is the truth, and therefore the other's cannot be. But a model of an atom isn't what an atom really looks like, because an atom is smaller than a wavelength of visible light. All of our understanding is metaphor and artistic license at some level. In my view, we have understandings at different levels.

Now, perhaps I've offended some religious people by saying religion is metaphor. But all sacred texts were written to be understood by people, not by gods. All of our understanding is metaphor. "For now we see through a glass, darkly" a biblical passage says. We understand the world by telling stories about it, and deciding which best describe it. Sometimes, as with math, the stories can be very precise, and the grammar quite rigorous, but they are stories none the less.

Tuesday, August 26, 2014

On the spell of the spiritual and the mechanism of philosophy

by John MacBeath Watkins

The Guardian has an interesting article on the failure of anglophone philosophy here. In it, Roger Scruton argues that the analytic philosophy of English-speaking philosophers has taken philosophy out of realms where it might be relevant to peoples' lives.

Scruton says:
Academic philosophers in the English-speaking world still regard philosophy as Locke defined it in the 17th century, as “the handmaiden of the sciences”: it doesn’t explore the world beyond science but the limits of science, with the result that philosophy doesn’t really intrude into the public world. In the early 20th century we were caught up by the movement to form analytical philosophy, based in the study of logic, the foundations of mathematics, the syntax of ordinary language, the validity of arguments, something very formal. So when people have a big question, especially now since the decline of the orthodox religions, they don’t turn to philosophy for the answer but try to formulate it in whatever technical words have been bequeathed to them, and when a scientist comes along and says “I have the answer”, or even “there is no question”, they think “this guy knows what he’s talking about, I’d better lean on him”.
The French, he notes, did not fall into this trap. Sartre was willing to address the great moral questions, even if the morality of his actions in World War II might be a little questionable (he gained his teaching position during the war because Vichy law eliminated a Jew from that position, and chose not to be active in the resistance.)

But Scruton fails to note that many people don't look to science for their answers. Some turn to religion, some turn to New Age gurus. Both reflect a backlash against the Enlightenment ideas reflected in modern philosophy. Most modern philosophy (yes, even the French) is unwilling to deal with the spiritual feelings people have.

Part of the problem is that people tend to believe in the spiritual in an a priori manner, and will interpret any attempt to analyze it as an attempt to destroy it, to reduce it to the physical world. Any logical and analytical approach to the spiritual that does not treat the existence of the spiritual as an accepted fact and a realm not readily explained by the physical world will be seen as the reductive destruction of the spiritual, equivalent to trying to understand the Mona Lisa by grinding it to powder and doing chemical analysis of the molecules.

Any attempt to find the part of the brain that needs to believe in god will receive this reception. My own attempts to understand the spiritual in terms of the ethereal parallel world of symbolic thought have been received this way. As an agnostic, I am open to the possibility of the existence of a spiritual world but not convinced of it. And I have to wonder, if we could understand the spiritual world, would that be tantamount to its reductive destruction?

In my series of posts on the strangeness of being human, I have stuck with trying to explain what I can, which has restricted me to the physical and analytical. I remain skeptical of those who claim a special knowledge of the spiritual world, because so many have been shown to be frauds, but I respect the impulses and the work of sincere ministers of many faiths. For many people, faith has been a support spiritually, psychologically, morally, and socially. Scott Adams, long a vocal atheist, said on his blog recently:
In recent years I've come to see religion as a valid user interface to reality. The so-called "truth" of the universe is irrelevant because our tiny brains aren't equipped to understand it anyway. 
As a pragmatist, I find this appealing. Were I a Christian, I might find it appalling, for the same reason the Catholic Church found Pascal's Wager appalling: it does not accept the truth of religion as its reason for practicing religion.

Yet in many ways, worrying about the truth of religion is a modern luxury. If you lived in most societies for most of the history of religion, the penalty for failing to believe in the God or Gods of your people was death, ostracism, or incomprehension by your fellows. The notion that religion should have to justify itself was uncommon until recently. Socrates was charged with undermining the young's faith in the gods, and condemned to death. Society was punishing him not for proving the gods did not exist, but for raising the question of how we might logically confront religion.

Thomas Aikenhead was executed in Scotland in 1697 for the same thing. Thomas Hobbes might have lost his life on a charge of blasphemy for claiming that God exists but is a material being, had he not had the protection of the king, whom he had tutored.

Although Aikenhead was the last person in the United Kingdom executed for blasphemy, the last successful prosecution in the UK for blasphemy was in 1977. The law has since been repealed.

There are parts of the world where the law says you can still lose your life for leaving the established religion, although in the best-known cases governments have backed off.

But even for the unchurched, the spell of the spiritual has an appeal that the logical mechanisms of philosophy cannot address. This is an interesting problem, because for centuries, philosophy was taught in Europe at Christian institutions. In fact, if you wanted to be educated in Europe after the rise of Christianity, for centuries you had to take orders.

This led to exactly the sort of reductive logic-chopping we now see in our more materialistic philosophy. The Schoolmen were ridiculed for arguing how many angels could dance on the head of a pin (my view is, all of them or none of them, depending on whether angels have a sense of rhythm -- after all, they are as immaterial as the question.)

So the problem of the relevance of academic philosophy is not a new one. One aspect of the academic environment is that to be wise, you must specialize, so that you may know more about something than anyone else; that specialization takes you away from the big questions. Another is that the trap of irrelevance is not always obvious. The question of whether angels had a material presence interested some philosophers, and the image of them dancing on the head of a pin was a thought experiment intended to illustrate it.

The real trap was in failing to understand that, in the grand sweep of things, whether angels had a material presence was irrelevant to the important questions of how we should live. The conversation became attenuated because those involved did not realize that they had lost the plot.

And if philosophy leaves the questions of how we should live our lives to the soft science of psychology or the realm of new-age gurus, it will be irrelevant to the questions they attempt to answer. Perhaps these questions are not the ones modern philosophy wishes to deal with, but if so, people will continue to ask, what is it for?

Scruton thinks the notion that philosophy is the handmaiden of the sciences makes it beholden to them, but that is wrong. Philosophy is the mother of the sciences, having spun them off. There was a time when naturalists called themselves "natural philosophers." It was philosophers who first examined the basic questions of physics, math, and astronomy.

Philosophy should not now turn its back on its children, but should integrate them, and show how they affect the way we live. But it seems to me that philosophy is the child of the spiritual rather than its queen or mother. We first tried to understand the world in a poetic and mythic way, and only later brought our problem-solving logic to bear on those understandings. It is much harder for the spiritual's logical child to understand its parent, because its business has been to supplant mythic understanding with logical understanding.

But it can talk about the questions the spiritual attempts to answer. After all, the Buddha had little to say about the gods, nor did Confucius. The question is, will academic philosophy reward such efforts, or view them as an enterprise best left to some other field of study?



Friday, August 22, 2014

On the illusion of the self: The Strangeness of being human #27

By John MacBeath Watkins

As we discussed in an earlier post, Julian Jaynes introduced the intriguing concept of the origins of consciousness in the bicameral mind. He supposed that brains worked differently until about 1200 BC, that the part of the brain that produces hallucinations was speaking to us with the irresistible compulsion of the voices of the gods.

This represented a different sort of mind than we now experience, a mind without the metaphorical self-narrating person in our heads.

This brings up several questions. Jaynes claims that only the mentally ill still hear voices from that part of the brain, which is not much used by modern humans. But surely the part of the brain responsible for these hallucinations existed prior to human culture. What role did it play before that, and what role does it play in the style of perception used by animals other than man? Is it part of a system of perception for a spiritual world that is real, or the source of the invention of the spiritual? 

I propose that the supposition of the breakdown of the bicameral mind is unnecessary. Psychologists refer to a healthy psyche as a well-integrated personality. This recognizes that a personality is made up of many motivations, often conflicting – the self who wants sweets and the self who wants to be slender, the self who wants children and the self who is selfish, the self who aspires to goodness and the self who cheats on its spouse. Some of us avoid conflicts by compartmentalizing. Some actually fragment into different personalities.

There was a case a few years ago in which a man was accused of raping a woman with multiple personality syndrome. What had happened was that the accused had started having sex with the woman's adult personality, then asked to speak to her little girl personality. The woman had consented to have sex in one personality, but not in the other – in fact, that personality was incapable of consenting to sex. The man was convicted, but the conviction was overturned.

That the woman had shattered into several personalities is considered pathological, but what if a single, well-integrated personality is as much an hallucination as the gods were? Does that mean that neither is real, or that both are real, or something in between?

I propose that both are ways of constructing reality. Scott Adams says that religion is a pretty good interface with the world, and I suspect that for many people it is. Think of it as a graphical user interface. The real world of computers is a world of 1s and 0s, but this is not a way of thinking about computers that enables us to work smoothly with them.

Similarly, the world we perceive is one of differing amplitudes and frequencies of light and sound, of the atoms we are composed of interacting with the atoms of other objects. Who knows, it may even be one of our spirit interacting with other spirits, though I see no particular need to suppose this. We have several levels of perception, memory, and constructing all the evidence of our senses into a narrative that “makes sense” of our lives. The product of all this is a useful interface, a sort of useful illusion of the world.

When societies became larger and needed coordination beyond the clan level, we developed institutions and patterns of behavior that made that possible, resulting in the great age of religion, which gave societies a sort of group mind.

This group mind gave us a structure that allowed stable societies of great size to develop, but it was not adaptable. As Jaynes pointed out, in the Iliad there are almost no references to individuals having motivations that were not the gods dictating their actions. The later Odyssey is all about one clever, adaptable individual making his way through changing circumstances that his gods did not issue instructions for.

About the same time, the great age of prophecy began, and for about a thousand years, new religions told people how to act as individuals. And those religions focused on human prophets more than on ethereal gods. Mohammed gave the word of God to Muslims, Jesus gave the word of God to Christians, and while Siddhartha had no brief against the Hindu gods, his followers focus on his teaching more than on worshiping those gods.

Each, in his own way, taught people not to be selfish. It may have been literally unthinkable in the age of myth to be selfish, but in a world where adaptable individuals made their way, it was an ever-present danger.

And it is a danger. Any society that relies for its survival on people having and raising children requires some level of self-sacrifice. Any society that needs to defend itself from aggressive neighbors requires it as well.

We live in a transitional era, when adherents of the prophets are worried about the relentless rise of unbelief, when prophets of the Singularity are trying to invent an entirely material god, when atheism is no longer the creed that dare not speak its name. Reason rules our world more than myth, although often, it is motivated reasoning that seeks out desired conclusions.

But what role does reason really play? Often, our reason justifies things we already want to do, but have not consciously acknowledged. What if, when we spoke to the gods to get our guidance, the same thing was happening there as happens when we talk to ourselves?

If Jaynes was right about the literary evidence pointing to a different sort of mind prior to 1200 BCE, it may be that it was a different way of integrating a personality than our current mode, rather than a completely different way of using our brains.

The strangeness of being human is a series of posts about the way language makes us human, giving us abstract categories we use to think and memes that make up much of what we are.

1. http://booksellersvsbestsellers.blogspot.com/2011/06/to-read-is-to-become-stolen-child.html
2. http://booksellersvsbestsellers.blogspot.com/2012/03/on-disenchantment-of-world.html
3. http://booksellersvsbestsellers.blogspot.com/2012/02/blue-man-speaks-of-octopus-ink-and-all.html
4. http://booksellersvsbestsellers.blogspot.com/2012/05/bicameral-mind-and-strangeness-of-being.html
5. http://booksellersvsbestsellers.blogspot.com/2012/05/structure-of-thought-and-death-of.html
6. http://booksellersvsbestsellers.blogspot.com/2011/11/ane-how-will-our-minds-be-rewired-this.html
7. http://booksellersvsbestsellers.blogspot.com/2012/07/sex-death-and-selfish-meme.html
8. http://booksellersvsbestsellers.blogspot.com/2012/10/what-is-soul-of-man_10.html
9. http://booksellersvsbestsellers.blogspot.com/2012/11/stories-language-parasites-and-recent.html
10. http://booksellersvsbestsellers.blogspot.com/2013/02/god-language-and-structure-of-society.html
11. http://booksellersvsbestsellers.blogspot.com/2013/02/be-careful-who-you-are-more-on.html
12. http://booksellersvsbestsellers.blogspot.com/2013/02/the-strangeness-of-being-weird.html
13. Night of the unread: Why do we flee from meaning?
14. http://booksellersvsbestsellers.blogspot.com/2013/03/night-of-unread-do-we-need-ethnography.html
15. http://booksellersvsbestsellers.blogspot.com/2013/03/when-books-become-part-of-you.html
16. http://booksellersvsbestsellers.blogspot.com/2013/04/drunk-on-milk-of-paradise-spell-of.html
17. http://booksellersvsbestsellers.blogspot.com/2013/04/the-power-of-forbidden-words-and.html
18. http://booksellersvsbestsellers.blogspot.com/2013/04/so-like-filler-words-you-know-they-uh.html
19. The conspiracy of god, the well-intentioned lie, and the strangeness of being human
20. Spiritual pluralism and the fall of those who would be angels
21. Judging a book by its author: "Fiction is part confession, part lie."
22. What to do when the gods fall silent, or, the axis of ethics
23. Why do we need myths?
24. Love, belief, and the truth we know alone
25. "Bohemians" -- The Journey of a Word
26. On being a ghost in a soft machine
27. On the illusion of the self