Wednesday, June 1, 2011

I hate Facebook

(This is the whole, uncut content of my last posting on my facebook page. I don't even want to go into all the hassles I went through to break it up into modules of the character size that facebook allows. Or speculate how long it will last once facebook realizes that it was an anti-facebook posting.)

Hi to all my facebook friends who jumped over here after reading my last posting. You really are my friends and I really do like you, but I HATE FACEBOOK! So do not take it personally. (That incredibly stupid question party game thing really topped it for me.)
I encourage all of you to communicate with me through my blog, my website, and email.
http://worldpeacealgorithm.blogspot.com and
My website
buck@matterofmail.doc
moono@seawolf.sonoma.edu
buck@peacemoon.org
Also, I highly recommend two good books for your edification:
"You Are Not a Gadget" by Jaron Lanier and
"Program or be Programmed" by Douglas Rushkoff.
Plus, watch this very enlightening TED.com interview with Eli Pariser about facebook and google's hidden agenda.
These are NOT Luddite, anti-cyberspace books. They are written by guys in the high-tech field who explain why the promise of the global internet revolution has been betrayed by the dumbing-down of content by these stupider-than-thou (my expletive) social networks that make you do things you would never do if you were using your own blog or had designed your own website.
READ AND WATCH ALL THE ABOVE NOW BEFORE FACEBOOK MAKES IT ALL DISAPPEAR.
--BUCK

Monday, May 9, 2011

antiwar memes

      While I am sympathetic to Floridi's ideal that IE (Information Ethics) should apply to everything, meaning it is ontocentric rather than either biocentric or anthropocentric, I am not sure that it is necessary in order to deal with information. I am reluctant to bring up Nietzsche in this context, but he did have some wise words about information philosophy. First he noted that all "truths" are really just metaphors, metonymies, and anthropomorphisms. I find that redundant, because the reason truths are metaphors and metonymies already follows from the anthropomorphism. In other words, we cannot escape the anthropocentric stance. Rocks cannot think of themselves as ontocentric and flowers cannot think of themselves as biocentric; it is only our anthropocentric stance that gives them "respect" as moral patients. Nietzsche also pointed out that since all truths are lies, the only honest lies are the ones we make up ourselves, after first understanding that it is all lies. As I said, I appreciate Floridi's ecological perspective, but I do not think he has thought out the implications of such inclusiveness. In the worldwide ecology of the biosphere we can separate moral responsibility from moral accountability, but I don't think we can in the infosphere. Floridi seems to think that hacking can only increase entropy, but I maintain that hacking is sometimes the only way to counteract violent evil in a nonviolent way. Companies that buy, sell, or deliver weapons to third-world countries (usually selling to both sides in any conflict) should be at least exposed, if not sabotaged, and this can be done nonviolently in the infosphere without killing anyone. I would call that reducing entropy.
      I am opposed to hacking against nonviolent groups merely for exercising their freedom of speech; I am only concerned with neutralizing actual violence. See my website: http://www.peacemoon.org Obviously there is a fine line between the violent and the nonviolent when it comes to hate groups like the Westboro church and the Stormfront Nazis, so I guess individual hackers will have to make decisions on a case-by-case basis. Then there is the problem of "hacking the hackers," which takes it all to another level. For instance, I totally support WikiLeaks, and if some hackers tried to mess with them, I would like to recommend that somebody hack the hackers in return, but then the whole thing could get waaaay out of hand. This could take a whole research paper to think about.
      What is scary to me about this news article is not what hackers can do; it is that the superpowers who have not given up violence as a problem-solving strategy are trying to frame the situation so that they can call all hackers terrorists and deal with them violently. Iran, Israel, and China are not the only superpowers in this category, if you all know what I mean, and I think you all do.
The hackers who would be most vulnerable to the charge of "terrorist" would obviously be the independent or NGO hackers. You know damn well that the superpowers have their own hackers trying to unilaterally neutralize some other superpower's weapons systems, but they aren't called "terrorists."
      If only one superpower gets its weapons neutralized, even nonviolently, it can legitimately yell "terrorist," because that would make it vulnerable to the other violent superpowers. The only solution is not unilateral but omnilateral: a coordinated strike by all hackers, worldwide, to nonviolently neutralize all weapons systems at the same time. It would also have to be a "surgical strike" that targets only weapons systems and not any other infrastructure, and especially not the internet itself. It can be done, but only the same way that porcupines make love: verrry carefully.
      It has occurred to me, since my previous posting about hacking the hackers, that while some teams of hackers can specialize in hacking and sabotaging weapons manufacturing and delivery systems, other teams can specialize in hacking those hackers who are working on the side of the violence mongers, and a third set of teams can specialize in designing firewalls to protect all infrastructures, especially, but not exclusively, hospitals and other emergency services. By "teams" I mean individual hackers who are spread out all over the world but coordinate themselves through underground communication networks, with no central control that can be neutralized. Also, since the superpowers are trying to frame the ethics of information by calling all hackers (except their own, of course) "terrorists," it is important that the world peace hackers refuse to buy into the rhetoric of calling this "cyber-warfare" or "a cyber war." While the terrorist and government-sanctioned hackers will be waging cyber-wars against each other, the NGO hackers will be waging a cyber "antiwar" against both sides of any conflict. In chapter 8 of our textbook, John Arquilla is obviously trying to encourage terrorists to switch from violent acts against civilians to nonviolent hacking methods (ha, like terrorists are reading his article?), but he admits that there is no guarantee they will have any change of heart. A well-coordinated network of antiwar hackers could force all parties to settle their conflicts using nonviolent strategies. They do not even have to be NGOs; there are plenty of small, neutral, non-superpower countries with the resources to aid the antiwar campaign. By using all forms of media to spread the antiwar meme, the antiwar hackers could shame the superpowers into giving up violence as a problem-solving method. Then the truly "rogue" states and terrorist organizations would be isolated and labeled as such, and could then be neutralized with economic sanctions and no popular support.
       As Isaac Asimov wrote: "Violence is the last refuge of the incompetent."

Monday, May 2, 2011

You Are Not A Gadget by Jaron Lanier
Alfred A. Knopf, New York, 2010.
      On one hand, I agree with Lanier about almost everything. On the other hand, I do not see the problems as being as drastic as he sees them. While Lanier and I both agree that humanism is more important than the bit, I see most of the problems he raises as temporary and self-correcting. To jump right into my conclusions: (1) Even if computers can be programmed for Artificial Intelligence, they will never be more intelligent than humans. (2) If there ever is a "singularity," it will be an exponential increase in human intelligence, not artificial intelligence. (3) The problems that Lanier discusses can be divided into (a) those that can be solved by re-programming computers, and (b) those that can be solved by leaving computers out of it altogether.
      I'll start with Lanier's examples of music. He mentioned that MIDI was designed from the viewpoint of a keyboard player, so it uses discrete notes rather than the sliding pitches of the voice or a violin. But since computers are digital anyway, I am not sure how that can be avoided. I still have some of my vinyl albums of the same music that I also have on CD. The vinyl ones, being analog, sound better, because the digital recordings leave out the things in the spaces between the discrete states. I think Lanier is a genius, but I also think that he has succumbed to the very problems he is trying to warn us about. He is so immersed in the computer programming culture that he thinks it is more important than it really is. He really wants us all to be Humanists, but he is still seeing Humanism from the point of view of a Reductionist. I think this is ironically very funny, but sad too. I am not criticizing what he is doing, because he had to immerse himself in it to learn as much as he knows about digital technology. All reductionist scientists have to think that way to be any good at what they do. I respect that. I should also give him credit for having the intelligence to eventually figure it out for himself.
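      To make the MIDI point concrete, here is a minimal sketch (in Python; the function names are my own hypothetical choices, but the note-number formula is the standard MIDI convention) of the quantization involved: MIDI forces a continuous pitch onto the nearest of only 128 numbered notes, so a violinist's slide gets snapped to the nearest key. (MIDI does have pitch-bend messages, but they only prove the point: the slide has to be bolted onto a discrete note.)

    import math

    # Standard MIDI convention: note 69 = A440, 12 notes per octave.
    def to_midi_note(freq_hz):
        return round(69 + 12 * math.log2(freq_hz / 440.0))

    def to_freq(note):
        return 440.0 * 2 ** ((note - 69) / 12)

    glide = 440.0 * 1.03  # a violinist sliding 3% sharp of A440
    note = to_midi_note(glide)  # the in-between pitch is lost
    print(round(glide, 2), "Hz becomes note", note, "=", round(to_freq(note), 2), "Hz")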
      I agree with him that the online culture sucks, but there is no law that forces us to spend all our time online. Some of us have a life, and having a life means putting all the great things that computers and the internet are good for in perspective. Also, I am not quite sure that the online culture sucks solely because of the lock-in of the software. I can see his point about social templates reducing human personality to the level of programs, but that is also the fault of the people who do it to themselves by buying into that culture. Without critical thinking, computers are no better than any other medium; with critical thinking, computers can be very useful and harmless.
      I am not sure why Lanier expected "open culture" to produce more creative people, but because he did, I do understand why he feels disappointed that it did not. So the mashup of retro culture on social networks and YouTube is stupid. That is not all the computer's fault. Most of the people who enjoy "reality TV," "stupid pet tricks," and "America's funniest videos" are just average people who would never be creative in any medium, pre-web or post-web. The reason they make silly videos or do and say silly things is that they are entertaining themselves, and this is the only level of creativity they can manage. Really creative people can work in any medium. It is not that there are no creative people online; it is that the silly ones are in the vast majority and you have to search very hard to find the creative ones. If computers were really creative, i.e. had emotional and value-judgment capabilities, then search engines could easily weed out the non-creative online postings and favor the creative ones. But they can't, so they won't. And there is not enough space here for me to explain all of what that implies, but I briefly hinted at it in my previous essay on Turing and Watson.
      I am mostly in agreement with Lanier when he discusses the economic problems, but I do not agree that the internet caused them. I think they were inherent in our economic system, and when we change our economic system, some of them will go away. To me, the economic problem is the most important one that Lanier discusses. He uses the phrase "we are facing a situation where the culture is eating its own seed stock." I agree. So how do we fix that?
      I am a process philosopher, so I am always skeptical of Either/Or fallacies and discrete-state thinking. I am a humanist, not a reductionist, but as I think I mentioned in a previous essay, reductionists have a valuable place in the evolution of the human mind. Lanier is thinking like a reductionist because he is thinking digitally, even while he is trying to criticize digital thinking, but that is a good thing, not a bad thing. I do not think digitally, but it is educational for me to read someone who does, so that I can get a better understanding of it. And I also think that reductionists who read the works of the holistic scientists and the Gestalt/Existential psychologists can learn something to factor into their own inductive data. To me, all reality is a continuum, a spectrum of differing data, so I find it very enlightening to observe and examine how other human beings arbitrarily divide up the spectrum into their personal and social constructs of discrete states. Of course, some things fall more easily into natural categories, but most do not. As a process philosopher, for instance, I do not believe that inductive and deductive reasoning are as different from one another as some other philosophers may believe. I also do not believe that metaphysics and epistemology can be separated. I am not going to argue those points in this paper; I only mention them to point out that there is a whole spectrum of ways of thinking about reality, ranging from extreme reductionism to extreme holism, and further, that the whole spectrum can be divided up into any number of arbitrary digits. (The process of human intelligence evolution is partly driven by a synthesis of all the various discrete points on the spectrum between those two extremes.)
      My computer programming knowledge is pretty limited, but I am sure that today's computers are so much more powerful than those of twenty and thirty years ago that someone could easily engineer 64-bit, 128-bit, and much higher architectures, if they haven't already. And if they have, then they should be able to design better fake-analog replications of things. It is easy to tell the difference between a true analog clock or watch and a digitally faked analog clock, because you can see the hands jumping between the seconds instead of moving smoothly around the face. But if each word in the chip had, say, 128,000 bits, you wouldn't be able to notice the discrete jumps of the second hand.
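      Here is a minimal sketch of that resolution argument (in Python; the function name and the specific step counts are my own hypothetical choices): the same second hand looks jumpy when it can occupy only 60 positions per revolution, and effectively smooth at about a million.

    def hand_angle(t_seconds, steps_per_rev):
        # Angle of a second hand that can occupy only `steps_per_rev` discrete positions.
        step = round(t_seconds / 60 * steps_per_rev) % steps_per_rev
        return step * 360 / steps_per_rev

    # A quartz movement jumps between 60 positions; 2**20 positions looks continuous.
    for steps in (60, 2**20):
        print(steps, "positions:", round(hand_angle(12.34, steps), 4), "degrees")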
      High definition of anything just means more pixels. A line can in principle be divided into an infinite number of discrete points, but a screen can only render as many of them as it has pixels. As Lanier pointed out, this is less of a problem with hip-hop music, which is built on discrete, even beats and notes.
      But all of that only covers video and music. The plastic arts, painting and sculpture, as Lanier pointed out so well, can never be represented digitally (and I must add that it is a waste of time to even try). Lanier says, "What makes something fully real is that it is impossible to represent it to completion." So, even given an infinite number of pixels (resolution), "...an oil painting (or any other plastic art) changes with time..." I think he almost gets it here, but thinking as a reductionist, he seems to be saying that the problem arises because, as programmers, we can never remember to add all the dimensions of change that a real object will go through: "but it will always turn out that you forgot something like the weight or the tautness of the canvas." A process philosopher, however, would point out that all of reality is a work-in-progress. When the sculpture or painting or whatever decays, it gradually turns into something else, so that over time it reaches another discrete point on the continuum of reality, and even human programmers do not (yet) have enough inductive evidence in their wetware databases (to borrow a sci-fi term from William Gibson) to run a statistical probability calculus that could determine what the real object may turn into at any given time in any given space. The reason I modified that sentence with the word "yet" is to make my point that if there ever is a singularity, it is because humans will expand their intelligence exponentially as they try to make computers smarter. But the computers can never catch up!
      My problem, because it concerns me economically and personally, is writing. And this is where the economic system, not the digital system it supports, is the real problem. When Lanier says the culture is "eating its own seed stock," he is talking about all the non-creative non-artists who are using old media to create retro mashups in the new, digital media. But I take it as a reference to all the bad business decisions of the print media to go along with the digital fad of "open culture." They are cutting their own throats, making themselves obsolete. Writers should get paid for what they write, and it is up to the publishers to enforce that. When I said, earlier, that there is no law that says we have to stay online 24/7, I meant no "legal" law. But by putting all their energy eggs into this digital culture basket, and abandoning all the other cultural outlets, the media barons have created an economic law that forces us to stay online by default. I do not have a political/economic solution to this yet; my point is only that the digital culture is only partly to blame.
      Since this essay represents only one discrete point on the continuum of philosophic thought, I must arbitrarily cut it off here to avoid continuing ad infinitum, ad nauseam.


Monday, April 25, 2011

Turing paper

This is a paper I wrote for Phil. 306, Computer Ethics, at SSU. It is the first in a series of essays I plan to post on this blog.

Alan Turing, Daniel Dennett, & IBM/Watson
      The first part of Alan Turing's paper brings up the question of whether a digital computer can theoretically imitate human thought well enough to fool a real human. Turing believed it is possible. The real point of this "imitation game," however, is to explore the hypothetical possibilities of Artificial Intelligence. A reading of Daniel Dennett brings some input from neural science and consciousness theory to the discussion. Videos about IBM's Watson computer offer a concrete example of a machine that has been programmed, according to the criteria of Turing's thought experiment, to be tested by playing the game of Jeopardy. This paper tries to assess IBM/Watson's potential as it relates to Turing's paper.
       The bulk of Turing’s paper is concerned with addressing specific arguments against his hypothesis. I am taking the position that his hypothesis is valid, but I want to explore how well he defends it, specifically concerning the following issues:
      Even if a computer can successfully imitate a human, it does not count as "thinking" unless it can also be shown that humans can "think"; otherwise one could argue that humans are only meat puppets anyway, and a machine imitating one is no big deal. And that brings into the discussion the questions: What is Intelligence? What is Consciousness? What is Free Will? And how do all these concepts relate to each other? This is where Daniel Dennett offers some input.
       Then, if we can safely assume that humans are not just meat puppets, does Turing adequately explain how a digital computer, (being a discrete-state machine), can imitate a non-discrete-state human?
      Turing's first two issues, the Theological Objection and the "Heads in the Sand" Objection, are easy to dismiss. His third is the Mathematical Objection, based on Gödel's theorem that any logically consistent system will necessarily contain statements that can neither be proved nor disproved. Turing dismisses this objection as irrelevant, since it also applies to the human intellect. I agree.
      Turing's fourth issue is the Argument from Consciousness. This is the most interesting argument and, as he notes, the ones that follow are just variations on it. But his ninth issue, the Argument from Extrasensory Perception, is, to this writer's mind, a complete waste of time. Turing says it is a strong argument, and even claims that the statistical evidence for telepathy is "overwhelming." I don't know what kind of statistics they were doing in his time, but the inductive evidence for ESP would not come anywhere close to an acceptable Pearson r correlation today. It is considered pseudoscience by most scientists. Maybe Turing threw those paragraphs in there for comic relief.
       Turing’s strongest defense of his hypothesis is in the last five pages under the heading Learning Machines. To discuss learning, we have to go back and include the issues raised by the questions of consciousness, and the question about the difference between discrete-state computing and non-discrete-state computing.
       I began this paper by discussing Turing’s points in the order that he presented them, but now I have “decided” to rearrange them the way I “want” to. Turing did the same thing when he wrote, “Let us return for a moment to Lady Lovelace’s objection, which stated that the machine can only do what we tell it to do.” Can a machine “choose” to digress from its program? (The reader does not have to answer rhetorical questions; the answer is “yes.”) The digression is called a “subroutine” and the “choice” is made by whichever “state” the computer finds itself in. I am not going to get into a nit-picking language quarrel about whether modules and subroutines are not just part of the greater algorithm that computers use for their book of rules to follow, because it can then be argued that humans make their “choices” the same way. The real question is whether the difference between a discrete-state computer and a non-discrete-state human mind is an important one. In this same context, Turing goes on to use the analogy of a neutron bombarding an atomic pile of subcritical mass, compared to a neutron bombarding an atomic pile of supercritical mass. The neutron is analogous to an idea that sets off a chain-reaction of other ideas in humans, but not in computers. And then Daniel Dennett says that evolution showed us that consciousness was created, not by a higher intelligence with a teleological algorithm, but by thousands of discrete mini-module bio-electro-chemical algorithms that formed together by chance over a huge expanse of time, and that there is no “Ego” that centrally controls our nervous system, and I say that Lord Byron’s daughter was a brilliant mathematician who wrote computer programs for Charles Babbage’s machine a hundred years before Turing broke the Nazi Enigma code, and I am trying to get all this information down on this paper as fast as I can because I am
running past the word count requirements and I want to complete this algorithm by the time it is due.
      Now, how can a digital computer successfully imitate the previous paragraph, which I just wrote, with scattered, chain-reaction ideas presented in no logical order? That was another trick question; if a human can think like that, a human can write a program like that. (It's called "spaghetti code.") The choices of input, output, and subroutines are not part of the problem. The real problem is that I inserted the feeling of want in there. Does a computer want to finish the algorithm, or does it just do it because it has to? If a computer is instructed by a specific algorithm to write a 300-word paper on a given subject, can it "choose" to write a longer one? Again, to paraphrase Lady Lovelace, it is up to the human programmer to make the input and output choices for the computer. Can a human programmer just ignore the poor computer's wants and feelings? Yes, but first he or she has to program those wants and feelings in there, and this writer does not know anything about how to write a program like that.
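      As a minimal sketch of how such a "choice" works (in Python; the function names and thresholds are my own hypothetical illustrations), here is a program whose "decision" about that 300-word paper is completely determined by whichever state it finds itself in:

    def keep_writing():   return "add another idea"
    def digress():        return "jump to a subroutine"
    def wrap_up():        return "finish the algorithm"

    def choose(word_count, limit=300):
        # The "decision" is fully determined by the current state.
        if word_count < limit // 2:
            return keep_writing()
        elif word_count < limit:
            return digress()
        return wrap_up()

    for count in (100, 250, 400):
        print(count, "words ->", choose(count))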
      I wrote, in a previous digression from whatever subroutine I was in at the time, that Turing's strongest arguments are under the heading of Learning Machines. He makes the brilliant suggestion that, instead of trying to program computers to imitate adult humans, we should program computers to imitate children who can learn to be adults. Brilliant! Program computers to learn! Now we are on the right track if our real goal is to explore the possibility of Artificial Intelligence. We should be writing computer programs that, once written, can re-write themselves after that. We humans re-write ourselves all the time, so why can't we write programs for computers to do the same?
       Turing writes that, “...this (learning) process will be more expeditious than evolution. The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up. Equally important is the fact that he is not restricted to random mutations. If he can trace a cause for some weakness he can probably think of the kind of mutation which will improve it.”
      Turing has given us a brilliant idea here, but his own field is mathematics and computer science. Human psychology does not seem to be his strong point, although he does show a basic understanding of the feedback relationship between genetics and environment. He also apparently has some knowledge of operant conditioning, and he understands how punishment and rewards relate to learning. But here is where Turing seems to be way out of his element. He writes, "The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it. These definitions do not presuppose any feelings on the part of the machine" (my italics).
      The difference between a reward and a punishment is one of value judgment. Values are not like mathematically logical operations; there is no objective "right" or "wrong" answer that a discrete-state machine could understand. Reward signals and punishment signals would have no meaning to a computer that did not have emotions. Why would a computer care whether the programmer thought one signal meant "bad boy" and another signal meant "good girl"?
       I have an extremely limited knowledge of modern computer programming languages, so I will have to take the stance of a theoretical philosopher and engage only in thought experiments. Whenever I hear people talk about how emotions are illogical, or emotions get in the way of clear thinking, or some other variation on the perceived conflict between emotions and logic, I always have to ask, “would humans be as intelligent as we are if, like Spock on Star Trek, we were only logical and not emotional?” I think not.
      I argue that emotions are not any different in kind from what is commonly referred to as "logic." I argue that emotions are logical; they just follow a different algorithm than the traditional "thinking" kind of logic, with different starting premises and different goals. Emotions are the algorithms that contain all the value judgment subroutines. In a "good" state, the subroutine switches to one module; in a "bad" state it switches to another. I will further argue that without emotions, humans cannot learn, except by rote, in which case computers really could imitate humans, because humans would only be meat puppets. True learning, in the higher, human sense, means growing and improving, and this cannot be done without value judgments. I hypothesize that the human nervous system has more than one algorithm running at the same time, in parallel, and that the emotional logic interfaces with the computational logic during various subroutines, creating a self-conscious gestalt that can make value judgments about what it wants to learn and how it wants to learn it. I further hypothesize that a computer could be programmed with separate but parallel algorithms, one with a purely mathematical component that makes "yes" and "no" judgments, and another with a value judgment component that makes "good" and "bad" judgments, with various interfaces where one of the four combinations of those "states" directs the subroutines. The four combinations would be yes/good, yes/bad, no/good, and no/bad. A mathematical model of how operant conditioning works could produce a learning algorithm. This might easily be done using binary math.
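      As a minimal sketch of that hypothesis (in Python; the actions, update rule, learning rate, and noisy teacher are all my own hypothetical choices, not a claim about how a nervous system works), here is a toy operant-conditioning learner whose yes/no channel judges correctness, whose good/bad channel judges reward, and whose four combined states drive the updates:

    import random

    ACTIONS = ["a", "b"]
    preference = {a: 0.5 for a in ACTIONS}  # learned tendency to pick each action

    def correctness(action, target):
        return "yes" if action == target else "no"

    def valence(reward):
        return "good" if reward > 0 else "bad"

    def update(action, state, rate=0.1):
        # yes/good reinforces, no/bad punishes; the mixed states nudge gently.
        delta = {"yes/good": rate, "no/bad": -rate,
                 "yes/bad": -rate / 2, "no/good": rate / 2}[state]
        preference[action] = min(1.0, max(0.0, preference[action] + delta))

    target = "b"  # the environment secretly rewards "b"
    for _ in range(200):
        weights = [preference[a] or 0.01 for a in ACTIONS]  # keep a little exploration
        action = random.choices(ACTIONS, weights=weights)[0]
        # A noisy teacher: one time in ten the reward contradicts correctness,
        # so all four combined states actually occur.
        reward = (1 if action == target else -1) * (1 if random.random() > 0.1 else -1)
        update(action, correctness(action, target) + "/" + valence(reward))

    print(preference)  # the preference for "b" should now dominate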
       If my hypothesis is tried and does not work, maybe someone with more knowledge than I have about programming could look into the possibility of non-discrete-state programmable computers. Now that I have gotten that rant out of my system, I will try to examine the IBM/Watson.
      Could IBM/Watson pass the Turing test? Yes, if we are using the "imitation game" criterion. I think that chess-playing computers already have done so, although I am sure that any grand master chess champion who lost to a computer would disagree. Machines imitate humans better when they make mistakes and lose. Watson is more sophisticated than a chess-playing computer. A chess-playing computer can only imitate a chess-playing human in a situation where both machine and human are thinking in logical algorithms. As the engineers in the IBM video pointed out, the game of Jeopardy entails open-ended questions stated in natural language. Also, making a mistake does not necessarily count as imitating a human. The IBM video demonstrated the difference between mistakes of fact and mistakes of form. If a computer merely gives the wrong answer, it could be imitating a human, but if the computer cannot even understand the question, as earlier versions of Watson could not, its non-humanness is given away. The newest version of Watson has about a thousand parallel CPUs designed to understand open-ended questions and natural language, as well as a vast database of information on the question categories.
      Is this the realization of Turing's dream? No, it is still only capable of imitating the people who programmed it. It does have the capacity to learn how to make better choices about the probabilities of potential answers being right or wrong, based on previous experience. This is a great advancement over all previous computers, but even this advantage is limited to the databases that the programmers have given it and the algorithms that the programmers have written for it. It does not know how to write new algorithms of its own, based on emotional and esthetic value judgments, as real humans do.
      What might be done with a machine like Watson? The IBM programmers have already figured out that Watson needed thousands of CPUs running parallel algorithms, and some of those algorithms can make limited value judgments based on objective mathematical and statistical logic, but I do not believe that Watson can make subjective value judgments about emotions or esthetics, and I do not believe that Watson can understand the "meaning" of its correct answers. Since Watson's "learning" is done by rote, it cannot teach itself to grow and progress. If some of those parallel CPUs were also running on good/bad code instead of just yes/no code, and if a "starting state" algorithm could be designed, as in Turing's "Learning Child" thought experiment, maybe Watson could teach itself to "think" like an adult human. But even then, it would also have to be rigged up with light sensors, heat sensors, and sound sensors that register not just wavelengths, temperatures, or decibels, but value judgments about those states, like too hot, too bright, or not loud enough. While computers are currently able to run diagnostics on their hardware, Watson should also be able to run diagnostics on all of its software modules, and then rewrite the algorithms that result in mistakes, based on value judgments about what it wants to accomplish. The "Learning Child" starting state should include an algorithm that defines its wants in extremely general, non-specific terms, not like "I want to be a cowboy when I grow up," but "I want to be successful in life," and then the learning programs should be written with the goal of learning what "life" is and what "successful" means. After Watson has been set up, its self-learning program would generate questions to ask real people through audio-visual interfaces. These real people would no longer be "programming" Watson, because they would not be making the input and output decisions for Watson. Watson would be generating its own, original output that requests original input, based on good/bad algorithms created by its starting state of wants. The starting state would be analogous to human heredity, and the output/input feedback loop would be analogous to human operant conditioning based on experience with the environment.
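      A minimal sketch of that output/input feedback loop (in Python; the topics, the update rule, and the stand-in human reply are all hypothetical illustrations, not IBM's actual design): a general starting "want" drives the questions, and good/bad feedback from outside reshapes what gets asked next.

    import random

    topics = {"life": 0.0, "success": 0.0, "cowboys": 0.0}  # learned value of each topic

    def next_question():
        # Prefer topics that still look promising, with a little random curiosity.
        topic = max(topics, key=lambda t: topics[t] + random.random() * 0.5)
        return topic, "What can you tell me about " + topic + "?"

    def learn(topic, feedback, rate=0.2):
        # feedback is +1 ("good": the answer helped) or -1 ("bad": it did not)
        topics[topic] += rate * (feedback - topics[topic])

    for _ in range(50):
        topic, question = next_question()
        feedback = 1 if topic != "cowboys" else -1  # stand-in for a human's reply
        learn(topic, feedback)

    print(topics)  # "life" and "success" end up valued; "cowboys" does not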
       Some people like to argue that machines cannot have emotions or make esthetic value judgments but, as Daniel Dennett has pointed out, human judgments are not that much different from mechanical ones, being just bio-chemical-electrical algorithms. I am not a reductionist myself, but I do believe that reductionists offer valuable empirical data for us non-meat puppet types to use as input for our gestalt-generating programs.


Thursday, March 31, 2011

Comix and satire from the 1960s

I have now updated my website to include most of the comix and satire that I published back in the 1960s, '70s, and '80s. I will also be adding newer humor that I have written more recently.
Click here for comix

Friday, March 25, 2011

Prof. Stanley Fish on Wisconsin Union Busting

On March 21, Dr. Stanley Fish, Professor of Humanities and Law at Florida International University, posted an article on the New York Times OPINIONATOR blog titled "We're All Badgers Now."
In that article he admitted that he has changed his mind from a previous opinion and now supports the public workers unions. His blog posting had mostly positive response comments from intelligent and educated readers. Here is my comment on their comments.

Some very good points have been made here about why academics should side with unions and try to rid themselves of the elitist stereotype, and I am glad to see that Dr. Fish has come over to this side. Some good points were also made about the bigger picture: what the Republican agenda is really about, which concerns all Americans, not just college teachers or unions, but all of us who are either trying to achieve middle-class status or to maintain our vulnerable middle-class status. The Republican agenda is, and has been for many years, to destroy the middle class altogether, so that the rich will be very rich and the poor will be very poor, leaving nothing in between.

Janet, from NYC, pointed out that unions created the middle class, "the engine of our economic vibrancy." Also, I felt some nostalgia reading her quote from Woody Guthrie's "Union Maid" (hum the tune to "Red Wing" in your head to get the full effect).

Anna, from Albuquerque, and Candice, from Berkeley, pointed out that the right-wing agenda is about "privatization" of everything, to be run for profit. (Our current health care system is so far below that of every civilized country in the world precisely because it is privatized for profit only.)

And Michael Beilfuss, from Bryan, TX, pointed out the connection between union busting and the Supreme Court's decision to allow both corporations and unions to spend their money on political donations. The Republican solution to this? Obviously, bust the unions so that only the corporations will have the financial power to elect politicians. Talk about equality in collective bargaining: the corporations already have the "collective bargaining" advantage through their lobbyists, who get hired by the government to run departments they will deliberately trash in order to privatize everything for profit.

But I must take exception to a comment by Doug, from San Francisco, who claimed that "management is not bargaining with its own money" and "you can always go soak the taxpayers for some more," as if that were the fault of the unions. The fact is that taxpayer money is wasted on top-heavy administrations with too many redundant managers. Schools are supposed to be about education, but management seems to think it is only about their salaries and profits. Their idea of saving taxpayer money is not to trim administrative budgets, but to fire teachers, cancel classes, and increase the size of the classes that are left. Oh, yes, and raise the tuition of the students, who will now be getting less for their money than they were before. The modern corporate ethic of making profits without providing any services (Enron redux) exists even in the public-sector management mentality.
One last rant. Beware of using the term "meritocracy" as if it were a good thing. It doesn't even exist. The corporate "for profit" bosses get to define "merit" any way they want, and if you try to frame your moral position in terms of a "meritocracy" you will be falling into their rhetorical trap.

Stay tuned to this blog. On Monday, April 25th, I will be starting a series of essays about computers, cyber-ethics, and the role of hackers in the World Peace Algorithm.
www.peacemoon.org

Sunday, March 6, 2011

Why Peace, Why Not War?

      The following link is to the Thesis page on my website, the essay that explains how the World Peace Algorithm works. It was written almost 20 years ago, so I would really appreciate some comments from my readers. I plan to update it sometime in the near future.

Introduction

--Buck Moon