This year’s Oscars “celebrate” yet another mostly dull year for motion pictures, with about as much drama as a glass of milk; most of the major awards went to films I had never heard of, and never will see. This past year was such a snore that the only film I found worth wasting good money on was “Machete.” I figured it must be good if a white person told me that he didn’t like it; it was also the only film this year with a socially trenchant message: take the bigots’ names, and kick their ass. So what were we force-fed this year? Kings and ballet dancers. If the subject matter is snobby enough, it tends to remind Hollywood “royalty” of themselves, and they can’t get enough of themselves. Everyone knew who was going to win the top awards this year, because “The King’s Speech” and “Black Swan” were the only movies that got much press in the run-up. At least this time we were allowed to assume it instead of being told it was a stone-cold lock; sometimes the winners are telegraphed from a thousand miles away, like when TIME shamelessly plugged Kate Winslet for “The Reader”—apparently because she worked so hard at putting on make-up to look older. Her character’s moral and ethical ambivalence was so disturbing that it bordered on mental retardation; she apparently never once dwelled on the moral question of mass murder, even after years in prison. Was this movie supposed to give us “insight” into why Germans looked away? A law student asked the question; as in the movie itself, he didn’t get an answer.
Meanwhile, boxing movies continue to enjoy some Oscar recognition. “Rocky,” “Raging Bull,” “Cinderella Man,” “Million Dollar Baby” and this year “The Fighter” have all been big winners; “The Fighter’s” Christian Bale won Best Supporting Actor, and apparently not for knocking someone out on the set. Other sports receive less respect; last year, despite his gritty, honest portrayal in the gritty, honest movie “The Wrestler,” Mickey Rourke criminally lost out to Sean Penn; Penn had already won an Oscar for Clint Eastwood’s morally and ethically despicable vigilante movie “Mystic River,” and the social-message ground of “Milk” was already covered when Tom Hanks won for “Philadelphia”—bypassing the more worthy Liam Neeson in “Schindler’s List.” Hilary Swank won Best Actress for the fight film “Baby,” another despicable film by Eastwood which dispensed ugly racial stereotypes freely; after many hypocritical characterizations (including a racist white guy who was made to look sympathetic), Swank’s character was felled by a black fighter with pure evil on her mind, and ended up dying thanks to a mentally incompetent Latino corner man who didn’t know which side of a stool a person was supposed to sit on. The screenwriter of “Baby” was Paul Haggis, who went on to write and direct the Oscar-winning “Crash,” another “even-handed” characterization of race relations which did its best to “justify” white racial stereotypes. I don’t know what was worse—this movie or the dishonest “American History X,” where the effect of neo-Nazi activity was diluted when they attacked unsympathetic black gangbangers; everyone knows that white supremacist terrorists are too cowardly for that—they’d rather attack a couple outside Fort Bragg, North Carolina, or an inebriated man walking down a lonely road in Jasper, Texas.
At least this year, politics wasn’t involved in the Oscar selection. In 2008, “Slumdog Millionaire” won for Best Picture, apparently because of its “exotic” nature. Chicago film critic Jan Lisa Huttner made a stink because, when the various awards nominations were put out, Loveleen Tandan—who was listed as “co-director (India)” in the credits—was not named in the Best Director category. Tandan was embarrassed by the politicking of American feminists, and why not? Tandan was given the credit for PR reasons in India; her role was second-unit expository filming in slum areas where English was unknown, advising on the culture, and performing casting functions when looking for slum residents to put in the movie. The politics carried over to the next year, when Kathryn Bigelow became the first woman to win Best Director, for “The Hurt Locker,” which also happened to win Best Picture. Nearly everyone (that is, civilians) marveled at the film, although one reviewer on some feminist website complained about the lack of female soldiers in the movie and accused Bigelow of being too “macho.” Most Iraq war veterans who commented on the film found it about as realistic as a “Roadrunner” cartoon, which is probably not Bigelow’s fault but the scriptwriter’s, who allegedly was an “embedded” journalist. Probably to keep the budget in check, the film focused on three explosive ordnance disposal specialists who ran all over Iraq in a Humvee; in reality, these soldiers would never be out by themselves, especially without security, yet in the film they are allowed to wander about in open desert, dark alleys and insurgent-infested neighborhoods, and never once does an Iraqi threaten to capture or kill them—nor does an MP or superior stop them and ask why they are so stupid. In one amusing scene, a British unit can’t fix a flat tire or use their sniper rifle, but the EOD guys act like they’ve been doing it all their lives; those Second Amendment rights sure come in handy. But all of that is beside the point; a woman won Best Director, and that is the point.
The political sensitivities of the old fogies in the AMPAS have their limits, of course. This year, not a single black person was nominated for an award. No film directed by a black man or woman has ever been seriously considered for the Oscars’ highest prizes, regardless of whether it made money or not (“The Hurt Locker,” as mentioned before, had more positive reviews than gross receipts). Spike Lee’s sprawling bioepic “Malcolm X” was the kind of film that always gets nominated for Best Picture, along with its director, but “X” was shut out on both counts. There was suspicion that many of the white Academy members were covertly expressing their personal prejudices; people today consider Muhammad Ali a sports icon, but like Malcolm he was in fact hated and feared by a majority of whites for his political and social views in the 1960s.
I will end my Oscar ruminations by noting that there wasn’t even a “feel good” award this year. Last year, Jeff Bridges finally won an Oscar; Bridges is one of those actors who, film after film, achieves such an uninterrupted level of competence that people couldn’t tell whether a given performance was good or bad, because they had nothing to compare it to. Another actor who never seems to be in the right place at the right time is Michelle Pfeiffer, who like Bridges consistently gives fine performances in parts that are not necessarily meant to maintain a carefully-coiffed image. Maybe when she’s 70, the Academy will say WTF, let’s get it right before she dies.
Monday, February 28, 2011
Thursday, February 24, 2011
Lesson from 1848 is wait and see
There will no doubt be much debate by historians about the 2011 Revolution in the Muslim world, perhaps even comparing it to the 1848 Revolution in Europe. Beginning in Italy, the spirit of constitutional democracy spread to the Low Countries, France, Germany, Prussia, the Austro-Hungarian Empire and Denmark. In the Low Countries and Denmark, modest reforms were conducted and accepted, and everyone went home. Elsewhere, a more concerted effort was required to dislodge the monarchies involved (in France, the republic of the 1789 Revolution having been a brief, bloody interlude), and in the end, despite some movement to appease the reformers, those countries’ monarchies remained with most of their authority intact. It wasn’t until after France’s defeat in 1871 that the monarchy in that country was finally abolished, and not until 1918 for the absolute monarchies elsewhere. The reason for this delay was, predictably, that the reform factions were disorganized and had competing agendas; no party was willing to let someone else speak for the whole; in France, public disgust with the infighting allowed “President” Louis-Napoleon to dissolve the National Assembly and crown himself Napoleon III.
There may be a lesson here for those in the media arrogantly acting as if they are a part of history they clearly don’t understand; the business end of "revolutionary" discontent is not the "friendly," fresh-faced kids TIME put on its cover--as some self-imagined media "super stars" operating without the cover of the U.S. military have discovered. Democratic revolution, like all modernizing “revolutions,” seems to come late to the Muslim world, and real revolution does not come without pain. In the West, the Reformation sought to “purify” the Christian religion back to its “essence,” although none too successfully, and merely served as an excuse to litter the battlefields of Europe with dead bodies. The Enlightenment freed up the human mind to discover different explanations for the workings of nature and human intercourse that dispensed with "comforting" notions like the will of God--forcing humanity to confront the costs of its baser nature. The Industrial Revolution began the inexorable march toward demolishing traditional mores and turning this limitless globe into a very small world indeed. Not that the Islamic world would not eventually discover the fruits of these labors following various foreign interventions in the Twentieth Century, but their acceptance has been at best a matter of suspicion and cagey tolerance. Modernization and the West have been so intertwined that their failure to uplift the lives of a majority of Muslims has placed in the minds of many, perhaps most, the conviction that both are anathema to their way of being.
Not that there were no modernist thinkers in the Muslim world; Ahmad Kasravi, an Iranian intellectual, political reformer and anti-clericalist in the first half of the 20th Century, was swayed by Western science, which could explain the workings of the natural world without resort to the superstition and magic tricks taught by Imams. Kasravi also believed that Shiite Imams deliberately led their followers into ignorance and fear for their own selfish purposes—including the cult of personality every Imam seemed to insist upon for himself. He even went so far as to propose that the Shiite faith be cleansed to its essence, meaning stripping Imams of all trappings of personal idolatry and forcing them to perform their true role as “enlightened shepherds.” Kasravi, of course, made many enemies among the Iranian clerics of his day, and he would be assassinated by an Islamic fanatic.
There were constitutional movements in Iran from at least 1900, but there was always only sporadic cooperation between reformers and Muslim clerics; they might agree in principle on a goal (overthrowing the power of the shah), but not on what to do afterwards. The turn-of-the-century National Consultative Assembly sought to give “the people” a voice in affairs, and to secularize some institutions in Iran, such as schools. This was opposed by extremist clerics like Sheikh Fazlullah Nuri, who was eventually tried and executed for treason—but who is now viewed as a “martyr” against the vices of Western-style democracy. Ali Shariati, who was a disciple of Kasravi and the Ayatollah Khomeini’s principal rival in the revolutionary movement, might have led Iran in a different direction had he lived; but he mysteriously died before he could take part in the 1979 revolution.
The media still seems to be under the impression that Western-style democracy and Islamic fundamentalism can coexist. We have been led to believe that the Muslim Brotherhood in Egypt is of fairly recent origin, but in fact it has been in existence since 1928, and even then it was preaching an Islamic state; the Egyptian military is virtually the sole instrument of modernization, and we know that. Niall Ferguson, in his idiotic piece in Newsweek, may not know that (why is it that someone who speaks with a “British” accent can sell any dumb American a sack of shit?), but we should. Westerners seem to believe that with the march of modernity, the Islamic world would become more secularized, and religion would become less important in shaping national, social and economic identity, as happened in the West. But this has not happened except in states (like Turkey, or for a time Pakistan) where powerful secular forces first shaped their destiny. Most Muslims are simple farmers for whom tradition is the guiding force in life; they are not the ones we see marching in the streets. In the West, modernization comes despite tradition; in the Muslim world, tradition is maintained in spite of modernization. If modernization doesn’t conform to Islamic tradition, it is malformed into a shape that we wouldn’t recognize. An unemployed Muslim may not understand the utility of a modern explosive device in his everyday life, but when told by the appropriate person, like a cleric, that this explosive forms a link between yesterday, today and tomorrow in the service of a “tradition” like jihad, he understands it well enough, which shows how little we understand what we are dealing with. And if an Islamist understands “democracy” at all, it is as a legal means of creating a one-party state based on Islamic, not secular, law. What this means is that we should not be jumping up and down shouting “freedom” with these folks. It remains to be seen what comes out of all of this.
Even without the Islamists involved, we have to remember that most of the countries in the Mideast are artificial creations (much as those in Africa are). There are significant religious minorities or competing tribal units, amongst whom there is no shared feeling of nationalism, and those created states have held together through the compulsion of military force; in fact, in countries like Egypt, the military has been, and remains, the only entity capable of maintaining order out of potential chaos. People in Egypt and Tunisia want jobs; they want to be “free” to find jobs—and beyond wishing to express their displeasure without being arrested and tortured, they don’t care if it is Islamists or some other combination of forces that creates jobs. True democracy will not come easily to Egypt (if at all), as it will not in Tunisia or anywhere else. For that matter, we don’t even have a true democracy in this country, just shadow governments run by corporations. We already know that there is no true democracy in Iran, where the Revolutionary Guard is already calling for the arrest, trial and execution of opposition leaders.
Meanwhile, the Economist recently reported that in the West Bank, three-quarters of the people identify themselves with a religious community rather than a national one. Although the Palestinian Authority has regained some control—mainly by replacing Hamas’ charity networks with its own, and suppressing the imams who prefer to preach jihad rather than religion—Hamas is lying low, its leaders readily admitting that they are busy replenishing and re-equipping its militant cells not for peace or nationhood, but to resume the goal of driving Jews out of the region permanently. These are the people that the media and Palestinian sympathizers are calling “victims” of Israeli aggression.
Despite oil wealth and toying with “modernization,” there remains great poverty in the Muslim world, and the reality that must be faced is that Muslims have tended to blame this on the foolish acceptance of Western ideas, for which they are being punished. Islamists, especially Shiite Imams who always railed against acceptance of the West, were left unscathed, even though they didn’t offer anything markedly different save a promise of "paradise" in the hereafter. They now have the upper hand, if “freedom lovers” allow them to have it.
A blown-up deficit prediction, and Dr. Walker's snake oil that will kill every Republican ailment
It seems that the state that was the birthplace of the Progressive Movement has produced something akin to the “It’s Alive” creature in Republican Gov. Scott Walker. There is an audio recording making the rounds on the web that has preserved for posterity Walker unabashedly informing a prankster—who identified himself as billionaire Koch brother David—of his plans to destroy the public employees union just as Ronald Reagan destroyed the air traffic controllers union. He also urged “David” to supply more money for the campaigns of Republicans who might get into trouble for supporting anti-union causes. And not just in Wisconsin: Ohio, Michigan and Florida Republicans needed help too. “This is something big.” They need to get the message out “over and over again,” and that means billionaire, “Citizens United” dollars. Walker is heard discussing the possibility of felony charges against the recalcitrant Democrats, and the faux Koch brother tells Walker, "Once you crush these bastards, I'll fly you out to Cali and really show you a good time." Walker, like Sarah Palin before him, was apparently never aware he was being punked. It just shows you the conceit and egotism of these people.
But knowing that Walker is a fanatic is one thing; getting beyond the Fox News types calling the protestors in Madison “communists” and Wisconsin’s progressive laws a throwback to the “old Soviet Union,” and Comedy Central’s Jon Stewart making an incoherent fool of himself again, what is actually going on there? Yes, we know now that it is very definitely a naked power play by Republicans to eliminate public employee unions as a potential obstacle to their totalitarian ambitions—or rather, the ambitions of those they are fronting for (and this isn’t the only thing they are doing to accomplish this: they are also trying to suppress voting by proposing a voter photo ID law that disallows student IDs, tribal IDs and passports as acceptable identification). But is Walker’s claim true that balancing the state budget requires the removal of public employees’ collective bargaining rights? Walker and the media have told us that the state budget is facing an expected $3.6 billion deficit in fiscal years 2012/13; true—or just a Republican scare tactic to justify taking a chainsaw to everything they don’t like?
The state does have a problem in that it has a large amount of unpaid, but not yet due, debt to pay off. However, the budget “crisis” is still a matter of opinion, and even now not sufficient to require a “budget repair” bill that would allow Walker and his cronies to do whatever they wish. I read a fairly detailed article composed by a conservative organization called the Wisconsin Policy Research Institute. Most of the current estimates of the Wisconsin 2012/13 budget deficit of between $3.1 billion and $3.6 billion assume that there will be no increase in revenues, and that education, health care, government employee costs and criminal justice system costs will rise at their usual rates. The revenue projections based on Global Insight’s estimate of future economic recovery, on which the state Fiscal Bureau based some of its recent provocative suggestions that the state may actually see a budget surplus, assume a 3.2 percent increase in economic activity in the state. The WPRI finds that even if that holds over fiscal years 2012/13, it would still leave a deficit of $2.2 billion, which is still substantial but rather less than the current estimates. The institute then suggests that if spending on the “Big Four” were frozen along with other programs, the deficit would be reduced to $778 million. A five percent across-the-board cut in the state bureaucracy would save another $375 million (close to the $330 million Walker claims he intends to save just by targeting benefits). If the economy grew at a more robust rate (say in the 5 percent range) and spending growth were held to a minimum, there would be a tiny budget surplus. Additional taxes on those making $1 million or more were mentioned, although only as a token amount; the fact is that the tax increase passed in 2009, supposedly meant to fund retraining for the unemployed, was barely a drop in the bucket, although that hasn’t prevented Walker from making the typical unproven Republican claim that rolling back those tax increases will produce jobs and not just more money in the pockets of the rich.
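Just to make the chain of adjustments explicit, here is a rough back-of-the-envelope tally of those scenarios (a hypothetical Python sketch; the dollar figures are simply the approximate amounts cited in the WPRI report as summarized above, not official projections):

```python
# Back-of-the-envelope tally of the WPRI budget scenarios described above.
# All figures are in billions of dollars and are the approximate numbers
# cited in the report as I read them, not official state projections.

baseline_deficit = 3.6  # worst case for 2012/13: flat revenue, usual spending growth

scenarios = [
    # (what changes, estimated deficit that remains)
    ("Assume 3.2% growth in state economic activity", 2.2),
    ("Also freeze 'Big Four' and other program spending", 0.778),
    ("Also cut the state bureaucracy 5% across the board", 0.778 - 0.375),
]

print(f"Baseline estimate: ${baseline_deficit:.1f} billion deficit")
for change, remaining in scenarios:
    print(f"{change}: roughly ${remaining:.3f} billion deficit remains")
```

Run through in order, the numbers shrink from $3.6 billion to roughly $0.4 billion before anyone touches collective bargaining rights, which is the point of citing them.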
Another fact, not mentioned in the WPRI report (or by Walker, for that matter), is that tax liabilities for businesses in Wisconsin are half of what they were in 1981, and that according to the Wisconsin Department of Revenue, two-thirds of corporations in the state pay no taxes at all; of course, in the state of Washington, we also hear corporations crying poverty with high frequency. Making things sound as bad as they possibly can is the typical Republican ploy to destroy people-centered institutions; is it not odd how Republicans run government like a business, and act like typical businessmen and women? Like their private-sector counterparts, they are out to bust unions so they can do as they please, lay off as many people as they please, cut whatever programs they don’t like, and avoid as many taxes as they can, just to make a “profit.” If Republicans were really interested in finding ways to hold down costs, they would attack health care costs, which rise at least ten times the annual rate of inflation for no apparent reason; but they are not interested in reining in the thieves whose deep pockets they depend upon to finance their drive for the total annihilation of some of the founding principles of this country—like life, liberty and the pursuit of happiness. This country is not a democracy anymore; if it is a “republic,” then what we have is what Daniel Ben-Ami called it in the Economist:
“Although democracy does exist in a formal sense in many Western nations, it is far from genuine government by the people. Nowadays voters live in a world of diminished expectations in which elections provide little real choice. The days when political parties represented competing visions of society has, at least for the time being, disappeared. Elections have become technocratic affairs where the electorate, for those who choose to vote at all, have to select candidates on the basis of non-political criteria. These can include efficient management, likeable candidates and objectionable opponents. In effect voters have become disenfranchised from making real choices.”
If Republicans and the Tea Party Movement actually believe in government run by corporations, then they must also have similarly unfortunate ideas about overseeing an economy. In an economic crisis of the kind we have faced, the most intelligent plan is to raise government spending to employ enough people, who become consumers, who spend enough to create sufficient market demand to spur private sector growth. The public sector can draw down once the private sector has made sufficient progress to act on its own. This is how this country has tackled economic crises for 80 years. Why the Obama administration didn’t make this clearer to the public is just another mystery. The U.S. in fact now sees slow but greater growth than Germany and especially Britain, both countries which have tried severe austerity programs, and both of which have failed miserably to convince businesses to do their part by increasing activity. Now, Republicans and their Tea Party allies want to repeat the mistakes of other countries for senseless partisan reasons; the reality is that the U.S. is still far from “insolvent,” and its credit reserves are still more than sufficient to pull the country out of its current bind with additional stimulus, since business still prefers to lag; however, the corporate elite, which bankrolled the Tea Party, has its own agenda, and how far they mean to play their hand before “the people” become wise to them remains to be seen. For now, it is clear that the corporate and business community is calling the shots. Corporations control the media, they control when and if new hiring takes place, and they decide who receives the “benefits.” The Republican governor of Ohio as much as admitted this when he implied that unless there were deep cuts in the public employment sector, there wouldn’t be enough money to give businesses all the additional tax breaks they were demanding.
Brother, can you spare a care?
I once saw an episode of an old television drama called “Route 66,” in which Tod and Buz encounter a young boy whose father has just been knifed to death. He has no mother or siblings, only an aged relative to look after him. Predictably, his emotions are confused and he runs away. Tod wants to look for him; Buz, who was himself an orphan, empathizes with the boy’s feelings and believes he should be allowed to find his own way. Tod disagrees, saying that he believes in the adage “I am my brother’s keeper.” So he leaves and tries to find the boy without success. Buz, meanwhile, has second thoughts; he decides to help Tod find the boy, who is eventually located at his father’s grave site. The boy has no faith left in the justice of the world, but Buz, speaking from his own experience, tells him it’s OK to let out his anguish, and that in the end everything isn’t so dark after all if you just hold on to faith long enough.
In the world of “Route 66,” compassion and self-sacrifice had not yet gone out of style. Regardless of the cost—including the physical cost—Tod and Buz did not turn their backs on those they encountered who were in need. In this episode and many others, the theme of “my brother’s keeper” played out time and time again; unlike later television series that followed the same theme but were more overtly religious, with less-than-human characters, the protagonists of this series could have settled down any time they wished and lived normal lives, but they chose to move on, rebelling against the strictures of a material-gathering society. But what they were able to give, they did.
In today’s world, there is not much mood for being charitable—even the kind espoused by Guy Grand in Terry Southern’s novel “The Magic Christian”—especially among the most fortunate. If corporate America and the super wealthy are not providing the jobs they promised with all the tax cuts they have been given, surely they are giving some of their largesse to aid food banks and homeless shelters, right? Hell, maybe a job or two. No, they are providing aid to other poverty-stricken programs, like the local opera house, symphony and right-wing think tank. There are those like Bill Gates who are doing good work in finding ways to combat problems in Africa (work the Seattle Times recently, bizarrely, accused of being an example of big money buying political favor); but that is over there, not over here. In hard times, organizations that supply aid to the poor and hungry in this country are hard-pressed, because the people whose charity they depend on—the middle class—tend to be hard-pressed as well.
Concerning how much more some people have to be charitable with, this past January the Economist explored the issue of the increasing gap between the rich and poor, and in keeping with its right-wing bent, predictably found it less disturbing than ought to be the case. This despite the fact that today the top 0.1 percent of the population in the U.S. holds 8 percent of the country’s wealth—compared to 2 percent in the 1960s—and that between 1980 and 2006, the earnings of the top 0.1 percent rose at 80 times the rate of the bottom 90 percent. The top 0.1 percent are obviously not paying 80 times the social cost of this inequality, either. The Economist did, however, suggest that there is a physical, emotional and mental impact to this inequality: it quoted researchers who claimed that in impoverished environments, the stresses of impoverishment prevent the production of certain hormones that, from birth, promote trust and social bonding, and that this has something to do with the failure to improve one’s life when confronted with the reality of inequality.
Given the suggestion of environment-based variables, the Economist’s answer to income inequality is for governments to “keep their focus on pushing up the bottom and middle rather than dragging down the top.” That is, “investing in (and removing barriers to) education, abolishing rules that prevent the able from getting ahead and refocusing government spending on those that need it most.” That, of course, makes it the government’s business, not that of the super rich and their potential tax dollars. We’ve heard that story before, but it costs money to avoid laying off teachers in droves and reducing the availability of educational tools, and that kind of spending won’t get any traction around here these days. “Second, governments should get rid of rigged rules and subsidies that favor specific industries or insiders. Forcing banks to hold more capital and pay for their implicit government safety-net is the best way to slim Wall Street’s chubbier felines.” In other words, no charity for businesses, and banks forced to work for the public good. Again, only over a tea partier’s dead body. They just don’t understand the concepts involved.
The “elite” have become super rich because they have been more “clever,” even if that cleverness was put to use in misusing regulations (or the lack of them) to manipulate financial transactions. Forget all that; they are actually more altruistic than we give them credit for. They might not be donating to food banks or homeless shelters, but we need the super rich to create the tools with which the common folk can live “easier” lives, although the presence of livable wages is probably a greater indicator of an “easier” life than toys. Giving people laptops, cell phones and other time-wasting devices can keep them oblivious to the finer things in life that the rich enjoy in abundance, like healthy food and health care (Ben-Ami calls this the "greening" of the elite: they don't want "everyone" to live "well," because they'll use up all the Earth's resources reserved for their class). But the fact is that the “elite” are becoming less and less creative when it comes to advancing technology to make us forget; they are merely improving on old technology. Excepting so-called “hybrids,” there has been no major advance in the basic technology of the automobile in 100 years, since almost all of them still run on some form of petroleum; if we had an automobile that ran on water, that would be a technological breakthrough. Until science discovers how to harness energy from a source less dangerous than uranium and less polluting than carbon-based fuels to cover our needs into the next millennium, one which would presumably turn fantasy into reality and allow humanity to explore other worlds to plunder, we will have to be satisfied with slight improvements that are becoming less and less accessible for many of us, especially in the health care arena if the Republicans have their way.
The irony of the world of “Route 66” was that it was people who came face to face with misfortune who felt a greater desire to assist in alleviating it, because they felt a moral responsibility as human beings; the people who lived in the mansions on the hill were either completely oblivious or completely contemptuous. We are coming closer to a world where neither side cares—one side which is too distracted to notice, the other side that doesn’t want anyone to acknowledge that there is anything to care about.
No pain, no gain
I remember when Oprah Winfrey was on CNN trumpeting her “Leadership School for Girls” in South Africa—this was after the scandal about motherly matrons physically and sexually abusing their young charges, a story which Winfrey used her considerable wealth and influence to keep out of media exposure. It is not just men there who are guilty of violence; I read somewhere that Jennifer Hudson is to star in a biopic of the “saintly” Winnie Mandela, whose track record to date suggests that she has a spot reserved someplace other than Heaven. Nelson Mandela divorced her, since having a “first lady” employing “bloodthirsty” rhetoric, involved in kidnapping and murder, and convicted of fraud and theft is somewhat awkward; nonetheless, although Winnie Mandela was reportedly upset that she wasn’t allowed script approval, I doubt that there will be much in this film to tarnish her image.
Anyways, Anderson Cooper asked Winfrey why she was spending so much money in South Africa when so many black youth in the U.S. were being failed by the education system. Winfrey claimed that in the U.S., girls had a choice to go to school, while in South Africa, they didn’t. This is gender politicking at its most disingenuous; in South Africa, a 2010 World Bank study showed that girls have had equal access to, and participation in, primary and secondary schools since the end of Apartheid—at a time when boys were also “discouraged” by the government from going to school. The problem in South Africa, as demonstrated by a report during the recent World Cup in that country, is that the education is poor for everyone; children—boys and girls—are seen sitting silently in a classroom all day. Where is the teacher? Shooting the breeze with other teachers in the break room. The principal admitted that his school had “problems” getting teachers to teach. So in a way, Winfrey’s charges are getting an “unfair” advantage over other South African youth of both sexes. As journalist Daniel Ben-Ami stated in the Economist recently, the West’s interference in the personal and spiritual lives of people in developing countries may improve the self-esteem of a few people, but it accomplishes nothing in the long term when a country’s basic economic system is unchanged—just more unhappy people. This isn’t the 1950s or 1960s anymore, when the U.S. was actually promoting economic equality on a national scale. Now it is doing just enough to make people feel “better” about themselves, even if they still live in poverty; perhaps Oprah will buy all the school’s graduates a new car if they can’t find a job—or maybe she’ll buy them a job. Lots of luck; in South Africa, after all, it is the white minority that still pulls the economic strings.
But just to show you I am an equal-opportunity offender, I was motivated to rehash the above old story after reading a recent newspaper story about the National (White) Women’s Law Center complaining that its statistics show a so-called “masculine” tone to what is still a rather slow job recovery. Note that the white women who run such organizations didn’t complain when their demographic had, and continues to have, the lowest unemployment rate among all demographics; according to the Bureau of Labor Statistics, the unemployment rate for white women 20 years and older was, as of this past January, 7.2 percent—still far below the national average. For black men it was 18.4 percent (twice that of white men), and 12.9 percent for black women; for Hispanic men and women, it was 13 and 11.5 percent. In “good” times, these numbers only change proportionally; the unemployment rate for black men has never dipped below 10 percent in recent memory. When white women move into the job market, it is not white men who have to move aside. The interesting thing about the current economic situation is that men had to bear the brunt of the recession as the private manufacturing sector shed jobs, while jobs dependent on government funding—such as public education, health care and government bureaucracy—which are top-heavy with female employees, were temporarily saved by federal stimulus money, but are finding life much more difficult now that state governments are taking a chainsaw to their budgets. Complaints that some manufacturing jobs are coming back, and that men who suffered layoffs while women remained employed are being rehired, are simply more tiresome sour grapes from advocates who have no sense of fairness or balance.
When I was living in Sacramento I needed to start making some money, so I signed on with a temp agency. I was sent to some firm that employed people sitting in cubicles doing mailer piece-work. It wasn’t my kind of work, but I wasn’t exactly in a position to complain, and I expected to hang around until at least this particular mailer was done. When the day was done, the supervisor, some young light-haired white man with an affected air of self-satisfaction and phony friendliness, had us all come up to the front and started counting off each person, starting with the white women regardless of where they stood in the group. Coincidentally, when he counted the last white female, that accounted for all the people he would be needing going forward; the rest of us—black, Latino, a couple of white men—would need to seek employment elsewhere. I remember thinking that he had just selected his harem. We sat in the supervisor’s office waiting for our time cards to be signed. I was privately steaming, not because the job wasn’t going to last as long as I had been told, but because of the nature of the decision-making process about who would stay and who wouldn’t. I was looking at all the black faces sitting there and thinking, “Don’t they care about what just transpired?” I finally couldn’t help myself, and blurted out, “You know, we need to work too.” Everyone stared at me in shock and surprise, but remained silent; somehow I think they grasped my point less than they were amazed at this troublemaking loose cannon who would surely be reported to the temp agency for being a troublemaking loose cannon. After all, an employer can do whatever they wish; if they just want to hire white women, if only for self-aggrandizing personal reasons, they have a right to do that.
This particular kind of hiring discrimination has gone on so long that advocates and the media now try to explain it in terms of “empowerment” and “equality” in order to avoid discussing the realities that other demographics must face when they go on a job search. Even in Oprah’s world, the only people who really have choice are the white women who form the core of her “fan” base. Don’t ask me to feel sorry for them now.
Wednesday, February 16, 2011
No license plate will erase Forrest's stains
As some people may or may not have heard, the Mississippi branch of the Sons of Confederate Veterans wants to commemorate Confederate cavalry general Nathan Bedford Forrest with his very own license plate. All right, so let the losers of the Civil War continue to live in some nostalgic past where class lines were distinct, and social position was determined by the number of slaves you owned; it was a time when even if you were dirt poor, as long as you were white you could be secure in the knowledge that someone was worse off than you—and they’d stay that way, or else. Over on CNN, Ali Velshi can’t understand what the big deal is; the Sons say that they also plan to “honor” a black soldier who served in the Confederate army. Isn’t that wonderful? It all evens out. Velshi is foreign-born, and so he should be excused for his ignorance; there were a handful (literally) of blacks serving in the Confederate Army as soldiers, most of them apparently in Forrest’s “escort company,” and all of them slaves (despite some discussion late in the war about recruiting slaves to fill the rebels’ depleted ranks, those discussions fell flat on the issue of whether the recruits should subsequently be freed).
One “Son” claims that Forrest should be forgiven because he “repented” of his sins in his final years. His obituary in the New York Times had this to say:
“Of late years, his views had undergone a considerable change. The guerrilla chieftain had softened down into the retired veteran, anxious, apparently, only for peace with everybody. He was in favor of promoting good feeling between the two sections, and by the terms of his address to his old comrades in arms, asking them to join in decorating the graves of the dead Union soldiers. His last notable public appearance was on the Fourth of July in Memphis, when he appeared before the colored people at their celebration, was publicly presented with a bouquet by them as a mark of peace and reconciliation, and made a friendly speech in reply. In this he once more took occasion to defend himself and his war record, and to declare that he was a hearty friend of the colored race.”
Forrest’s fame in the South is based on his war record, which the Times went on to describe thusly:
“His daring and recklessness gave him more eclat at one period than his military services were really entitled to. Gen. (Joseph) Wheeler's raid around the rear of Sherman's army was the work of the daring man and the scientific soldier; Gen. Forrest's sudden dash through Memphis, with no more result than the killing of a few men on either side, was the recklessness of the mere guerrilla chief-- which Forrest essentially was.”
If Forrest was considered over-rated as a general, at least by Northern observers—and beginning in July 1864 his forces suffered one reverse after another—there remained the question of why he would not be “remembered only as a daring and successful guerrilla cavalry leader,” but also by “the one great and indelible stain upon his name.” Forrest himself was clearly concerned about his long-term reputation: “It was evident that he felt this, as his constantly-repeated defenses of himself show.”
Forrest was born poor and uneducated in Tennessee, but he achieved considerable wealth, owning two plantations with hundreds of slaves (a tall and robust man, he was said to lay his hand on more than a few) and working as a dealer in slaves. It has been marveled at that after the war started, Forrest rose rapidly from mere private to general, although the circumstances of this rise are rather exaggerated. Forrest never saw action as a private in which his “genius” might have been revealed; upon viewing the decrepit state of his cavalry outfit, he surprised his officers by offering to buy horses and guns out of his own pocket. After the stunned realization of his actual status as one of the richest men in the South—and such men were automatically entitled to buy regiments to command—the now Lt. Colonel Forrest was allowed to recruit his own personal cavalry band. Forrest played good soldier at first, but rebelled against serving under what he saw as incompetent commanders, and often merely struck out on his own; unlike General Wheeler, who was a West Point graduate, Forrest was essentially a more “respectable” version of the Southern guerrilla chieftains who came from civilian backgrounds and were more noted for their inclinations toward cold-blooded murder performed under the cover of wartime—such as that practiced by other Southern glamour boys like “Bloody Bill” Anderson and William Quantrill (although Quantrill was born in Ohio, he was a pro-slavery fanatic and probably didn’t have a full deck).
There are two major stains on Forrest’s record—one during, and one after the war—that disturb people whenever any honor is given in Forrest’s name, of which there are a great many, especially in Tennessee, where there are 32 publicly-displayed statues of him. The first is the Fort Pillow massacre of April 1864. Naturally, what happened there depends on what side of the Mason-Dixon line you’re on. According to most historians, mostly black Union soldiers who tried to surrender were murdered in cold blood by Forrest’s men, angered that they would be confronted by armed blacks; other, more sympathetic historians still insist it was a “battle” from start to finish. But the fact that at least half of the fort’s 574 men were killed, compared to just 14 Confederates, suggests rather strongly that a deliberate massacre occurred. Besides the testimony of surviving Union soldiers, one Confederate soldier wrote:
"The slaughter was awful. Words cannot describe the scene. The poor, deluded negroes would run up to our men, fall upon their knees, and with uplifted hands scream for mercy but they were ordered to their feet and then shot down. I, with several others, tried to stop the butchery, and at one time had partially succeeded, but General Forrest ordered them shot down like dogs and the carnage continued. Finally our men became sick of blood and the firing ceased."
The second stain on Forrest’s reputation was his involvement with the Ku Klux Klan in the early days of that organization. Forrest denied participation in the KKK before a congressional hearing, but by then he was trying to rehabilitate his image. The KKK was founded to stop, by intimidation and violence, the radical changes taking place in Southern society, and former slaves and white sympathizers were the principal targets; in the first few years after the war, 2,000 people were killed in Louisiana alone. It is said that Forrest had “romantic” notions about the Klan, but it is clear that as its first “grand wizard” he envisioned himself as the “general” of a well-organized, underground “army” meant to ward off efforts at radical change. He didn’t want to kill off the former slaves, since he admitted that they must form the core of the Southern laboring demographic, preferably as near to their former condition as possible. Despite what apologists would later claim, Forrest did not object to violence; in fact he probably did personally participate in late-night raids that involved the killing of blacks and their white supporters. What he did object to was his inability to personally control the activities of the KKK in various parts of the South, which was getting bad press in the North; in 1869, the frustrated Forrest, much like the frustrated Forrest who served under commanders he did not respect, decided he didn’t want to play ball anymore and “ordered” the disbandment of the KKK, although a few bands ignored him.
The question now is whether Forrest’s (very) late conversion is at all sufficient to mitigate his record as a dealer and owner of slaves, the Fort Pillow massacre and his involvement with the KKK. Or a better question: for what reason would people actually opt for the Forrest license plate? Because he was a famous Civil War general—or because he was a man who provided the “model” of how to put blacks in their place? In the state of Mississippi, it is certainly reasonable to conclude that the white “night riders” and “guerrillas” who tried to stop the march of the civil rights movement imagined themselves as the “heirs” of Forrest.
Nothing old about the new Hawaii Five-O
I read somewhere that the new television series on CBS, “Hawaii Five-O,” was extended to a full season’s worth of episodes. Unlike other television series today, it has a really catchy theme tune. That’s about it. OK, it has sunny locales and bikini-clad beachgoers, and slightly more action and explosions than the boring crime lab procedurals that are just an excuse to justify pre-conceived “suspicions,” and the cop shows where you are told who the guilty party is five minutes in and it is just a matter of writers and actors filling in the blanks with intimidation, “clever” banter and political bluster. It makes some of us yearn for the detective shows of yesteryear, when the gumshoe had to be clever just to stay alive. Mannix, for example, was just your average apolitical “Joe” who drank a lot, smoked too much and got into even more trouble when working on a case. A client hardly ever told him the whole truth, and Mannix always had to find that out the hard way—usually after reawakening from a fist-induced stupor or recovering from being shot by unknown assailants. For Mannix, life was one load of irony after another; if he was told one thing by anyone who wasn’t named Peggy, it almost invariably meant another thing. Mannix’s problem was that he was altogether too trusting—especially with someone in a short skirt. It was usually toward the end of an episode when he got wise to the way he was being played; if his epiphany came any earlier, nobody would believe him. After an episode full of trial and error, Mannix eventually solved the case. The current generation calls this dull, but I call that pure laziness on their part. These loafers just want technology to solve everything; no thinking required.
Now, you might say that if I remember Mannix, how can I be so dumb as not to remember that there is an “old” Hawaii Five-O? Well, I do remember that particular show, and except for the intro music and the names of the principals, there is no real connection between the old and the new. They are almost completely different animals. For one thing, the “old” show knew the difference between right and wrong; I admit that I didn’t like Jack Lord’s McGarrett back in the day—he seemed like such a humorless hardass who always thought he was right. But while he and his crew frequently ran afoul of evolving social mores they didn’t even try to understand, they didn’t try to step too far outside the boundaries of the law. How could they, when they always wore suits and ties? After all, there were bigger fish to fry: it seemed like all the worst criminals were taking a holiday in Hawaii and spoiling all the amusement. The Five-O squad was like the island’s secret service—make the state safe for fun and frolic in the sand, and no one would be the wiser. Watching the series now, I’ve gained greater respect for Lord’s McGarrett: He wasn’t completely humorless, and he never lost his cool no matter how dark things looked. He was the Captain Kirk of crime fighting.
The new show, on the other hand, is just “new.” I don’t get it at all. These guys have no sense of right or wrong. They’re practically terrorists themselves. They may be answerable to the governor, but that is just proof that women in power have no more moral or ethical qualms than men. The new McGarrett has even less personality than the old, and his crew would probably be career criminals if they were not working in “law enforcement.” Nobody in suits, but plenty of exposed abs and occasional cleavage. This Five-O has plenty of action, lots of explosions and armor-piercing weapons the old crew wouldn’t have dared use for fear of causing too much noticeable mayhem and scaring away the tourists. And then there is the dialogue; I admit that a lot of the “humor” in the old series was “Plan Nine” quality, but you were not supposed to laugh anyway. Every time McGarrett said something, you’d better believe it was pretty damned important, or else. In the new series, the dialogue—as in so much of the current crop of crime shows—is just there to fill in the empty spaces and sound clever while essentially conveying nothing in particular.
Nevertheless, the new show seems to have had a modicum of success in its time slot. But what does that signify? That for the current generation, the line between doing things the ethical way and the corrupt way is blurred by all those explosions? That anything extralegal is “justified” so long as it accomplishes a desirable end? That doing things the “right” way is too hard, too boring?
Equality for all is not Title IX's game
Because of continuing massive cuts in California’s higher education funding, the University of California, Berkeley announced a few months ago that it was cutting five athletic programs—men’s and women’s gymnastics, men’s rugby, women’s lacrosse and baseball. But because of a sudden discovery of new funding and complaints about Title IX compliance, the women’s programs are being restored along with men’s rugby, although for now baseball and men’s gymnastics remain on the chopping block; this means that while all 14 of the women’s sports will be retained, the men’s side will be reduced to 11. Why the baseball team is being eliminated—when that sport has a successful professional league, and at least the dream of a successful, high-paying career exists for many low- and middle-class students—while men’s rugby (and women’s lacrosse, for that matter), country club sports that are merely pastimes for well-off (and white) elitists, are retained, is well beyond my comprehension.
Title IX, for those who do not know, is one brief sentence followed by a novel’s worth of exceptions. One of them concerns the YMCA, which is a “good” thing because it is now just an appendage of the YWCA and has little to do with helping young men; another exception is financial aid awarded through beauty pageants (i.e., there is no requirement that men have their own “equivalent”). One of the interesting things about Title IX is that it originally was not concerned with athletics; that only came after women’s advocates pressed the issue, and the Javits Amendment was tacked on. The original design was to address perceived hiring discrimination based on sex at institutions receiving federal funds, regardless of how much or for what, and it was thought necessary to address sex specifically since it wasn’t specified in the 14th Amendment or the 1964 Civil Rights Act. But then it expanded into requiring equal access to all programs and activities, or at least access to similar ones if there was gender specificity. Neither the 14th Amendment nor the Civil Rights Act has prevented de facto discriminatory behavior against racial minorities, nor does either mandate equality on any level per se, but Title IX has been particularly effective in advancing the women’s agenda—and more specifically the white women’s agenda, which I will talk about after noting this tidbit contained in Title IX:
“Nothing contained in subsection (a) of this section shall be interpreted to require any educational institution to grant preferential or disparate treatment to the members of one sex on account of an imbalance which may exist with respect to the total number or percentage of persons of that sex participating in or receiving the benefits of any federally supported program or activity, in comparison with the total number or percentage of persons of that sex in any community, State, section, or other area: Provided, That this subsection shall not be construed to prevent the consideration in any hearing or proceeding under this chapter of statistical evidence tending to show that such an imbalance exists with respect to the participation in, or receipt of the benefits of, any such program or activity by the members of one sex.”
Huh? The first half of this paragraph appears to ban “preferential” treatment of one sex to address gender imbalances in participation in federally-funded programs, but then it does a back-flip in the second half, allowing the “consideration” of evidence of said imbalances in participation. “Consideration” to do what? Apply preferential treatment? The paragraph actually makes more sense when one realizes that the partisan treatment given to women (again, largely white women) has been advanced to such an extent in our public schools and centers of higher learning that it is no longer about “equality” but about aiding and abetting unfairness and de facto inequality. Today, women in colleges outnumber men 60-40, with white women (and Asian students) represented far in excess of their percentage of the population. Under-represented minorities, meanwhile, continue to be under-represented. Thus while white women demand “proportionality” in athletic participation (a de facto “quota”), they have also been the face of anti-affirmative action and school re-segregation cases before the U.S. Supreme Court (two of them originating in the state of Washington—against the UW law school and the Seattle school district). They only seem to be “high-minded” and “moral” when self-interest is the sole criterion for consideration; “preferential treatment” and “quotas” are OK if white women benefit, but not OK if under-represented minorities benefit. Terms like “selfishness” and “hypocrisy” come to mind. The above paragraph can also be interpreted to mean that we can talk about disparities in college admissions, but we can’t do anything to address the current state of the issue—especially if it “hurts” (white) women. That Title IX is administered by the Office for Civil Rights makes one gag, given all the inequality it has fostered in the pursuit of a gender political agenda.
Returning to the issue of athletics, we have been told, without any supporting evidence, that girls have as much interest in participating in college sports as boys do. There have been suggestions that a survey should be taken to measure this, but none has surfaced so far, and it likely would not be in the interest of the gender advocates for such a survey to be undertaken. Instead, the OCR forces schools to either “show that the proportion of women in athletics is the same as the proportion of women in the general student body,” or at best “demonstrate that the institution has fully accommodated the interests and abilities of the underrepresented sex.” One gender activist claims that the latter “gives institutions considerable flexibility in meeting the requirements of the law,” which of course is bull. As in the Cal case, the only issue in question was proportionality; the school was being told that it had to have equal numbers of male and female student athletes, and that if it carried out its program cuts as previously announced, it would fall out of compliance. Thus the “equal accommodation” option is a sham: If a school does not want to cut men’s athletic programs, but needs to raise the number of women’s programs to reach compliance, it can presumably add a number of sports that will theoretically allow a sufficient number of female participants—that is, if they wish to participate (and not just “anyone,” but people who actually play the sport). If all the slots are not filled, is the university nevertheless still fulfilling its obligations under Title IX? Given the current gender activist climate, what do you think?
I remember watching something on TV where these butch-looking women were complaining that women’s sports were not getting enough attention on television, and that this was why women’s sports were not taken seriously. If they did receive more airtime, everyone would see how enjoyable watching the WNBA was. Well, for one thing, if women’s sports have spokespersons like that, it’s no secret why men are not interested in watching. Well then, what about the female audience? They would surely watch a WNBA game, since they are presumably supportive of the women’s game. While 111 million people watched the Super Bowl (probably a low number), an average of 200,000 viewers watched the WNBA finals. That’s enough to fill Cowboys Stadium twice over, but that’s about it. ESPN knew there was a problem; that’s why when it promoted the WNBA playoffs, it didn’t tell you to watch because of the quality of play, but because of the “effort” the women were putting into it.
Title IX was necessary for its time; now it is just a tool to promote hypocrisy and inequality. It is now used to give tyrannical women’s advocates the power to maintain the “status quo” that suits them.
Thursday, February 10, 2011
True story or not, Alzheimer's doesn't discriminate
The late Carl Sagan, who popularized astronomy with his public television mini-series “Cosmos,” also wrote the Pulitzer Prize-winning book “The Dragons of Eden.” In this book I read an anecdote that I found fascinating; Sagan observed that a person might awaken from an exceptionally vivid dream and declare, “I’ll surely remember this dream, and tell everyone about it in the morning.” The problem was that by the time morning came around, he could barely remember it at all. But if he had decided, “I’d better write this down, because I might forget it,” by morning the dreamer could remember the nighttime fantasy without ever referring to the notes he wrote down.
Funny thing about memory; it is the basis for what we call “intelligence.” Some people can memorize and recall things better than others, and thus they are more “intelligent.” For some of us, memory is like a test in school; if it is a multiple-choice test, we have a better chance at recalling the right answer than if it is a fill-in-the-blank test. As people get older, answers to questions they know perfectly well are harder to resurrect from their slumber; but they are there somewhere. All you need is a hint or a brief bit of brain-racking. Twenty years ago, just to convince myself that I could, I memorized all of the lyrics to Don McLean’s seven-minute opus “American Pie.” On occasion I’ll recite the words in my head to persuade myself that my brain, or at least what’s left of it, hasn’t gone completely to pot. At work, I’ll make an effort to memorize the numbers on cargo carts before I write them down; sometimes I can remember all of them, but usually not.
So what is the point of this meandering discussion? Maybe some people have seen the cover of a recent edition of the American tabloid/scandal sheet The Globe; Prince Charles, it alleges, is suffering from Alzheimer’s Disease, and the story is causing a slight stir in Britain. The reasoning behind this allegation is that the 62-year-old prince seems to be having memory loss that apparently cannot be explained by advancing age. Other symptoms of Alzheimer’s, like disorientation, mood swings, and deterioration of language and motor skills, are not mentioned. Not surprisingly, the Palace is not commenting on the veracity of this story. The “drama” now centers on whether this disqualifies Charles from inheriting the throne, although even at 84, the queen doesn’t appear to be ready to go anywhere anytime soon.
This, of course, may just be rumor-mongering to sell a few more copies of the tabloid, although frankly I think most Americans care about as much about the doings of the British royal family as they do about those of the Prince of Liechtenstein (although his personal wealth is $5 billion, and the fact that twice as many businesses are incorporated in the principality as it has citizens is explained by its corporation-friendly, no-questions-asked environment). However, it is interesting to note, perhaps not coincidentally, that Prince Charles has displayed great interest in the topic of Alzheimer’s Disease in the past. In 2006, he visited a mental health facility, praising its use of “holistic” treatments for Alzheimer’s and claiming that such techniques had done wonders for his own health. Two years ago, he gave a speech before the Alzheimer’s Research Trust conference at the Royal Institution in London, in which he proclaimed that the country faced a “catastrophic burden of dementia,” with 700,000 Brits at that very moment afflicted with the disease, placing an untenable weight on the already teetering British health care system. It was his wish that the “shroud of mystery” be lifted from the affliction, so that its sufferers would not be “stigmatized.” Sufferers lived in hope that an effective treatment could be found. Hmm.
As indicated before, no one except The Globe and its “sources” is making such insinuations. However, many well-known people have suffered from Alzheimer’s: writers Jonathan Swift, Immanuel Kant, and Ralph Waldo Emerson; political figures Winston Churchill, Barry Goldwater, Cyrus Vance and Ronald Reagan; civil rights figure Rosa Parks; and actors Burgess Meredith, James “Scotty” Doohan, Charles Bronson and Charlton Heston.
Wednesday, February 9, 2011
Democracy and enlightenment in the Muslim world
Egypt continues to be embroiled in confusion and just this side of chaos, not made more coherent by the protesters’ refusal to “negotiate” terms under which they will agree to disperse. The newly-appointed vice president—Omar Suleiman, like most of the top echelon of the government a military man—has declared that Egypt is not ready for democracy, and he may be right; the protesters have not articulated a “plan” for governing beyond Hosni Mubarak’s resignation. The protesters cannot be blamed for doubting any promises of reform, of course; promises of governmental reform have been made and then broken before, although it is reasonable to believe that this time the powerful military is concerned enough about the possibility of chaos that, if only for its own safety, it will take seriously the need for a more effective outlet for the people’s frustration. This was certainly the point being made to me by an immigrant from Algeria, whom I have observed to be increasingly on a slow boil as events have unfolded, and who is impatient with a typical American’s doubtfulness. Whether that is a separate issue from the Muslim Brotherhood’s fundamentalist—and inherently anti-democratic—ideology is another matter.
The only Muslim countries with democratic governments close to the Western sense are secular states like Turkey, Pakistan and Lebanon. In Turkey, Kemal Ataturk was the most powerful and influential personality in what was left of the Ottoman Empire following its defeat in World War I, and he was a committed secularist who wanted to form a society based on the Western model. Pakistan was obviously influenced by centuries of British rule, and in Lebanon there had to be some accommodation between its mixed Muslim and Christian population. But in the main, the Muslim world, which challenged the West for supremacy in culture and technology at least until the 16th century, has since then been bogged down in religious fanaticism, fatalism and poverty. While the West experienced a Renaissance, an Enlightenment and an Industrial Revolution, these periods seemed to pass most of the Muslim world by. In some parts, even re-interpreting the Koran to fit the realities of modern life is punishable by death, or at least the threat of it.
Western thought had its “dark age,” of course—when everything was explained by the supernatural, and everything was understood to be guided by unchangeable rules set by a divine being; science, culture and literature were not permitted to stray far from these rules, or to call them into question. During the Enlightenment, however, scholars, philosophers and scientists no longer tolerated being confined to fixed “systems” that did not take into account the often chaotic forces of nature and were contradicted by empirical observation; the laws of nature, such as those formulated by Newton, were better able to explain what was once unknown, opening the world to a greater understanding of its mysteries. Since these ideas were a direct threat to religious dogma, they were not readily accepted; but by the 18th century, growing skepticism had made intellectual persecution unsustainable. The evolution of political thought (particularly in questioning the concept of the divine right of rulers) inevitably followed.
The West was thus able to break out of the straitjacket of religious dogma and advance socially, culturally and technologically. This did not occur in the greater part of the Muslim world, and its societies have remained for the most part averse to new ideas that question the relevance of religious fanaticism in the modern world and that open the mind to new possibilities. The fact that Egyptians on the street have not been able to define what “freedom” means to them beyond a concept, let alone the responsibilities it entails, is problematic for the future. So much of the Muslim world has not laid the foundation on which independent, individual expression can flourish without the strictures of intolerance.
This observer won't remain "mum" on this real-life tragedy
After two weeks, there is still no follow-up information from the local media in regard to the Walmart shooting that killed a Utah man and a 13-year-old girl and wounded two police officers. “Officially” the Pierce County Medical Examiner is being “mum” on the case, although this contradicts a report from a Salt Lake City television station last week, which said the medical examiner had informed it that Astrid Valdivia had been shot multiple times by police fire when she ran to the aid of Anthony Martinez after he was shot. The local media here claims that there was a “shootout,” although police have only said that after Martinez took off running while he was either being questioned or escorted to a police van, he fired a weapon over his shoulder without aiming. Obviously this tragedy would not have occurred had Martinez willingly allowed himself to be detained, but this case is more complicated than it would first appear, both locally and in Utah.
If in fact Valdivia was killed by indiscriminate police fire, it also stands to reason that the two officers may have been wounded by police fire as well. This could explain the delay in releasing information on the case; local law enforcement has had its share of bad publicity lately, not just in regard to the John T. Williams case and other instances of shocking police abuse, but also the $10 million civil judgment against King County and its sheriff’s department after a deputy’s actions caused permanent brain damage to an innocent man. I have also observed a great deal of misinformation and demonizing on various crime victim websites that has been allowed to shape the perception of the case. Martinez has been described (mainly on the parents' word) as violent and dangerous, with no detailed information to justify this claim.
I discovered a newspaper story in Utah dated this past December that shed a little more light on the case. Martinez and Valdivia had just been discovered in California and returned to Utah. Both appeared in court on the question of whether a kidnapping charge was warranted; Valdivia had in fact written a note to her family stating that she was leaving home and staying with a “friend.” Until 2010, Valdivia had not seen Martinez since she was 4 or 5 years old, when he had helped watch over her and her siblings while he was in a relationship with her mother. Martinez—who back then had not known that Valdivia’s mother was married until the day the husband showed up unexpectedly and he was forced to hide in a closet—presumably encountered the girl one day and was made aware of her identity. After talking to her, he told his brother that he had the impression she was “troubled” and “suicidal” and wanted to leave home, and that he wanted to help her. Martinez’s brother has stated that he had a “good heart,” and he likely understood her situation, given his previous experiences with the parents.
It seems likely that the girl did in fact have a troubled home life, given what occurred in the courtroom: the judge admonished the parents several times to show their daughter more “respect,” and after the father derided her “hair,” the judge tossed him out of the courtroom and told him not to return. (That reminds me of something I observed in a barbershop when I was younger: a boy was getting a haircut while his mother was malignantly instructing the barber. The boy was clearly in distress, with tears in his eyes; the barber, seeing this, would only snip a little off each time the mother ordered him to cut more. It couldn’t have been a comfortable experience for the barber, seeing how much the mother was enjoying the boy’s misery.) Rather than return the girl to her family, the judge ordered her placed in a foster care facility; Martinez, meanwhile, was released on bail; and from there we have this tragic shooting.
Instead of telling the truth of a tragic tale, the local media has been “mum,” allowing people only the comforting “knowledge” of yet another unfortunate but “justified” police killing. But it is real life and real people we are talking about here, not just another “unfortunate” incident.
Monday, February 7, 2011
Aguilera just following her peers
Listening to Christina Aguilera’s rendition of the national anthem at the Super Bowl was a hearing-impairing experience, although I think the people who criticized her would probably have been more hesitant to do so if she were black—particularly given that her singing style is typical of the current wave of “pop” artists dating at least to Mariah Carey’s entry onto the scene. I admit that the Star Spangled Banner is not the most melodic of tunes, but to sing it all over the sonic spectrum, up past dog-whistle range, is ridiculous and not at all “art.” I suppose a person’s mind has to be wired to enjoy conventional, memorable melodies these days; if not, you get this. Listening to these current singing styles, someone like me who remembers what a real song is like can only laugh and cry at the same time. The concept of a “song” has become just a limp frame on which to hang uncontrolled vocal gymnastics.
Maybe I’m “old school,” but the classic song structure, exemplified by the ballad, has been around for at least 2,000 years. If we’ve come to the point where the younger generation has no appreciation of classic song structure, then it is indeed a measure of how the words-and-music form is dying a slow death. Repetition and derivativeness have always been a part of popular song structure, but in today’s popular music (especially in hip hop and its off-shoots), being ragingly repetitive and derivative (in an almost incestuous way) has become not just part of the “art,” but deliberate in a way that calls into question its worth as an artistic artifact beyond an expression of urban culture. When future artists look for songs to cover, they won’t (or shouldn’t) be looking to the current era for them, but to the past.
I find female singers today particularly irritating (with the possible exception of Katy Perry, not because she's a good singer, but because her hits are not particularly unlistenable). I remember Carly Simon, Carole King, Aretha Franklin, Roberta Flack and Gladys Knight as artists who were not “beautiful,” indistinguishable Barbie-doll singers with over-blown “technique”; they were real, honest, with distinct personalities--and could carry a tune (well, Simon couldn’t quite hit the high notes). Diana Ross was a “beautiful” singer, but she had a natural talent for squeezing every ounce out of a melody without resorting to vocal fakery (I once overheard someone complain that Toni Tennille sounded like a “n-word,” which, had it been phrased differently, she probably would have taken as a compliment). Even if you were unfamiliar with a song, once the singer came in you didn’t need to guess who it was, and they didn’t pretend to be something they were not, as is the case today.
The Packers didn't need the stripes' help to beat the Steelers, barely
I have to admit that I felt a mixture of excitement and nervousness watching this past Super Bowl. The Favre-era Packers tended to start slow and come alive in the second half; the opposite is true of the Rodgers-led Packers. I knew the Packers had to open up a substantial lead and hang on. Knowing that Rodgers typically tended to be more inconsistent in the second half (last year’s playoff game against Arizona was the rare exception, mainly because of the Cardinals' porous defense), the opening of a 21-3 lead over the Steelers was precisely what needed to happen, and as it turned out, the Packers came within a whisker of blowing the whole thing.
I grew up in Wisconsin in the pre-Favre era, and remained more or less loyal throughout the years, although my loyalty was tested during the Favre fiasco. One also has to be realistic; the Packers were fortunate to make it to the Super Bowl; good fortune has a way of smiling on a team for no apparent rhyme or reason. The Packers could just as easily have lost in the wildcard game against Philadelphia if Michael Vick had completed that end zone pass in the final minute. I thought that the stripes were going to give the Steelers the same manufactured “luck” they gave them in the 2006 and 2009 Super Bowls, when they called that phantom facemask and refused to overturn the incomplete pass ruling on the challenge. The Packers were able to manufacture their own luck in the first half with the interceptions, but I was concerned about Pittsburgh’s nearly 2-1 advantage in time of possession through the first three quarters, and I think the effects of it were showing in the second half.
Yes, some of the inconsistency of the offense could be blamed on dropped balls, but we’ve seen the lack of consistency all year, and sometimes it was Rodgers’ play. I was still disturbed by some of the play-calling, especially down near the goal line when the Packers didn’t try to run the ball (remembering Rodgers’ third quarter interception against the Bears). I also shared Packer play-by-play man Wayne Larrivee’s lack of enthusiasm when the Packers were forced to settle for a field goal with two minutes left; I think a lot of people assumed the worst, but as it turned out, it wasn’t the Steelers’ night this time—even the stripes would not give them a gift pass interference call. In the end, just as Lombardi said, the game is won on 3 or 4 plays, and for the Packers it was the turnovers and the late third down pass to Jennings when the Packers were facing a punt from their own 25 yard line with 5 minutes to go.
Rodgers was the Super Bowl MVP, and I'll grant deservedly so. But I find it a bit curious that although a great many people have eagerly put him in “elite” status, and despite the fact that the Packers played with so many players on IR, I’ve heard no one seriously consider him a regular season “MVP” candidate. Rodgers was a questionable commodity from the start, and his status was always married to how one felt about Favre. Someone on a local radio station wanted to know why Rodgers wasn’t a higher pick in the draft (which doesn’t necessarily mean a lot, since Tom Brady was a sixth-round pick, and Favre was a second-round pick). His partner remembered Rodgers’ college days at Cal, and recalled that Rodgers wasn’t particularly impressive outside of a couple of standout games; he wasn’t accurate, had a disconcerting habit of holding the ball high when he went back to pass, and didn’t throw with much velocity. Although Rodgers impressed at least the Packer scouts, he clearly needed work. I thought of Rodgers as just another in a long line of back-ups for Favre.
Ron Wolf, Holmgren and Favre brought relevance and credibility back to Green Bay, and the current management owes them a huge debt; like many fans who remember the bad old pre-Favre days, I felt that Rodgers had to prove he was more than just someone else’s “guy.” His first two seasons in the back-up role didn’t impress me; he would come in, and a few plays later get hurt; in his second season, I observed with amusement Rodgers coming in late in one game and, a few seconds later, getting knocked out for the season with a broken foot. I think that Rodgers needed to sit and watch and learn from Favre. I think that in the 2007 season, a veteran like Favre—who was having a career year, averaging 310 yards passing a game before his injury against Dallas—was able to make young receivers like Jennings and Jones look like Pro Bowlers, and set the table for Rodgers. The exposed talent of these receivers allowed Rodgers to come into the game against Dallas and put in a respectable performance—much like the one Matt Flynn put in against New England (which is why I think John Clayton is underestimating Flynn’s potential).
Now that the Favre era is really over (I think), I can turn my complete and undivided attention to the team next year--that is, if there is a next year.
Sunday, February 6, 2011
Your "rights" are only what a cop allows you to have
I'll have to wait before I comment on the Super Bowl, because real life goes on. Some people may not be aware of this fact, but a typical major metropolitan airport is generally open 24 hours a day, 7 days a week. How do I know this? Because four days a week I set my cell phone alarm to 1AM, and listen to it go off every five minutes for an hour before I’m able to convince myself that I need to get my fundament moving. I don’t have a car, which is OK because now I don’t have to concern myself overmuch with “ethnically”-profiling police out on “fishing expeditions.” Once I’m prepared for the day, I have to walk 45 minutes to the nearest bus stop that will take me to the airport, where essentially I am on duty for 11 hours, Thursday through Sunday. If I get six hours of sleep between days, it’s better than 5, or 4. I’ve been following this routine for over 3 years.
This Sunday morning, there was a bit of a detour in the routine, courtesy of a Kent police officer. Every morning for three years I’ve walked down the same street here, cut across an empty parking lot there, ambled along the arterial road here, and staggered down the “avenue” there before reaching the bus stop. I’ve seen police cars every once in a while; I should be a familiar sight by now, and they should be aware of the fact that there is one bus route that makes two runs to the airport before 4 AM. There always has to be someone who isn’t “clear” about what you’re doing, but when their actions are based on prejudicial assumptions, that is another matter. I had barely started my morning journey when I observed a police K-9 van partially blocking the sidewalk I was walking on. These cops are not out there to monitor traffic (there wasn’t any); they drive around looking for someone to harass. I asked myself, “It’s 3 AM, it’s dark outside, I’m short and I’m alone. What chance do I have of simply walking past him without being hassled?” The answer, of course, is not a chance. This guy was too obvious for a speed trap, and he wasn’t in search of a random victim. Perhaps some paranoid someone had called in a suspicious character, short and “ethnic,” who seemed to come out of nowhere and disappear just as quickly, practically every day. This was very suspicious.
I don’t know if that suspicious person was this cop himself; maybe he’d been staked out the night before, perhaps trying to nap, when he saw this little man dressed in the latest in burglar accessories—yellow rain jacket and pants, with reflective strips—and, like your typical security guard, got all paranoid and discombobulated. People like us make suspicious people’s lives distressing, because they think they can’t go back to sleep until we go away. It is also true that some people don’t know how to mind their own business. If I were traversing the streets in said manner and some stranger approached me and demanded to know what I was doing, I might inform him to piss off and mind his own business. However, when you encounter a cop with a hair up, there seems little a person can do except express an opinion and stand there; even if you have done nothing unlawful and are simply going about your daily (nightly) business, a cop can stick his or her nose in your business any time he or she wishes, and if you decide that you are perfectly within your rights to exercise your rights as a free person, and employ your legs to that purpose, the cop will remind you that your rights have only as much meaning as he or she allows them to have.
So no sooner am I 10 feet from the police van than, what do you know, the cop exits the vehicle and demands to know what I am doing out this early in the morning. I tell him that I work at the airport and need to keep going so I won’t be late for the bus. He demands to see my Port of Seattle badge (he actually knows what that is?); I show him my badge. He seems disappointed. He tells me that some of the properties nearby have been robbed lately, which I think is BS. He says I can go. I walk down the street and cut across the empty parking lot so I can save a few minutes. The next thing I notice is the police van humming down the street; the cop obviously had been keeping an eye on me. He turns the corner, and as I am exiting the other side of the parking lot, he pulls in and blocks my way. The cop exits the van and informs me, “Alright, now I can see your ID.” What? I ask. “You’re on private property, I can hold you now.” Private property? It’s a parking lot. “Let’s see your ID.” I showed you my ID. “I want to see your driver’s license.” I give it to him, but I demand to know his name. “Warmington.” Well, I’m going to file a complaint. You are making me late for my bus. “You stay there.” I stand there while he goes back to his van. He comes back a minute later, hands me back my license and tells me I can go. I tell him again what I’m going to do. He gives me his name again. He’s probably smug about it, because this is what cops do, and nobody cares.
So I’m finally allowed to finish my journey without further delay, but I’m running late and sore as hell. It is obvious what had transpired. I had been making this trek for three years, and anyone the least bit curious about where I was going could easily have taken the time to ascertain what I was doing without detaining me to find out. Or they could just take my word for it and leave me be. But someone, probably this cop, believed that I was up to no good. The problem is that walking on public streets is no excuse to detain anyone for anything unless there is a legitimate reason to. A cop only has the legal right to demand an ID card if he believes he has “reasonable” suspicion that you have committed a crime, or are about to; the trouble is that what a cop thinks is “reasonable” suspicion is entirely arbitrary and at his discretion, usually based on some prejudice. The U.S. Supreme Court has only said that you must “identify” yourself—i.e. “tell” a cop your name—anytime he or she asks, but you are not required to show identification for no reason; racial profiling or “suspicion” based on nothing more than appearance is illegal. Allegedly in the state of Washington you do not even have to stop if you are innocent of a crime or otherwise have done nothing wrong (like walking on a sidewalk or waiting at a bus stop), even if a cop tells you to. Of course, this is not a good idea, or you might become permanently brain-damaged, like the man who just “won” a $10 million civil suit against King County. Another problem for the innocent pedestrian is that if a cop wants to, he can charge you with “contempt of cop”—which generally comes into play either if you have an “attitude” problem, or if the cop desperately wants to detain you but doesn’t have a “reasonable” excuse at hand. This isn’t “Law and Order,” where every male “suspect” is guilty until proven guilty, and whatever cops do is justified. The contempt “charge,” which is typically thrown out by morning, is also “handy” when a cop seriously injures a “suspect” who has done nothing wrong or illegal.
In this particular case, the fact that I was out and about at an early hour might be “questionable,” but my actions were not in themselves inherently suspicious; perfectly natural activities are frequently deemed “suspicious” only when a person chooses to interpret them as such, based on his prejudices and stereotypes. As long as I wasn’t staggering around in a drunken stupor or sneaking around cars, the cop had no legitimate reason to stop me, and he knew it. Here I was walking in what was clearly a work uniform; I could be walking to or from work, and I was clearly going somewhere—I wasn’t acting “confused” or “shifty.” I made no effort to avoid him. I had provided him with a legitimate reason why I was out at that time of day. But this cop had a premeditated plan to stop and “examine” me, and that’s exactly why he was waiting where he was, when he was. He didn’t want to “miss” me; I wasn’t a “random” target. But there was a problem: he still had no “reasonable” suspicion that I had committed, or was about to commit, a crime. His claim that there had been “break-ins” at nearby businesses was of doubtful credibility, since he wasn’t hidden anywhere from which to observe any “suspicious” activity; he was right out in the open—and after our encounters, I saw him driving off to downtown Kent. His glee upon being provided a rationale to harass me further proved that although he had been frustrated in his first attempt, he was eager for another go-round, regardless of its petty and absurd nature.
And to think that a year ago I was mugged out here by someone who had been standing in the middle of the street for several minutes as I approached. Where were the cops then? Probably sleeping in the van.
Friday, February 4, 2011
California and Texas--just two sides of the same coin
California’s economy continues to be a mess, which is not unexpected since, like many states, it is over-dependent on construction and real estate jobs for its non-public-sector employment. With the real estate market showing no sign of stabilizing—let alone growing—there is no real chance that the state will see a decrease in unemployment any time soon. The “answer” to the state’s woes is, of course, cutting taxes and making the state “friendlier” for business, meaning reducing environmental standards among other things; not that any of this will be sufficient, but that’s what they say. High taxes may have been an issue in the past, but not necessarily now. Business leaders are always complaining about taxes; we frequently hear that complaint in Washington, yet just this past December the Small Business and Entrepreneurship Council rated this state the 5th best on its small business “survival” index.
People demanding that California change its ways point to the fact that it has lost 20 percent of its manufacturing jobs in the past 10 years—except that this is in line with what the rest of the country has lost. A net of 3 million manufacturing jobs were lost during the Bush years, giving the lie yet again to the myth that cutting taxes for the rich creates jobs; even the so-called “business friendly”—meaning low-wage, low-tax and low-environmental-standards—states in the South lost an average of 13 percent of their manufacturing jobs. The blame for this lies elsewhere—on even cheaper costs from overseas competition, particularly China and Southeast Asia, and the unwillingness of Americans to “buy American” because of low wages. While California’s taxes may have been high in the distant past, today its combined state and local sales tax rate of 10.66 percent is barely above the national average of 10.43 percent.
The real problem with California is that most of its “policy-making” seems to be done by voters in seemingly endless propositions with no thought to consequences; in the past, many of these propositions and initiatives seemed to advance positive goals in education, infrastructure, transportation and the environment. California’s progressiveness was the envy of the world, but all good things must come to an end, because you have to pay for these things. By the late 1970s, people in California decided they were not going to pay for them anymore. Since then, California has been what Kevin O’Leary suggested it was over a year ago in Time magazine: “Direct democracy run amok, timid governors, partisan gridlock and a flawed constitution that have all contributed to budget chaos and people in pain.” Sounds like the state I’m living in.
And, he added:
“And at the root of California's misery lies Proposition 13.”
Proposition 13 actually had its roots in taxpayers from wealthy school districts not wanting to help pay education costs in poor school districts, which came into play after the state Supreme Court ruled that disproportionate school funding violated the equal protection clause in the state constitution (I know about this attitude; I had a letter in the Sacramento Bee way back when, commenting on a well-off white section of a local school district that wanted to separate from the low-income minority section). The housing boom in the 1960s and 70s led to higher home prices and thus higher tax assessments, and a scandal involving tax assessors led to a law raising artificially-lowered assessments. This was just too much for people to handle at one time. So in 1978, a cranky old man named Howard Jarvis pushed through Proposition 13, which set the property tax rate at one percent of a home’s assessed value, using 1975 prices as the base; assessments could then rise a maximum of 2 percent per year. Even if home values rose dramatically, the tax would still be based on the original assessment, and could only change if the home value decreased, or the home was sold to a new owner (a rough sketch of the arithmetic follows below). The net effect was that it was cheaper for homeowners to stay in their home instead of buying a new one; as bad as California’s real estate market is now, it has been bad in one fashion or another ever since the passage of Proposition 13. Property taxes remained largely stagnant, while public services costs skyrocketed. And something else had been snuck into 13’s nefarious shenanigans: a two-thirds majority in the state legislature was required to pass any form of tax increase.
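To make the incentive concrete, here is a minimal sketch of the Proposition 13 arithmetic as described above; the function name and the sample home values are hypothetical, used only to illustrate the one percent rate, the 2 percent annual cap on assessed-value growth, and the reassessment at market value when a home changes hands:

```python
# A minimal sketch of the Proposition 13 arithmetic described in the text.
# The function name and sample figures are hypothetical illustrations,
# not an official tax calculation.

def prop13_tax(base_value, years_held, market_value=None, just_sold=False,
               rate=0.01, annual_cap=0.02):
    """Annual property tax bill under the rules sketched above."""
    if just_sold and market_value is not None:
        # A sale resets the assessment to the current market value.
        assessed = market_value
    else:
        # Otherwise the assessment grows at most 2 percent per year,
        # no matter how fast the market value actually rises...
        assessed = base_value * (1 + annual_cap) ** years_held
        # ...though it can be lowered if the home's value falls.
        if market_value is not None:
            assessed = min(assessed, market_value)
    return rate * assessed

# A house assessed at $40,000 in 1975 that is worth $400,000 thirty-five years later:
print(round(prop13_tax(40_000, 35, market_value=400_000)))                  # long-time owner: roughly $800
print(round(prop13_tax(40_000, 35, market_value=400_000, just_sold=True)))  # new buyer: $4,000
```

The gap between the long-time owner’s bill and the new buyer’s bill is the “cheaper to stay put” effect described above, and it is also why property tax revenue stagnated while home prices soared.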
Another problem for California now is that at one point 15 percent of its workforce was in the public service sector; government was the biggest employer in the state. With revenues plunging, it was inevitable that people in this sector would be thrown out of work. If businesses didn’t want to create jobs in the state, then the state needed to raise taxes and get into the business of job creation itself—building or repairing infrastructure, improving quality of life, and building a world-class education system. In the past, this “liberal” agenda was what made California the most important state in the union for decades after World War II. This was at least part of the idea behind the Obama stimulus package: government spending creates work, which creates consumer spending, which in turn creates additional jobs. That didn’t exactly happen, because too much of the stimulus package was tax cuts to appease conservatives. But because of the strictures of Proposition 13, it is virtually impossible for the state to stimulate job growth on its own; we have already seen from the Bush example that tax cuts do not stimulate job growth.
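The spending-to-jobs chain described in the previous paragraph is essentially the textbook Keynesian multiplier. As a purely illustrative formulation (the marginal propensity to consume used here is a hypothetical figure, not a number from this piece): multiplier = 1 / (1 − MPC); with an MPC of 0.8, the multiplier is 1 / 0.2 = 5, so each dollar of government spending would, in that textbook case, support roughly five dollars of eventual spending as it is re-spent through the economy.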
Perhaps the sector hurt most by Proposition 13 and its fallout was education; California once prided itself on providing as many people as possible the opportunity to achieve all they could be. Today, Arnold Schwarzenegger—who still doesn’t remember what he, Enron’s Kenneth Lay and stock fraudster Michael Milken discussed at that Peninsula Hotel meeting in 2001 (despite having a college “degree” through correspondence courses with the University of Wisconsin-Superior, probably because it offered lax standards for a “celebrity”)—has as his most lasting “legacy” in California the suspicion that he helped drive the state’s once-proud university and community college system into permanent fiscal turmoil. Soaring tuitions—particularly at community colleges—and the cutting of state student aid turned out to have the effect not of increasing the level of non-state-sourced funding (that is, out-of-pocket expenses), but of reducing education revenue still further, since prospective students simply dropped out, while higher education’s revenue needs continued to rise. The effect of fewer educational resources and fewer educated people in the state has even longer-term consequences.
I recently read an op-ed online in a local paper called The Columbian (I think it has something to do with the Columbia River). California, the writer opines, didn’t understand that tax exemptions would keep jobs in the state; the writer, Don Brunell, neglected to mention that while companies like Boeing were getting billions in tax breaks to keep a handful of 787 jobs in-state, the company was laying off tens of thousands of workers elsewhere; the state essentially gave Boeing something for much less than nothing. Brunell went on to make some other rather bizarre claims, stating that tax exemptions on machinery beginning in 1995 added $81.5 billion to state coffers on $16.5 billion in increased income over a ten-year period. Whatever. All I know is that Washington’s fiscal position is in a shambles like everywhere else—even Texas.
Some people do like to point to Republican-run Texas as doing everything right to ensure economic stability and jobs. They must be doing something “right,” because California pays far more in federal taxes than it receives in federal dollars, while the opposite is true for Texas, in large part due to an over-abundance of military installations and the fact that it has low public services spending, especially in health care, so the federal government has to step in and fill the gap; in fact, Texas receives more money from the federal government than any other state—10 percent of all federal money going to the states. The reality is that besides federal dollars for this welfare state, oil and natural gas jobs and profits are for the time being keeping the state afloat. Underneath all the happy talk is a rotting public services sector ill-equipped to deal with a real crisis. Texas is already near the bottom in many quality-of-life indicators; poverty is high, infant mortality rates are high, and the number of people without health insurance is infamously high. Statemaster.com places Texas 45th out of 50 states in its “livability” index. A CNBC “study” claimed that Texas was number one for business, but it came in 29th on quality of life—failing to live up to CNBC’s claim that states with good business climates also have good quality of life (interestingly, while business leaders and Republicans in Washington state constantly claim that it has a poor business atmosphere, the same study in fact ranks it the 15th-best state in which to do business).
On top of that, the state faces a $25 billion budget deficit over the next two years—a huge chunk of the $95 billion projected budget. Of course, Republicans will not countenance tax increases, except those that harm the poor rather than their billionaire buddies.
Texas ballot initiatives also tend in the opposite direction of California’s; this past year, the Republicans pushed referendums declaring that all voters should be required to show “valid” photo ID at any and all elections; that Congress should be enjoined to “stimulate” the economy by cutting more taxes; that the existence of God should be acknowledged at public functions; and that women should be required to view sonograms before undergoing abortion procedures. All of these constitute the Republican notion of solving the state’s massive budget problems; not surprisingly, besides being designed to suppress Democratic votes or being nonsensical, they cost nothing and accomplish nothing.
Texas is a bad example for the country, perhaps even worse than California. What about North and South Dakota, which seem to have weathered the economic storm and maintained low unemployment? Both states have populations under a million, for one thing. North Dakota has oil and is a leading producer of many varieties of foodstuffs, while South Dakota’s economy is buoyed by federal spending, which accounts for 10 percent of its GDP, by tourism to its national parks, and by the fact that the problems of Native Americans—who constitute the large majority of the state’s non-white population—are the federal government’s problem. It is also another low-tax, low-service state; its teachers are among the lowest paid in the country. The only lesson here is that being small and in an out-of-the-way place no one wants to go has its advantages if you run a state government on the cheap.
People demanding that California change its ways point to the fact that it has lost 20 percent of its manufacturing jobs in the past 10 years—except that this is in line with what the rest of the country has lost. A net of 3 million manufacturing jobs were lost during the Bush years, giving yet another lie to the myth that cutting taxes for the rich creates jobs; even in so-called “business friendly”—meaning low-wage, low-tax and low-environmental standards—states in the South lost an average of 13 percent of their manufacturing jobs. The blame for this lies elsewhere—on even cheaper costs from overseas competition, particularly China and Southeast Asia, and the unwillingness of Americans to “buy American” because of low wages. While California’s taxes may have been high in a distant past, today its combined state and local sales tax rate of 10.66 percent on the dollar is barely above the national average of 10.43 percent.
The real problem with California is that it seems that most of its “policy-making” is done by voters in seemingly endless propositions with no thought to consequences; in the past, many of these propositions and initiatives seemed to advance positive goals in education, infrastructure, transportation and the environment. California’s progressiveness was the envy of the world, but all good things must to come to end, because you have to pay for these things. By the late 1970s, people in California decided they were not going to pay for these things anymore. Since then, California has been what Kevin O’Leary suggested it was over a year ago in Time magazine: “Direct democracy run amok, timid governors, partisan gridlock and a flawed constitution that have all contributed to budget chaos and people in pain.” Sounds like the state I’m living.
And, he addd:
“And at the root of California's misery lies Proposition 13.”
Proposition 13 actually had its roots in tax payers from wealthy school districts not wanting to help pay education costs in poor school districts, which came into play after the state Supreme Court ruled that disproportionate school funding violated the equal protection clause in the state constitution (I know this about this attitude; I had a letter in the Sacramento Bee way back when commenting on a well-off white section of a local school district that wanted to separate from the low-income minority section). The housing boom in the 1960s and 70s led to higher home prices and thus higher tax assessments, and a scandal involving tax assessors led to a law raising artificially-lowered assessments. This was just too much for people to handle at one time. So in 1978, a cranky old man named Howard Jarvis pushed through Proposition 13, which would set property tax rates at one percent from the base home price date of 1975; rates could only rise a maximum of 2 percent from the base rate per year. Even if home values rose dramatically, the tax would still be based on the original assessment, and could only change if the home value decreased, or the home was sold to a new owner. The net effect was that it was cheaper for homeowners to stay in their home instead of buying a new home; as bad as California’s real estate market is now, it always was bad in a fashion after the passage of Proposition 13. Property taxes remained largely stagnant, while public services costs skyrocketed. And something else had been snuck into 13’s nefarious shenanigans: A two-thirds majority in the state legislature was required to pass any form of tax increase.
Another problem for California now is that at one point 15 percent of its workforce was in the public service sector; government was the biggest employer in the state. With revenues plunging, it was inevitable that people in this sector would be thrown out of work. If businesses didn’t want to create jobs in the state, then the state needed to raise taxes and get into the business of job creation itself—like building or repairing infrastructure, improving quality of life, and building a world-class education system. In the past, this “liberal” agenda was what made California the most important state in union for decades after World War II. This was at least part of the idea behind the Obama stimulus package: Government spending creating work, which would create consumerism, which would in turn create additional jobs. This didn’t exactly happen, because too much of the stimulus package was tax cuts to appease conservatives. But because of the strictures of Proposition 13, it is virtually impossible for the state to stimulate job growth; we have already seen from the Bush example that tax cuts do not stimulate job growth.
Perhaps the sector that was hurt the most by Proposition 13 and its fallout was education; California once prided itself in providing as many people as possible the opportunity to achieve all they could be. Today, Arnold Schwarzenegger—who still doesn’t remember what he and Enron’s John Lay and stock fraudster Michael Milken discussed at that Peninsula Hotel meeting in 2001 (despite having a college “degree” through correspondence courses with the University of Wisconsin-Superior, probably because they offered lax standards for a “celebrity”)—has as his most lasting “legacy” in California the suspicion that he helped drive the state’s once proud university and community college system into permanent fiscal turmoil. Soaring tuitions—particularly for community colleges—and the cutting of state student aid turned out to have the effect not of increasing the level of non state-sourced funding (that is, out-of-pocket expenses), but reducing education revenue still further, since prospective students simply dropped-out, while higher education revenue needs continue to rise. The effect of fewer educational resources and educated people in the state has even longer-term consequences.
I recently read an op-ed online in a local paper called The Columbian (I think it has something to do with the Columbia River). California, the writer opines, didn’t understand that tax exemptions would keep jobs in the state; the writer, Don Brunell, neglected to mention that while companies like Boeing were getting billions in tax breaks to keep a handful of 787 jobs in-state, Boeing was laying off tens of thousands of workers elsewhere; the state essentially gave Boeing something for much less than nothing. Brunell went on to make some other rather bizarre claims, stating that tax exemptions on machinery beginning in 1995 added $81.5 billion to state coffers on $16.5 billion in increased income over a ten-year period. Whatever. All I know is that Washington’s fiscal position is in a shambles like everywhere else’s—even Texas’.
Some people like to point to Republican-run Texas as doing everything right to ensure economic stability and jobs. They must be doing something “right”: California pays far more in federal taxes than it receives back in federal dollars, while the opposite is true for Texas, in large part because of an over-abundance of military installations and the fact that the state spends so little on public services, especially health care, that the federal government has to step in and fill the gap; in fact, Texas receives more money from the federal government than any other state—10 percent of all federal money going to the states. The reality is that besides federal dollars for this welfare state, oil and natural gas jobs and profits are, for the time being, keeping the state afloat. Underneath all the happy talk is a rotting public services sector ill-equipped to deal with a real crisis. Texas is already near the bottom in many quality-of-life indicators: poverty is high, infant mortality is high, and the number of people without health insurance is infamously high. Statemaster.com places Texas 45th out of 50 states in its “livability” index. A CNBC “study” claimed that Texas was number one for business, but it came in at 29 on quality of life—failing to live up to CNBC’s claim that states with good business climates also have good quality of life (interestingly, while business leaders and Republicans in Washington state constantly claim that it has a poor business atmosphere, the same study ranks it the 15th best state to do business).
On top of that, the state faces a $25 billion budget deficit over the next two years—a huge chunk of the $95 billion projected budget. Of course, Republicans will not countenance tax increases, except those that harm the poor rather than their billionaire buddies.
Texas ballot initiatives also tend in the opposite direction of California’s; this past year, Republicans pushed referendums requiring all voters to show “valid” photo ID at any and all elections; enjoining Congress to “stimulate” the economy by cutting more taxes; acknowledging the existence of God at public functions; and requiring women to be shown sonograms before undergoing abortion procedures. All of these constitute the Republican notion of solving the state’s massive budget problems; not surprisingly, besides being designed to suppress Democratic votes or being simply nonsensical, they cost nothing and accomplish nothing.
Texas is a bad example for the country, perhaps even worse than California. What about North and South Dakota, which seem to have weathered the economic storm while maintaining low unemployment? Both states have populations under a million, for one thing. North Dakota has oil and is a leading producer of many varieties of foodstuffs, while South Dakota’s economy is buoyed by federal spending, which accounts for 10 percent of its GDP, by tourism to its national parks, and by the fact that the problems of Native Americans—who constitute the large majority of the state’s non-white population—are the federal government’s problem. It is also another low-tax, low-service state; its teachers are among the lowest paid in the country. The only lesson here is that being small and in an out-of-the-way place no one wants to go has its advantages if you run a state government on the cheap.
Wednesday, February 2, 2011
We've seen this story before
From Tunisia to Egypt to Yemen and possibly Jordan, the dominoes in the Middle East fall. The fear for the U.S. is whether this is about reform or revolution, such as what occurred in Iran, a country that remains a thorn in our side. There has been a great deal of talk about poverty being the cause of the unrest, and the breeding ground of militant Islam. But the idea that if everyone in the Islamic world were well-off and contented there would be little desire to rock the boat has little empirical evidence to support it. It should also be pointed out that Osama Bin Laden was the son of wealth, and most of the 9-11 hijackers were of well-off, educated stock. The idea that “democracy” would provide a safe outlet for letting off steam also ignores the fact that in the Middle East “democracy” has very little in common with the Western idea of it. Despite all the fanfare in Egypt, “mainstream” Islamic clergy are opposing true democracy because they perceive it as a threat to their authority. In Iran, the Islamic Revolutionary Council is the power behind the scenes, and its ridiculous president certainly is not allowed to make policy without first getting the council’s approval. In Iraq, if by chance the Shiite majority had eventually overthrown Saddam on its own, we would not have seen “democracy” but a country governed by a Shiite strongman, or an Islamic state built on the Iranian model.
The reality is that, like the French and Russian revolutions, what starts out as a “peasant” revolt ends as merely a transfer of power between elites, whether from the upper class or the educated middle class. After 9-11, Martin Kramer, as editor of the Middle East Quarterly, wrote:
“(Militant Islam) is the vehicle of counter-elites, people who, by virtue of education and/or income, are potential members of the elite, but who for some reason or another get excluded. Their education may lack some crucial prestige-conferring element; the sources of their wealth may be a bit tainted. Or they may just come from the wrong background. So while they are educated and wealthy, they have a grievance: their ambition is blocked, they cannot translate their socio-economic assets into political clout. Islamism is particularly useful to these people, in part because by its careful manipulation, it is possible to recruit a following among the poor, who make valuable foot-soldiers.”
We can extrapolate from this that many of the people who desire “change” in their government come from this same “stock.” We are almost certainly seeing this on the streets of Cairo, and to a lesser extent in Yemen, where most of the protesters (at least for now) seem to be older men from the intelligentsia. The powerlessness that many people on the street feel is also explained by repression at the hands of the police (secret or otherwise), who seem to be omnipresent in practically every facet of life, much like the Gestapo and the KGB. The riots in Egypt were given a face by Khaled Said, who was beaten to death by two undercover police officers in Alexandria; it seems that the U.S. is not the only country where “suspects” can be beaten indiscriminately, although in Egypt and other Arab states torture of political incorrigibles is also a frequent occurrence.
How much “change” can the Middle East stand? Iran is not really a true barometer, since it is a theocracy masquerading as a democracy. It remains to be seen whether Iraq’s fledgling democracy can stand long once U.S. forces leave altogether; with the return of Muqtada al-Sadr from his religious “sabbatical” in Iran and his known hostility to the present government, the potential for civil war and a change of regime cannot be excluded. But more to the point of this discussion, the U.S. tends to forget that it is dealing with people who have their own ideas and who may be completely hostile to the Western way of thinking; pushing the present government of Egypt toward change may appear to be only a cynical gesture in the eyes of Egyptians who blame the West—and the U.S. in particular—for propping up a repressive dictator. We’ve seen this story before.