Sunday, November 27, 2016

TV Review: Gilmore Girls: A Year in the Life

Writing a review of the Gilmore Girls mini-series “A Year in the Life” is a lot like writing a thoughtful, considerate review of the latest Marvel Comics movie; no one who cares about it will have their opinion influenced by a third party’s independent perspective.  But as a fan of the original series I have to put my two cents in, something Lorelai Gilmore would approve of.

The series Gilmore Girls has been off the air for nine years, and there is always the risk that the voices that animated the original have lost their edge in the intervening 9/10 of a decade.  No such problem exists here, however, as Amy Sherman-Palladino’s and Daniel Palladino’s distinctive dialogue comes spewing out of Lauren Graham’s mouth almost the moment the new mini-series starts.  For the most part the four 90-minute episodes track like four of the better episodes of the series, with the seams only occasionally showing where things were cobbled together to shoehorn in a cameo by a former regular.

The episodes pick up nine years after the run of the show, although time has stood still for most of the regulars.  Luke and Lorelai are still together, but not married; Emily Gilmore is still imperious, although stunned by the death of her husband (series regular Edward Herrmann died two years ago); Stars Hollow is still filled with kooks and weirdos; and Kirk is still unintelligible even by Stars Hollow standards.  Rory has followed her end-of-the-series assignment of reporting on promising Presidential candidate Barack Obama (whatever happened to him?) by adopting a rootless existence of freelance journalism, which has been rewarding but has left her wondering what city her underwear is in.

Each episode runs twice the length of an episode of the TV series, yet the pacing never flags.  Almost all the actors get back into their old characters with seeming effortlessness.  Liza Weil still manages to make Paris Geller both an abrasive know-it-all and an insecure little girl; one of the small pleasures of the mini-series is the amused look on Weil’s face in the background when Lane’s band Hep Alien plays a number.  Melissa McCarthy, who has had the most post-Gilmore success, not only shows up but recaptures Sookie’s inherent mix of super-competency and ineptness in the kitchen.

The best decision was to use some meta-references to guide the plot.  Some of them are throwaways, like making Danny Strong’s character Doyle a successful screenwriter (Strong won an Emmy for writing Game Change and co-created Empire).  But the main thread of the four episodes is triggered when Rory’s old boyfriend Jess suggests she write a book about her relationship with her mother.  She of course names the book “The Gilmore Girls” (which prompts Lorelai to give her advice from The Social Network to “drop the ‘the’”).  This plot line eventually leads to the famous final four words that Sherman-Palladino promised would end the show, which force a reconsideration of several plot points that had come before.

There are some legitimate criticisms.  One hates to tell an auteur what her creation would or wouldn’t do, but having Lorelai decide to work out her problems by hiking the Pacific Crest Trail (like in the BOOK Wild, not the MOVIE) is just plain absurd.  A 30-year-old Lorelai wouldn’t have considered hiking the 2,650-mile trail; the idea that a nearly 50-year-old Lorelai would buy a backpack and hit the trail is ridiculous.  The structure of fitting the episodes into four seasons causes some timing problems, as clearly the events at the beginning of Fall follow immediately upon the end of Summer.  And one has to wonder if Rory would really have a nine-year-long “friends with benefits” relationship with ex-boyfriend Logan even after he was engaged to be married.

And I couldn’t help but yell at my TV when Lorelai said to Luke, “I feel like we should already be married.”  Yes, because you had the wedding planned and he backed out because he felt you were rushing into marriage after nine years!  After which Lorelai promptly slept with her ex-boyfriend and married him.  So yeah, you should have been married ten years ago.

My biggest complaint is that Sherman-Palladino has, for some reason, decided that Michel is gay.  For years I have been arguing that Michel was not gay, he was merely French.  I was pleased to see that one of the writers at AV Club referred to this as “retconning.”  There is also an odd scene where a town member tries to get busybody Taylor Doose to out himself, which was legitimate but still a little creepy.

The acting is of course first rate, and one hopes that being in the mini-series/movie category will net a long overdue Emmy nomination for Graham.  Even more deserving is Kelly Bishop, who was never anything short of brilliant as Lorelai’s always correct mother, Emily Gilmore.  Her journey through these four episodes brings out multiple dimensions in the character that were always there but never allowed to surface.  Her last scene in the final episode, Fall, was one of the emotional high points of the series.

While the mini-series ends with the famous Final Four Words, the door is left open a crack for more Gilmore Girls.  If nothing else, the long look that Jess gives Rory after assuring his uncle Luke that he was over her could lead to something hoped for by Rory/Jess shippers.


Frankly, I never was a Luke/Lorelai shipper—I think they are a murder-suicide waiting to happen.  I may be the world’s only Lorelai/Digger shipper (she dumped him because he sued her father after her father cheated him in a business deal, saying she couldn’t date someone who was suing her family; that made no sense at all).  Be that as it may, Lorelai Gilmore will always be Lauren Graham’s greatest creation, and the same goes for Alexis Bledel and Rory Gilmore.  Thanks, Netflix, for giving some closure to a great series that deserved better than the half-assed season 7 it was forced to end with.

Wednesday, November 23, 2016

Will the US Men's Soccer team have to wait longer than the Cubs to win?

If you know me, I eat, drink and breathe soccer.  Nothing gives me greater joy than the prospect of a 0-0 tie between Man U and Liverpool being decided on penalty kicks.

In case you can’t detect sarcasm, that last paragraph is the final proof that we live in a “post-truth” world, as Oxford Dictionaries determined this month.

I don’t know much about soccer, but I do know that an industrialized colossus with a population of 320 million people should not lose to a tiny third world nation with a population of 4.5 million by a score of 4-0.  With a 71.1 to 1 ratio in population, the US should be able to field a pick-up team at a mid-sized college that would be up three goals by halftime against the likes of Costa Rica.  So, I was not taken aback when US Men’s Soccer decided to fire Coach Jurgen Klinsmann after five years.

According to FiveThirtyEight.com, Klinsmann took over a US team ranked 34th in the world and drove them to a ranking of 33rd in the world.  Way to go, Jurgen.  Few things annoy me more than delusional sports figures claiming they will win the Super Bowl or World Series when they will be lucky to make the playoffs, but I still thought Klinsmann was an idiot for announcing before the 2014 World Cup that the US had no chance to win.  If you are planning on losing, why play the game?  You don’t have to predict victory, but say the team is coming together nicely, you think they’ll surprise some people, and determining who the better team is is why we play the game.  Telling your team they are doomed before the tournament starts will not earn you comparisons to Vince Lombardi (note to Jurgen—he was a coach of American football who, unlike you, was successful).

So, why does America suck at soccer?  That’s easy—we don’t have any good players.  It is really hard to win when all your players are less good than all of the other teams’ players.  And why are our players less good?  As Deep Throat said, “Follow the money.”

According to a comparative analysis of pro sport salaries, the median MLS player makes $117,000, less than one-tenth of the median salary in baseball ($1.5 million), basketball ($2.85 million) or even hockey ($1.48 million).  It is necessary to look at median salaries, not averages, as salaries in the MLS are highly skewed towards massively rewarding big name, over-the-hill foreign players like David Beckham, which makes the average look higher than what a mid-level player actually makes.
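The median-versus-mean point is easy to see with toy numbers.  A quick sketch (the figures below are purely illustrative, not actual MLS salary data) shows how a single Beckham-sized contract drags the average far above what the typical player earns:

```python
from statistics import mean, median

# Hypothetical roster: ten rank-and-file salaries plus one marquee
# designated-player contract (illustrative numbers, not real MLS data).
salaries = [60_000] * 5 + [120_000] * 5 + [6_500_000]

# The median ignores the outlier; the mean is dragged upward by it.
print(f"median: ${median(salaries):,.0f}")  # $120,000
print(f"mean:   ${mean(salaries):,.0f}")    # $672,727
```

One outlier is enough to push the mean to more than five times the median, which is why the median is the honest measure of a mid-level player’s paycheck.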

So, you are a young man with athletic skill—are you going to gravitate to baseball (minimum salary $500,000) or soccer (minimum $51,492)?  Let’s ask the same question of a young woman—her choices are soccer or tennis, unless she wants to go for Olympic medals in ice skating or gymnastics.  This partly explains why the US Women’s soccer team is one of the dominant programs in the world and the US Men’s program, well, isn’t.

I can’t say it is terribly encouraging that the US Men’s team rushed into re-hiring Bruce Arena, whose main qualification for leading the Men’s Soccer team is that he did it before and failed.  Well, failed is kind of harsh; he took the team to its best World Cup finish in 2002.  But he isn’t exactly new blood or a fresh perspective.

There was no need to rush into this hire.  The next match won’t be until March, so the USMNT could have taken some time, a week or two at least, and seen if there was another younger, more innovative candidate.  But after the tenure of Jurgen Klinsmann, maybe they wanted comfort food after a serving of that heavy German cooking.  Arena’s meat and potatoes won’t get the US to the World Cup finals in 2018, or 2022, or . . . well, it’s a long century.


But if the Cubs could wait 108 years to win a championship, then by gum so can the USMNT!

Wednesday, November 16, 2016

So that happened

Dissections of the outcome of the 2016 Presidential election will be taking place until long after the 2020 election.  Despite the perception that Trump’s win was a bolt from the blue, FiveThirtyEight gave Trump a reasonable chance even as they said that there was a 66% chance of a Clinton victory.  If the weatherman says there is a 70% chance of fair weather, is it fair to get mad if it rains on your picnic?  He didn’t say rain was impossible, just unlikely.

Speculating about the future is probably foolish, as no elected President has been as enigmatic as Trump is.  Is he actually going to start construction on a $5 trillion wall between the US and Mexico and send Mexico a bill, or will saner heads prevail?  As John Oliver said on Last Week Tonight, there are two terrifying possibilities: either he will try to implement the policies he advocated for in the election (cancelling international trade agreements, tax policies that will greatly increase the deficit, and so on), or everything he said during the election was just campaign rhetoric.  Those who voted for him are hoping for the former, while all the sane people in America are praying for the latter.

Let’s get down to brass tacks—who is responsible for the election result?  I place the blame at the feet of Hillary Clinton.  In the past, when parties anointed their candidate for the next Presidential election four or eight years in advance, it hasn’t worked out well.  The GOP nominations of Bob Dole in 1996 and John McCain in 2008 produced weak candidates who underperformed.  The path was cleared for Hillary to get the nomination in 2016 after her loss to Obama in 2008, although I would fall short of repeating Bernie Sanders’ line that the contest was fixed (as opposed to “rigged”).  You don’t need to pre-determine the outcome if you can tilt the playing field and make Sanders run uphill.

Hillary knew she had the nomination assured based on her support by “superdelegates” representing the Democratic Party elite.  Therefore, she felt no urgency to defeat Bernie Sanders, as no matter how many caucuses he won he was never going to wrest the nomination away from her.  Her risk-averse instincts kicked in, and she made no effort to attack Bernie and alienate the left wing of the party.  This allowed Bernie to run an extended campaign against her, eroding her support on the left.  She should have been able to easily defeat an elderly independent crackpot socialist, but her inability to make Sanders go away signaled her weakness as a candidate.

Similarly, I think she assumed she had the election in the bag once Trump secured the GOP nomination.  Her TV ads that I saw were overwhelmingly about painting a negative view of Trump and not about following up on the excellent message of the Democratic convention that presented the positive aspects of the Democratic policies.  Her pointing out that Trump was a nut was ineffective because the majority of Trump supporters were going to vote for him because he was a nut.  This was information readily available to all voters; she was not telling voters anything they didn’t already know.

Two of Hillary’s defining characteristics are her penchant for secrecy and her risk-aversion.  Both betrayed her.  Instead of coming clean about the e-mail mess, she stonewalled and did not respond to legitimate questions in an open manner.  She defended herself using words that even someone without a Yale law degree could tell were prevarications and obfuscations.  Instead of developing policies she believed in that might offend some of her constituency (particularly the left), it was easier and “safer” to portray Trump as a dangerous demagogue.

My favorite line from the George C. Scott version of A Christmas Carol is when Scrooge’s fiancée asks if he would still ask her to marry him now that he had acquired some wealth, and he strategically replies, “You think I would not?”  His fiancée chillingly says, “Oh, what a safe and terrible answer!”  Clinton played it safe, and her answers were terrible.

After Hillary, I also place some blame with President Obama.  Throughout his administration, he was attacked as a Muslim, as a non-American, and Trump said he was the founder of a terrorist group attacking the United States (Trump also insisted that Hillary Clinton was a co-founder, although how a Caucasian woman in her 60’s who was a life-long Christian managed to become the leader of an Islamic terrorist organization was never explained).  Obama responded to these attacks by attempting to work with the Republicans and treating them as rational human beings.  This merely legitimized the more insane accusations of the GOP, providing them with credibility.  His support for Republican calls for budget cuts at a time of high unemployment was a betrayal of the legacy of FDR and Keynesian economics.

Where do we go from here?  A year ago many speculated on the death of the Republican party, with the split between the Establishment branch and the Tea Party branch; now many are predicting the end of the Democrats, who have a weak bench for 2020 given GOP control of Congress and a majority of state houses (when speakers at the Republican convention were describing the terrible state of America, they didn’t mention that the GOP controls most of America and is therefore responsible for most of the problems).  The fact remains that no non-incumbent Republican nominee for President has won the popular vote since 1980, yet the GOP will control all three branches of government once Trump fills Scalia’s seat on the Supreme Court.


Can Trump govern effectively?  No one on Earth can say.  Can the Democrats find a candidate who can make the case for Democratic policies, and how attractive will that message be after four years of Trump?  Let’s just hope that there is still an America existing in 2020 after The Donald gets the nuclear launch codes.  It would be a bad day indeed if he calls a press conference and announces, “They’re fired!”

Wednesday, November 9, 2016

The NFL MVP debate

One reason why the debate over who should win the baseball MVP awards is so interesting is the vagueness of the definition of “valuable.”  If the award were for the “numerically superior player” it would still be capable of being debated, but the arguments would be limited to how many significant digits to use when calculating a player’s WAR.  But value is in the eye of the beholder.

I am a fairly hard-core believer that if your team didn’t make the playoffs, then you weren’t valuable.  Maybe back when only one team from each league made the post-season you could occasionally make an argument for a player on a team that fell a little short, but now that five out of fifteen teams get a post-season berth it’s a harder argument to make.  OK, I will begrudgingly concede that Bryce Harper deserved the award last year, because a) he had an historically phenomenal season and b) no one player on an NL playoff team stood out.  One of the ironies of my position is that really good teams rarely have an MVP candidate because they don’t rely on a single player.  Mike Trout may be the greatest player in the AL, but as long as the Angels are below .500 he isn’t getting my MVP vote.

There is usually less of a debate over the NFL MVP decision than for baseball or basketball.  For some reason, the fact that the NFL only gives out one MVP award, instead of one for each conference, winnows the field significantly.  Last season was typical: the team with the best record was the Panthers at 15-1 (thanks to an extremely easy schedule) and that team’s best player, Cam Newton, had a great year.  Debate over.

This year may be different, because of what may be the most fascinating factor in an MVP debate I’ve ever seen.  The question is this: can Tom Brady win the MVP award after missing one-fourth of the season due to his Deflategate suspension?  ESPN’s Bill Barnwell has already made that call at mid-season.

ESPN’s Max Kellerman disagrees, arguing that missing four games means he’s missed too much of the season to make an MVP contribution.  He also argues that since the Pats were 3-1 without Brady, the marginal value Brady provides isn’t that great.  Former player Reggie Wayne adds the perspective that Brady’s missed games are due not to an injury but to a penalty for Deflategate, which should further cause him to forfeit a shot at the MVP award.

The marginal value argument is interesting.  In 2015 Yoenis Cespedes joined the New York Mets at the end of July; before that, the Mets were last in NL offense, but by the end of the year they were #1 in offense.  So Cespedes had a huge marginal impact on the Mets, one that most likely got them into the playoffs.  There was an argument to be made that he should be MVP despite playing for most of the season in the American League.

MVP voters didn’t buy it, with Cespedes coming in 13th in the MVP voting.  Obviously not contributing to the Mets from April through July hurt his consideration, even if he was largely responsible for the team’s playoff push in August and September.  So does this mean that Brady shouldn’t be considered for the NFL MVP because he missed 25% of the season?

Baseball players play on a team, but they rack up individual stats on their own.  Football players are cogs in a complicated machine.  I think slipping Tom Brady into the Patriot’s machine contributes more value than a baseball player compiling excellent stats for two months.  So I’d give a football player who misses a large chunk of the season more MVP consideration than a baseball player in similar circumstances.

I also reject the argument that since he missed the games due to suspension, not injury, Brady should be disqualified.  I have been a huge Brady critic on Deflategate (note: he’s guilty, guilty, guilty!) but his punishment was a four-game suspension that was imposed (thanks to Brady’s trying to weasel out of his punishment) nearly two years after his infraction.  The penalty was the suspension; the Commissioner made no reference to ineligibility for post-season awards, and the rules make no mention of it either.  So I see no reason to treat the suspension differently than a four-game injury.

So, it comes down to this: is 75% of Tom Brady better than 100% of Matt Ryan, or Matt Stafford, or Derek Carr?  The answer is “Oh hell yes!”  Brady and Bill Belichick are on a scorched earth campaign, and they are rolling over the league like a cleansing plague.  Brady is like Doctor David Banner after he gets angry, and you wouldn’t like him when he’s angry.

The best argument against Brady’s MVP credentials is that the Patriots went 3-1 without him.  Obviously, with Belichick as coach, New England could put a leftover Halloween pumpkin at QB and still compete for the AFC East title.  But that does not diminish what Brady has done, and what he presumably will do over the next eight games.  If he goes 11-1, or even 10-2, over his 12 games and continues to put up excellent passing numbers, he will have earned the MVP award.


Hey, does Roger Goodell have to hand the award to him personally?  I might just tune in for that.

Friday, November 4, 2016

Whiteface/Blackface/Yellowface

Doctor Strange makes its cinematic debut this week, and almost forgotten is the brief flurry of complaint that arose when it was revealed that the role of The Ancient One, who was an Asian man in the comic (excuse me, graphic novel), was being played by Tilda Swinton, a Caucasian woman (albeit a rather ethereal looking one).  This followed on the heels of the outrage over the casting of Emma Stone as an Asian character in Cameron Crowe’s film Aloha, a controversy that surely would have been bigger if anyone had actually paid to see that movie.

There is something to be said about Hollywood’s habit of casting White actors in minority roles.  Maybe you could somewhat justify it decades ago, when minority actors were rare, but in this day and age it shouldn’t be too hard for Cameron Crowe to find an Asian actress to play an Asian character (if I understand Cameron Crowe’s defense, he said the character was based on a friend of his who was half-Asian and did, in fact, look like Emma Stone).

So, can we all agree that when it comes to films and TV shows, Hollywood should cast ethnically appropriate actors?  Not so fast.  Look at the casting of Tilda Swinton as The Ancient One.  The role was conceived for an Asian man, but it was reimagined as a raceless/sexless entity.  Swinton’s appearance is such that she once starred in a film, Orlando, where she played one character as both a man AND a woman.  Here the filmmakers took liberties with the source material to get something a little less Earth-bound.

I find it hard to completely dismiss all attempts at cross-racial casting.  It is the nature of actors, especially great ones, to stretch their craft, which includes playing characters of other ethnicities.  If Marlon Brando wants to play an Asian in Teahouse of the August Moon, who is to say he shouldn’t?  Robert Downey Jr. got an Oscar nomination for Tropic Thunder, where he played an Australian actor playing an African-American character.  So, which is worse, an American actor playing an Australian actor, or the character of an Australian actor playing an African-American character?

Of course, some line must be drawn.  It is one thing for Brando to ineffectively play an Asian character, but quite another for Mickey Rooney to play a grotesque racial stereotype in Breakfast at Tiffany’s.  The problem with Rooney’s performance isn’t that he is a White man playing an Asian, it is that a mediocre actor is playing a one-dimensional stereotype.

Mickey Rooney playing an Asian is one thing, but German actor Peter Lorre playing Japanese sleuth Mr. Moto is quite another.  In the eight Mr. Moto films Lorre’s presentation is subtle, multi-dimensional, and wholly respectful towards the race he is portraying.  While not in Lorre’s class as an actor, Swedish actor Warner Oland also treated the character of Charlie Chan with respect; Lorre and Oland were giving sympathetic performances of Asian characters at a time when virtually all portrayals were uniformly negative (I would recommend the book Charlie Chan by Yunte Huang for an examination of cultural attitudes towards Asian characters in the 1920s and ’30s).

It is also unfair to impose modern ideas of casting on previous eras.  Anthony Quinn, who was from Mexico, made a career out of playing a variety of ethnicities, including Greeks (Zorba the Greek, The Greek Tycoon), Mexicans (Viva Zapata!), Arabs (Lawrence of Arabia), and Italians (La Strada, The Secret of Santa Vittoria), to name a few.  Maybe these films should have been made with authentic ethnic performers, but for many years the standard procedure was to hire someone with dark hair to play any swarthy ethnic character.  Maybe this casting policy only made the roles available to ethnic actors scarcer, but the reality is there were fewer options when casting ethnic roles.

And what of mixed-race characters?  When Miss Saigon came across the Pond, Actor’s Equity wouldn’t approve the casting of Jonathan Pryce as The Engineer, who was described as half-Vietnamese and half-French, insisting that the role go to an Asian actor.  The producers argued that the role was described as half-Caucasian, so why couldn’t it be played by Pryce, who had won awards in London and subsequently won the Tony Award for Best Actor in a Musical?

There is another question—if you want to say that all roles should be cast in a racially appropriate manner, then where do you draw the line?  There was a minor kerfuffle over the casting of John Cho, who is of Korean ancestry, in the role of Hikaru Sulu, a Japanese character in Star Trek.  I seem to recall there was a proposal to make a movie based on a Tony Hillerman novel with actor Graham Greene as Joe Leaphorn, a truly inspired casting choice, but the project went nowhere after protests due to the fact that Greene, while Native American (well, Native Canadian), wasn’t Navajo like Leaphorn.

Are we going to restrict actors to only playing characters that match up with their genetic heritage?  If your grandparents emigrated from Scotland, you can’t play someone who’s Irish?  I don’t think there should be a hard and fast rule, but it should be treated as a factor in the casting decision.  If Mickey Rooney wants to play a buck-toothed caricature of a Japanese person, don’t cast him.  If Peter Lorre wants to play a Japanese character with dignity and humanity, put him in the movie.

It reminds me of The Simpsons episode where Bart ran for class president and his opponent said “There are no easy answers,” to which Bart replied, “We want easy answers!  We want easy answers!”  We want straight-forward rules regarding race and identity, but there just aren’t any.


Thursday, November 3, 2016

To cheat or not to cheat

One thing everyone is taught as a child is not to cheat.  Winners never cheat, and cheaters never win.  There is also an old saying: “If you ain’t cheatin’, then you ain’t tryin’.”  I wonder if that statement is engraved on Bill Belichick’s office wall.

Someone should tell the Oakland Raiders that cheaters never win, because they cheated 23 times last Sunday and won.  This is nothing new; in the 1992 Simpsons episode “Lisa the Greek,” Lisa explains that she’s picking the Raiders to win because they cheat, and sure enough the Raiders win on what the announcer describes as a “very suspicious play.”  Cheating is in the Raiders’ DNA.

Of the three major American sports (sorry soccer, you aren’t there yet), only baseball has a strict “no cheating” policy.  If you are caught cheating in the smallest detail, what you gained is taken away.  Anyone who has seen George Brett’s reaction to having a home run disqualified because his bat had too much pine tar on it knows that enforcing the rules does not always make sense (the ruling was sensibly overturned and they replayed the rest of the game with the home run allowed).

This sense of morality gets warped a lot when talking about baseball.  Just as there is no rule against stealing bases, there is also no rule against stealing signs.  You don’t want the runner on second tipping off the batter?  Develop a more sophisticated signaling system.  Yet announcers will describe suspected peekers with barely disguised disdain in their voices.  It’s not exactly cheating, but this also colors the demand for instant replay and the micro-examination of every close call; we can’t allow the runner a base if he was out by the tiniest fraction of an inch, because that’s the rule.

Basketball and football are different.  There are violations, there are penalties, and sometimes the gain from committing the violation is worth the penalty.  It is illegal to foul an opponent in basketball, but if the opposing player has a free throw shooting percentage under 50%, go ahead and hack him.  In the NFL holding is illegal, but if holding is the only way to prevent your quarterback from being blindsided, hold away.  Baseball attempts to eradicate the violation; football and basketball impose a penalty that may, or may not, discourage such behavior.

I wrote not long ago about how football penalties should be completely re-examined.  Yardage penalties established in the days of “three yards and a cloud of dust” offenses may not be appropriate to today’s high-octane pass-oriented offenses.  Offenses pass so much that maybe pass interference should be 15 yards instead of putting the ball at the spot of the foul, assuming the pass would have been caught but for the interference.  Maybe holding is so ubiquitous it should be a five yard penalty instead of ten; or maybe it is so ubiquitous the penalty should be twenty yards.

Where this has become critical is the protection of the quarterback in the NFL.  The health of a team’s quarterback is the number one determinant of whether that team’s season is a success or whether it will have a high draft pick.  With new concussion protocols now in place, any blow to the head might take a QB out of the game.  Alex Smith of the Chiefs was knocked out of a game with a concussion sustained when a defender pushed his head into the turf.  Cam Newton complained about a shot taken at his knees that looked like it should have been flagged; the NFL’s response created some new questions.

The NFL and NBA must ask some tough questions, like are penalties supposed to eliminate cheating or merely discourage it? Should the penalty for fouling an opponent be increased, or should poor free throw shooters simply practice more until they are better?  How can you discourage cheating when the gain from injuring an opposing player is clearly greater than the loss of a few yards, or the ejection of a player from one game?

Baseball essentially has a zero-tolerance policy on cheating.  How close do football and basketball want to come to emulating it?


Cubs win! Cubs win! Now what?

We now live in a world where the Cubs are champions.

What will replace the Cubs as the epitome of futility? What other Sisyphean entity exists, now that the baseball fans of Chicago are no longer living under a 108-year-old curse?  Pointing to the Indians’ World Series drought seems cruel, and 68 (excuse me, 69) years of futility isn’t nearly as bad as 108; besides, the Indians made it to the World Series twice in the 1990’s, so they remember some good times. 

Game Seven of the 2016 World Series will go down in history.  It didn’t have the walk-off charisma of Mazeroski’s blast in 1960, or the sustained tension of the 1991 Game Seven where Jack Morris pitched 10 innings of shutout ball for the Twins until his team finally scored in the bottom of the 10th.  But a back-and-forth game that went extra innings, had a rain delay, and left the losing team just one run short is the stuff of legends.

One thing winning did was possibly let Cubs manager Joe Maddon off the hook for his most controversial decision: using closer Aroldis Chapman for 20 pitches in Game Six despite having a five-run lead.  Critics said it would make Chapman less effective in Game Seven, and he gave up hits to the first three batters he faced, including a game-tying two-run homer.  If the Cubs hadn’t come back to win the game, Maddon would have been vilified in the sports media.  Maddon’s continued reliance on Chapman was a sure sign that he had no faith whatsoever in his other relievers, so the Cubs have some work to do in the off-season.  You can’t prove that Chapman’s ineffectiveness was the result of his stint in Game Six, but the dots must be connected.  Maddon also had second baseman Javy Baez try to bunt with two strikes and a runner on third, something you only do with a batter who knows how to bunt.

I thought Indians’ manager Terry Francona made a mistake by walking Anthony Rizzo to pitch to Ben Zobrist in the tenth inning.  The logic seemed irrefutable: with the go-ahead runner on second and first base open, walking Rizzo creates a force at every base, and the only run that mattered was the go-ahead run.  That is baseball managing 101; plus, Bryan Shaw is, I believe, a groundball pitcher.  The upside is obvious, but the downside is subtle.  Putting another runner on base gives the pitcher less room to maneuver, forcing him to be that much more careful.  And if, as happened, the batter gets an extra-base hit, you are looking at a multi-run deficit in the bottom of the tenth instead of a one-run deficit.  Plus, Zobrist was hitting the bejeezus out of the ball.  Playing the odds to reduce the chance of a single run scoring while increasing the chance of a multi-run inning works in the bottom of an inning in a tied game, but not the top.

I picked the Cubs to win Game Seven, mainly because I felt that Francona was going to the well once too often with Corey Kluber on short rest.  Yes, Kluber did well in Game Four, but generally speaking World Series pitchers on short rest have a significantly higher ERA.  We love the narrative of the superhuman pitcher willing his body to perform despite inadequate rest, but for every time that ploy works there are other times when it blows up in the manager’s face (Matt Harvey of the Mets insisting on finishing a complete game last year).  Game Seven was like that for Francona.

Let’s face it; by Game Seven it was clear that neither manager had much faith in anyone in their bullpens except their closers.  The Series became a war of attrition where both managers refused to use their mediocre relievers as cannon fodder.  Maddon stuck with Chapman when he was clearly ineffective from overuse (because Maddon didn’t trust the rest of his staff to preserve a five-run lead), and Francona expected Andrew Miller to be lights out even after the Cubs had seen him enough to start figuring out what he was throwing. 

The 2016 Cubs are a super team, or as ESPN personality Tony Kornheiser has been calling them all year, “The ’27 Yankees.”  Maybe not quite, but close.  They won 103 games, a feat achieved only about four times a decade, and given their run differential and lack of “clusterluck” they were the equivalent of a 110-win team.  The only other super teams I can recall since I started following baseball are the 1975-76 Reds and the 1998 Yankees. 

I am not including the 2001 Mariners, who won 116 games but lost in the playoffs; as we learned from the Warriors last season, no matter how many games you win, if you lose in the playoffs you’re nothing.  I should also mention the 1970-71 Orioles, who slightly pre-date my capacity to follow baseball.  The 1970 squad had three 20-game winners and the 1971 squad had four, plus great hitters like Frank Robinson and Boog Powell, great fielders like Brooks Robinson and Paul Blair, and my pick for best manager of all time, Earl Weaver.


Will the Cubs and Indians give us a sequel in next year’s World Series?  Unlikely.  But for the first time in over a century, Chicago Cub fans can hopefully say, “Wait ‘til next year!”

Wednesday, November 2, 2016

Home field and the World Series

I have blogged before about the problem of one major league having the DH and the other not.  Now that inter-league play is year-round, the mismatch skews those contests, putting American League teams at a disadvantage when playing in a National League park.  It also plays havoc with the World Series, making home field advantage crucial.  Well, most of the time; this year the Cubs actually have an advantage playing games in Cleveland, where they can use the injured Kyle Schwarber as a DH.

I watch a lot of talking heads on ESPN, and many of them have railed at the inequity of determining home field for the World Series by the winner of the All-Star Game.  I’ll grant some of the injustice of basing home field on the play of players, most of whom play for teams with no chance of making the post-season.  But the managers are from last season’s pennant winners, so they usually have a chance of going back and can manage the game accordingly.  Of course, this might mean not playing some of the marginal all-stars, but we are long past the days when Lefty Gomez pitched six innings in an All-Star Game.  So most players will still get to play.

I find using the All-Star Game to determine home field in the World Series defensible, but lots of people have a problem with it.  So what’s the solution?  One bad idea often floated is returning to the format used before Bud Selig decided the All-Star Game needed to “mean something”: the NL hosts one year and the AL the next.  This seems equitable, but I don’t find it satisfactory.  If the claim is that the All-Star Game’s outcome is random, how is basing the decision on numerology any better?  Let’s see: the number of the year is evenly divisible by 2, therefore the NL should host the Fall Classic.  That’s nuts.

A solution that attempts to base the decision on merit, but ultimately fails, is the notion that the team with the better record should be the home team.  This sounds reasonable, but the team with the better record isn’t necessarily the better team; it may simply be the team that played weaker opponents.  There is one other criticism: the team with the better record wouldn’t be known until the end of the season, and reserving potential stadiums can’t always wait until the last minute.  I’m sure they could work around this, but knowing in mid-July that the AL will host the World Series helps planners more than finding out in early October.

There is one potentially reasonable solution, and that is to use the outcome of inter-league play for all teams.  This also has the problem of not being known until the end of the season, so maybe use the results of inter-league play as of September 1st.  It’s not perfect, and given how the schedule works, maybe the weaker teams in one league are playing the stronger teams in the other, but these things even out over time (which is no solace to the team playing four games in the other team’s park come October). 

But I still think this is a solution in search of a problem.  To a great team, home field advantage in a seven-game series shouldn’t be that big of a deal.  And, as I said above, playing by the AL rules actually helps the Cubs this year.  Hey, there are a bunch of new stats coming out every day; maybe home field should go to the team with the highest average exit velocity on batted balls!
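To put a rough number on that claim, here is a back-of-the-envelope sketch.  The 55% home and 50% road win rates are illustrative assumptions, not real data; the point is only to see how much hosting four games instead of three in the 2-3-2 format moves the needle in a best-of-seven.

```python
from itertools import product

def series_win_prob(p_home, p_away, home_games):
    """Exact probability that a team wins a best-of-seven series,
    given its win probability at home and on the road, and the set
    of game numbers (1-7) it hosts.

    Trick: a team wins the series if and only if it would win at
    least 4 of all 7 games, even the ones never actually played,
    so we can simply enumerate every possible 7-game outcome.
    """
    total = 0.0
    for outcome in product([1, 0], repeat=7):
        if sum(outcome) < 4:
            continue  # fewer than 4 wins means the other team took the series
        prob = 1.0
        for game, won in enumerate(outcome, start=1):
            p = p_home if game in home_games else p_away
            prob *= p if won else (1.0 - p)
        total += prob
    return total

# 2-3-2 format: the team with home-field advantage hosts games 1, 2, 6, 7;
# the other team hosts games 3, 4, 5.
with_adv = series_win_prob(0.55, 0.50, {1, 2, 6, 7})
without_adv = series_win_prob(0.55, 0.50, {3, 4, 5})
print(f"4 home games: {with_adv:.3f}, 3 home games: {without_adv:.3f}")
# → 4 home games: 0.562, 3 home games: 0.547
```

Under those assumptions, the extra home game shifts the series win probability by only about a point and a half, which supports the idea that home field in a seven-game series shouldn’t be that big of a deal.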

So, let’s stop worrying about home field advantage in the playoffs and start worrying about important things, like how mankind will go on without Vin Scully calling Dodger baseball games.



Sequels: bad money after good

The fine folks who lead movie studios and TV networks have an odd existence.  They are constantly trying to use their personal opinions and aesthetic criteria to decide what would be a profitable movie, to develop some sort of formula for success.  Once upon a time Fred Silverman was called “The Man With the Golden Gut” for his seemingly unerring intuition in picking hit TV shows at CBS and ABC.  He then took over NBC, green-lit projects like Supertrain and Pink Lady and Jeff, and suddenly his gut was less than golden.

One reliable arrow in any movie studio’s quiver is the sequel.  There is no guesswork, no speculation required.  If people loved actor X and actress Y in a comedy set in Paris about parrot smuggling, then they will flock to a sequel with the same two performers in a comedy set in Rome about sick cockatoos.  Studios now plan for built-in sequels, designing tentpole movies that set up the next film in the sequence, then the next, and so on until the studio’s roster is filled through 2028.

Hold on, buckaroos.  As the Hollywood Reporter pointed out, sequels have had a poor track record of late.  Inferno is not recreating the financial success of The Da Vinci Code, or even the more modest success of Angels and Demons (which contains one of my all-time favorite movie lines, “Thank God, the symbologist is here.”).  Bridget Jones’s Baby also tanked, as did the “Why on Earth did they make a sequel of that?” movie Jack Reacher: Never Go Back.  There are exceptions: horror films seem to be able to put an infinite number of numbers after their titles, and Tyler Perry can churn out a money-making Madea film whenever he needs a few more million dollars.

Reboots are also on shaky ground.  The Magnificent Seven had an all-star cast but won’t be remembered as well as its 1960 predecessor, or 1954’s Seven Samurai.  For some bizarre reason someone thought re-making the beloved 1984 comedy Ghostbusters was a good idea, and then didn’t understand why audiences stayed away.  Note: I didn’t stay away because of the women; I didn’t go because the original Ghostbusters is an impossible-to-duplicate classic, and no amount of stunt casting is going to improve on Bill Murray’s improv skills.  Again, there are exceptions: the Coen Brothers’ remake of True Grit was generally hailed as an improvement on the Oscar-winning original.

So the easy temptation is to draw broad conclusions: stop making sequels and reboots, start making more original films.  Geez, people have been telling Hollywood that for decades.  Remember when Gen Xers started taking over studios in the late 1980s and 1990s and producing movies based on beloved childhood series like Dragnet (1987), Car 54, Where Are You? (1994) and Sgt. Bilko (1996)?  That trend has lessened, although there have been recent big-screen adaptations of Get Smart and 21 Jump Street (but both of those were parodies). 

The Hollywood Reporter article linked to above contained an important warning: don’t make sequels no one is asking for.  Was there a groundswell of sentiment for Tom Hanks to make another Robert Langdon film?  Why follow the mediocrity (and horrifically bad casting) of Tom Cruise’s Jack Reacher with Jack Reacher: Even More Violence?  Who watched the original Ghostbusters on DVD and thought, “Yeah, we can make it funnier than Bill Murray, Dan Aykroyd and Harold Ramis”?

As I said, one trend is to assume that a sequel will be made before making a movie in the first place.  Marvel has announced a multi-year series of inter-related movie projects.  That’s great, but what if one flops?  Or people start getting tired of these characters, or disapprove of the direction the series is going?  It’s like a TV series planning season-five plot twists before even getting a full season-one order.  The creators of Lost swore they had a six-year plan, but the fact is that they initially cast Michael Emerson for only three episodes and his character ended up driving the final three seasons.  You have to stay flexible.

I always recall the story I read about Christopher Nolan’s movie Inception.  When it was announced, everyone said the studio was just doing it to keep the director of the Batman franchise happy, and that otherwise no one would finance a film that wasn’t a sequel, remake, or reboot.  When early buzz was positive, the detractors said that it might be good, but an original film not based on a comic book was still a financial risk.  When it became one of the highest-grossing films of the year, the detractors said, “Okay, let’s see him do it again.”


Hollywood will always give “proven” projects like sequels, remakes and reboots a priority.  If one succeeds, you’re a genius; if it fails, blame the marketing department, because Tom Hanks movies should sell themselves.  Hollywood should remember that before you have a sequel, you have to have a successful film.  That’s the hardest part about making money in the entertainment industry; you must have a good, original idea in the first place.