Here’s that Google self-driving cars promo video that’s been making rounds lately:
This is obviously completely world-changing.
The thing is, though, this video and its surrounding coverage in the press don’t even begin to explore the best, most exciting externalities of self-driving cars. They get to, maybe, level 2 out of 7.
So I thought I’d lay out some ideas.
Level 1: People who couldn’t use cars before can now use cars. This is wonderful for the disabled and elderly. And maybe even for kids. This is most of what you see in the video — and these groups of people have every right to be excited. But (and I’m definitely wearing my executive business hat here) the real economic impact of this change is, relatively speaking, not that big a deal.
Level 2: Instead of driving, you can do other things behind the wheel. (Well, “wheel.”) This is the part that the angry internet masses presume Google is probably most excited about. The worry is that when Google says this affords an opportunity to “catch up on email”[ref]For the record, by the way: I absolutely despise this phrase.[/ref] they really mostly mean it’s an opportunity to be served ads or create data for Google to serve ads.
I’m a little curious to think about what the most productive & interesting replacement activity would be instead of navigating a steering wheel and pedals, but again, in a large-scale economic sense, this is small potatoes.
Level 3: Everybody switches to self-driving cars. This seems inevitable to me. Once the technology establishes a base level of safety and reliability — it struggles with snow and rain at present, but I can’t imagine those issues will last forever — then the value proposition of a self-driving car is just massively better than a traditional vehicle.
Put it this way: A full-time chauffeur typically costs $50,000-$80,000/year. So you could argue that — at least for the wealthy elite — the self-driving car is worth something like $200k in savings over a 4-6 year use of the vehicle.
That value won’t be the same for lower classes who don’t value the chauffeur (or, the opportunity cost of their free time) as highly, but it’ll still be substantial.
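The arithmetic behind that savings estimate is simple enough to sketch. This is just a toy calculation using the post’s own figures ($50k–$80k/year, 4–6 years of use), not researched data:

```python
# Back-of-envelope sketch of the chauffeur-replacement value argument.
# All inputs are the post's own assumptions, not market research.
def chauffeur_savings(annual_salary, years):
    """Value of replacing a full-time chauffeur over one vehicle's lifespan."""
    return annual_salary * years

# Low end of the post's ranges: $50k/year over 4 years.
low = chauffeur_savings(50_000, 4)
# High end: $80k/year over 6 years.
high = chauffeur_savings(80_000, 6)

print(low, high)  # 200000 480000
```

So “something like $200k” is actually the conservative end of the range; the high end is more than double that.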
Economically: We lose the traditional chauffeur industry, but that’s not so bad. The car industry’s shaken up, but not gone. It might even boom.
Level 4: Suburbs are more accessible. This could just happen anyway as the internet and digital collaboration get better. (See: Oculus Rift. Read: Snow Crash, Ready Player One, and so on.) Many people avoid the suburbs because of the immense drag of hours of commuting to work in the city or elsewhere. But what if you can work (or even just sleep) on your way to the office?
What happens when the suburbs aren’t just for married couples who want the space to raise a family and can trade off a few hours’ commute time? Do young people move out to take advantage of the substantially cheaper real estate? Does the urban sprawl balloon?
Level 5: The roads are insanely safe. I anticipate that we’ll eventually see the tides turn completely on the perceived safety of driving. It’ll take several years.
Today, self-driving cars are unknown so they seem dangerous. We feel like we can trust human drivers to behave a certain way.
In the future, computer drivers are a sure bet. The faults and lapses of the human mind and senses will still be unknown, so driving a vehicle manually will likely be the outlier and the threat.
An interesting comparable: You can power up the latest Grand Theft Auto game and just start driving around the city. What you’ll find after a while is that the only car that ever gets in an accident…is the one controlled by you, the human.
Once tireless, consistent computers are responsible for manning the heavy majority of vehicles, I’d expect accidents to drop precipitously. I also can’t fathom self-driving cars racking up many speeding tickets.
Here we really start to face some stranger side-effects. What happens to hospitals that no longer have the patient inflow from auto mishaps? What happens to local law enforcement agencies that no longer have the monetary inflow from traffic tickets? Do we need fewer doctors and policemen? Or do we just get better and more aggressive at treating and policing other things?
Level 6: There’s no need to buy a car. A few more years down.
Here’s a big, universal secret that pretty much nobody talks about: Owning a car actually sucks. And that’s even putting aside the fact that it’s already by far the most dangerous thing anyone does on a regular basis (we solved that at Level 5). Your car is incredibly expensive to buy, it’s incredibly expensive to insure, it’s incredibly expensive to use (gas), it’s incredibly expensive to maintain (repairs). It loses value precipitously the second you drive it off the lot. There’s an infinity of teensy tiny nuanced issues (“Has that scratch always been there?”) that quietly add stress to daily life.
Still — today in 2014 (and in years past), that’s a small price to pay for the freedom of being able to say “I need to be 10 miles away from here, right now.”
In the future? Look at what Uber has already done with its on-demand car service. I’m confident their company gets better and better every day at understanding supply and demand — knowing where their cars need to be and when. With self-driving cars, a future company like Uber cuts its most expensive operating cost out of the equation (see: chauffeurs) and could realistically have an army of cars always standing by. You might realistically be able to schedule regular pickups and routes for yourself. Because you don’t have to pay for a driver, and because you don’t have to pay for the car’s idle time (see sidebar[ref]Important, relevant side-note: For most of the 24-hour day, your car’s pretty much just sitting there. Which is the idea driving this whole sharing economy in the first place. You’re paying lots of money to own a car 24 hours a day, and you’re really only actively using it for maybe 60-120 minutes.[/ref]), I’d imagine this could drive individual trip prices down to meet other transportation services where you don’t pay for individual drivers or idle time — you know, like the bus and the subway.
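The sidebar’s idle-time point is worth quantifying. Using the post’s own estimate of 60–120 minutes of active use per day, a privately owned car sits unused the overwhelming majority of the time:

```python
# Toy utilization calculation for a privately owned car, using the
# post's own estimate of 60-120 active minutes per 24-hour day.
def utilization(active_minutes, day_minutes=24 * 60):
    """Fraction of the day a car is actually in use."""
    return active_minutes / day_minutes

print(f"{utilization(60):.0%}")   # 4%
print(f"{utilization(120):.1%}")  # 8.3%
```

In other words, you’re paying full ownership costs for an asset that’s idle 92–96% of the time, which is exactly the slack a fleet operator could sell off.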
So why own a car when, for all intents and purposes, it’s just as convenient, and probably cheaper, to hail cabs that perform all the same functions with less hassle and maintenance?
Also at this point in the future, the symbolic statement of buying and owning a car is all but gone. Getting a car when I turned 17 was a huge deal — it was my freedom and my coming of age. With self-driving cars… who cares? Youths in this future year 20XX have had the freedom to go wherever they wanted since the day they got their first smartphone.
So now the car industry is toast. It’ll be interesting for the advertising industry too. Car commercials apparently account for 25% of TV revenue. Maybe some day, not everyone will have “15 minutes could save you 15% or more on car insurance” beaten mercilessly into their skulls.
Level 7: Real Estate. This part’s deep, deep future. Perhaps accordingly, it’s definitely my favorite.
Today, we use an ABSOLUTELY, POSITIVELY RIDICULOUS amount of real estate to just let cars sit idly and do nothing.
Our self-driving taxis, however, should never have to sit idle. Like present-day cabs, subway trains, and the like, they’re either always driving, or if they need to be stationary, they’re parked in far-away lots where it’s cheap and not an inconvenience.
All that means we eventually really don’t need parking spaces. Which means for NYC alone, at roughly 150 sq. ft (19 long by 8 wide) per space, the army of self-driving cars has the potential to unlock over 600,000,000 sq ft of land, or 21 square miles. For perspective: Central Park only (“only”) takes up 1.3 square miles.
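The conversion behind those figures is easy to check. A sketch of the arithmetic (the 600M sq ft total is the post’s figure, implied by NYC’s parking inventory, not an independent statistic I’ve verified):

```python
# The NYC parking arithmetic above, spelled out.
# The 600M sq ft total comes from the post itself.
SQFT_PER_SPACE = 19 * 8       # 152 sq ft, "roughly 150" per space
SQFT_PER_SQMI = 5280 ** 2     # 27,878,400 sq ft per square mile

total_sqft = 600_000_000
square_miles = total_sqft / SQFT_PER_SQMI
central_parks = square_miles / 1.3   # Central Park is about 1.3 sq mi

print(round(square_miles, 1))  # 21.5
print(int(central_parks))      # 16
```

That’s where both the “21 square miles” and the “16 new Central Parks” numbers come from.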
What’s fascinating about this challenge, though, is that eliminating all the parking spaces in NY isn’t the same as adding 16 new Central Parks, because these spaces are all distributed on the sides of streets throughout the city. So the problem for the future: What’s the best public or private use of a parking space? Of a parking garage? Maybe it’s housing. Maybe it’s some kind of energy production like solar. Probably it’s just something else we haven’t even begun to dream of.
I’m a 26-year old red-blooded American male consumer, and my favorite product is a kitchen accessory. Honest.
Here are some features of my favorite product:
No buttons or flashing lights
Can’t access or download adult movies
Doesn’t play Angry Birds
Not connected to other people or other sponges via Facebook
Despite all those shortcomings, and more, I love this product. This should all probably surprise you, or upset you. It should certainly upset every major consumer products manufacturer. Except one: The company that makes the George Foreman Grill, and concurrently, the Foreman Grill Sponge.
Yep. That’s it.
Here’s why I love the Foreman Grill Sponge (FGS):
First: The FGS takes a real (but incredibly subtle) problem and absolutely, positively obliterates it.
It doesn’t fuss around with trying to be cool or flashy. That’s often a hallmark of a solution in search of a problem. The nascent phenomenon of TVs and phones with curved screens is an example of this. You’re inclined to say “Wow!” because it’s different and heavily advertised and maybe if you use it you’ll win favor with someone you’re attracted to. I doubt anyone said “Wow!” when they saw their first FGS.
The Foreman Grill is prone to collecting the burnt residue of the foods you’ve just cooked, and essentially needs to be cleaned after every use. I believe when the Foreman Grill originally came out, it didn’t come with any cleaning paraphernalia — you used the sponge you already had in your kitchen.
If, back in the day, I asked a hundred Foreman Grill owners “What’s the worst aspect of your Grill?”, I’d guess the heavy majority would say “cleaning it is super annoying.” If I followed up and asked “What would you fix?”, my guess is the responses would blame the range (“make the range less sticky!”) or blame the user (“make me a better cook!”), but never the sponge. The sponge is a known entity; everyone above age 5 is an expert at using it, and it can hardly be blamed for any of this mess because it was still in the cupboard while you were stupid and not paying attention and subsequently overcooking the chicken. I’m sure if you asked “How do you rate your sponging experience?”, respondents would simply say “yeah, it’s fine.”
Here’s the difference that Human-Centered Design brings to the table: I wouldn’t ask that second question, “What would you fix?”
Instead, I’d watch users go through the entire process of cooking a meal on their Foreman Grills. And under those circumstances, what should stick out plainly is the aggravated way home-chefs use their sponges to clean. The sponge is flat and designed to clean flat things. The Grill’s range has a very distinct bulbous shape, like this:
As a result, with a regular sponge it’s relatively easy to clean the most-exposed surfaces of the grill, and relatively impossible—requiring clawing with the corner of the sponge (damaging it), using your fingernails (dirtying those), and no shortage of foul language (disappointing your parents)—to get the recessed nooks and crannies. It’s doable, but it’s not pleasant.
Enter the FGS: A sponge which contours exactly to the grill’s humps. It’s infinitely better at reaching the nooks, and moreover, it just feels mechanically and cathartically nicer to slide perfectly along the grill’s rails. The same “Ahhh” feeling you get when you place the correct piece in the middle of a puzzle, or just generally when things fit perfectly into other things. Product functionality: Massively improved. Product experience: Massively improved. Learning Curve: Completely intuitive. Those are the marks of an insanely great product.
Second: Like any good Made-for-TV product, here’s the part where I say “But Wait—There’s More!!”
Sponges are a heavily commoditized product. I doubt even Martha Stewart has a favorite sponge brand. (Maybe she prefers lavender sponges to blue?) Perhaps I’m just not a sophisticated sponge consumer, but I know even paper towels try to differentiate in their mopping ability, thickness, and quilted-ness.
I looked on Amazon quickly. (There doesn’t seem to be a sponge-equivalent of quilted-ness, alas.) Depending on the bulk quantity you’re buying, sponges cost $0.50 to $1.00 per unit. The FGS, on the other hand, seems to typically come in packs of three, and sponges cost $2 apiece, or $6.00 total. Which means they’re convincingly commanding a 100%-300% premium on an item that has the most basic, obvious, and clearly copy-able product innovation possible.
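The premium claim follows directly from those quick Amazon prices. A small sketch, using only the per-unit figures quoted above:

```python
# Markup calculation for the FGS versus a commodity sponge,
# using the rough per-unit prices quoted in the post.
def premium(special_price, commodity_price):
    """Percentage markup of the specialty sponge over a plain one."""
    return (special_price - commodity_price) / commodity_price * 100

print(premium(2.00, 1.00))  # 100.0 -> vs. the priciest plain sponge
print(premium(2.00, 0.50))  # 300.0 -> vs. the cheapest
```

A 100–300% markup, purely on the strength of matching the sponge’s shape to the grill’s.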
I’d been reading a lot lately about the implementation of touch screens in automobile dashboards — and in particular, how grave a danger this particular use of technology is. In sum: Touch screens have no haptic feedback — when you use a touch screen dash, you have to look where you press rather than being able to blindly feel for the correct button or dial present on a traditional dashboard. Distracted driving = accidents = problem.
Some folks are trying to solve this by inventing a better touch screen dashboard. One Forbes author suggests a dash where key functions have hardware buttons and auxiliary functions move to the touch screen. TechCrunch raves about a solution where you don’t have to look at the touch screen, but you still have to learn five all-new, unnatural gestures — which makes it seem like operating the dash becomes a skillset akin to operating a stick shift.
Suffice it to say, I think all of the endeavors above — not just helicopter-pocalypse — are steps in the wrong direction. Emblematic examples of technology looking for a problem to solve, rather than solving for a problem already outstanding.
The current-day dashboard doesn’t really have an interface problem. Ask anyone aged 18-68 who’s ever been behind the wheel how to adjust the radio, and they’ll probably do it naturally. Dashboard interfaces are already pretty smart and comprehensive.[ref]Maybe you could make a case that we’d benefit from a standardized set of dash controls across all cars — the same way all cars have the same interface for pedals and steering wheels. Or point out how many key functions are moving from buttons on the dash to buttons on the steering wheel. Both fair arguments. But that’s not today’s point.[/ref]
On the other hand.
What if I told you that every single car today already has — not one, but three — screens built in which are genuinely stupid, and are the direct cause of an incalculable number of traffic accidents? And furthermore, relatively simple technology could probably completely fix these performance issues?
Thousands of lives, millions of dollars in damages saved. You’d think automakers would probably jump at an opportunity like this, right? Nah. Instead, they’re finding cutesy ways to make questionable improvements on volume knobs.
I’m not about to propose we add electronics and diagnostics to the windshield. God knows the last thing we need[ref]Well, there was that whole remote helicopter thing…so maybe second-to-last.[/ref] is another colorful distraction in the driver’s direct line of sight.
Rather, here are the dumb screens I’m thinking of:
You might not have thought of these as “screens,” because they traditionally go by a different name: “mirrors.” But I see these as screens — as interfaces — all the same. And mirrors, as things go, are especially dumb.
Isn’t it dumb that you have to rely on tricks of light to judge the presence of traffic around you?
Isn’t it dumb that you have to adjust these screens every time someone who’s taller or shorter takes a turn driving the car?
Isn’t it dumb that these screens can be blinding if hit with headlights at night?
Isn’t it dumb that one of the screens is explicitly inaccurate and says “Objects may be closer than they appear?”
Isn’t it dumb that, even after all of those hurdles, these screens are still wildly unreliable, and have blind spots which you can only avoid by turning your head completely away from the road in front of you to double-check?
Sounds like a pretty ripe opportunity for progress. My solution (like pretty much all the best solutions) is painfully simple:
LCD screens swapped in for these mirrors.
Connect (1) to (2).
Totally solved, no? Let me reiterate: We’re talking about completely obliterating blindspots on cars. You could easily position the cameras around the hood of the car to form a perfect viewing radius. If you’ve ever used a television you’ll know that LCD screens are clearly viewable by people of all sizes, and are also good at the nominal task of not blinding people.
So what’s the holdup? Can’t be a cost thing; we’ve already got high-quality cameras in all of our ~$X00 cell phones, so they can’t be prohibitive. Can’t be a reliability thing; our cars already rely on a litany of other electronics just as likely to fail. Can’t be a durability thing; we’ve already got jumbotrons in outdoor sports stadiums that hold up fine against the elements (and besides, side-view mirrors only have to be outside the car because of the tricks-of-light thing — the new smart screens could easily be located inside the cabin and work just fine).
Look, technology is great. Imagination is great. It’s admittedly wrong of me to harshly criticize the Renault prototype, which is so clearly a concept car designed to inspire creativity and which will assuredly never see the light of day.
But the best innovations are the ones that solve real problems, first and foremost, method of implementation be damned. Ford SUVs save untold anguish by truly understanding a unique and universal car use-case and thus enabling drivers to open their trunks hands-free. And I save untold lives by re-framing the concept of a “screen” and thus opening new opportunities for interface. No 31st-century technology required.
The Samsung Galaxy SII. That’s been my phone for the past year or so. I used to hate it. This past week, everything changed.
Far and away my biggest gripe about the phone is its meager battery life. The quoted use time for the phone’s battery is 8.7 hours. I guess, depending on how much you’re talking, SnapChatting, or whatever, your mileage may vary.
“The damn phone can’t even get through an entire day on one charge! How worthless is this??”–Me.
I’ve gone through a number of phases in confronting this issue.
Phase One: Phone “babysitting.”
For sure, one of the most prevalent and under-reported user experience traumas of the modern era. Here’s what I’m talking about:
It’s 10:00pm, and your phone only has 10% of its battery life left, and you have to be crazy vigilant about phone and feature use.
It’s 10:05pm and, now concerned about your phone being low on battery, you check the screen incessantly to monitor progress. You’re down to 8% because you’ve turned the phone on and off 37 times in the last five minutes.
You’re out with a buddy and have to rely on him for Google Maps so you can save your juice in case of emergency.
You make sure all your other friends have your buddy’s cell number because you’re going to turn your phone off for a while.
You set your phone brightness slider almost all the way to zero, and now you can only read anything under a bright light at a good angle.
You turn off your data plan or your GPS or whatever other fancy feature that the phone touts as core functionality.
Your phone dies anyway. It’s 10:08pm.
You’ve got the idea. This sucks.
Phase Two: Proactive phone management. I’d switch my phone to airplane mode for hours at times when I wasn’t actively expecting communication. In fact — and, I know, this is *really* crazy — I’d actually turn my phone completely off during movies, on airplanes, in classes, and the like.
On the balance, being disconnected every once in a while is probably not such a bad thing. But that’s not what I signed up for. I wanted a phone that was at my complete disposal at all times, and as I’ve written before, I didn’t want the ever-incrementally-mounting headache of having to deal with phone minutia throughout the day for my entire life.
Phase Three: I brought a charger to work and charged my phone during afternoons. This felt stupid.
Phase Four: I picked up a case for the phone that had an extended battery built in. Sort of like this one.
This effectively solved the battery problem, but it created others. Specifically: my phone was now large enough to win modest disapproval from the King of all Cosmos. (In other words, it was the size of a small planet.)
My phone+case now barely fit into my pocket. It probably looked ridiculous. In a silo, I didn’t mind. The phone passed my criteria of “fits in my pocket” and “lasts an entire day.” Size for battery life was a good trade.
Stepping out of the silo and into the real world, I found that my phone’s girth meant I had to explain myself for three minutes any time a friend saw it, ever. No thanks.
At long last, I finally internally raised a critical defining question:
“WHY is it a requirement that my phone lasts an ENTIRE day on a single charge?” —Me again.
This is the fundamental reason why really smart design thinking companies like IDEO base so much of their methodology on the concept of “framing.”
Here’s what I mean: A modern laptop with a “great” battery life will last something like five or six hours on a single charge. (Granted, the technology is improving — but that’s not the point.) Invariably, if you take your laptop out somewhere, and plan on using it longer than just through your cup of coffee, you know to bring a charger with you. Nobody has a fundamental problem with this.
Why doesn’t the same framework hold for your cellphone?
I largely blame Nokia. Because you, and I, and everyone else on the planet got accustomed to using a modern cellphone when we owned a phone in the Nokia 3300 series. Which looked like this:
Things you should immediately notice or recall:
The screen is tiny
There are no colors
You played Snake instead of Angry Birds
The graphics were still pretty dope
What you might not readily recall:
The battery life on this phone was astronomical, probably for all the same reasons the modern Amazon Kindle can go several weeks on a single charge.
As a result, you probably didn’t even charge the phone every single night. (Can you imagine?)
In many regards, phones evolved incrementally from this point on (I guess an iPhone fanboy might argue differently) — screen sizes inched up, pixel density demands followed suit, and battery life capacities receded from “sure, what’s an extra three nights without a charge?” to “every night or bust” to a modern society, where people babysit phones, and where the best seat when you’re traveling by plane is not in first class but the one seat in the waiting area that’s next to the wall power outlet.
Another relevant, subtle phenomenon: More so than I think with any other device, people seem to innately care about conserving the quality of their cell phone battery. One theory is that every time you recharge your phone, the battery suffers because it thus has fewer charge cycles, or something. In fact, there are all kinds of conflicting fan theories — google it, if you want to learn nothing for 20 minutes — but suffice it to say, I’ve never heard or seen the same level of concern raised for the battery in your laptop, TV remote, Xbox controller, or anything. Many (myself, until this past week, included) would sooner risk their cell phone dying prematurely in the night than put undue burden on a piece of equipment that, worst case, probably costs around $30 to replace.
So, no more. It’s 3:30PM at time of writing this sentence, and my phone is plugged in and charging. 8.7 hours is now plenty of time to get through anything day-to-day, day or night.
You’ll notice, if you actually bothered reading all 1100 words here, that I’d actually already uncovered this solution weeks ago in “Phase 3.” Here lies the inherent power of framing, and perhaps, of marketing: I was able to fix the problem not by adjusting the product (expensive), but by adjusting the frame and worldview (free).
“Immersion” is a funny word that gets thrown around a lot in the modern video game industry. Somewhere along the line, some fools on the development team got it in their rotten minds that immersive gameplay equated to motion-control gameplay. Which first resulted in the Nintendo Wii, and subsequently led to the PlayStation Move, Xbox 360 Kinect, and Nintendo Wii MotionPlus all vying for the title of having the best controls.
And they’re all wrong.
Okay, I get it. Immersion is a very sexy concept. I’ve had more than my fair share of bouts with the whole “immersion” thing in the movie realm. Dictionary.com calls immersion “a baptism in which the whole body is submerged in the water.” That’s not the best definition. Being truly immersed in a game—movie, book, anything—means I’ve lost any active recognition of the medium. I’m just there.
I can see the sketchboard appeal in motion gaming. From a 3rd person point of view, it looks more immersive. The developer thinks, This looks more like the player’s swinging a tennis racket, therefore it must feel more like swinging a tennis racket. But when push (literally!) comes to shove, no motion gaming system amounts to much more than an exhaustive effort in random flailing. If I’m swinging wildly hoping for my controller to effectively communicate, immersion is lost. Or if swooping my arm a little too high results in my forehand not registering, and I’ve got to consciously monitor the swoop, pitch, and speed of my input, then immersion is lost, too. If I’m breaking a sweat in anything but an exercise game (and only then, because I’m running in place, not because I’m attempting (and failing) to communicate accurately with the receptor), then the system is broken, not enhanced.
Finding true immersion ought to be easy enough: simply follow the path of least resistance—whatever results in gameplay that’s as quick and accurate as possible.
At any point during gameplay, there’s a three-party transaction going on between my brain (who says to the controller “I need to wallop that fellow over there with a hammer”), the controller (who says to the console “input these complicated commands and algorithms per my instructions (which, presumably, it’s correctly received) from that guy’s brain”), and the console (who makes the blob on the TV screen swing his hammer per the controller’s instructions (which, presumably, it too has correctly received)). Motion gameplay subjects all three parties to lag (I’ve got to swing my arm instead of pressing the A-button) and error (the controller brain wonders, “did he swing overhand for a hammer smash, or jab forward for a rocket punch?”).
Consequently, in my opinion, the medium of least resistance is actually the much-scoffed and often-maligned…Nintendo GameCube controller.
Feast your eyes on the most immersive video game controller in history. Surprised?
The GameCube suffered and ultimately failed because (no duh, haven’t you been reading the rest of my crap?) of a marketing problem. The controller and system itself were widely panned by critics. Why? Because everything on the controller is small. The D-Pad (the grey “+” looking thing on the left side) is small. The yellow C-stick has a small top and became disaffectionately referred to as a “nub.” The controller itself was slightly smaller than its Sony and Microsoft counterparts. Core gamers (male, 14-25, ish) like things that are big—a projection of their genitalia, naturally—and don’t typically take to things that are purple.
Problem is, the size of the controller was never properly explained as an attribute—nay, an asset—of the controller. A small controller minimizes finger movement and exhaustion, and close buttons minimize the amount of time fumbling needed to try to find the one you’re hoping to press. Because the D-pad is small, you don’t need to press Up, Down, Left, or Right individually, you can simply rest your thumb in the middle and tilt. By making the C-stick “nub” shaped instead of full-size, it could be moved closer to the ABXY buttons and minimize accidental presses. B, A, X, and Y are all different shapes—small, big, tall, and flat, respectively—so you’d never confuse pressing one for the other. There’s an obvious and comfortable resting position for your right thumb on A, the most important button. Other controllers (like the PlayStation controller, here) have a diamond-shaped button pattern. Aesthetically, this makes sense. Logistically, this means your thumb probably rests on the X button, and you’ve got to reach all the way up to hit Triangle. Sounds totally silly even as I’m typing it. But on a mass-production scale, even the centimeter difference we’re discussing here needs to be considered.
Granted, the controller wasn’t perfect. In the modern generation, it’d need to be wireless and rechargeable, and it’d need a “Home” button to quickly navigate to a menu screen. There was no Z-button on the top-left of the controller to match the one on the top-right. You couldn’t press the Joystick or C-stick in to use as additional buttons. And the thing is still purple.
An aside totally worth mentioning: The GameCube similarly suffered from a second size-related issue that should have been an asset: Game discs themselves were tiny. The natural assumption is that smaller discs = less powerful, worse graphics, smaller games. The actual reason? Games load faster. Shorter and less frequent loading screens, greater sense of immersion.
A second aside: Many avid gamers cite the PlayStation Dualshock (again) as their controller of choice. Strange, because it’s by far the blockiest, least comfortable, and worst-designed (the left joystick—the input used most frequently—is placed awkwardly low to be held comfortably) controller of the current generation. And yet, it may be justifiably considered the most immersive controller of the lot, for the simple and intuitive reason that Sony has been using the same style controller for nearly fourteen years (November 1997), and avid gamers have long since grown accustomed to and fluid in its (albeit flawed) layout.
The conclusion: The GameCube was a flop. The GameCube controller, a flop merely by association. Negative buzz spread about the console (the word “kiddy” made rounds frequently), developers opted to make games on the Sony PlayStation and Microsoft Xbox instead of for Nintendo, which led to more negative buzz hitting the airwaves, ad infinitum. It sold the worst of the three video game consoles of its generation. Its flaws will go down in history in much larger print than its successes, which, in light of the motion-gaming revolution currently underway, will probably be ultimately lost to obscurity entirely. What a shame.
But from a production and immersion standpoint that will never be heard from again in the history of the universe, what a shining (and surprising) success.
Boy it’s been a fast month. So much nothing to do that I don’t know what to (not) do with myself.
So I picked up more TV. I’ve been sifting through the straggling few episodes of South Park that I hadn’t seen yet, I picked up a trial membership to Netflix, and I’ve had the pleasure of following the NHL playoffs to a far greater extent – I hear the Stanley Cup this year has its highest TV ratings since about 2002, and is quickly closing in on Pre-Season Women’s Softball as the second-least-watched thing on television.
But on the other side of the coin, I’ve been plodding along with the NBA playoffs, too. And this sport I’ll never understand. I could easily argue that the entire game is a wash besides the last two minutes – which will inevitably last 10 or 15 times that long (the final two minutes of this game lasts 9 minutes, without commercial interruption). But I’d like to take the opportunity to dive a little deeper.
Here’s my three-pronged strategy to win at basketball:
1. Strategically pause the game
There are two minutes left on the game clock, and your team is down by double digits (and, of course, Reggie Miller isn’t on your roster). You’re going to need more time than that to catch up. What’s a coach to do? Just pause the game, repeatedly. Each team is allotted 6 (or 7?) timeouts to use over the course of the game (and extras for overtime, too, when you invariably run out). How frustrating is that for me, the non-die-hard fan? Imagine playing Madden, or any non-basketball sports video game, and just as you’re about to win, the other guy pauses it six times to tweak and fine-tune his starting lineup. The hell?! If you’d just use a clock like a normal person, the match would be over and the team that played the better game would win. 60 seconds left in the game should mean just that.
Why should basketball need 6+ timeouts per game? Football only allows three per half. Hockey only has one for the whole game. Soccer doesn’t have timeouts – it has halftime. Basketball shouldn’t need any timeouts. Among all the major sports, basketball is the only one where the coach is close enough to the players that he can bark orders while play is going. Every other sport manages to get along fine without having to incessantly hit the Start button to make substitutions.
And even still, teams routinely run out of timeouts at the end of the game.
2. Break the rules on purpose

What’s that? Pause button broken? You’ve run out of timeouts, you say? Not to worry. You can still slow the clock down further by simply breaking the rules of the game on purpose. Of course, your standard basketball fan will refer to this as “intentional fouls.”
Purportedly, the fouling system in the NBA was set up to ensure a level playing field and provide players with a fair opportunity to play. Smack a guy in the arm while he’s shooting, and it’s going to be awful hard for him to make the shot. So he gets free throws. Seems to make sense – so let’s make it complicated!
In the NBA rulebook, after your team commits X fouls in a game, the system switches from awarding one free shot and a bonus free shot for making the first one to simply awarding two freebies. At Y team fouls, the opposing team gets to take free throws whether or not the other guy was shooting at the time of the foul. The numbers X and Y, I can only presume, are completely arbitrary.
Now, consider Kobe Bryant. He’s got the best free throw percentage on the Lakers, at 85.8% this season. Simple math shows that whenever he’s sent to the line for two shots, he’ll return an expected 1.72 points (2 × .858). Meanwhile, a made three from Orlando’s best 3PT shooter, Jameer Nelson (@45%), is worth a full 3 – so a little luck, and the Magic are back in the game.
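For the skeptical, here’s the back-of-the-envelope math above sketched out in Python. The percentages are the ones quoted in this post; the framing of “points conceded vs. points gained per hack-and-chuck exchange” is my own simplification, not an official stat.

```python
# Expected points from the intentional-foul exchange described above.
# Percentages are the ones quoted in the post (2008-09 season figures).
KOBE_FT = 0.858     # Kobe Bryant's free throw percentage
NELSON_3PT = 0.45   # Jameer Nelson's three-point percentage

# Two free throws conceded to the fouled player:
points_conceded = 2 * KOBE_FT       # expected points given up

# One three-point attempt gained on the return possession:
points_gained = 3 * NELSON_3PT      # expected points earned back

print(round(points_conceded, 2))    # 1.72
print(round(points_gained, 2))      # 1.35
print(round(points_gained - points_conceded, 2))  # -0.37
```

So in raw expectation, each foul-and-shoot exchange actually *loses* the trailing team about a third of a point – which tells you the strategy isn’t really about points at all. It’s about stopping the clock and manufacturing extra possessions, i.e., exactly the clock abuse this section is complaining about.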
It’s universally understood by coaches, players, and fans that as the clock winds down at the end of the game, players are supposed to hack and grab and foul each other. If that’s the case, then there’s clearly a problem with the penalization system currently employed. Slash a guy in hockey and you sit in the penalty box for 2 minutes. Hit a guy hard enough in soccer, and you get a man disadvantage for the rest of the game. But foul a guy in basketball, and improve your chances at winning the game? Preposterous.
And you get six chances! Six fouls before the refs throw you out of the game. Six penalties in any other sport would merit a fine or a suspension. In basketball, it’s just the nature of the game.
3. Let your best player take free shots
Let’s say, instead, that you’re the winning team. What’s the best way to defend against the inane strategy implemented by the other team? Just give the ball to Kobe (or whoever your best player is otherwise), sit back, and watch him make free shots. No defense, no playmaking, no strategies. Often enough, you’ll win the game. Ironically enough, the entire premise of Baseketball was that the main characters were only good at making open shots and didn’t want to move around too much. Turns out, apparently, they would have been just fine in the NBA.
Moral of the story: Basketball sucks. I didn’t even go into the fact that it’s silly how inches in height are just as important an asset as physical strength or athletic ability. All I can say for sure is that the sport reached its pinnacle in 1996, and has been on a downward spiral ever since.