July, 2009

The digital transition really IS harder for trade publishers than for other publishers


In the 1990s, Mark Bide and I used to chair a program for VISTA Computer Services called “Publishing in the 21st Century.” We’d do a “white paper” every year and run conferences twice a year each in New York and London. The subject was digital change in publishing and the purpose was, simply, to think it through.

The conferences developed a pattern. Mark would open the show with a summary of our research. He’d be followed by a couple of outside speakers and then I’d wrap up with what we took to calling a “walk on water” speech. I got a lot of practice doing big vision stuff.

Part of the pattern was that Mark would start each conference by reiterating that book publishing is not one business, but many. The procedures and business metrics of a directory publisher bore no resemblance to those of a college textbook publisher; a sci-tech publisher had a different business and, in many ways, a different business model from a trade publisher; and so forth.

General trade publishers are, in my opinion, the most challenged of all by digital change. They have the superficial advantage of having their marketplace “go digital” later than all the others, so there would seem to be an opportunity to learn from the mistakes and the successes of others. But that advantage is illusory because of aspects unique to trade.

Recently, for a new project I’m working on that will become public in the next couple of weeks, I was challenged to enumerate what distinguishes trade publishers’ 21st century problems from everybody else’s. I think all of these things are challenges unique to trade publishing or they have substantially more impact on trade publishers than on any others:

* their channels for getting printed books to the customer are consolidating and shrinking, which means a loss of margin to intermediaries with more clout; a compressed sales window that leads to more risk-taking with initial placements of inventory and higher returns; and, as volumes shrink and returns increase, rising distribution costs per unit sold;

* their principal marketing channels are atrophying, and the new emerging ones are more granular and transient, increasing per-title marketing costs;

* the biggest authors have increased clout for many reasons, increasing title acquisition costs;

* the new technologies both lower barriers to entry for additional competing titles and keep old and even used books alive forever in the marketplace, increasing competition;

* they have unique legacy issues: the backlist rights tangles and undigitized backlists that still sell robustly in print require additional investments now to forestall ultimate sales erosion. As overall sales shift to digital, some titles on those backlists, depending on rights, might not be able to make the move (and, perversely, could have the copyright licensed for print but effectively orphaned for a digital version);

* they have agents, who face their own challenges (unless they represent one or more of the biggest authors, and even that model might be threatened as digital change evolves) and who add complications for publishers trying to find their way to new models with uncertain costs and revenues.

Trade publishers, much more than their counterparts in school, college, academic, and professional publishing, are bound to the format of “the book”. That is partly because the “value adds” that other publishers can use to justify different (higher) pricing are not natural adjuncts to trade books. Trade publishers can’t boost prices and margins by adding homework helpers as is done for school books, self-testing as is done for college texts, or value-added aggregation, searching, and productivity tools as is done for academic and professional publishing. So the lessons being learned by other publishers just don’t port to trade, any more than the new paradigm for music (give away the content to sell concert tickets) can transfer to books.

Even within trade publishing, there are distinctions that matter. The business of the Big Six general trade houses is quite different than the niche (sometimes called “enthusiast”) businesses run by publishers like F+W Media, Chelsea Green, or Hay House. Niche publishers who have depth of content in verticals can do things that the general trade houses just can’t.

For example, F+W Media has refocused its company around verticals. They used to see magazines and books as the businesses they were in, with titles related to crafts and writing; now they see crafts and writing as the businesses they’re in, with books and magazines in each. That enables them to share marketing costs across many books and magazines and to market much more effectively to the markets they serve. F+W’s book business is challenged by all the same factors as everybody else’s, but reinforcing their vertical organization creates alternatives, sometimes allowing them to convert essentially consumer audiences into professional audiences so that tricks from other publishing businesses can apply.

Here’s a recent anecdote that, for me, sums up the uniqueness of our challenge. In a recent conversation with a very sharp Senior Marketing Person at a major house, I raised a pet question of mine these days: “What’s the new ‘standard treatment’ for a trade book?” It used to be galleys to selected pre-pub review media; a press release, with and without review copies, to lists of varying sizes and care of curation; a certain size of catalog page; and then the things we can do for all mysteries, all cookbooks, etc. What should it be now?

So the SMP says back to me, “There isn’t one.” But there has to be one, I said, because only 10% of the books are going to get any unique marketing effort, so 90% have to get something standard. What should it be? If PW and The NY Times and the list of review editors at the various papers and the “usual suspects” aren’t the right thing to do anymore, what is? Or do the 90% get nothing?

What that captured for me is that the Achilles heel of trade publishing has always been that publishers have to reach audiences as numerous as the books they publish and they have mostly marketed books one-by-one, book-by-book. That’s what no other branch of publishing would even attempt. Marketing effort per title is the real point of scarcity in this business — more than quality product and more than shelf space. People outside the trade don’t think about that because, frankly, it’s a problem that doesn’t occur anywhere else. But it’s in our industry’s DNA. And we’re going to have to create some unique answers.


DRM or not? A debate that won’t be over anytime soon


The one subject I didn’t touch in last week’s series of posts on ebooks was DRM: digital rights management, the software that controls what you can do with an ebook (or any other) file. This topic is so fraught with emotion and misplaced certainty that it has “third rail” aspects to it. So we tackle it today with the knowledge that we’re going to annoy many people: there’s no way to avoid it.

I hold two conflicting notions about how DRM plays out over time:

1. In the not-so-long run (5 to 10 years), we will be holding very little content on our devices or hard drives. We will access files — those we create and those we obtain — from the cloud. We will see only what we have license to see (as managed by our passwords, our iris scans, our fingerprints…). When that time comes, everything is, effectively, DRMed and, because we will all have our own private stuff up there, we’ll be damn glad it is and damn glad it works. Large elements of today’s DRM concerns (such as whether you, the purchaser, can access content on multiple devices) will disappear; other objections expressed today (lending your content or giving it to a friend) will become fights about the license, but not about DRM itself.

2. Also in the not-so-long-run, just about all of us will be in social networks that make file sharing (to the extent that we still have the files) with multiple users very efficient and very simple. When we’re all on Facebook and an unprotected file is posted, how many degrees of separation will there be between you and your friends and the entire world? Is it hard to imagine that every digital book would be available free on Facebook? Or through Facebook?

Both of these futures are within sight; very few people would say that either is impossible within a relatively short time. And both are very different from the world we have been living in for the past 15 years or so as the digital revolution has gotten started.

There is definitely a school of thought, which seems most widespread in the library community and among aspiring authors and aspiring publishers (those who are not, or not yet, making tons of money from selling content), that we should live in a DRM-free world. There are, broadly speaking, four lines of argument against DRM:

1. That it is commercially stupid, because it stops the sharing and viral spreading of the word about content that would only increase sales. This is the “obscurity is a greater enemy than piracy” school of thought. Evaluating the scanty evidence about the effects of piracy on books so far would suggest that file sharing boosts sales more than it cannibalizes them. “So far” are the important operative words.

2. That it violates the “first sale” doctrine, under which somebody who buys a copyrighted physical object can then do what they want with it, including lending it or selling it on to somebody else. This argument is often couched in moral terms suggesting that the sellers of ebooks who put on digital controls are not just being unwise but also unjust (even though in the physical world “copying” is not something you’re permitted to do without paying for permission.)

3. That because of DRM, abuses occur, such as people losing the use of files they bought: because they get a new device or computer and the files won’t transfer; because the seller of the file, who was storing the backup copy, goes out of business; or because, as happened last week, Amazon reaches into your Kindle and erases a book it just found out it didn’t have the right to sell you.

4. That it is futile because all DRM can be “hacked”. (Of course, more to the point, DRM can only raise the cost of getting an unlocked file: anybody can create one by re-keying or scanning and OCR-ing a text, the more expensive and cumbersome version of “ripping” a music CD.)

Let’s deal with these in reverse order.

Of course, all DRM can be hacked. The clearest evidence of this is that pirate sites carry books that didn’t ever have a digital file because somebody went to the trouble to scan or re-key them. There is pretty widespread agreement that DRM is like a lock on the door to keep an honest person out, not a security guard that will stop any interloper or thief.

I have been a longtime believer in what is called “social DRM”: the watermarking of information tying the file to its purchaser (or licensee). It is often said that those watermarks can be hacked off as well. True, but if the lock is there to remind the honest person not to open the door, it would seem like social DRM should do it. Would you like a file with your name on it (let alone your phone number or your credit card number) on a pirate download site?

Using social DRM would make it easier (although not necessarily easy: interoperability problems are not all due to DRM) for you to share a book with your mother or your spouse, whom you could presumably trust not to spread your branded file far and wide. It would serve as a real deterrent to having the file end up on Facebook.
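
For the technically inclined, here is a minimal sketch in Python of what such a watermark could look like. This is purely illustrative, not any vendor’s actual scheme: an EPUB is just a zip archive, so a seller could stamp a visible “prepared for” page into each copy sold. The file names and purchaser string are hypothetical, and a real implementation would also register the page in the book’s manifest and embed less visible marks throughout the text.

    import shutil
    import zipfile

    def watermark_epub(src_path: str, dest_path: str, purchaser: str) -> None:
        """Copy an EPUB and stamp a visible purchaser page into it."""
        shutil.copyfile(src_path, dest_path)
        colophon = (
            "<html><body><p>This copy was prepared exclusively for "
            f"{purchaser}.</p></body></html>"
        )
        # An EPUB is an ordinary zip archive, so we can append a page to it.
        with zipfile.ZipFile(dest_path, mode="a") as epub:
            # The reader who sees their own name here is reminded not to
            # post the file publicly: the lock on the door for the honest.
            epub.writestr("OEBPS/prepared-for.xhtml", colophon)

    # Hypothetical usage:
    # watermark_epub("title.epub", "title-stamped.epub", "Jane Reader, order 4415")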

When Amazon erased 1984 and Animal Farm from their customers’ Kindles, it sparked widespread outrage. It properly raised the spectre of what a malevolent government could do in a connected world. That’s a big problem, but, in my opinion, not primarily an ebook problem.

We are headed for a world where our files are in the cloud and we need to be tethered to access them across our devices. The advantage to that is that we’ll have access to all our files in the cloud all the time on any device wherever we are. The drawback is that the cloud also will have access to our devices and that our files could be made inaccessible at any time. That’s a big problem that requires legal protection, but focusing on ebooks would really miss a much bigger point.

As for inaccessibility that results from device changes or companies going out of business, I wonder where the people making that argument have been for the past 40 years. Can you play a record on your cassette player? Can you load the program you bought on 5.25″ floppies twenty years ago onto your new computer? We have been living with format changes that render our content or software impossible to use for the entire lifetime of most people now living. Why should ebooks be exempt from a problem that existed even before the digital age?

It is absolutely true that ebooks reduce “first sale” flexibility. It is reasonable to say that an ebook “purchase” is not a purchase in the way we used to understand the word: it is a license with real limitations. And DRM is the tool by which the file creator and seller enforces those limits: enabling or disabling print or copying capability; allowing or forbidding some number of pass-alongs or use on multiple computers or devices.

But it is also true that digital files don’t “wear out” and books do. And books aren’t infinitely replicable for free (quite aside from any licensing cost), and unprotected digital files are. And the copying and printing you can’t do with a DRMed ebook file, you also can’t do with a book.

The argument that ebook pricing should reflect reduced usability is a reasonable one, although pricing is really decided by supply and demand, not by reason or rectitude. (History suggests that all new formats — from CDs to VHS tapes to DVDs — arrive at a premium price that is ratcheted down over time.) The argument that ebook ownership and rights need to replicate the world of the print book is just that: an argument. And I don’t think it is an argument that would move me as a content owner if I believed that enabling that replication might also result in many potential purchasers of the IP just securing it for free.

From my perspective, the “commercial stupidity” argument against DRM is the strongest one of all. But I believe the evidence supporting the idea that DRM is stupid is about to become dated. Most of our ebook experience so far has been in what we called the “vision” stage of adoption: a time when very few people read ebooks. We have only recently moved into the “establishment” stage, largely enabled by the Kindle and the iPhone. The Kindle and iPhone are devices for the affluent, and the Kindle, particularly, appeals to an older demographic. I can’t prove it, but I’d say the more affluent and the older are less likely to steal content than the population at large. (I don’t know an adult who downloads free and illegal music; I don’t know a millennial who doesn’t!)

So we have evidence from a world where a) very few people read books on screens at all and b) those who do skew older and more affluent, that pushing out free copies — and, indeed, the effect of piracy as well — tends to increase sales of a printed book. With the evidence of what is really happening sketchy (although many people, I among them, believe the “obscurity is more damaging to sales than piracy” argument has held true so far), trying to attribute reasons for it is a pretty speculative exercise. But I would speculate that people are buying printed books of things they get free digital files of because most people don’t want to read digital files. As ebook uptake grows and, according to our paradigm of adoption, becomes damn near universal over the next ten years or so, that will change. In an ebook-consuming world, a free ebook will satisfy the potential purchaser, not spur them to a sale.

There are ebooks available without DRM. Many publishers, including O’Reilly, Harlequin, and Baen, sell them from their websites. There are some non-DRMed files available from Kindle (according to my best source), but it isn’t easy to figure out which ones they are. Fictionwise once reported that as many as 50% of the ebooks they sold were without DRM (publisher’s choice), but we don’t know how that experience will port over to BN.com, Fictionwise’s new owner. Smashwords, the new open-source ebook developer and retailer (you send them your .doc file; they’ll put your ebook on sale at the price you want to charge) has no DRM option and they say they never will. But at least so far, Smashwords is for self-published content, not for big publishers or big authors.

My hunch is that the biggest authors will continue to insist on DRM and that they are sensible to do that. And that lesser authors will often be comfortable without DRM, and they are probably sensible to do that as well. But as the establishment stage of ebook adoption continues, I’d also expect that the “viral effect” of non-DRMed titles will stop being healthy for sales. This is an argument that still has a long time to run.


Aside from the publishers: how the other stakeholders fare as ebook adoption continues


In three prior posts, we’ve explored the initial conversation that surrounded the announcement that Sourcebooks would delay the ebook release of Bran Hambric; sketched out what we think are the four stages of ebook adoption; and looked at how publishers see the early “establishment” stage, which is where we are now.

This post is about the other stakeholders: authors, retailers, distributors, and, of course, readers.

In the “vision” stage of ebook adoption, which ended with the launch of the Kindle in November 2007, authors were virtually powerless. With ebook sales even for established books struggling to reach triple digits, publishers were gun-shy about accepting digitization costs for books other than the biggest sellers, and it hardly made sense for authors to make the investment on their own. With the exception of genre fiction, particularly romance and sci-fi, where vertical audiences were able to cluster early, the ebook world was inhospitable to the author working on her (or his) own.

That has changed dramatically. Today the Amazon Kindle store and the web services Scribd and Smashwords all make it easy for an author to upload a pdf or doc file and publish an ebook. While Amazon appears to be paying authors only about 35% of the selling price to access its army of device users, Scribd (80%) and Smashwords (85%) pay much more. Barnes & Noble’s ebook announcement yesterday didn’t mention author-generated ebook content, but with their goal clearly being to offer as many titles as they can, one must assume they’ll figure out a way to get at it too. So a clear path to the public is developing for anybody with ebookable content; the challenge will be driving audiences to the content.

At each end of the bell curve, the publisher doesn’t contribute much to that equation. Small books and unknown authors often get little or no support from a publisher; big books and big authors often don’t need help to alert the public to their content. So after several years of publishers driving down ebook royalties to the current Major League standards of 15% of retail or 25% of net, we can expect to see the pendulum swing back to the author. Big authors will negotiate far higher ebook royalty rates; small authors will turn down small advances in favor of self-publishing as the ebook market grows (and the physical books, remember, can be delivered through a variety of POD self-publishing options.)
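
To put rough numbers on that pendulum, here is a back-of-the-envelope comparison, in Python, of the author’s per-copy take under the models just described. The $9.99 price is illustrative, and treating the publisher’s net receipts as 50% of the list price is an assumption based on the standard retail discount discussed elsewhere in these posts.

    # Per-copy author earnings on a hypothetical $9.99 ebook.
    LIST_PRICE = 9.99
    PUBLISHER_NET = LIST_PRICE * 0.50  # assumed: retailer pays publisher 50% of list

    models = {
        "Traditional, 15% of suggested retail": 0.15 * LIST_PRICE,    # ~$1.50
        "Traditional, 25% of publisher net":    0.25 * PUBLISHER_NET, # ~$1.25
        "Kindle self-publishing, ~35% of sale": 0.35 * LIST_PRICE,    # ~$3.50
        "Scribd, 80% of sale":                  0.80 * LIST_PRICE,    # ~$7.99
        "Smashwords, 85% of sale":              0.85 * LIST_PRICE,    # ~$8.49
    }

    for model, dollars in models.items():
        print(f"{model}: ${dollars:.2f} per copy")

The two traditional formulas land within a quarter of each other; the gap between them and the self-publishing services is the economic pressure behind the pendulum swing.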

The biggest book retailers basically stayed out of the ebook game during the vision stage. Both Barnes & Noble and Amazon made a pass at the ebook business, but gave up on it pretty quickly (although Amazon first bought the Mobipocket format, which became the foundation for the Kindle software.) That made sense; there was too small a market early in this decade to occupy the attention of corporations doing billions in sales on printed books.

There were other complications which ultimately left ebook retailing to the smaller players. Early in the vision stage, the two big formats for handhelds were Palm, which displayed on Palm Pilots, and Microsoft’s .lit, which displayed on handhelds that used the Windows operating system. (Adobe’s Reader software for PCs also dates from that era and has been used continuously to this day.) Early in the decade, Palm’s model was to keep control of the sale of Palm ebooks, first through “Peanut Press” and then through the “Palm Digital” store. That meant no other ebook retailer could sell Palm books. So when Palm became, by far, the preferred format for handheld ebook reading, the general ebook retailers, including B&N, were left without access to the heaviest users of ebooks on devices.

Mobipocket was created as a cross-platform ebook reader that would work on both Windows and Palm software. The first indication that Amazon would look for a path to ebook hegemony was when they bought Mobipocket in 2005 (they bought BookSurge, the print-on-demand capability, at about the same time.) But even though Mobi ebooks would play on multiple platforms, the market was apparently too small to interest Amazon.

The Palm Digital store became Ereader in 2007, and the Ereader platform, just bought by Barnes & Noble, now works on almost all devices (except the Kindle and Sony Reader). In the final years of the vision stage, before the Kindle, ebooks were sold by independent bookstores (Powell’s being the most successful) and dedicated ebooksellers like Diesel ebooks. Discounts off publishers’ established prices were offered only in targeted, time-limited promotions and seldom amounted to even 10% reductions. The stores were “powered” primarily by Ingram Digital, which replicates its print-world role as a digital wholesaler. Competing with Ingram was an upstart company in Cleveland called OverDrive, whose wholesaling operation is called Content Reserve. Content Reserve became the primary supplier of ebooks to libraries.

When the Sony Reader came on the scene in September 2006, publishers had four formats to convert their ebooks to: Palm, Microsoft’s .lit, Adobe, and Sony. Adobe, which played on PCs, was at that time by far the market leader in titles available and sales. But publishers, still seeing very little market, would not necessarily convert each ebook into all formats. At a time when Adobe had over 100,000 titles available, there were perhaps 40,000 on Palm and fewer than that on Microsoft or Sony.

Amazon’s arrival with the Kindle changed everything: title availability jumped, prices were slashed, delivery was vastly simplified, and the biggest online book-buying audience in the world was constantly pushed to think about ebook reading. That signaled the shift from the vision stage to the establishment stage.

Another critical development that enabled the movement from the vision stage to establishment was the development of the epub format by the International Digital Publishing Forum, the ebook trade association, facilitating use of ebook content across platforms.

Now, in the establishment stage, the big book retailers — Amazon, Barnes & Noble, and Canada’s Indigo — are in, competing in every possible way: price, selection, and merchandising. B&N and Indigo are trying to appeal to ebook readers regardless of the device they want to use. Amazon has suggested it will go that way, but so far it is only pushing the Kindle format for the Kindle or iPhone. Prices at Amazon and at B&N are clearly being subsidized in pursuit of a larger customer base. That is going to make it very difficult for independents or any new entrants to make a go of ebook retailing.

As we proceed in the establishment stage, we can expect publishers to start selling digital downloads and we can expect most web sites to offer vertically-curated offerings. The big horizontal aggregators will thrive for the next few years as the market grows, but the verticalization of consumer attention will eventually chip away at their sales.

The distributors are, or have been, Ingram and Content Reserve. (I say “have been” because Barnes & Noble’s just-announced deal to power the Plastic Logic content offering positions them as a competitor to Ingram as a digital wholesaler, although there is no suggestion as to how far they want to go and, as of now, several days after the announcement, nobody else to my knowledge has raised this point.) CR has recently done a deal to provide service through Ingram’s print-world competitor, Baker & Taylor. The subsidized discounting taking place at Amazon and B&N is going to make it very difficult for the distributors’ horizontal customers. Ingram may recognize this problem as similar to the one they faced when they first tried to launch ebook wholesaling in the late 1990s and Amazon responded with deep discounting.

The distributors have to find new opportunities through web sites that don’t think of themselves as content-centric or content-sellers now (they’re communities.) The trick will be to curate the set of offerings in a very granular way, but a marketplace will develop there, and it will be served by aggregators.

For ebook readers, it is definitely the best of times, so far. Because of the epub standard developed by the IDPF, most ebooks can be offered for use on multiple devices without high conversion costs (which, in any case, are easier to bear now that there are real sales.) More and more titles are available and, despite the Sourcebooks experiment that triggered this series of posts, we are moving to a standard of releasing the ebook when the book first comes out. I believe we’ll start to see ebook releases ahead of the print book before long. Competition has prices of the content to the consumer plunging. The choice of devices is proliferating and, of course, that means the devices will cost less in the future too. The deployment of smartphones that can also be used as book readers continues to increase. The pieces are in place for evolution to turn to revolution and, when it has, a few years from now, we will move from the establishment stage to “transition”. That’s when the printed-book world as we have known it for about the last century will change into something completely different.



The Sourcebooks experiment with Bran Hambric: publishers in the early “establishment” stage of ebook adoption


In a post last week we reviewed what Sourcebooks CEO Dominique Raccah did — announcing she was holding back the ebook publication of a new hardcover YA novel coming this September — and why she said she did it. Over the weekend, we posted about what we see as the four stages of ebook adoption. Today we will examine how one ebook stakeholder — the publisher — is affected by the change from a no-ebook world 10 years ago to what will be a largely (if not mostly) ebook world 10 years from now.

The first stage of ebook adoption, which we called “vision”, ended with the appearance of the Kindle. In that period of roughly 10 years, ebooks found early adopters who read them on PCs and handheld PDAs. The dedicated ebook devices introduced early in the vision period (Rocketbook and Softbook) went nowhere. The Sony Reader came along at the end of the vision period. It is an e-ink device quite similar in size to the Kindle 1 and 2, but without two critical components that gave the Kindle an edge: a much larger body of titles to choose from and direct connectivity from the device to the source of the titles. There were other advantages the Kindle had (the massive Amazon online book-buying audience) and features it offered (the built-in dictionary), but the title selection and connectivity were key.

Amazon quickly added a third advantage: the price of the books in the Kindle store went way lower than anybody expected because Amazon was willing to sell the individual titles at a loss to grow the market for the devices. The net effect was to propel ebook adoption from the vision stage to the establishment stage, which is where we are now.

Ebooks were not a priority concern for publishers at the time the Kindle came out. There had been too many false alarms. In 2000, both Arthur Andersen and Forrester Research offered projections for a multi-billion dollar ebook market that was to appear by 2005. Nothing close to that happened. In the vision stage, only the visionaries cared, inside the publishing houses and among the readers. Sales grew in fits and starts but were still well under 1% of units or dollars for every major trade publisher when the Kindle came out.

Because the dollars weren’t big, business decisions were not hard-fought and probably not well thought out. Publishers used the retail price of the prevailing print edition as their benchmark, with most setting the ebook price at nearly that level. After some turn-of-the-century feel-good talk about 50-50 splits with authors, royalties settled at about 25% of net or 15% of the publisher’s suggested retail. Agents accepted it, at least partly because, whatever the percentage, there wasn’t enough ebook revenue at stake to be worth fighting a publisher offering an attractive print book deal.

It should be noted that the big accomplishment of the vision stage was the creation of the International Digital Publishing Forum (IDPF) and of the epub standard, which drives most ebooks today, with the exceptions of the Kindle, which Amazon keeps in its own special flavor of the Mobipocket format, and ScrollMotion, where the content comes embedded in the company’s proprietary app.

There was very little thinking necessary about the ebook’s impact on the sales of the printed book because ebook uptake was so limited. In fact, a growing body of evidence emerged that giving away the ebook would stimulate sales of the printed book. Lost in the thrill of that discovery was the likely underlying reason: people didn’t want to read ebooks, so when they were given something digital that they started reading and liked, they’d buy the printed version to finish it. Now that we’ve moved from the “vision” stage, where most people didn’t read on screens, to the “establishment” stage, when many do, we’re likely to find the stimulative effect of ebook giveaways diluted, if not eliminated.

Another fact that made little difference in the vision stage but matters more and more now is that ebook sales are not reported to the bestseller lists. So even if ebook availability (at Amazon’s much lower price) cannibalizes only a fraction of printed book sales, it could affect a book’s bestseller chances or placement.

Since the actual profits from ebook units are higher than they are for print books if the publisher price is the same (unless the publisher has cut an unusually generous deal with the author for royalties), this decision by Sourcebooks — which is being watched and contemplated by other publishers — must be motivated by something more complex than the publisher’s profit per unit sold.

In Publishers Lunch, Michael Cader reviewed this decision and seemed to suggest that it was largely about taming the Amazon beast. I seldom disagree with Cader, but I don’t buy that argument in this particular case. It would take a very foolish publisher to publicly stick their thumb in Amazon’s eye (and Dominique Raccah is not foolish). And a one-off experiment of this kind does not seem like an approach that would affect Amazon much one way or the other.

What Dominique said in her post was that she didn’t want aggressive ebook pricing to devalue the high-priced hardcover. She believes that higher-priced editions are critical for the publisher and the author to maximize revenues so she prefers to slot ebooks into a “staged release” strategy resembling what publishing has done (hardcover, trade paperback, mass-market paperback) and what Hollywood has done (theatrical release then DVD.)

Before we evaluate that idea, let’s look ahead to the further stages of ebook adoption. In the current establishment stage, we can expect the number of ebook channels and vendors to proliferate. In that environment, the resellers will do everything they can to keep prices down. They will subsidize individual product sales from device margins or anticipated lifetime customer value. If Amazon is willing to swallow a hit of two or three bucks a unit with virtually no competition, what will they do now that B&N and soon Indigo also have devices? B&N has announced that they will match Amazon’s $9.99 flagship price, and they are clearly charting a course of appealing to all devices (insofar as they can) with their ebook store. And B&N content will power another device competitor, Plastic Logic, in early 2010.

This period of loss-creating discounting by retailers won’t last forever, but it will last until the market stabilizes, which will take several years. While that happens, the number of ebook points of purchase for the consumer will mushroom, which is good news for publishers. At the same time, propositions like Scribd and Smashwords will disrupt in-supply-chain pricing; Scribd offers publishers 80% of retail and Smashwords pays 85%. As the devices proliferate, so will the tools to make it easy to put ebooks from those sources on the devices. If Amazon has disrupted the publishers’ hopes of controlling ebook pricing, might not Scribd and Smashwords disrupt the retailers who took away that control?

Evan Schnittman makes the point that holding back the ebook has consequences. It dilutes the impact of the publisher’s marketing efforts. It could encourage piracy. Evan’s solution is an introductory promotional price that is raised when initial demand has ebbed, and he has a notion (which I don’t quite understand) of how publishers can get retailers to collaborate on that. I don’t think that’s the answer. First of all, it strikes me as backwards. The ebook price should be a dollar more than the print book for the three weeks or so before the print book comes out, when an ebook could be available. Then it should be the same as the print book for the first couple of months, so that it doesn’t disturb the bestseller list possibilities. Then it should drop sharply to reflect the lower cost (to publisher and retailer) of providing ebooks.
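
Spelled out as a sketch in Python, with the window lengths from my proposal and an assumed 40% cut standing in for “drop sharply”:

    def suggested_ebook_price(print_price: float, days_since_print_pub: int) -> float:
        """Windowed ebook pricing per the schedule proposed above.
        Negative days mean the print edition is not yet on sale."""
        if days_since_print_pub < 0:
            # The ~3 weeks before print publication: a one-dollar premium.
            return print_price + 1.00
        if days_since_print_pub <= 60:
            # First couple of months: match print, protect the bestseller list.
            return print_price
        # Afterward, drop sharply; the 40% cut is an assumed placeholder.
        return round(print_price * 0.60, 2)

    # For a $27.00 hardcover:
    # suggested_ebook_price(27.00, -14) -> 28.0
    # suggested_ebook_price(27.00, 30)  -> 27.0
    # suggested_ebook_price(27.00, 90)  -> 16.2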

Now that’s a great theory I just posited; unfortunately, there is no way to implement it. All retailers will try to beat each other on price, and ebooks constitute a much less expensive place for them to subsidize a low-price perception than print.

Sourcebooks — any publisher — wants to maximize revenue for themselves and for their author. To the extent that Sourcebooks can preserve hardcover bestseller status by holding back the ebook, it makes sense to do it. But beyond that, it doesn’t. Retailers selling at a loss are good for the revenue of publishers; it is their margin they are giving away to increase sales for everybody. Would Sourcebooks, or any publisher, refuse to make a book available to a price club or mass merchant because they’d sell at a deep discount? I’m not aware of one that ever did.

If I were Amazon, I’d enlist 10 publishers to try selling their ebooks 10 days before the printed book was on sale and use the data to prove (most likely) that the digital head start propels early print sales. That seems at least as likely to me as the alternative: that early or simultaneous release of the digital version reduces them.

Aside from the new ebook device and retailing entrants we can expect in the next few months, another flashpoint will arrive when publishers start to sell digital downloads themselves, which all of them will do within a year or two. The discounts publishers offer and the price war among retailers will put publishers in an extremely difficult position. When publishers sell their books at a discount (which they will absolutely have to do), retailers will be knocking at their virtual doors saying, “I thought my discount was off your price. I want my discount off the price you really sell at, not the price you made up that nobody sells at!” And that’s when the publishers who hadn’t seen it earlier will know that the discount structure has to change.

In the next post on this subject, we’ll look at what other stakeholders have to look forward to as ebook adoption continues. And we’ll see another reason why the publisher-to-retailer discounts will come under pressure: authors will be demanding, and getting, a bigger piece of the ebook pie.


A context in which to evaluate ebook strategies


This post is part of a growing set initiated by the Sourcebooks experiment of holding back an ebook from simultaneous publication with an upcoming hardcover. It is the second (link to the first below) and will be followed by at least one more, as the conclusion of this post makes clear.

To talk sensibly about the Sourcebooks experiment with Bran Hambric, we need to sketch out some context. Trying to provide it will be the objective of this post. A couple of caveats before we begin:

We are talking here about narrative fiction and non-fiction: books that don’t need illustration or design-intensity to get their content across.

And we are talking about books intended for general audiences: trade books.

The first caveat matters because it describes the technical challenges of presenting the content and the second because it defines the commercial parameters for all the players (and the players will be the subject of a subsequent post.) Content that is delivered to more structured and organized markets, such as we see in academia or corporations, has a very different set of commercial realities.

There will eventually prove to be four distinct stages of ebook adoption, and what makes sense for all the players will change as we move from one to another. The four stages are vision, establishment, transition, and the new marketplace.

The first stage, vision, which started in the late 1990s, will be seen to have ended when the Kindle was launched in November of 2007. In this stage, ebooks attained only a minimal market, substantially less than 1% of total trade sales. In that stage, we had the development of the epub standard, which could be a permanently useful efficiency for the market. We also had the establishment of basic terms of trade, giving intermediaries approximately the same margins, based on the publishers’ suggested retail price, that they have had in the physical print-book world. (In my opinion, that will not prove to be so helpful.) Author royalties in publishing’s Big Leagues seem to have settled at either 15% of the publisher’s suggested retail or 25% of the publisher’s revenue, another formula that will be challenged by market forces. We have learned a lot about the futility and frustration surrounding DRM. And publishers have tried to establish ebook pricing that tracks the printed book availability at any time, generally listing the ebook at about the same price as, or a buck or two cheaper than, the lowest-priced print edition available.

The second stage, establishment, started with the Kindle. This is when ebooks are much more obviously headed for their ultimate central position in consumer trade book publishing. Ebooks are moving from making a negligible commercial contribution on each book to delivering measurable value, a shift that has arguably already occurred. Many major books are now getting nearly half their Amazon sales from the Kindle, and other ebook sales are growing as well. Publishers are seeing ebook sales that have tripled as a percentage of their total sales in the past 12-to-18 months. In this stage we are also seeing — and will see more — new players enter the game. Amazon’s device play was followed by software launches on Apple’s App Store (more than one reading application, including Amazon’s own) and from Indigo (a smartphone application called Shortcovers, which is part of the iPhone expansion). The Kindle device was preceded by the Sony Reader; there have been UK-based launches of an independent competitor (Cool-er Reader) and one from Borders UK called Elonex; and strong rumors suggest that both Barnes & Noble and Indigo will deliver their own devices very soon. There are others as well. In this establishment stage, ebook revenues are growing, though they are not yet sufficient to change the overall power relationships in the publishing value chain. But because so many devices and channels are competing to get established and because of the high physical-world discounts, publishers have completely lost control of consumer-facing pricing at the title level.

The third stage, the beginning of which I reckon is about 1-to-3 years off, will be the transition stage. Since I’m inventing this paradigm, I’ll declare arbitrarily that the transition stage will begin when it becomes common for ebook sales to be as much as half the sales of ebookable titles (see the caveats above) and trade houses are seeing ebooks grow steadily past 10% of their total unit sales (a total that includes the many books, still most juveniles and other highly illustrated titles, that they all publish and that are not “ebookable”) with no end in sight. In the transition stage, we will start to see real shifts in the value chain. Devices that can only import from a single source (as the Kindle is today) will fade in importance (if, indeed, there are any left by then.) The number of potential purchase points will explode, as many web sites offer some sort of ebook-readable content, a great deal of it free, but lots of it based on the prices set by publishers. Large horizontal aggregators (Amazon, B&N, and the full-line bookstores that build their offerings from wholesalers) will struggle to hold onto a large and loyal customer base as the vertical web increasingly takes hold. Almost all publishers will be among the zillions of sites offering direct downloads to consumers, many through explicit verticals that sell the books of their competitors (as Macmillan’s tor.com sci-fi site, presciently, is doing today.) DRM will gradually disappear, but policing commercial-level piracy will become much more effective because the entire industry will be fighting it. What Scribd is doing to fight piracy — using their archived content to locate pirated material posted by site visitors — will become more widespread and collaborative. There’s a real opportunity for a search engine to offer a service here, which somebody will take, and then all will follow.

And the fourth stage, the new marketplace, will have arrived when ebook sales dominate and printed book sales shift primarily to short-run and print-on-demand, except for the very biggest titles. This will happen with accelerating speed once sales pass the point of being 40 or 50 percent digital overall, possibly within a decade. When ebooks become the “norm”, prominent authors will have less need for publishers, and ebooks will be routinely updated, enhanced, and linked to other content in ways that printed books simply cannot match. In the new marketplace, printed books will have very specific uses: tokens and souvenirs, delivery of certain material that makes great use of large presentation surfaces, and, of course, serving those who are too old, too poor, or just too stubbornly Luddite to make the shift to the screen-reading that will have become ubiquitous by then.

In the next post on this subject we will really address the Bran Hambric experiment. We’ll tackle how the various stages of ebook development affect each of the stakeholders: authors, publishers, retailers, wholesalers, and, of course, readers. The context of the stages allows us to make sense of the issues of 1) timing, 2) pricing, 3) DRM, and 4) the content itself, and the marketplace impact of each of the four from the standpoint of each stakeholder. And we’ll see that the challenges Sourcebooks is responding to are symptomatic of what publishers face in the early establishment stage.


An ebook experiment stirs up conversation


The Wall Street Journal was the first to announce, on Monday (behind a pay wall, but Google “Publisher Delays E-book Amid Debate On Pricing” and you’ll get it), that Sourcebooks CEO Dominique Raccah was holding back the ebook publication of a new hardcover YA novel, Bran Hambric, scheduled for release this September. Raccah’s explanation to the Journal was that she was trying to preserve the perception that the $27 hardcover price was reasonable. Since she knew that any ebook would hit the street at just under $10 (the Kindle promotional price is $9.99 and B&N has suggested that their promotional price will be $9.95), Raccah felt that sales of the hardcover would be undermined.

What was left unsaid in the Journal piece was that Raccah might have been leaving money on the table with this decision. After all, the publisher still sells ebooks on roughly equivalent terms to printed books and has lower costs. So, depending on the royalties Raccah is paying the author, she is (most likely) realizing more margin for Sourcebooks on the ebook sale than on the printed book sale, regardless of how the retailer prices it.

Even more startling (in this day and age) is the possibility that the author’s royalty is higher per copy on the hardcover, so Raccah might be protecting author royalties, to the extent that withholding the ebook restrained cannibalization and resulted in more hardcover sales. I mention that possibility because the agent for author Kaleb Nation is Richard Curtis, one of the most ebook-friendly agents in town (and, indeed, the owner of an ebook publisher called EReads), who was quoted in the Journal supporting Raccah’s decision.

On Wednesday, Motoko Rich and Brad Stone published a piece in the Times on the same story (in which I was very briefly quoted.) Rich and Stone added some nuance to the story. The Journal had said that agent Robert Gottlieb resisted simultaneous ebook publication “when he can prevent it.” In the same graf, it said that only one book of the Times’s Top 15 fiction bestsellers was not available in the Kindle store. Of course, that doesn’t mean that the Kindle editions were available at any particular time in relation to the first release of the hardcover, just that they are available now.

The Times reporting went further than the Journal, speaking to several publishers of upcoming major books about their ebook timing plans. Doubleday hasn’t decided yet about Dan Brown’s book but acknowledges that the impact of ebook sales on the hardcover was a consideration. S&S won’t reveal their ebook release plan for Stephen King’s November novel, Under the Dome. Ditto from Hachette imprint “Twelve” on the Ted Kennedy autobiography, True Compass, coming on October 6.

So the fact that everybody is thinking hard about this is confirmed by the Times’s reporting.

But Cader, who as an industry expert and blogger has more scope and credibility to report unattributed information than reporters at the WSJ or the Times, went further in Publishers Lunch on Thursday. He ridiculed the notion that Doubleday was (according to a spokesperson) “[more] worried about…security…than particular vendors” and he sees the motivation for publishers as being to control the behemoth, Amazon. As Cader reports it, Kindle sales surged when the new device(s) came out, becoming as much as 50% or even 70% of Amazon’s sales of many important books.

Everybody (in the industry, but maybe not outside of it) knows that Amazon pays a standard discount for ebooks, which is about 50% off the publisher’s suggested retail, and that Amazon actually takes a loss on a $25 or $27 hardcover book it sells through Kindle at $9.99 (as B&N will do if they follow through and sell books like this as ebooks for $9.95.) Nobody expects Amazon to do this forever although, as Cader points out, they are temporarily subsidized by the profit they make selling the Kindle devices. The widespread fear among the big publishers is that Amazon will soon demand lower prices for the books they put on Kindle so they can keep the $9.99 price point profitably. As Kindle unit sales grow, of course, the muscle behind such a potential demand will grow right along with them.
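
To make the economics concrete, here is that arithmetic in a few lines of Python. The $27 list price, the roughly 50% discount, and the $9.99 Kindle price are the figures cited above; the $3.00 print unit cost is my own illustrative assumption, since nobody reports that number.

    LIST_PRICE = 27.00             # hardcover suggested retail
    WHOLESALE = LIST_PRICE * 0.50  # ebook discount: about 50% off list
    KINDLE_PRICE = 9.99

    retailer_loss = WHOLESALE - KINDLE_PRICE
    print(f"Amazon's loss per Kindle sale: ${retailer_loss:.2f}")  # $3.51
    # (On a $25.00 book the loss is $2.51: two or three bucks a unit.)

    # The publisher collects the same wholesale payment either way, and the
    # ebook carries no printing, warehousing, or returns cost:
    PRINT_UNIT_COST = 3.00  # hypothetical plant/paper/freight figure
    print(f"Publisher receipt on print, net of unit cost: ${WHOLESALE - PRINT_UNIT_COST:.2f}")
    print(f"Publisher receipt on ebook: ${WHOLESALE:.2f}")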

Cader makes the very important point that sales migrating to ebooks, and particularly to Kindle, weaken the brick-and-mortar channel that publishers depend on for most of their sales and profits. The Times reported that publishers could well be making bigger unit profits on each Kindle sale than on each printed book sale (a fact that I explained to them when I was interviewed and which appeared not to be clear to them before I did). Cader (who of course knew that without needing to be told by me or by the Times) makes the point that publishers do this because they are “looking out for what they believe to be their long-term interests — and are trying to protect the entire system of physical book retailing which supports the whole industry.”

While this was happening, Dominique Raccah posted her thoughts to Peter Brantley’s Amazing List, and Kassia Krozser, a member of that list and proprietor of the Booksquare blog, turned her space over to Dominique for a version of that post. Dominique made it clear that she considered what she was doing with Bran Hambric to be an experiment. Her focus was on a “sustainable author/publisher model”. She made the point (again, clear to most people in publishing but perhaps not to those outside) that the music business continues to present inapplicable analogies, one of the most egregious being that authors should give it away, like musicians do, to get performance bookings: in publishing, there are no performance bookings (and few t-shirt sales…)

Raccah made it clear that she supports early ebook releases and her house is going to a workflow that will enable that. But then she gets to what is really the heart of the matter. “Etailers are suggesting that the ‘right’ price point for an ebook is maximally $9.99.  And they are proselytizing the price $9.99.  We can’t control what retailers charge for books or ebooks.” The publisher’s choices are whether and when to make it available and whether to sell to any particular retailer.

From there she explains that exploiting formats with “windows” is an old book business strategy (hardcover, trade paperback, mass-market paperback) and a common film strategy (theatrical precedes DVD release, with TV licensing once part of that picture as well, but not anymore.) And she concludes by saying that publishers need to make these decisions on a book-by-book basis (“strategically”, she says, although I’d call that “tactically.”)

My quote, by the way, was to the effect that ebook readers and print book readers are increasingly separate markets, which I believe to be true but cannot prove. A C-level friend at a large house disagrees with me, as I’m sure many others do, and my evidence on this is highly anecdotal (including myself: I have read one printed book of the 50 or so I’ve read in the past 18 months.) But my friend would have no more evidence than I to support his contrary position, so publishers will have to make decisions without really knowing, for now, whether they can push a Kindle or Shortcovers or Ereader consumer back to paper by denying or delaying a book.

That concludes the summary. I have a few thoughts of my own to add on this. I’ll be posting those shortly, probably over the weekend. I hate going much over 1000 words on any single day, and I’m already past 1200.

An earlier version of this post had a couple of errors misconnecting agents and authors, which have been repaired. So if somebody tells you about a mistake they saw that you can’t find, that’s what it’s all about. Thanks to Michael Cader for setting me straight.


“Vertical” versus “service”: semantics, nuance, or dueling metaphors?


Andrew Savikas of O’Reilly Media and I definitely agree on some things, the principal one being that it is going to get harder and harder for people to get paid for content. And it is going to become more and more necessary for a publisher to be branded as “of value to the community” to have any business at all. But Andrew sees the governing paradigm as “service”, and he, sometimes very logically and sometimes tautologically, can explain content purchases as actually being service purchases. And, indeed, sometimes they are.

We are all prisoners of our experience, and Andrew draws heavily on O’Reilly’s. O’Reilly is a unique publishing company because its audience — its community — is as tech-ept as the publisher itself. O’Reilly has a coherent product offering, which is, in and of itself, a “service.” It is also, in and of itself, a “vertical.” And that’s where Andrew and I differ in our interpretation of what we’re seeing when we look at the same thing. O’Reilly is actually serving a vertical (so we’re both right). But the question for the larger publishing world is: which (“service” or “vertical”) is the real core concept? Which is the one that best enables other publishers to visualize and build a business model that will work for them?

Of course, either paradigm breaks down if you try to suggest it is universal, particularly that it is universal today. To defend the “nobody pays for content, just for service” meme, Andrew must explain your willingness to pay for a movie as the price of a social experience, or the price of having somebody put it on a big screen for you and sweep up the popcorn that you spill while watching it. I’d say you’re just buying the content in different forms whether you go to the movie or rent the DVD, but that wouldn’t fit the meme.

The clinching metaphor in Andrew’s piece is that we aren’t actually buying food when we go to a restaurant (because, if we were, we’d just buy it at the grocery store.) This is tricky, because, indeed, you do want that hamburger cooked and served on a bun and you want a place to sit while you eat it and maybe some ketchup supplied. So, in fact, you’re buying both food and service. You wouldn’t patronize the restaurant if they didn’t give you the food, so it seems a bit of a stretch to say it isn’t what you’re buying!

But Andrew wants to explain a phenomenon he sees over and over again at O’Reilly: that they are able to sell their content even though it is available free in some other form. He sees the indispensable component that allows that to happen as being that the content is somehow packaged to be more useful and therefore performs a “service.”

I’d say the indispensable component is that O’Reilly has a trusted brand in a community, which means the community looks to O’Reilly for answers. That enables the company to sell its content in contextual ways that would not be accessible to a publisher that had the same content but did not have the same community attention, the same brand.

Andrew believes that service is what the O’Reilly customer “is paying for when they buy one of our Cookbooks (which contain ‘Recipes’ for how to accomplish specific tasks with a particular computer language or technology, often culled and curated from material and techniques previously published in blog posts, mailing lists, or help forums). I asserted that rather than the content itself, people are paying for the preparation of that content, to the extent that it helps them solve their problems more quickly and conveniently. When you think about what we do as a service business, then it makes perfect sense: readers are paying us for the service of finding a bunch of great and interesting stuff, and putting it together in a convenient package. It’s the convenience of not having to find it themself, and the concise package that saves them from having to dig through a bunch of web bookmarks or search results.”

First of all, distinguishing between “paying for content” and “paying for preparation of content” is a pretty fine line. Every piece of content bought and sold in the history of man has been “prepared” in some way. Repackaging, recombining, and repositioning content — selling the same content with different preparation — is within the experience of every publisher of any size.

And it is also nothing new for content to provide a service (and it is not necessary to be cute about it, like claiming it provides the “service of entertaining you.”) It isn’t a stretch to say that any Dummies book is providing you a service, just like any other how-to book. A book that shows you how to start a corporation or get a divorce is providing a service. Those books had been around for decades before there was an Internet.

But how does the capability of O’Reilly to sell that content (as a service) look when viewed through the “vertical” lens? Like a natural, I’d say. When you’ve got the attention of a community, you can present the same things to them in different ways and at different price points, in context, because you are familiar with them and their needs in a very immediate way.

Andrew’s piece concludes with a lengthy quote from Kevin Kelly defending the enduring role of PSLs (publishers, studios, and labels) in a paragraph that starts, rather confusingly, by touting the value of two aggregators who are not Ps, Ss, or Ls (Amazon and Netflix). The point of the graf is that creators (of content) need aggregators “for the distribution of the users’ attention back to the works.” Kelly and I are in solid agreement on that: the publisher’s main job (and “service” to the writer!) is to make the user aware of the work.

So Kelly’s point, the final summation quote in Andrew’s piece, is about marketing, not about service. And marketing, as we have said repeatedly, most recently and elaborately in the Shift speech, is the reason that publishers have to get vertical.

The distinction between what Andrew is saying about “service” and what we say here about “vertical” is nuanced. Rarely can “service” be delivered broadly; it has to be targeted, so vertical becomes a sine qua non. And anybody really trying to build a vertical will do it by offering service and tools, which they would hope would also lead to the ability to sell content.

Our vision has been that content should be used as “bait” to attract community members and, indeed, that services are a key component to keep communities involved. We believe that, before long, very few publishers who don’t have real brand identification to communities will be able to profitably sell content.

Without disparaging the notion that service needs to be a much more important component of publishing thinking than it has ever been before, I am still convinced that the “vertical” lens, rather than the “service” lens, is the place publishers have to start to understand how their businesses will be changing in the 21st century.


Google settlement opponents need to be careful how they win


The debate about the Google settlement, like most debates of any consequence or intellectual interest (what the government should do about health care or energy, for example), actually engages a wider range of knowledge than most of us have. But we feel comfortable having an opinion about what we should do about health care or energy without necessarily knowing much about the logistics, requirements, actual state of affairs, or cost-value relationships of what we favor or oppose.

We each start with a general position. For example, mine on health care is that government intervention is required to make sure everybody has a minimum reasonable standard of care. On energy I believe government policy should encourage energy development and consumption that is efficient and unwasteful while increasingly substituting renewable energy for resource-consuming energy.

My personal political positions are directional, not very specific. Others, perhaps because they’re better informed, have more aggressive and articulated views. I know people who think health care isn’t worth fighting for unless it is single payer; that anything else could make matters worse. I am sure there are people who hold similar positions on energy that I would deem “perhaps desirable, but not politically achievable at this time”. They’re my allies unless their idea of “perfect” blocks my idea of “better”.

And then there are others, of course, who aren’t allies at all: people who believe that market forces can be trusted with social challenges or simply resist the idea of any expansion of government or increase in taxes.

But when it comes to the details of legislation, most of us just plain citizens are pretty helpless even to have an informed opinion, let alone to have any influence. The staffs of our legislators are hearing about the details from the experts representing doctors, hospitals, insurance companies, drug companies, and left- and right-wing lobbyists. If Charlie Rangel says that a very modest tax increase on people making over $350,000 or $500,000 a year will bring the costs into line with the parameters President Obama says are economically necessary, and there is no chorus of objections from sources I trust (Krugman), I’ll just accept that as fact. It advances my philosophical position and I tend to trust him. I mean, who really “does the math” for themselves on things like this? Without being a professional, how could you?

The Google settlement might not be as complicated as health care or energy, but the debate about it also revolves around a lot of unknowns. Although the argument between those who say “approve it” and those who say “reject it” or even, “reject it if you can’t change it” is superficially waged on the “merits” and on the words in the settlement, I believe most of us come to this extraordinarily complicated question with a position and then put each new piece of information (or argument) into a “context” that won’t require us to change that position. And since we’re dealing with a lot of unknowns, that’s not really very hard to do.

The dominant prejudice I bring to this conversation is a belief that copyright laws have been extended so far that they are abusive to the public interest, walling off a lot of intellectual property from use for no good commercial reason. With that as a background belief, I saw what Google did (scanning all the work) as cutting a Gordian knot. Others come to this discussion with a dominant concern for respect for copyright or a dominant worry about bullying monopolies. Their prejudice might lead them to be against the settlement while mine pushes me to favor it.

Today’s post is not to argue that the settlement should be approved, but to consider what the situation will be if the settlement is rejected. The proponents and opponents of the settlement certainly seem to differ on what the world will look like if the settlement is approved; might there be somewhat greater agreement between the sides about what the world will look like if the settlement is rejected?

To me, it looks like a short story.

The consequences of the settlement being rejected seem catastrophic to settlement opponents if it is turned down because the litigants are deemed not to fairly represent the classes (that is: the judge buys into the idea that foreign authors, contributors, and orphans, and perhaps others, are “left out”). If the class representation is overturned or curtailed, it would be somewhere between difficult and impossible for these lawsuits to go on (and there are two lawsuits, even though there is one settlement). If the settlement is rejected for some other reason (perhaps: the judge agrees that it can’t be allowed because it grants Google what would be a monopoly), then presumably the litigation could go on.

If rejection of the settlement is because the AAP and/or AAR don’t represent the class, Google would be in a stronger position than they were before the suit. There would be no database of orphan works to sell litigation-risk-free, but the scanning for search and the returning of snippets would just continue. Authors could individually sue for copyright infringement if they wanted to try. Nobody would be any more tempted to “compete” with Google by scanning in-copyright works than they are now. And Google would have the benefit of having smoked out a lot of potential litigants, because the faux settlement got a lot of copyright holders to come forward.

A little-known fact is that most of the value of the database Google was going to sell was in the in-copyright works that would have been ceded to the database. (This came up obliquely because these are the copyright holders who are going to get bonus revenue from the money earned by the page views on orphans, a fact settlement opponents have raised.)

That being the case, somebody will want to distribute that database, even without the orphans. That somebody will have to negotiate with Google to get the digital files and then with each of the publishers for their rights, without a BRR (Book Rights Registry) to help them. A pain in the neck, but in a few years it would probably happen.

If the settlement is rejected for some other reason, all of the above remains true except the part about still selling that database: the copyright owners would still be in litigation with Google over the scanning, and their lawyers would advise against licensing a use for the very scans they want to say Google shouldn’t have. The AAP and the AAR would then get to decide whether to continue to fund the suit for the next several years while they and Google keep talking, presumably, about something that would satisfy the court (a bit odd, since they already satisfied each other!). If that were to happen, would the opponents of the settlement somehow help them carry on? Or step in to litigate in their stead?

If this analysis is right (and I float it with all humility: IANAL), then the opponents of the settlement walk a fine line. They want it rejected, or remanded to the litigants with some instructions they can actually follow. But they don’t want the plaintiffs discredited as representatives of the class. It would be the height of irony if Google, which probably had foregone challenging the standing of the AAR and AAP at the beginning to avoid antagonizing two organizations they ultimately need to work with, gets a court victory they didn’t seek handed to them by people motivated to make their lives more difficult. This could end up being a textbook demonstration of “unintended consequences.”


Malcolm Gladwell, please meet John Wooden


The sui generis Malcolm Gladwell wrote a provocative piece in the May 11 New Yorker, “How David Beats Goliath”, demonstrating that the underdog can often win by adopting an unconventional strategy. The examples were numerous, and included Lawrence of Arabia, but the central point-maker was a girls’ basketball team. Their coach, an Indian-born software executive who was not terribly familiar with basketball, couldn’t understand why basketball teams were routinely coached simply to allow the offense to bring the ball into the forecourt, essentially leaving about 70 feet of the court’s 94-foot length undefended.

Gladwell explained the near-irrefutability of the coach’s logic, which was underscored by the team’s success against much more skillful opponents. (They made it to the finals of the state championships before they lost and, according to the article, the referees working on their opponents’ home court were largely responsible for that.) And he did a bit of research, relying heavily on an interview with longtime pro and college coach Rick Pitino to get some historical perspective and to understand how the full-court press strategy worked at higher levels of the game.

Pitino had been a scrub guard on the University of Massachusetts basketball team that had Julius Erving, the immortal “Doctor J”, as its star player. UMass had been defeated by a scrappy but presumably inferior Fordham team in 1971 because Fordham used a full-court press and disrupted the better team’s offensive flow. That Fordham team had a star forward named Charlie Yelverton who was about 6-foot-3, nowhere near tall enough to play that position on most successful college teams. But the press mitigated the height disadvantage.

A few years later, Pitino was a young coach at Boston University and used the press to get the team into the NCAA tournament, an unusual event for them. Pitino has made a career of using the press successfully at Providence, the University of Kentucky, and Louisville. But, oddly enough, as Gladwell notes, nobody else has.

All of this demonstrated, to me, that research is great, but you can’t beat a long memory unless you do all the research. And my long memory beats Gladwell’s research with Pitino.

The first coach to use a tip-off-to-final-buzzer full-court press as a staple tactic was John Wooden, undoubtedly the most successful college basketball coach of all time. Wooden had been the coach at UCLA for 15 years when he put the press in for his UCLA teams, which also lacked height, in the 1963-64 and 1964-65 seasons. These were the first two UCLA championship teams, and they beat presumably superior teams from Michigan and Duke to win those championships.

It’s too bad Gladwell didn’t know this, because the Wooden history confirms and validates his “David and Goliath” paradigm. Those first two UCLA champions were, indeed, “David”s. They lacked height and conventional basketball star power. They needed to change tactics, as Gladwell says Davids do, and the press worked perfectly for them. They made use of the unique talents of 6-foot-5 Keith Erickson (a star volleyball player in the basketball off-season), making him the roving backstop for their press. Erickson’s quickness and jumping ability frustrated opponents who tried to beat the press with a long pass.

But the success of those two UCLA teams enabled Wooden to recruit far superior talent to UCLA in the future. In 1965, Lew Alcindor (later Kareem Abdul-Jabbar) arrived on campus and, from then on, UCLA was Goliath, not David. (By the way, fellow freshman Mike Shatzkin arrived at the same time, which is how he knows all this stuff so well!) Wooden gave up the press as an all-game tactic and won 7 championships in 8 years by more conventional means.

So it turns out that Gladwell’s entire case could have been proven with one example: John Wooden at UCLA.


Reality changes more slowly than I like to think


I did a panel yesterday at NYU, part of the summer publishing program, on “New Visions” for publishing. The group was put together by Leslie Schnur. I shared the stage with four very articulate co-presenters who offered very diverse views of the future. Our audience was a full room of somewhere between 50 and 100 (I wasn’t counting; I didn’t know I’d be writing this piece) very attentive 20-somethings with a serious interest in publishing.

Dan Simon of Seven Stories Press spoke optimistically of a revival of book reading (printed books, that is) and passionately about the importance of editorial selection and advocacy as part of publishers’ social mission to bring good writing to readers.

Carol Hoenig, a writer and consultant who works with Author Solutions, told about her own experience successfully self-publishing a novel (she thinks selling 1,500 copies is a success, and I agree with her) and explained how Author Solutions helps aspiring writers “get past the gatekeepers.”

Brian O’Leary of Magellan explained the new business models enabled by print-on-demand and how to think about them. Brian pointed out that POD models make sense for books that sell as many as 500 or 1000 copies a year, and that caught Dan’s attention because, as he put it, “a book that sells 500 or 1000 a year is solid backlist for us.” Dan has been comfortable printing a 3-year supply; Brian’s math suggests reconsidering that formula.
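For the numerically inclined, here is a minimal back-of-envelope sketch of the offset-versus-POD tradeoff Brian was describing. Every number in it is invented for illustration, since Brian’s actual figures weren’t part of the panel; the point is only the shape of the math. Offset amortizes a setup cost over a multi-year printing and then pays to warehouse, and eventually pulp, part of it; POD pays a higher unit cost, but only for copies that actually sell.

    # Back-of-envelope offset vs. print-on-demand comparison.
    # Every figure below is hypothetical, chosen only to illustrate
    # why slow-selling backlist can favor POD.

    def offset_cost_per_sold_copy(annual_sales, years_of_supply=3,
                                  setup=4000.0, marginal=1.25,
                                  storage_per_copy_year=0.30,
                                  waste_rate=0.20):
        """Effective cost per copy SOLD when printing a multi-year
        supply offset. waste_rate is the share of the printing that
        is eventually returned or pulped rather than sold."""
        printed = annual_sales * years_of_supply
        printing = setup + printed * marginal
        # The average copy sits in the warehouse for roughly half
        # the supply period before it sells.
        storage = printed * storage_per_copy_year * (years_of_supply / 2)
        sold = printed * (1 - waste_rate)
        return (printing + storage) / sold

    def pod_cost_per_sold_copy(unit_cost=4.25):
        """POD: a higher unit cost, but paid only when a copy sells,
        with no setup, no warehouse, and no returns of unsold stock."""
        return unit_cost

    for annual_sales in (250, 500, 1000, 5000):
        offset = offset_cost_per_sold_copy(annual_sales)
        pod = pod_cost_per_sold_copy()
        winner = "POD" if pod < offset else "offset"
        print(f"{annual_sales:5d}/yr  offset ${offset:5.2f}  POD ${pod:4.2f}  -> {winner}")

With these particular made-up numbers, the crossover lands between 500 and 1000 copies a year, the zone Brian flagged; change the assumptions and the crossover moves, but the structure of the comparison doesn’t.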

Will Schwalbe, who had a 21-year career as one of New York’s top commercial editors at Morrow and Hyperion, explained his new web business, Cookstr.com, which aggregates recipes from more than 300 of the top chefs and cookbook authors in the world. Since, as any reader of this blog knows without my having to report, I used my presentation time to talk about the shift from horizontal to vertical, Will’s presentation had the great virtue of reinforcing the message I had delivered three presentations before.

Will made good use of the audience. He asked, by a show of hands, how many people liked Italian food. Just about everybody. How many cooked? Almost everybody. How many people got recipes on the Internet? A lot. How many baked more than cooked? A good chunk. How many vegans? About none. How many vegetarians? A handful. How many would prefer a recipe with fewer than five ingredients? Quite a few.

He used that device to show how the tagging he invests in on his web site delivers a better user experience for somebody looking for precisely the right great recipe. What it triggered in my mind was: “what an interesting way to collect information from an audience.”
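The mechanics behind that experience are easy to sketch. Cookstr’s actual data model wasn’t described at the panel, so the recipes, tags, and matching rule below are all invented; the sketch just shows how tags like the ones Will polled for (Italian, vegan, fewer than five ingredients) narrow a catalog down to precisely the right recipe.

    # Minimal sketch of tag-based recipe filtering, the idea behind
    # the tagging Will described. All recipes, tags, and fields here
    # are invented for illustration; Cookstr's real model may differ.

    RECIPES = [
        {"name": "Spaghetti Pomodoro",
         "tags": {"italian", "vegetarian"}, "ingredients": 4},
        {"name": "Osso Buco", "tags": {"italian"}, "ingredients": 12},
        {"name": "Vegan Chili",
         "tags": {"vegan", "vegetarian"}, "ingredients": 9},
    ]

    def find_recipes(required_tags=frozenset(), max_ingredients=None):
        """Return names of recipes carrying every required tag,
        optionally capped by ingredient count."""
        hits = []
        for recipe in RECIPES:
            if not set(required_tags) <= recipe["tags"]:
                continue  # missing at least one required tag
            if max_ingredients is not None and recipe["ingredients"] > max_ingredients:
                continue  # too many ingredients
            hits.append(recipe["name"])
        return hits

    # "Italian, vegetarian, fewer than five ingredients"
    print(find_recipes({"italian", "vegetarian"}, max_ingredients=4))
    # -> ['Spaghetti Pomodoro']

The design point is that every show-of-hands question maps directly onto a tag or a numeric field, which is why the polling doubled so neatly as a demonstration of the site.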

After we all presented, there were lots of interesting questions. What’s the business model of Cookstr? How does Seven Stories go about finding those great books Dan wants to publish? Does Author Solutions do publicity for books?

As the conversation evolved to a close, I realized I had a precious opportunity. Though I’m considered to be wildly (crazily?) forward-thinking in some circles, expecting print runs of books to nearly disappear in 20 years, for example, I am unabashedly conservative in others. For example, the idea of books as collaborative or social experiences leaves me cold and it really leaves me cold to think of interrupting good narrative reading to explore links and, particularly, to see video. Some people think storytelling will be reinvented to take advantage of things like this, which makes me scratch my head. But maybe it’s generational, I always think. Maybe today’s generation would find it boring not to have a video interlude interrupt unbroken text. Well, with all these very smart Born Digitals in one room, I’d use Schwalbe’s technique and ask!

So, with time running out, I got the indulgence of the organizers to ask the crowd a couple of questions. The first one was: “How many of you read ebooks?”

Two hands went up. Two.

The next question was not worth asking. But I sure got a dose of new information to ponder.
