
Comparing self-publishing to being published is tricky, and most of the data you need to do it right is not available


I have a certain pride of discovery in super-successful indie author Hugh Howey. It was nearly two years ago that I learned about him on a trip to LA to organize a conference that didn’t happen. The Hollywood grapevine told me that his novel-of-assembled-novellas, Wool, was a sudden major self-publishing bestseller and that he had a movie deal. I got in touch with him and his agent, Kristin Nelson, and learned that he was making $50,000 a month in royalties and had a host of foreign deals as well as the movie deal. Meanwhile, the publishing establishment couldn’t come up with an offer that would sensibly entice him to give up his indie revenues. I read his book and loved it and then had many interesting exchanges with Hugh and Kristin, which resulted in them appearing on the Digital Book World program in 2013, 13 months ago.

He’s a terrific guy who has achieved a phenomenal success and maximized it in a very clever way. But I think he’s a much better author and self-promoter than he is a business analyst.

At the beginning of the year, Howey offered his advice for publishers, which reminded me of an old saw of my Dad’s: “when I was a kid, everybody wished their father owned a candy store.” Hugh’s advice for publishers is to eliminate things that annoy him (non-compete clauses, length-of-copyright licenses, New York City offices) and to lower prices, give away ebooks with hardcover purchases, and pay authors monthly.

Now, none of these things is necessarily a bad idea, and some of them will almost certainly come to pass, at least for some authors in some contracts. And I remember that when Wiley moved from 3rd Avenue to Hoboken, they figured they were getting a competitive advantage of permanently lower rent at very little sacrifice of efficiency. But none of them are things a publisher would do just for the hell of it; they’d have to see a competitive advantage or a competitive necessity. The piece he wrote advising the publishers (which he addressed to HarperCollins but which he meant to be generic) didn’t even attempt to prove that these changes were either commercially advantageous or necessary.

But giving this advice to HarperCollins or any other big publisher is not dangerous to anybody’s health. Unfortunately, Hugh’s latest business inspiration — a call to arms suggesting to independent authors that they should just eschew traditional publishing or demand it pay them like indie publishing — is potentially much more toxic to consume. (The agenda here is unclear. Is Hugh most interested in getting more authors self-publishing or in organizing authors to demand better terms from publishers? It’s hard to tell, but there is an agenda, it would seem.)

To make a long story short, Howey analyzed a bunch of Amazon rank data (apparently a single day’s worth, 1/28-29/2014, which has so many obvious problems associated with it that all by itself it raises questions about what of value can be gleaned) and from that extrapolated some breathtaking (and breathless) conclusions that go way beyond what the data could possibly tell anybody. The analysis purports to compare how authors do self-publishing versus how they’d do with a publisher and comes to the conclusions that they make more per copy on average self-publishing and maybe even sell more and make better books to boot. (For much more and better analysis of the data biases, see Dana Beth Weinberg’s post on this subject. Her objections and my objections have very little overlap.)

My problem with the whole exercise is that there is a long list of relevant facts not included in the data and therefore ignored in the subsequent analysis:

1. Author revenue from print sales.
2. Getting an advance before publication versus having costs before publication.
3. Unearned advances and their impact on author earnings.
4. Getting paid for doing the work of publishing which goes beyond authoring.
5. Current indie successes where the author name or even the book itself was “made” by traditional publishers.
6. Rights deals.
7. How well Amazon data “maps” to what happens elsewhere. Is it really projectable?
8. The apparent reality: flow of authors is self- to traditionally-published, not the other way around.
9. Publishers can raise royalty rates (or lower prices) when it becomes compelling to do so.

Each of these could be a big or small part of the story, but every one is relevant.

1. Author revenue from print sales. Authors not only make a lot of money on print sales, but print in stores (as opposed to printed copies available through Amazon) is also a marketing element. This all still matters. In a comment on Howey’s site, one author estimates her Amazon sales as anywhere from 10% to 30% of her total sales. Obviously, for some other authors it is a lot more than that, maybe north of 70% of their sales. Which kind of author are you? And if you’re the kind selling mostly on Amazon, is that an inherent characteristic of your appeal or a deficiency in your non-Amazon distribution?

2. Getting an advance before publication versus having costs before publication. Although Howey cites one author who turned down an advance to self-publish, those stories appear to be few and far between. I was really struck by one such author announcing nearly two years ago that he was doing this, but, in the end, that author took a publishing deal — not a self-publishing deal — from Amazon. And the size of the advance is also a consideration that Howey’s analysis doesn’t touch on. It can’t, because that data — however relevant — isn’t available. (But then, can you draw valid conclusions without it?)

3. Unearned advances and their impact on author earnings. Unearned advances are a substantial part of author compensation. I know of one Big Five house that calculates that they pay more than 40% of their revenue to authors and another which says that number is in the high 30s. That’s not all digital; some of that is print, with manufacturing and warehousing and shipping costs associated with the revenue. How can you compare how authors are compensated if you don’t calculate the benefit to authors of unearned advances, meaning the resulting higher percentage of the revenue they’ve taken? That relevant data is also not available.

4. Getting paid for doing the work of publishing which goes beyond authoring. Frankly, the biggest omission to me is the eliding of the costs — in time and money — of doing the work the house does for an author. Howey mentions that editors and cover designers can be hired. That’s true, and good and competent ones too. But is a good writer necessarily a wise chooser of an editor or of a cover design? How much does it cost if you don’t get the right one the first time? (We know publishers aren’t perfect at these jobs either, but they’re bound to be better most of the time than somebody who hasn’t ever done it before.) And is that how you want to spend your time? Authoring is a job but doing the work of self-publishing is also a job. And it entails real risk. Advising a writer to self-publish without considering these things is like telling somebody who’s a good cook that they might as well just open a restaurant.

5. Current indie successes where the author name or even the book itself was “made” by traditional publishers. Another factor any author self-publishing has to consider is the likelihood of success, which is much greater if the books are backlist (have some fame in the marketplace) or even if just the author has been previously published. Successes like Howey’s, from a total standing start with no prior writing track record, are quite different from others who have reclaimed their backlists and used them as a platform to build a self-publishing career. Now, that data could be obtained. Wouldn’t you like to know how many of the “indie authors” at various income levels were cashing in on what was originally publisher-sponsored IP and how many started from scratch? (It’s more challenging, of course, to assemble the data by the author rather than by the book.) But I sure think it would be necessary to understand before drawing conclusions about who should self-publish.

6. Rights deals. Howey himself has benefited from having a stellar agent who has made foreign and movie rights deals for him across the globe. (She even made a print-only deal for Wool with S&S.) Yes, you can (if you’re lucky) do what Howey did: find an agent to represent your self-published material. But that’s another thing to find and manage on your own; with a traditional publisher, that capability comes along with the deal (and the advance check you get to cash), although, admittedly, you would probably have had to find the agent in the first place, and self-publishing could be a way to do that. Nonetheless, you get more rights-selling firepower on your side if you’re with a publisher.

7. How well Amazon data “maps” to what happens elsewhere. Is it really projectable? A massive flaw in the analysis is the biased nature of the data. Amazon’s sales profile is not the same as the market as a whole. (One day of data isn’t a projectable sample either.) One agent pointed out to me that they are weak at selling mass-market fiction, for example, and that their ebook sales tend to the fresh and new, so they don’t get a bump when a mass-market paperback comes out. But we can be pretty sure that Amazon sells ebooks more successfully than the market as a whole, because Kindle has the biggest installed base and Amazon has the most book customers. This bias of sample is compounded by the focus on genre fiction. No matter how big a percentage of those niches is served by Amazon, it is important to remember that it is where they are relatively strongest in relation to the big publishers. If we were comparing literary fiction or biographies — both of which have lots of worthy authors too — the chances are the cost of an Amazon-only distribution strategy, or an ebook-only distribution strategy, would be far higher. And the chances of success would be far lower.

8. The apparent reality: flow of authors is self- to traditionally-published, not the other way around. But I think part of the motivation for this piece was frustration in the indie author community at the fact that many of the best ones get signed up by traditional houses, who view indie publishing as a farm system, and very few established authors will actually turn down an advance to go indie. They’ll reclaim their backlist and self-publish it, or do a short ebook on a subject that is timely and can’t wait for print or be made longer. But there has been very little evidence that I am aware of that publishers are having wholesale difficulties getting authors to come aboard with them on a traditional deal.

9. Publishers can raise royalty rates (or lower prices) when it becomes compelling to do so. Which brings us to the final point that I think is relevant and ignored. As Howey and others have pointed out, the early days of ebook publishing appear to have been good for publisher margins. They can afford to give authors more. (In fact, I encouraged them to do that before their accounts come after them for the extra margin in a post nearly three years old.) But they’re not going to give it out of some spirit of generosity or because Hugh Howey (or Mike Shatzkin) thinks it would be a good idea. They’ll give it when it is a competitive necessity to do so.

So my advice about Hugh Howey’s advice is simple. Totally ignore it if you’re not a genre fiction author; there’s precious little evidence or thinking in it that applies to you. And if you are a genre author, be very clear about the extra work and extra risk you take on in order to get some extra margin. Both will be required for sure whether the extra margin materializes or not.

Self-publishing is definitely an incredible boon to commercial writers and they should all understand how it works. Increasingly, literary agencies see it as their job to provide that knowledge. It is almost certainly a good idea to self-publish for many writers who have reclaimed a backlist that has consumer equity. It is a perfectly sensible way to launch a career, either before going after the commercial establishment or as a part of the strategy to engage with them. (Editors in the big houses are well aware of the self-publishing successes; it’s a new farm system.) If an author has access to markets, it can be a better way to get short or very timely material to them faster. But to say it has its advantages and applications is a far cry from saying that it is a preferable path for a large number of authors who could get publishing deals.

I can’t “prove” this so I won’t try, but it bears further emphasis that the number of authors who start as self-published and then get “discovered” by the establishment and switch over still looks larger than the number of authors who say “keep your stinking advance” and turn down a deal to do the publishing themselves. None of the parties involved is stupid — not the traditionally-published authors, nor the self-published authors, nor the hybrids — not even the publishers. And they might not be evil, either. As for self-interested debaters, they exist on all sides.

PS: I HATE long comments. If you disagree with me and want to use my space to make your case, please be concise. (And frankly, although I also prefer you to be concise if you agree with me, I’m made less cranky when I get long-winded support.)


Publishers are reshaping themselves


It was reported last week that Hyperion plans to sell off its “backlist” to focus its attention on new titles it will develop in conjunction with its corporate cousins at Disney and ABC. This follows Wiley’s selling a lot of the most bookstore-dependent parts of its list, including the sale of Frommer’s Guides to Google, in 2012.

I believe these transactions are the front end of a trend I first anticipated in a post about four years ago. 

Publishers are going to find it increasingly compelling to reconfigure their inventory of title offerings around their most current thinking about their marketplace. Both Wiley and Hyperion are moving away from a “general” trade model. They’re moving away from publishing books for which their primary revenue dependence would be on bookstores and their primary marketing dependence on the book review media.

Wiley is actually returning to its professional roots. I did a lot of consulting at Wiley in the late 1980s when they were building out their trade presence. Although they were very disciplined about sticking to specific subjects where they had special marketing capabilities or subject expertise, they became increasingly “trade-y” over time. They built a powerful organization to sell to the book trade which reduced the need for them to be as focused on core subjects as they were when they were first building their trade capabilities. But the core of the company — its heart and soul and its DNA — always remained primarily professional. (Wiley also has a college textbook list, but it is a much smaller part of their business than professional books and journals.)

That means that Wiley would view the diminution of bookstore shelf space with more equanimity than a straight trade house, like one of the Big Six (soon to be Big Five) would. They would see themselves readily able to move away from a shrinking business segment that was never “core” for them anyway.

Hyperion is a straight trade house. Unlike Wiley, they don’t have a direct-to-user business or the big library revenue that a professional publisher does. But what Hyperion does have is a close relationship with sister companies Disney and ABC. Those relationships make possible partnerships which don’t change the sales and distribution challenge, but have a huge impact on the marketing opportunities. Hyperion is increasingly able to publish titles that have a strong public awareness component built on the back of TV or movies.

But Hyperion is a straight trade house without a lot of fixed overheads. They have outsourced the heavy requirements of sales and distribution, currently to Hachette. So they can sell off their backlist, even if it amounts to a substantial chunk of their sales, without having to worry about reorganizing their sales force or underutilizing their physical plant. They have apparently decided to become a different, more focused, kind of publishing house, not so much committed to “publishing books” that can make money from whatever source as they are to being the book publishing arm responsible for building out the brands and franchises their corporation invests in for movies and TV.

Both Hyperion and Wiley are showing us what the publisher of the near future is going to look like. They will be more focused. They will be shedding overheads so they can expand or shrink their offerings more readily to respond to opportunities and circumstances. They will be less dependent on the bookstore and book review trade networks. And Hyperion’s decision says something more about the future that Wiley’s doesn’t: book publishing will increasingly be an activity operating in tandem with or in service of other objectives of the owning organization. (There is a parallel here in retailing, where Amazon and Google and Apple fit this description, and Kobo and Barnes & Noble do not.)

There may also be a message here about the relative importance of backlist. When digital first started to happen, it seemed like the backlist might be the biggest beneficiary. After all, stores had limited shelf space and online merchants can “carry” all the books they want, particularly if there is no pre-purchased inventory required. (There isn’t for ebooks and there increasingly isn’t for printed books either, which can be purchased from wholesalers for next day delivery, even if they are printed on demand!)

But it turns out that the current state-of-the-art for merchandising and presentation of books online is not very helpful to backlist. Most retailers return a limited number of books (10 or 20) per screen to any query. Customers have limited patience for refreshing screens, so the number of titles an online purchaser “browses through” is far fewer than the number that would catch the same eyes in an equivalent amount of time in a store. This appears to be pushing sales more and more to newer books and books on bestseller lists.

This problem of concentration will probably just get worse as mobile devices become more ubiquitous and the shopping takes place on ever-smaller screens.

It isn’t clear yet to what extent publishers’ marketing practices could be responsible for the consumers’ bent to purchase from the top titles or whether changes in how publishers market could ameliorate it. But it does mean that marketing backlist is its own challenge and not sufficiently addressed, as it was in years past, by sales reps or store systems just keeping in stock what has been selling.

It is now necessary for publishers to communicate directly with consumer audiences to be effective marketers. At the same time, it is now possible for publishers to do the core work of reaching the trade without big fixed overheads. The combination of those two things will motivate changes in how publishers view the value of both their backlists and their publishing programs. What Wiley and Hyperion have done shows what kind of conclusions publishing today allows them to come to.

Should be great times coming for the small number of players in trade publishing’s M&A world.


Hats off to Amazon


When the story of how Amazon came to dominate the consumer book business is written ten years from now, there will need to be a chapter entitled “September 6, 2012”.

Of course, that was the day that Judge Cote approved the settlement agreed to by HarperCollins, Hachette, and Simon & Schuster and began the process of undoing the publisher price-setting regime that was established by the agency model. This is actually designed to unleash broad and deep discounting in the ebook marketplace and I think we’ll see evidence very soon that it will succeed in that objective beyond anybody’s wildest dreams. (I have repeatedly expressed my concerns about what I think are inevitable consequences of that achievement.)

But that’s not all Amazon accomplished on September 6, 2012. It’s not nearly all. In fact, the only thing that wasn’t good for Amazon about the Judge’s announcement was that it stole a lot of the attention from what they can accomplish without the government’s help.

One day after scrappy competitor Kobo tried to upstage them by announcing its own updated suite of devices, Amazon both outperformed and underpriced the device competition from Kobo, as well as from NOOK, Apple, and Google. Even the device innovation wasn’t what most impressed me. There were several other innovations that raise the bar substantially for everybody competing with the Kindle ecosystem.

1. Leveraging their ownership of Audible, the dominant player in downloadable audiobooks, Amazon has introduced a Whispersync feature that enables seamless switching between reading an ebook and listening to the audiobook version. One of my sisters-in-law, who is both a teacher of reading-challenged kids and an adjunct professor teaching others who do the same, had asked me a few months ago why nobody had done this. I asked around and was told “it is complicated.” Publishers can’t do it because they don’t control the delivery ecosystems. Other ebook retailers can’t do it because they don’t deliver audio.

Only Amazon could do it. Now they have.

1A. In addition to the use of Whispersync to allow seamless toggling between reading and listening, Kindle introduced a feature called “Immersion Reading” that allows you to read and listen at the same time.

Does everybody notice that this creates a real reason to buy both an audiobook and an ebook of the same title? Seems like that is something all authors and publishers can celebrate.

This specific innovation is particularly ironic if we remember some history. In the early days of the Kindle, Amazon wanted to put in a text-to-speech capability that would deliver an audiobook by automation of every ebook. Agents and publishers balked because of the obvious rights issues; audiobooks are a separate profit center for everybody and nobody with a commercial interest wanted to see that threatened, even though others thought that the automated delivery wouldn’t really satisfy an audiobook customer.

Nobody will have a problem with this solution, though. The consumer buys twice.

And, incidentally, somebody else can write a whole blog post on how this suite of capabilities can be used as an opportunity-creator in the college and school markets!

2. Leveraging their ownership of IMDb (the movie and TV database), Amazon is enhancing the experience of watching video by making information about the film and its personnel available at a click. Last month bloggers were explaining that Google bought Frommer’s from Wiley because they wanted to turn content into metadata. Now Amazon is clearly demonstrating exactly why that’s useful and important.

3. Leveraging their publishing capabilities and their role as the only retailer with an audience large enough to deliver a critical mass of readers all by itself, they are introducing serialization by subscription with Kindle Serials. The initial foray is modest: a selection of eight very low-priced serial novels delivered in chunks of at least 10,000 words. But this “tests” the model of getting people to buy something up front knowing in advance that it will come in stages.

(When I explored the viability of subscription models for ebooks, I speculated that the only one that could really pull it off for general reading would be Amazon. Consider the camel’s nose to have now officially penetrated the tent.)

On one hand, this recalls the success of the self-published novellas-cum-novel called “Wool” by Hugh Howey. But it also could be the foundation for something like Dominique Raccah’s “agile publishing” model, which is an active experiment now at her company, Sourcebooks, with author David Houle. Amazon would have the great advantage of a much larger audience to “invite” into an experiment of that kind and, when you are doing something dependent on participation for success, having more people to appeal to at the outset is a huge advantage.

4. Amazon is subsidizing all their devices with ads served as screen savers. They were initially planning to change the previous practice of offering higher pricing that enabled consumers to avoid the advertisements. Their first announcement was that Amazon had gone all in with all their devices coming with advertising and without a “pay more” option to avoid it. Although the initial reaction to this apparently forced a change, and they’re now offering the Kindle Fire without ads for $15 more, this still opens up a series of other thoughts and questions.

How can anybody compete on device pricing with a competitor that not only has the most direct contact with buying-and-paying customers but which is also bringing in ad dollars to subsidize a cheaper retail price?

Does this mean that Amazon “knows” that by far most consumers elected to save the money and don’t care about the ads?

Are they building a priceless communication network to promote content and to charge content creators for the next generation equivalent of store windows and front tables?

I thought Google was the champion of advertising. Why didn’t they figure this out first for the Nexus 7?

5. Amazon’s X-Ray feature, which basically collects core metadata (characters, scenes) from books and movies, is a building block to ultimately deliver summaries and outlines that could be an exciting additional unique capability of the platform. It could perhaps even be a start on generating automation-assisted “Cliffs Notes”-type content that could ultimately command a separate purchase fee.

6. Amazon has built a parental control capability into their Kindle ecosystem called FreeTime so that kids can use the device and even obtain content but only in approved ways. There are fledgling initiatives like Storia from Scholastic and the longstanding PBS brand Reading Rainbow for which one of the core propositions is creating a reading environment for kids with adult controls. These kid-centric platforms are obviously designed to present environments that parents and teachers will find superior to what they use themselves for the purpose of enabling kids’ reading. They suddenly have some serious competition from the most popular platform already out there.

And Amazon has built in what is perhaps a killer app that the others probably can’t even contemplate: they can apparently control the amount of time a kid can spend doing various activities on the device, so parents can mandate a ratio of reading time to movie time to game-playing time. I’m sure more than a few parents will say “wow!” to that.

********

Judge Cote’s decision is also very good news for Amazon, and it was what reporters called to talk about on the day of the press conference that announced all of the above. Michael Cader’s very thorough analysis (on which I have written a few more words below) spells out what we don’t yet know about the speed and complexity of implementation, starting with whether an appeal will be heard and whether implementation will be delayed pending that appeal.

But it would seem that the chances are good that many of the controls that prevented Amazon from discounting high-profile books for the past 18 months will come off a month, or maybe two months, before Christmas.

I think that Amazon will discount aggressively. Their “brand” is, among other things, very much about “low prices for the consumer”. And they have always used price as a tool to build market share. Expect them to lead the way.

The price-setting won’t be done by humans; it will be done by bots and algorithms, responding to what is happening in the marketplace among their competitors every day. Amazon is very good at this; they’ve been doing it for years. Presumably, BN.com has a similar set of skills and tools. Presumably everybody except Apple had to price at least their wholesale-purchased books competitively.

Apple was protected by the MFNs that remain in place for all but the settling publishers. But without that protection, how will Apple compete? They’ve never had to do competitive pricing of commodity products before. I will be very impressed if Apple can get through the price fights about to take place without an obvious black eye. They haven’t been training for this.

Overall, this should mean another surge of growth in the ebook market, which had seen a serious dropoff in its growth rate over the past year. We won’t be seeing ebooks doubling share annually again, but we’re about to see digital priced aggressively in ways that will make any regular consumer of print wonder whether they should consider making the shift that so many heavy readers have already made.

When the settlement is implemented, the three settling publishers will have their book prices cut by retailers, whatever they decide about setting list prices and however they negotiate the next round of commercial terms. But the three publishers still permitted to use agency pricing — Random House and the continuing litigants Macmillan and Penguin — will probably find that they are forced to lower the prices they set to keep their big books competitive. At least that would be my expectation. It will be beyond interesting to see how this plays out over the next few months.

Pardon a plug here for my Publishers Launch Conferences partner, Michael Cader, and his skills as the indispensable reporter on the publishing scene. His four posts on Friday (on the Judge’s ruling, on what happens next as a result, on the new hardware, and on the various reading and consumption features that were the subject of most of this post) comprised — by far — the clearest and most thorough explanation of a staggering array of complex information. Of course, Michael is more than a reporter on the industry; he’s been a player in it for 25 years.

I really don’t understand how reporters who don’t have the benefit of that background can justify not reading him. (If you are not a subscriber, you hit a paywall that takes $20 a month to scale. Just about everybody making a living in trade publishing has no trouble with the value proposition.) They’d all certainly be doing their jobs better if they did.


Some brief comment on news items from this week


Wiley announced a few months ago that they wanted to sell some of their most consumer-oriented lines of books (although, as Cader makes clear, what they announced they wanted to sell constituted only about 20% of the sales volume of the division that houses these titles.) The first sale under that initiative was announced this week: Google bought the Frommer’s travel books for a price apparently somewhere between $23 million and $25 million.

Google had previously purchased the Zagat guide business, and the Frommer’s acquisition was (properly) seen as part of Google’s effort to ratchet up its content for travel and for local searches. Attention has been focused on whether they would continue to publish the books (they say they will for now, but plan to reassess) and whether this means publishers should now worry that Google will become a competitor.

Another common, and accurate, observation is that this transfer signals a shift to a different monetization model for content, from selling packaged bundles like books (or ebooks) to delivering nuggets of information at the point of need.

But there’s one relevant observation I haven’t seen, at least so far. Wiley’s Frommer’s travel line is one of only two, to my knowledge, that have created a real B2B content-selling business. (The other one is Random House’s Fodor’s travel line.) Indeed, the New York Times, in their story about the transaction, concluded with this:

Google also declined to comment on what will happen with companies that have worked with Frommer’s to show its reviews, including Kayak and The New York Times, which licenses destination-related content from Frommer’s for its Web site on an annual basis.

There are two possibilities here and I don’t know Google well enough to predict with confidence which one is right. One is that they like the model of licensing content to websites, will continue it with Frommer’s, and will learn from it to extend it to other businesses somehow. The other — which intuitively seems less likely — is that they are happy with their already-developed model of being the key aggregator of dispersed content and would prefer that this content be found through general search or through the many tools they offer websites for running customized Google search on their own sites. If that’s the case, perhaps they’d unplug those deals as contracts allow.

If the former is true, Google might create opportunities for other companies to syndicate content without building the infrastructure to do it. If the latter is the strategy, then an opening just got created for one or more of the other travel brands to pitch Kayak and The New York Times and all other Frommer’s customers on replacement content. So there will be a few players watching developments here very closely (or maybe they already know the answer).

************

Also this week, RoyaltyShare CEO (and attorney) Bob Kohn filed an additional brief for Judge Cote to consider before she rules on the DoJ settlement with Hachette, Harper, and Simon & Schuster. Kohn’s brief is full of new information for those of us who aren’t lawyers (and perhaps for many who are who haven’t done as much homework as he has!)

New to me from reading Kohn’s paper:

1. Apparently, the law, as defined in a 1981 ruling by the 2nd Circuit (the circuit where this case now sits), treats pricing below marginal cost as “predatory pricing”, which is “presumptively illegal”.

2. Kohn interprets the Sherman Act to allow conduct that results in raising “illegally-low” prices.

3. The DoJ’s finding that Amazon’s pricing wasn’t predatory because the ebook unit was “consistently profitable” was inconsistent with the Court’s ruling in 1981.

And, for good measure, Kohn wants DoJ to turn over to the court (the linked article contains the whole Kohn brief) the evidence that led them to that conclusion. (I’m sure the whole industry would like to see that!)

Kohn is also urging the Judge to hold a hearing before ruling. He argues that the Judge must determine whether the settlement “is in the public interest”, and that it “would be perverse if this decision were made without a public hearing.”

I find it hard to quarrel with his logic. I leave it to the lawyers to argue about his legal citations.

************

OK, this one isn’t really from this week. But here is a survey of published authors from the UK, which I discovered this week and found to be very interesting. Seems like they got something over 300 responses (as of these results) with most coming from authors who were published by big houses.

Most seemed quite happy with the development of their book: the editing, the cover, the presentation. They were less enthusiastic about the marketing efforts they saw on their behalf. But, all in all, I thought it spoke to pretty high satisfaction with the publishers, particularly when you consider the highly disproportionate effort the big publishers put into a very small number of books whose authors are mostly getting very large advances and who I doubt would take time for a survey like this.

What I found really interesting, and counterintuitive, is that of those authors who expressed an opinion about whether they’d have a publisher in 5-10 years, they thought by about 4-to-1 that they would. But asked if they’d have an agent in that time span, the margin was only 2-to-1 that they would.


Somebody please tell me the path to survival for the illustrated book business


My eye was caught at the end of last week by a story in The Bookseller that acknowledged that ebooks just haven’t worked for illustrated books. It appears that the publishers of illustrated books they spoke to for the piece think that situation is temporary. The Managing Director of Thames & Hudson, Jamie Camplin, is quoted as saying “you have to make a very clear distinction between the situation now and the situation in five years time.” And Dorling Kindersley CEO John Duhigg emphasized that his team is being kept up to date with digital workflows and innovations, so they can “be there with the right product at the right time.”

But maybe, for illustrated book publishers trying to exploit the same creative development across both print and digital, there won’t ever be a “right time”, except for an opportunity that arises here and there. There certainly is no guarantee there will be.

Duhigg characterized what he called “the black and white digital business” (but which I think would more accurately be described as “the immersive reading digital business”) as “flowing along” while admitting it is “very different” for the companies with “fully-illustrated lists”.

That’s accurate. Expecting that to change could well be wishful thinking.

Illustrated books in printed form depend on bookstores more than novels and biographies do. If the value in a book is in its visual presentation, then you might want to look at it before buying it, and the view you’d get of it online might not be doing justice to what you’d see if you held the book in your hands.

Camplin sees that optimistically. He has an aggressively modernist view of what will happen with novels (“I don’t see why print should survive at all for fiction, beyond the odd bibliophile”), a development he apparently believes could open up more bookstore display space for illustrated books.

But if the buyers of Patterson and Evanovich and Fifty Shades of Grey aren’t visiting bookstores to make those purchases anymore, will there be any traffic to look at the illustrated books, however prominently they are displayed?

This problem has been nagging at me for a while. Books are illustrated for two reasons: beauty or explanatory purpose, more the latter than the former. When they’re illustrated to better explain, such as showing you how to knit a stitch or make a candle or a piece of jewelry, wouldn’t a video be a better option most of the time? If the illustration is a map, isn’t it likely that being able to manage overlays digitally (for the movement of the weather or the troops on the battlefield or the adjustment of borders over time) will deliver more clarity than whatever stills were in the book?

Of course, these things can be done by book publishers for the digital versions. But they require creating or licensing and then integrating new content assets and rethinking and redesigning the presentation. And that’s not even accounting for the work involved in adjusting the content to multiple screen sizes, a problem that just keeps getting more challenging as more different tablet and phone screen sizes are introduced.

One major publisher I know really endeavors to make ebooks of all their new title output, which includes some imprints that do a lot of illustrated books. Like everybody else, they frequently see ebook sales of 50% and more of their fiction, and 25% or more on immersive-reading non-fiction. But the illustrated books are in the single-digit percentages most of the time, with some of the more successful categories in the very low double-digits.

This is in the US — two years or more after the launch of the iPad and Nook Color and nearly a year after the launch of the Kindle Fire. Poor sales of illustrated ebooks can no longer be attributed to a lack of devices that can deliver them effectively.

And the ubiquity of these highly-capable devices brings its own new set of headaches. We were discussing the recent Bowker reporting that more people are reading ebooks on multi-function devices than on dedicated e-ink readers with our favorite expert on reading habit data, Peter Hildick-Smith of the Codex Group. He concurs and says that, as a result, the ebook consumption per reader threatens to go down.

Hildick-Smith points out that the tablet is a sea change in the history of content and consumption. Up until now, each content form had its own delivery mechanism. Records and cassettes and even MP3s were delivered through devices made just for them, just like the programming on TV and radio. Books on Kindles and Nooks replicated that paradigm. When you turned on your Kindle, you were as buried in your book as you were when it was in paper.

This is no longer true. If the book you’re reading on an iPad or Kindle Fire or Nexus 7 gets boring or you get tired of it, you can switch to a movie, The New York Times, your favorite song, or Angry Birds with the same device. Or the chime on your iPhone will ring taking you out of your book to answer an email.

For the publisher of novels, this means the book is competing with other media that would accomplish a different purpose. For the publisher of illustrated books, the book also must compete with media accomplishing the same purpose (how many new instructional videos of knitting stitches or jewelry-making techniques are posted to YouTube every day?) But they can’t do it for the same price, because that price is free.

So the illustrated book publisher not only has to learn how to make videos (a skill they were never previously required to possess), they also have to come up with a business model that enables their videos to be part of a priced commercial product, competing with legions of them that are free. And they have to finance a substantial creative component that isn’t contributing value to the print version at all.

We know our industry is changing radically. Different business models are challenged in different ways. Most of our time on this blog, perhaps too much of it, is spent contemplating how that affects the biggest publishers and the biggest books. There’s a reason for that. Big books have always driven the consumer book business and that seems to be more true than ever, not less.

But the challenge for — very specifically — “general illustrated book publishing” seems much more severe. The big publishers I’ve talked to apparently see that. Nobody has been explicit about it, but it sure feels like they can see a profitable path to navigate digital change with immersive reading books but not with illustrated ones.

I’ve also talked to mostly-illustrated publishers. Nobody says “you’re wrong, Mike. This is how we’re going to continue succeeding using our content-development skills, marketing capabilities, and talent network when bookstore shelf space is insignificant.” A couple of them have said “I don’t agree” without specifics. Most admit that they see the problem but haven’t yet figured out a solution.

There may not be one.

Camplin of Thames & Hudson is quoted at the end of The Bookseller piece saying, “I think it’s sort of a waste of money to assume the market is there [at the moment]; however, it would be foolish to say it will be this way forever.”

It might be equally foolish to say, or bet, that it won’t.

Of course, there is one strategy that can work: a vertical one. If you’re using illustrated book output to build a community of the interested, then you’ll presumably be able to sell them other things (software, live events, databases, services) when illustrated books are past their sell-by date. That’s the Osprey and F+W strategy and you can see sense in it because books are only part, and almost certainly a diminishing percentage, of their sales portfolio.

In fact, it is companies like these that might use technology like Ron Martinez’s Aerbook Maker tools and be able to use their books as a springboard to digital products with commercial value. They’ll probably also want to discover fotoLibra’s “advanceImages” scheme for micropayment of royalties instead of advance licensing fees for photographs. What Aerbook and fotoLibra offer can reduce the cost of creating an illustrated or enhanced ebook by 80%. That would certainly help.

It’s been obvious to me for a long time that managing the cost side of enhanced ebook creation is critical, which is why I was a sucker for the original Blio pitch in December of 2009.

For any publisher who claims a vertical strategy is their solution, the metrics to track are the sales they make of things other than books and the sales they make outside of bookstores. That is: track what is sustainable and has the potential to grow, not what is bound to shrink.

Relevant piece of anecdata: I remember being told by somebody at Wiley a couple of years ago that a large portfolio of photographs added measurable revenue on their travel sites. For very little cost, they could make a selection of photographs available for browsing. People clicked through them pulling up a new ad each time they did. That’s the “illustrated book publishing” of the future, but it starts with having the audience.


Some things that were true about publishing for decades aren’t true anymore


Back when my father, Leonard Shatzkin, was active with significant publishers — the quarter century following World War II — he observed that very few books actually took in less cash than they required. That is not to say that publishers saw most books as “profitable”. Indeed, they didn’t. They placed an overhead charge of 25% or 30% or more on each book so most looked unprofitable. But that didn’t change the fact that the cash expended to publish just about every book was less than the cash it brought back in.

The exceptions were usually attributable to a large commercial error, most commonly paying too much of an advance to the author or printing far more copies than were needed. But, absent that kind of mistake, just about every book brought back somewhat more revenue than it required to publish it.

This led Len to the conclusion that the best strategy for a publisher was to issue as many titles as the organizational structure would allow. That was a lesson he passed along to the next generation of publishing leadership that came under his influence. And the leading proponent of that business philosophy was Tom McCormack, who worked for Len at Doubleday in the late 1950s, then went on to Harper & Row before he ascended to the presidency of then-tiny St. Martin’s Press in 1969. Tom often credited the insight that publishing more books was the path to commercial success as a key component of the enormous growth he piloted at St. Martin’s over three decades.

(I checked in with Tom, who is long-retired as a publishing executive but a very active playwright, about how many books didn’t claw back the cash expended. He told me that his “non-confirmable recollection” is that the percentage that did at least get their money back ranged from 85% to 92%. He recalls “incredulity” from his counterparts in other houses, whom he believes simply couldn’t “wrap their minds around the meaning of the statistic: revenues minus disbursements.” He went on to tell me that this number “seemed effectively irrelevant to them. They had an overriding and deeply flawed notion of something they called title-profitability. They thought they were analyzing the profitability of a title with their ‘p&l’.”)
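To make the distinction concrete, here is a minimal sketch with purely hypothetical numbers (nothing below comes from any real publisher’s books): a title can look like a loser on a fully-loaded title P&L while still bringing back more cash than it consumed.

```python
# Hypothetical title; all figures are illustrative, not real publisher data.
revenue = 60_000         # cash the title brings in over its life
disbursements = 48_000   # advance, editing, plant, printing, freight, royalties
overhead_rate = 0.30     # 30% of revenue allocated as a share of house overhead

# The statistic Len and Tom watched: revenues minus disbursements.
cash_contribution = revenue - disbursements                     # +12,000

# The conventional "title P&L": the same title after the overhead allocation.
title_pnl = revenue - disbursements - overhead_rate * revenue   # -6,000

print(f"Cash contribution:      {cash_contribution:+,}")
print(f"Fully-loaded title P&L: {title_pnl:+,.0f}")
```

Because the overhead being allocated (rent, salaries) is largely fixed, dropping a title like this hypothetical one forgoes its cash contribution without making the overhead go away, which is why list-cutting looked suicidal as long as every title could count on adequate distribution.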

Despite the apparent immutability of the fact at the time that most titles brought in incremental margin, many publishers who were losing money would come to the opposite conclusion. They would decide they should cut their lists, pay more attention to the titles they published, and create more profits that way. I remember discussing the futility of that approach in the 1980s with my friend and client, Dick McCullough, who was at that time the head of sales at Wiley. When I observed that the publishing graveyard was littered with the bones of publishers who pursued cutting their lists as the path to profits, Dick said of their efforts to cut “yes, and very successfully too”.

I got another lesson about this reality in the late 1980s when a company I consulted to (Proteus Books) sued its distributor (Cherry Lane Music) for a failure of “due skill and competence” in the sales efforts for Proteus Books. One of Proteus’s expert witnesses was Arthur Stiles, who had been Sales Director at several companies, including Doubleday, Lippincott, and Harper & Row. Stiles confirmed that big and competent publishers routinely put out thousands of copies of titles in advance of publication, with extremely few failures in terms of getting the initial placements. He was testifying in a time that was still like what my father experienced: the industry’s title counts were growing, but so was the number of bookstores in which they could be placed.

Those days are over. And, coupled with the ebook revolution, the implications of that are profound.

A few things happened to change the environment so that it became no longer true that even big publishers could get all the distribution they needed on every title to assure a positive return of cash.

1. The title output of the industry has grown enormously. In the 1960s, the total output of the industry was in the neighborhood of 10,000 titles a year. Now it is something more than 30 times that number published traditionally, with a multiple of that number being self-published. (Upwards of 300,000 traditionally published titles a year works out to more than 11,000 every two weeks.) Each new book is competing against more new titles every two weeks than a book fifty years ago would have competed against in a year!

2. Nothing published ever dies. Fifty years ago, stores were smaller and, while there’s no easy way for me to measure this, I’d guess that the active backlist across publishers was probably no more than 25,000 titles. Superstore growth in the 1980s, the efficiency of Ingram as a national wholesaler, and computer systems that helped stores track their inventory and sales fueled backlist expansion. Even in the early 1990s, the total of truly competitive titles was probably in the low six figures. But then came Amazon’s unlimited shelf space and Ingram’s Lightning Print to deliver one copy at a time, and, even before ebooks, the competitive set of available titles had probably jumped to seven figures.

3. Bookstore shelf space is declining. Nobody who has been reading this blog needs much elaboration on that point.

What that means is that a list-cutting therapy that McCullough and I saw in the 1980s as suicidal and which McCormack explained repeatedly was folly is no longer crazy. (Oh, how I wish my dear departed Dad was around to discuss this with!) And the new conjecture in this blogpost is that the day might come when a publisher with an extensive backlist might decide that the most profitable path would be to hardly publish any new titles at all!

The portfolio of any longstanding publisher today contains a lot of backlist which is pure profitable gold in the ebook era. Contracts often give publishers the rights to a book for the life of copyright if they continue to sell it. (I’ll confess here that there is a caveat to this point coming up in an italicized postscript below.) So a major publisher doing $600 million and up (of which there are six), almost certainly has triple-digit millions of sales in its backlist, which is increasingly shifting to digital. Even the most sober industry observers are seeing revenues exceeding 50% from ebooks in the next two or three years, which would mean that substantially more than half the units of these books are selling electronically.

So, let’s say you’ve got a company doing a billion dollars in annual revenue and barely eking out a profit or perhaps even losing money. With a strategy of continuing to publish what you own as ebooks, you can see digital backlist revenue of $150 million, decaying by 10% a year, with gross margins giving you $100 million or more in cash flow. Offloading to a distributor or competitor the print operations for everything you still have rights to will provide incremental revenue as well. (You only need help for the offline print sales. Getting the online sales requires no operational capability.) You’d then need a minimal organization to do some marketing (not a lot), sign up and put out some additional titles that would be chosen for being risk-free (not a lot), and to handle the administration and royalty processing for your thousands of contracts. Five or ten million ought to cover those costs very handily.

Of course, the other thing you could do is sell your rights to that backlist. But I think it would require somebody to overpay in relation to your net discounted cash flow to make that attractive because the costs of keeping it all for yourself would be so minimal.
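Here is a minimal sketch of that back-of-the-envelope scenario. The $150 million starting revenue, 10% annual decay, and roughly $100 million of gross margin come from the paragraphs above; the ten-year horizon, the 12% discount rate, and splitting the difference on overhead at $7.5 million are my own illustrative assumptions, not anything from a publisher’s books.

```python
# Back-of-envelope model of the "milk the digital backlist" scenario.
# Starting revenue, decay, and margin are from the text; the rest are assumptions.
revenue_year_1 = 150_000_000   # digital backlist revenue in year one
decay = 0.10                   # revenue assumed to fall 10% per year
gross_margin = 0.67            # roughly $100M of margin on $150M of revenue
overhead = 7_500_000           # "five or ten million" of remaining costs
discount_rate = 0.12           # assumed cost of capital
years = 10                     # assumed horizon

npv = 0.0
for year in range(1, years + 1):
    revenue = revenue_year_1 * (1 - decay) ** (year - 1)
    cash_flow = revenue * gross_margin - overhead
    npv += cash_flow / (1 + discount_rate) ** year
    print(f"Year {year:2d}: revenue ${revenue / 1e6:5.1f}M, cash flow ${cash_flow / 1e6:5.1f}M")

print(f"Net present value over {years} years: ${npv / 1e6:,.0f}M")
```

On those assumptions the net present value of simply keeping the rights runs well into nine figures, which is the point: a buyer of the backlist would have to beat a number like that, and because the cost of holding on is so small, overpaying is about the only way to pry it loose.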

One hopes that today’s publishers are looking at the simple statistic Len and Tom relied on: revenues minus disbursements by title. No doubt today’s biggest publishers are looking carefully at the performance of their copyrights in a way that sorts the new titles from the backlist. But doing so is only useful if they’re apportioning their costs properly across the title base. If they are, what is described in this post will be evident if and when it is true. In the meantime, careful focus on new title acquisitions and accepting that the healthiest way to manage for the future might be to reduce the commitment to new title development will have to replace the clear truths that guided smart publishing strategy for previous generations.

The history and analysis are all valid, but there is one big monkey wrench in this scenario I’ve sketched. There is a provision in the 1978 copyright law that allows authors to reclaim rights to their books after 35 years. Titles published in 1978 become eligible for reversion, called “recapture” apparently, starting in 2013. (With logic that is ironically typical of what Congress does when it touches copyright law, older titles are on a slower track for liberation.) Agents are planning for this; publishers will have to deal with it. I am given to understand that publishers can only retain these books for life of copyright by, in effect, reacquiring them. (Should be lots of fun!)

So, in fact, the backlist attrition might be faster than 10% (but it might not, because ebooks may create more readers for backlist than we had before as well.)

It is also true that many publishers have already been moving in the direction I suggest: pruning their new title counts and being particularly cautious with midlist. Of course, there was a conviction by many that list-pruning was a good strategy even before it actually was a good strategy, but the execution of it has been much more rigorous over the past decade.


Nothing happens over 4th of July weekend, except this year


Monday, July 4, was supposed to be a quiet day in the publishing business. It turns out it wasn’t. Three developments reported as special holiday bulletins by Publishers Lunch have strategic implications worth pondering that will have trade publishing people all over the world conferring with their friends and colleagues as soon as they shake the sand off their shoes and settle in to read the weekend email.

First of all: Amazon.com bought The Book Depository. What? You’ve never heard of The Book Depository? Well, then you’re almost certainly one of my US-based readers (about 60-70 percent of you.) The Book Depository is really the other global bookstore. They don’t do ebooks, but they’ve built their global book business to more than $150 million. No, that’s not as big as BN.com, but they have built a sophisticated many-to-many supply chain (they don’t do it holding stock in distributed warehouses like Amazon), have been growing by something like 30-40% per year for several years, and might even make money.

They’ve even invested heavily in untangling the metadata challenges of global book sales, with a large team in the Middle East tackling the problem.

If anybody were going to mount a global challenge to Amazon as a single consolidated book (and content) distribution business worldwide, The Book Depository was the platform to do it from.

This move by Amazon reminds me of when they acquired Mobipocket early in the last decade. In the dawn of the ebook-on-devices era, there were two formats competing as pawns of a hardware competition. Microsoft pushed MS Reader, Palm pushed their own format. Mobi had the clever idea of being able to play on either.

So Amazon acquired Mobi. That meant that they owned the only single-file solution; any other retailer trying to serve the market would have to offer both Microsoft and Palm as a choice to reach all the devices. Palm quickly took that option off the table by insisting it would serve all its files itself. That’s when B&N went out of the ebook business, not to return in a serious way until after Kindle launched in late 2007.

It sure looks to me like The Book Depository would have been a great launch platform for Barnes & Noble to go global.

Second: Pearson, owner of Penguin, became a book and ebook retailer by purchasing the relevant assets of the bankrupt REDGroup. It appears they will run the business, which consists of web sites under the Borders and Angus & Robertson brands, with a minimal staff.

Pearson is a big company whose interests go far beyond Penguin, but it is the trade implications of this that catch my trade-centric eye. Big trade publishers are caught between a rock and a hard place on direct selling and customer ownership. Whatever the future may hold or require, trade publishers today are highly dependent on their intermediaries’ good will. It would likely cause untold grief with Amazon and Barnes & Noble if a major US trade house set up a direct selling operation, despite the fact that niche publishers often have them as adjuncts to community or professional publishing efforts (Wiley, O’Reilly, McGraw-Hill, F+W Media, Interweave). In fact, Pearson owns half of Safari, a direct-to-reader subscription service pioneered and co-owned by O’Reilly, and also owns part of CourseSmart. But now they’re selling books and ebooks direct to consumers, not just content-by-subscription to geeks and textbooks to students.

It might be well down the list of reasons why Pearson Australia is now running online trade selling operations, but it will be interesting to see how Penguin Australia benefits from the association.

Third: J.K. Rowling and the agent who actually handled her business, Neil Blair, have left the Christopher Little Agency, which formerly employed Blair and was the agent of record for Rowling. Lawsuits may ensue, but this is another lesson in what disintermediation can mean, and it recalls to me something I learned long ago from a lawyer in the music business.

My mother, Eleanor Shatzkin, had a chunk of her consulting career when she designed billing systems for law firms. (This was in the days before personal computers; “data processing” back then was done on punch cards sent to job shops for print-outs to be created.) So she made friends with a lot of lawyers. One of them, a very nice man named Don Engel, left the large New York firm where he’d been a litigator and moved out to California and set up a practice in the music business.

What Don told me (this was in the early 1980s) was that he found a phenomenon out there that didn’t exist in New York because people could start a law firm with just one client, and they often did. (As he said, you can’t take a piece of the AT&T business and set up shop, but you can take one big recording artist.) That meant these firms had no broad capabilities, and if any real legal challenges arose, the little firm with the big client would need savvier outside counsel. Don built a substantial business suing record companies over royalties on behalf of artists, getting cases referred by these tiny “firms” with one star client because he developed a reputation for being an honest guy who wouldn’t poach the client in turn!

I don’t want to suggest that what Rowling and Blair are doing is likely to become a trend. In fact, the prevailing industry conditions at the moment would, I think, militate against it. Agencies are more likely to consolidate than to splinter because the capabilities they need to serve their clients effectively are growing with digital change. Whatever threat there is to publishers from disintermediation would require that agents do more and have greater organizational capabilities, not less.

On the other hand, new services being offered by agents that other agents could employ might allow unbundling of the direct client contact from the rest of the agency functions.

I hope you had a really restful 4th of July weekend. The second half of the year begins with plenty to think about.


With new opportunities come new challenges


This blog and my speeches contain frequent references to what we see as the big shifts the book publishing industry, and some publishers more than others, are feeling. The horizontal and format-specific product-centric media of the 20th century are inexorably yielding to the vertical and format-agnostic community-centric delivery environment for content that will soon predominate.

In that context, we’ve observed that the most general publishers are the most challenged. The distinction between publisher and retailer is blurring; in a decade or two it will be a distinction without much difference. What has always been the source of competitive advantage to trade publishers is leverage; they could reach thousands, tens of thousands, or even millions of customers for their wares through retail channels that aggregated audiences for content creators and curated content for consumers.

The non-trade components of the book business (publishers of textbooks, professional information, databases, and academic content) already tended to specialize by subject, so the challenge of being audience-specific, a prerequisite to creating community, had already been met. Non-trade publishers had never depended much on horizontal intermediaries. Even in college textbook publishing, which depended (and still largely does) on the college bookstore to actually deliver the product and collect the consumer’s money, the marketing component of the bookstore’s contribution was and is minimal. The publisher works vertically through a network of professors to drive adoptions, and adoptions are what drive the sales.

Trade publishers, which are called trade publishers because they reach consumers through “the trade” network of bookstores, libraries, and the wholesalers that serve them, have been generally alert since the 1970s to the importance of what are generically called “special sales”. Those are sales that come from outside the book trade, often from retailers in other channels. Special sales experts learned pretty quickly that you did better when you had a selection of books for an audience. If you had one book of Jewish interest, you couldn’t do much with it. If you had a dozen, it could make sense to buy a mailing list of rabbis. If you had one home repair book, you couldn’t afford the cost of setting up relationships with retailers of hardware or construction materials (particularly thinking back to days before those outlets had consolidated into giant retailers like Home Depot and Lowe’s). But if you had a list, then the mutual interest in a relationship was obvious to both sides.

Some publishers specialized. When I was consulting with Wiley in the 1980s as they were developing their fledgling trade program, they brought their philosophy of really covering the needs of a vertical market from sci-tech to trade. They didn’t want just one resume book for job-hunters: they wanted one at every sensible price point and different ones for different kinds of jobs. One day a sales rep called in from the road to suggest that they deliver a book on the cover letters that should go out with resumes. They already knew they had a market through specialized customers of all kinds and through their direct mail efforts. The lists that worked for resume books would also work for cover letter books.

The most “general” of the general trade publishers tended not to develop the same depth of specialized lists. When Wiley considered that cover letter book, they knew they’d be able to sell it very efficiently and they knew it would enhance their relationship with individuals and channel partners through and to which they were already selling a lot of books. Would the cover letter book be big? Possibly not, but it didn’t have to be to make it clearly worth doing.

But the big trade houses were not built that way. And the biggest books, the sexiest books, the most exciting books, don’t tend to be in niches. In fact, niche identification can dampen sales in a general trade market. The CEO of a major house told me a couple of years ago that he didn’t want to label a book that could become a bestseller a “mystery” title. Mystery was a “category” (read: “niche”) and, while those books tended to meet threshold expectations more readily, he perceived them as harder to break out to the sales levels they could achieve if they were perceived as unique.

We are now seeing the early signs of what will soon be a tendency, then a trend, and then a stark reality: you just can’t sell as many copies of most books if you don’t have a proprietary position with a vertical audience. The early signs are evident through companies like O’Reilly Media (computer programming and technology), Hay House (mind body spirit), Chelsea Green (sustainable living), Harvard Common Press (cookbooks and pregnancy-childbirth), and F+W Media (several niches, including writers and crafts), which have special retail channels and huge email lists of individual customers that the big houses simply don’t have. Niche by niche, the big houses will find it impractical to publish in areas that were once productive for them. Their need for each book to be “big” individually — for the single title to provide its own critical mass — works against what you must do to be “big” in a niche. To do that requires a more across-the-list kind of thinking that is counterintuitive to a company that makes the lion’s share of its sales through trade channels.

So for just about all the books that aren’t novels, memoirs, celebrity-driven, or epic works of popular history or politics, trade publishers are increasingly handicapped. Unfortunately for them, things are going to get worse.

The obvious problem is that the capacity of the general trade market to merchandise and move product is diminishing. I hate to invoke the old wisdom that many things happen “gradually, then suddenly”, but it is often true and we have been gradually losing bookstores for the past decade. What happens to the economics of the big publishers if we lose a big chunk of superstores pretty suddenly?

I recall a dinner conversation with the Chairman of a large diversified multi-niche publisher two years ago. Even back then, we were speculating about the possible sudden demise of Borders. (Hey! It hasn’t happened; maybe we were wrong!) My dinner companion said, “you know, Mike, we’re as diversified as a publisher can be, but if Borders went out, we’d definitely feel it. It would really hurt us.”

“Temporarily,” I said. He needed me to explain.

“Sure, you’ll suffer a bad debt if they go out. That hurts right now. But over the next couple of years, you’ll get a lot of cheap and useful assets from competitors of yours that couldn’t withstand the blow. By a couple of years from now, you’ll be ahead.”

“You may be right,” he said.

So even with the obvious problem, a multi-niche publisher has a big advantage over a general publisher, just as it does over smaller niche players. But the ground for the general publishers is about to shift in ways that will be even more challenging.

Because “book publishing” in an increasingly vertical world is less and less about content sales in the unit of “books” (although that will be the lion’s share of revenue for a long time) and more and more about sales bigger than the book (databases that stretch across many books and other things too) or smaller than the book (chapters or fragments that naturally stand alone or which address a particular content need.) The iPhone app as a unit of delivery is accelerating the latter trend. The value of a database across titles has long been demonstrated by O’Reilly’s “Safari” offering, which generates more revenue for them than all but one trade account.

As the percentage of a publisher’s revenue that is generated by fragments and aggregations rises, so does the value of being vertical and, especially, so does the value of a direct relationship with the end users. The fragments piece is especially important, especially challenging, and requires new ways of thinking (and perhaps new contracts.) For example, Dominique Raccah, the visionary leader of Sourcebooks, whose Poetry Speaks is building a model for vertical community building, has found that many publishers of poetry aren’t sure they have the rights to license her vertical to sell individual poems! Does that mean she has to go directly to the poets for those rights? And how long will it be before it is more important to a poet to have their individual poems available for sale on Poetry Speaks than to have them available in a publisher’s collection bound as a book?

Bruce Shaw, the longtime impresario of Harvard Common Press, is demonstrating another aspect of this thinking that we’ve expected for a long time but hadn’t seen in practice before. He told us about a macaroni and cheese cookbook his house was considering for publication. Normally, Bruce reports, that’s a subject they’d skip because it just isn’t distinctive enough to make the ambitious sales targets he normally sets for print publications. But, in this case, he’s doing the book because his overall recipe database (all the thousands of recipes HCP has published in over 30 years in business) is light on mac and cheese recipes. So he’s willing to publish the book, knowing he’s going to make less profit than he normally requires, because it is a subsidized way to improve the value of his overall database of recipes.

The question of selling fragments opens up a host of other challenges: figuring out what is a saleable fragment, tagging it with an identifier and metadata, managing transaction costs for a much higher volume of low-value transactions, and retrofitting accounting systems to process author royalties that will require increasingly complex analysis of smaller amounts of money.
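A toy sketch may help show how small these transactions get. The fragment identifier scheme, metadata fields, and 25 percent royalty rate below are assumptions made up for illustration; no standard and no actual publisher’s system is being described.

    from dataclasses import dataclass
    from decimal import Decimal, ROUND_HALF_UP

    @dataclass
    class Fragment:
        fragment_id: str       # a made-up scheme: parent ISBN plus a fragment suffix
        parent_isbn: str
        title: str
        subject_tags: tuple
        list_price: Decimal

    def royalty_due(fragment, units_sold, rate=Decimal("0.25")):
        # Royalties on low-value sales accumulate in fractions of a cent,
        # so do the arithmetic in Decimal and round only when reporting.
        gross = fragment.list_price * units_sold
        return (gross * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    recipe = Fragment(
        fragment_id="9780000000000/recipe-042",
        parent_isbn="9780000000000",
        title="Four-Cheese Macaroni",
        subject_tags=("cooking", "pasta"),
        list_price=Decimal("0.99"),
    )

    print(royalty_due(recipe, units_sold=137))  # 33.91, earned across 137 tiny sales

Even this ignores the harder parts named above: deciding what counts as a saleable fragment in the first place, and keeping the metadata current as the parent book changes.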

In fact, there is opportunity on what might be viewed as a micro- or nano-level of transaction, too small for even a niche publisher to manage the customer relationship and the transaction. That is going to present new opportunities for our client, Copyright Clearance Center, which we’ll elaborate on in future posts.

There’s a great deal of new opportunity out there but a lot of it is in pennies, not hundred dollar bills.

Let’s hear it for Wifi in the air! This is the first post for The Shatzkin Files filed from an airplane. Boy, did I have fun at Spring Training!


Are “enhanced ebooks” the CD-Rom era all over again?


Is this where I came in?

In the early 1990s, the computer manufacturers and Microsoft were doing everything they could to persuade businesses and consumers that they really, really, really needed CD-Rom drives. That Microsoft would benefit from them was very clear; the software they were selling was taking more and more diskettes to deliver in those pre-broadband, pre-Web days when all software was “shrinkwrapped.” If computer owners could take their new software on CD-Roms, the cost of delivering the product would drop dramatically.

Only a year or two before, Bob Stein had developed what we can now identify as the first “enhanced ebooks”. His company, Voyager, introduced the “Expanded Book”. These were the first efforts to use the book as the foundation to do something much more ambitious: linking in pictures and sound and video and databased information. No web links yet, because there was no web yet, but the Voyager Expanded Books really foresaw the possibilities.

Microsoft encouraged publishers to build on the Voyager Expanded Books example with CD-Roms, and, indeed, the Voyager product itself moved quickly from a diskette-based product to a CD-Rom, which gave it a multiple of the digital space to add content.

Publishers at that time had recent experience with new product forms. In the early 1980s, a few had experimented with software publishing, but that was quickly seen not to work, and the publishers who tried it, like Wiley, pretty quickly got out. Audiobooks, however, came on the scene in the mid-1980s, and their acceptance, fueled by the ubiquity of tape players in cars and the relatively new Sony Walkman family of portable cassette players, was very rapid. Encouraged by Microsoft, and with the hardware makers promising that all computers would soon have CD-Rom drives, many publishers jumped with both feet into what we can now look back and see was an enhanced ebook business.

It turns out they jumped into an empty swimming pool. Many legs were broken.

The whole idea that people who wanted a cookbook needed video in the middle of the recipe or that people would “read” a book on a desktop computer because of sound effects in a CD-Rom version always seemed like a stretch to me. Sometime in the middle of the CD-Rom craze, I learned that McGraw-Hill had a big animal encyclopedia on which something like 60% of the cost went into the sound. This was for a high-priced professional product. This made no intuitive sense. It wasn’t placing the investment where I thought anybody would find the value.

What seemed more likely to work to me at that time was to just put the book on a diskette (they were still much more common then than CD-Rom drives) to let readers simply read it on their laptops. The writer and entrepreneur Po Bronson might not remember this, but he and I discussed that idea at great length at the time. Meanwhile, I predicted in 1995 and 1996 that CD-Roms were going nowhere, that the “action” for book publishers would be online, and that the first important thing that would happen online would be increased sales of plain old printed books, all of which turned out to be utterly correct.

Now, as Yogi Berra allegedly once said, we have deja vu all over again.

In the later 1990s, the simple ebook delivery I imagined happened through online distribution, not diskettes. The devices of choice were plain old PCs (mostly reading PDFs) and handheld PDAs, reading the Palm Digital format, Microsoft’s new “dot lit” format (remember how revolutionary that was supposed to be when it first came out!), and then Mobipocket which, until Amazon bought them and largely buried them, was going to be the cross-platform standard.

Now that I had what I wanted, I was a happy guy. I started reading ebooks predominantly and I went out on the prediction limb again. I figured that PDA-reading would become widespread, and quickly.

Talk about jumping into an empty pool!

In fact, underscoring my misunderstanding, I wrote in about 2004 or 2005 that PDAs were the key to ebooks. If you carry a PDA, was my thinking, then you shouldn’t need anybody to explain the advantage of ebooks to you. It was transparent; you always had your book with you. And, conversely, I figured that if you did not have a PDA, there was no great advantage to ebooks. What I saw as the big advantage was not having to carry the book as an “extra.”

Still, ebooks just didn’t happen. I couldn’t understand it. A lot of people told me the problem was that ebooks didn’t really do anything that couldn’t be done with plain old print books. They didn’t take advantage of the opportunities afforded by digital books. No video. No audio. No web links. That didn’t seem like the answer to me. I remembered the CD-Rom fiasco.

Then Kindle came along. On the one hand, it proved me wrong because here was a device that had to be carried around (like a book) and didn’t do anything for you except let you read a book. On the other hand, Kindles sold well (particularly considering Amazon was the only place to get one) and, more important, Kindles sparked an explosion of interest in and uptake of ebooks. And that, I thought, proved that “just the book” was enough for many people to have a satisfying ebook experience.

But now it looks like market forces are going to tempt publishers to invest in enhanced ebooks all over again. We are awash in news of new ebook readers — meaning both software that can play on PCs, netbooks, iPhones, or various more dedicated devices and a slew of those more dedicated devices to choose from. So people are going to be reading books on devices that can do a lot more than a Kindle or Sony Reader can do.

Two other things happening at the same time also push for more complex ebooks. One is that the tool sets to deliver them — and even to allow any author working with a bright young person alongside them to deliver them — are getting more ubiquitous. And the other is that publishers think they see a connection between more complex ebooks and higher-priced ebooks, and that makes them very interested in exploring the subject.

A lot has changed in the past 15 years since the CD-Rom era. I am not in any way suggesting that the CD-Rom disaster of the mid-1990s will be repeated in the enhanced ebook era we are heading to now. But nobody figured out what compelling consumer product could be made from a book with lots of digital space to play with then and we’d be kidding ourselves to think anybody’s figured it out now either. There will be a lot of trial and error work done by the industry in the next couple of years trying to find the book-into-something-better formula that works artistically, functionally, and commercially. The answers are by no means self-evident.

One cautionary tale from the CD-Rom era. One of the first big successes on CD-Rom was issued by Simon & Schuster and based on Star Trek. In retrospect, we can see that Star Trek was the “perfect subject”: the one thing that would work with early-adopting techie geeks even if nothing else would. Unfortunately, S&S read the Star Trek success as an endorsement of the CD-Rom product idea and rapidly expanded their new media division to do more titles. Nothing else came close to matching Star Trek’s success.


Some thoughts about piracy


As part of the program-creation process for Digital Book World, I had a round of conversations with the top executives of the Big Six companies to discuss the agenda, mostly with the CEOs. The purpose of the check-ins was to find out what topics the CEOs wanted their companies to speak about and, of course, which they wanted to avoid for reasons of diplomacy, commercial politics, or legality.

One topic I had left out of our program initially was “piracy”. Some of the executives I met with found this a very troubling omission. My first reaction was “what’s there to discuss? We’re all against piracy and there isn’t much we can do about it. So what else do we say?” Although there are two of the big houses where that view is, to some extent, shared, most of the others disagree, some vehemently. In fact, Macmillan has a “seven point program” to confront and combat piracy, which will now be the topic of a presentation by Macmillan president Brian Napack on the first morning of Digital Book World.

The topic of piracy is a part of the conversation about “digital rights management”, software that manages how a file can be used. DRM is a pretty standard aspect of software and DVD distribution but it comes in for a lot of complaint and criticism from very knowledgeable observers and participants in the ebook scene.

There is a “first sale” doctrine in copyright law that gives the purchaser of a book (or sound recording or DVD) the right to give away or re-sell that good. It does not give the right to sell or give away a copy, but it does allow you to “share” your book or CD or DVD with your mother, your sister, and your aunt and then to sell the used copy on eBay. Those rights have never really extended to software, which often knows if you’re trying to load it onto a second computer and won’t let you. Attempts to control sharing of music through DRM are commonly blamed for the piracy that became rampant in that sphere (although I don’t buy that; there are other explanations I find more compelling.)

The question of DRM-or-not in the ebook world is a very complicated one, although opponents of DRM often paint it as very simple. O’Reilly Media sells its ebooks “DRM-free”, as do some upstart ebook-first publishers. The ebook self-publishing site Smashwords also sells only DRM-free files from its own site, although Smashwords-originated files might have DRM added by the intermediary resellers with which it is making more and more deals.

The opponents of DRM point to the incontrovertible fact that its existence does not stamp out piracy, which is evident at a time when you can type just about any book title into Google with the word “file” after it and be directed to sites that offer you a free pirated download. In fact, even not publishing the book digitally at all is insufficient DRM to keep it from pirate distribution.

Mark Coker of Smashwords, despite the fact that he sells only DRM-free ebooks from his site, is an avowed opponent of piracy, and even of sharing. He suggests a boilerplate notice in his ebooks that tells you that you should go buy another copy of this book you’re about to read if you didn’t buy this one, or else you’re cheating the author. Mark believes the key to combating piracy is education; he admits to an unusually strong faith in consumer integrity.

But despite the lengthy introduction, this post is not about DRM; it’s to propose what is the ultimate defense against piracy: ebooks that aren’t static; ebooks that change.

The secret sauce behind O’Reilly’s DRM-free policy is that when you buy an ebook from them, you are entitled to the updates to that ebook…forever. The implicit message there is there will be updates.
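As a minimal sketch of that idea (not a description of O’Reilly’s actual systems; the records and revision numbers here are invented), an entitlement to updates can be as simple as comparing the revision a buyer downloaded against the latest revision of each ebook they own:

    # Invented example data: one reader, one purchased ebook, revisions tracked by integer.
    purchases = {"reader@example.com": {"9780000000001"}}          # ISBNs the reader bought
    latest_revision = {"9780000000001": 7}                          # current revision of each ebook
    downloaded = {("reader@example.com", "9780000000001"): 5}       # revision the reader last took

    def updates_available(email):
        """Return the purchased ISBNs for which a newer revision exists."""
        return [isbn for isbn in purchases.get(email, set())
                if downloaded.get((email, isbn), 0) < latest_revision.get(isbn, 0)]

    print(updates_available("reader@example.com"))  # ['9780000000001']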

There is no better antidote to piracy than this. If the pirated or peer-to-peer edition of a book is yesterday’s, or last week’s, and the book is changing, then it’s yesterday’s paper, which, as the Rolling Stones noted long ago, “nobody in the world” wants.

This is beyond wrenching to publishers; it completely changes the workflow and it completely changes the business model. The rhythm of a publishing house is based on the fact that books are, at some point, finished. There is a Henry Ford assembly line aspect to how things have always worked. Whether you’re an editor, a marketer, or a sales person, new books have a pretty reliable “cycle” for you: their existence in your life has a beginning, a middle, and an end. The conveyor belt moves the book away from you so you can’t spend too much time on it and can move on to the next one. Having authors not stop adding to or changing a book, even after it’s published, is totally disruptive. And what would we do about the ISBN numbers?

Yet, the possibility for ebooks to be totally up-to-date is one publishers can’t ignore. The Little, Brown division at Hachette has just announced that on December 1 it is publishing a 2,000-word update on the H1N1 (swine flu) virus in the ebook edition of “The Vaccine Book”, which was originally published in 2007. If something startling happened that should change that text on February 1, wouldn’t it make sense for them to update the book again? In October, Wiley published, as an ebook only, “The Swine Flu: The New Pandemic” because they wanted to get the most up-to-date information out quickly. By that logic, wouldn’t they also want to update their ebook if what was up-to-date in October isn’t in March?

And if they did that, what possible value would a pirated edition of yesterday’s ebook have?

Of course, swine flu is a dynamic subject. It isn’t a novel; it isn’t history. It isn’t even programming or software development or technology, the subjects O’Reilly publishes (and often updates.) But every editor knows plenty of authors of non-fiction books who wanted to keep writing and changing and adding past every deadline the house presented. Let the new process start with those; there will be plenty of candidates.

Furthermore, the biggest threat from pirated ebooks is to the most established franchise authors. I believe Tim O’Reilly is responsible for two cogent and pithy observations about piracy: that obscurity is a greater threat to most authors than piracy, and that piracy is “progressive taxation.” Both express the reality that the marketing for most books fails to reach most of the book’s potential audience. That Henry Ford assembly line conveys the book away from the marketers before the task of informing the entire potentially-interested public is anywhere near complete. So piracy, or file-sharing that may fall short of actual piracy, can serve the purpose of spreading the word about a book and triggering more sales. Except there are some authors, and those are the ones who sell the most books for the biggest publishers, who don’t need marketing to inform their audience; their audience, in effect, informs their audience! And those are the ones who would surely lose sales if there were no DRM and books could be freely shared or were made available through illicit channels.

But those authors are also the ones who have the biggest personal followings. They are the most capable of adding material: notes about what they’re working on, correspondence with fans or critics, even observations about other people’s books, that would add some value for many of the readers of their stories. In fact, a regular “update to my readers” from a top-flight author that is available only in their ebooks, or to purchasers of their ebooks, would be an attraction to many and could serve as a constant reminder that downloading their books from illegitimate sources is cheating them.

I’m not against DRM in principle and I’m all for combating piracy any way we can (and I have a couple of thoughts on that subject I’ll save for a subsequent post.) But I am far from certain that piracy represents the same existential threat to book publishers that it did to record companies, although we have other threats of our own: the music business isn’t nearly so threatened by the shift to vertical.

One of my favorite people in the digital book business, who once worked in the music business said to me: “I don’t worry about piracy. I did in the music business because music was bought by kids. My customers are 53-year old ladies. They don’t go to pirate sites. They’d be afraid of getting a virus!” She’s right about that, at least for today. But for those who are concerned about piracy, I am not sure this problem can be attacked with toughness and muscle as effectively as it would be with creativity and delivering to the market something the pirates just can’t keep up with.

We have observed previously that the day will likely come when Big Authors will go straight to electronic distribution for some ebooks, bypassing the publishers to collect bigger royalties. What could be the first shot of that battle, and a reflection of the ideas in this post as well, may have been fired in the UK, where Sony has announced a special edition James Patterson ebook which will contain the new book, “Cross Country”, a month before its general release, plus other excerpts and a special letter from James Patterson. Of course, that deal was probably made by the publisher with Patterson’s cooperation, but it points to possibilities that should make publishers nervous.
