Leonard Shatzkin

Market research used to be a silly idea for publishers, but it is not anymore


When my father, Leonard Shatzkin, was appointed Director of Research at Doubleday in the 1950s, it was a deliberate attempt to give him license to use analytical techniques to affect how business was done across the company. He had started out heading up manufacturing, with a real focus on streamlining the number of trim sizes the company manufactured. (They were way ahead of their time doing that. Pete McCarthy has told me about the heroic work Andrew Weber and his colleagues did at Random House doing the same thing in the last decade, about a half-century later!)

Soon thereafter, Len Shatzkin was using statistical techniques to predict pre-publication orders from the earliest ones received (there were far fewer major accounts back then, so the pre-pub orders lacked the few sizable big pieces that comprise a huge chunk of the total today) to enable timely and efficient first printings. Later he took a statistically-based approach to figure out how many sales reps Doubleday needed and how to organize their territories. When the Dolphin Books paperback imprint was created (a commercial imprint to join the more academic Anchor Books line created a few years before by Jason Epstein), research and analytical techniques were used to decide which public domain classics to do first.
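The statistical idea behind that early-order prediction can be sketched very simply. The fixed-share assumption and all the numbers below are hypothetical; this illustrates only the general technique of scaling early orders by the share of the advance those accounts have historically represented, not Doubleday's actual method.

```python
# Hypothetical sketch: if the earliest-reporting accounts have
# historically placed a stable fraction of the total advance on
# comparable titles, early orders can be scaled up to project the
# full pre-publication sale, and hence the first printing.

def estimate_advance(early_orders: int, historical_early_share: float) -> int:
    """Project the total advance sale from the earliest orders received."""
    if not 0 < historical_early_share <= 1:
        raise ValueError("historical_early_share must be in (0, 1]")
    return round(early_orders / historical_early_share)

# If early-reporting accounts have historically represented 20 percent
# of the advance, 1,200 early orders project to a 6,000-copy advance.
print(estimate_advance(1200, 0.20))  # 6000
```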

In the many years I’ve been around the book business, I have often heard experts from other businesses decry the lack of “market research” done by publishers. In any other business (recorded music might be an exception), market research is a prerequisite to launching any new product. Movies use it. Hotel chains use it. Clothing manufacturers use it. Software companies use it. Online “content producers” use it. Sports teams use it. Politicians use it. It is just considered common sense in most businesses to acquire some basic understandings of the market you’re launching a new product into before you craft messages, select media, and target consumers.

In the past, I’ve defended the lack of consumer market research by publishers. For one thing, publishers (until very recently) didn’t “touch” consumers. Their interaction was with intermediaries who did. The focus for publishers was on the trade, not the reader, and the trade was “known” without research. To the extent that research was necessary, it was accomplished by phone calls to key players in the trade. The national chain buyer’s opinion of the market was the market research that mattered. If the publisher “knew different”, it wouldn’t do them any good if the gatekeeper wouldn’t allow the publisher’s books on his shelves.

And there were other structural impediments to applying what worked for other consumer items. Publishers did lots of books; the market for each one was both small and largely unique. The top line revenue expected for most titles was tiny by other consumer good standards. The idea of funding any meaningful market research for the output of a general trade publisher was both inappropriate and impractical.

But over the past 20 years, a very large percentage of the book business’s transaction base has moved online and an even larger part of book awareness has as well, so consumers have been leaving lots of bread crumbs in plain digital sight. Two things have shifted which really change everything.

Publishers are addressing the reader directly through publisher, book, and author websites; through social media, advertising, and direct marketing; and through their copy — whether or not they explicitly acknowledge that fact — because the publisher’s copy ends up being returned as a search result to many relevant queries.

The audience research itself is now much more accessible than it ever was: cheaper and easier to do in ways that are cost-effective and really could not be imagined as recently as ten years ago.

We’ve reached a point where no marketing copy for any book should be written without audience research having been done first. But no publisher is equipped to do that across the board. They don’t have the bodies; they don’t have the skill sets; and a process enabling that research doesn’t fit the current workflow and toolset.

So when critics said, before 2005, that publishers should be doing “market research”, the observation itself demonstrated a failure to understand the book business. But that changed in the past 10 years. Not recognizing the value of research now demonstrates a failure to understand how much the book business has changed.

What publishers need to do is to recognize “research” as a necessary activity, which, like Len Shatzkin’s work at Doubleday in the 1950s, needs to cut across functional lines. Publishers are moving in that direction, but mostly in a piecemeal way. One head of house pointed us to the fact that they’ve hired a data scientist for their team. We’ve seen new appointments with the word “audience” in their title or job description, as well as “consumer”, “data”, “analytics”, and “insight”, but “research” — while it does sometimes appear — is too often notable by its absence in the explicit description of their role.

Audience-centric research calls for a combination of an objective data-driven approach, the ability to use a large number of listening and analytical tools, and a methodology that examines keywords, terms, and topics looking to achieve particular goals or objectives. A similar frame of mind is required to perform other research tasks needed today: understanding the effect of price changes, or how the markets online and for brick stores vary by title or genre, or what impact digital promotion has on store sales.

The instincts to hire data scientists and to make the “audience” somebody’s job are good ones, but without changing the existing workflows around descriptive copy creation, they are practices that might create more distraction than enlightenment. Publishers need to develop the capability to understand what questions need to be asked and what insights need to be gained to craft copy that will accomplish specific goals with identified audiences.

Perhaps they are moving faster on this in the UK than we are in the US. One high-ranking executive in a major house who has worked on both sides of the Atlantic told me a story of research the Audience Insight group at his house delivered that had significant impact. They wanted to sign a “celebrity” author. Research showed that this author’s fan base was not as large as they anticipated, but that those fans had a high degree of belief and faith in the author’s opinions about food. A food-oriented book by that author was the approach taken and a bestseller was the result. This is a great example of how useful research can be, but even this particular big company doesn’t have the same infrastructure to do this work on the west side of the Atlantic.

What most distinguishes our approach at Logical Marketing from other digital marketing agencies and from most publishers’ own efforts is our emphasis on research. We’ve seen clearly that it helps target markets more effectively, even if you don’t write the book to specs suggested by the research. But it also helps our clients skip the pain and cost of strategic assumptions or tactics that are highly unlikely to pay off: attempting to compete on search terms a book could never rank high for; chasing a YouTube or Pinterest audience that might be large but will be hard or impossible to convert to book sales; or trying to capture sales directly from prospects that would be much more likely to convert through Amazon.

With the very high failure rate and enormous staff time suck that digital marketing campaigns are known for, research that avoids predictable failures quickly pays for itself in effort not wasted.

McCarthy tells me from his in-house experience that marketers — especially less-senior marketers — often know they’re working on a campaign that in all probability won’t work. We believe publishers often go through with these to show the agent and author — and sometimes their own editor — that they’re “trying” and that they are “supporting the book”. But good research is also something that can be shown to authors and agents to impress them, particularly in the months and years still left when not everybody will be doing it (and the further months and years when not everybody will be doing it well.) Good research will avoid inglorious failures as well as point to more likely paths to success.

Structural changes can happen in organic ways. Len Shatzkin became Director of Research at Doubleday by getting the budget to hire a mathematician (the term “data scientist” didn’t exist in 1953), using statistical knowledge to solve one problem (predicting advance sales from a small percentage of the orders), and then building on the company’s increasing recognition that analytical research “worked”.

If the research function were acknowledged at every publisher, it would be usefully employed to inform acquisition decisions (whether to bring in a title and how much it is worth), list development, pricing, backlist marketing strategies, physical book laydowns to retailers, geographical emphasis in marketing, and the timing of paperback edition release.

Perhaps the Director of Research — with a department that serves the whole publishing company — is an idea whose time has come again.

But, in the meantime, Logical Marketing can help.

Remember, you can help us choose the topics for Digital Book World 2016 by responding to our survey at this link.


Seven key insights about VMI for books and why it is becoming a current concern


Vendor-managed inventory (VMI) is a supply paradigm for retailers by which the distributor makes the individual stocking decisions rather than having them determined by “orders” from an account. The most significant application of it for books was in the mass-market paperback business in its early days, when most of the books went through the magazine wholesalers to newsstands, drug stores, and other merchants that sold magazines. The way it worked, originally, was that mass-market publishers “allocated” copies to each of several hundred “independent distributors” (also known as I.D. wholesalers), who in turn allocated them to the accounts.

Nobody thought of this as “vendor-managed inventory”. It was actually described as “forced distribution”. And since there was no ongoing restocking component built into the thinking, that was the right way to frame it.

The net result was that copies of a title could appear in tens of thousands of individual locations without a publisher needing to show up at, or even ship to, each and every one.

To make this system functional at the beginning, the books, like magazines, had a predictable monthly cycle through the system. The copies that didn’t sell in their allotted time were destroyed, with covers returned to the publisher for credit.

Over time, the system became inefficient (the details of which are a story for another day, but the long story short is that publishers couldn’t resist the temptation to overload the system with more titles and copies than it could handle) and mass-market publishing evolved into something quite different which today, aside from mostly sticking to standard rack-sized books, works nothing like it did at the beginning.

My father, Leonard Shatzkin, introduced a much more sophisticated version of VMI for bookstores at Doubleday in 1957 called the Doubleday Merchandising Plan. In the Doubleday version, reps left the store with a count of the books on-hand rather than a purchase order. The store had agreed in advance to let Doubleday use that inventory count to calculate sales and determine what should then be shipped in. In 18 months, there were 800 stores on the Plan, Doubleday’s backlist sales had quadrupled and the cost of sales had quartered. VMI was much more efficient and productive — for Doubleday and for the stores — than the “normal” way of stocking was. That “normal” way — the store issues orders and the publisher then ships them — was described as “distribution by negotiation” by my father in his seminal book, “In Cold Type”, and it is still the way most books find their way to most retail shelves.

After my Dad left Doubleday in 1960, successor sales executives — who, frankly, didn’t really understand the power and value of what Dad had left them — allowed the system to atrophy. This started in a time-honored way, with reps appealing that some stores in their territory would rather just write their own backlist orders. Management gave undue credence to the reps who managed those accounts and allowed exceptions. The exceptions, over time, became more prevalent than the real VMI, and within a decade or so the enormous advantage of having hundreds of stores so efficiently stocked with backlist was gone.

And so, for the most part, VMI was gone from the book business by the mid-1970s. Since then, there have been substantial improvements in the supply chain: PCs in stores that can manage vast amounts of data; powerful service offerings from the wholesalers (primarily Ingram and Baker & Taylor, but others too); information through services like Above the Treeline; and consolidation of the trade business at both ends, so that the lion’s share of a store’s supply comes from a handful of major publishers and distributors (compared to my Dad’s day) and lots of the books go to a relatively smaller number of accounts. All of these have combined to make efficient inventory management for books at retail at least appear not to need the advantages of VMI the way it did 60 years ago.

And since so many bookstores really like to make the book-by-book stocking decisions, or at least to control them through the systems they have invested in and the title-specific knowledge they work hard to develop, there has been little motivation for publishers or wholesalers to invest in developing the capability to execute VMI.

Until recently. Now two factors are changing that.

One is that non-bookstore distribution of books is growing. And non-bookstores don’t have the same investments in book-specific inventory management and knowledge that bookstores do, let alone the emotional investments that make bookstores want to decide what the books are. Sometimes they simply can’t do it: they don’t have the bandwidth or expertise to buy books.

And the other is that the two largest book chains, Barnes & Noble and Books-a-Million, are seeing virtue in transferring some of the stocking decisions to suppliers. B&N, at least, has been actively encouraging publishers to think about VMI for several years. These discussions have reportedly revolved around a concept similar to one the late Borders chain was trying a decade or more ago, finding “category captains” that know a subject well enough to relieve the chain of the need for broad knowledge of all the books that fall under that rubric.

This is compelling. Finding that you are managing business that could be made more efficient with a system while, at the same time, some of your biggest accounts are asking for services that could benefit from the same automation is a far more persuasive goad to pursue an idea than the more abstract notion that you could create a beneficial paradigm shift.

As a result, many publishing sales departments today are beginning to grapple with defining VMI, thinking about how to apply it, and confronting the questions around how it affects staffing, sales call patterns, and commercial terms. This interest is likely to grow. A well-designed VMI system for books (and buying one off-the-shelf that was not specifically designed for books is not a viable solution) will have applications and create opportunities all over the world. Since delivering books globally is an increasingly prevalent framework for business thinking, the case to invest in this capability gets easier to make in many places with each passing day.

VMI is a big subject and there’s a lot to know and think through about it. I’ve had the unusual — probably unique — opportunity to contemplate it with all its nuances for 50 years, thanks to my Dad’s visionary insight into the topic and a father-son relationship that included a lot of shop talk from my very early years. So here’s my starter list of conceptual points that I hope would be helpful to any publisher or retailer thinking about an approach to VMI.

1. Efficient and commercially viable VMI requires managing with rules, not with cases. Some of the current candidates to develop a VMI system have been drawn into it servicing planograms or spinner racks in non-book retailers. These restocking challenges are simpler than stocking a store because the title range is usually predetermined and confined and the restocking quantity is usually just one-for-one replenishment. We have found that even in those simple cases, the temptation to make individual decisions — swapping out titles or increasing or decreasing quantities in certain stores based on rates of movement — is hard to resist and adds complications that can rapidly overwhelm manual efforts to manage it.
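To make the “rules, not cases” point concrete, here is a minimal sketch (the model-stock rule and the numbers are hypothetical, not drawn from any actual system) of replenishment computed by one uniform rule rather than by per-title judgment calls:

```python
# One rule applied uniformly to every title in every outlet:
# restock each slot back up to its agreed model-stock level.

def replenishment(on_hand: int, model_stock: int) -> int:
    """Copies to ship: one-for-one replenishment toward model stock."""
    return max(model_stock - on_hand, 0)

print(replenishment(on_hand=1, model_stock=2))  # 1 (one copy sold, one shipped)
print(replenishment(on_hand=2, model_stock=2))  # 0 (nothing sold, nothing shipped)
```

The moment someone starts hand-adjusting quantities store by store, the computation stops being one rule and becomes thousands of cases — which is exactly the temptation the paragraph above warns against.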

2. VMI is based on data-informed shipments and returns. It must include returns, markdowns, or disposals to clear inventory. Putting books in quickly and efficiently to replace sold books is, indeed, the crux of VMI. But that alone is “necessary but not sufficient”. Most titles do not sell a single copy in most stores to which they are introduced. (This fact will surprise many people, but it is mathematically unavoidable and confirmed through data I have gotten from friends with retail data to query.) And many books will sell for a while and then stop, leaving copies behind. Any inventory management depending on VMI still requires periodic purging of excess inventory. That is, the publisher or distributor determining replenishment must also, from time to time, identify and deal with excess stock.

3. VMI sensibly combines with consignment and vendor-paid freight. The convention that books are invoiced to the account when they are shipped and that the store pays the shipping cost of returns (and frequently on incoming shipments as well) makes sense when the store holds the order book and decides what titles and quantities are coming in. But if the store isn’t deciding the titles and quantities, it obviously shouldn’t be held accountable for freight costs on returns; that would be license for the publisher or distributor to take unwise risks. The same is really true for the carrying cost of the inventory between receipt and sale. If the store’s deciding, it isn’t crazy for that to be their lookout. But if the publisher or distributor is deciding, then the inventory risk should be transferred to them. The simplest way to do that is for the commercial arrangement to shift so that the publisher offers consignment and freight paid both ways. The store should pay promptly — probably weekly — when the books are sold. (Publishers: before you get antsy about what all this means to your margins, read the post to the end.)

Aside from being fairer, commercially more logical, and an attractive proposition that should entice the store rather than a risky one that will discourage participation, this arrangement sets up a much more sensible framework for other discussions that need to take place. With publisher prices marked on all the books, the retailer can see that there is a known margin on every sale for the store to capture (or to offer as discounts to customers). And because the publisher is clearly taking all the inventory risk, it also makes it clear that the account must take responsibility for inventory “shrink” (books that disappear from the shelves without going through the cash register).

Obviously, shrink is entirely the retailer’s problem in a sale-and-return arrangement; whatever they can’t return they will have paid for. But it is also obvious that retailers in consignment arrangements try to elide that responsibility. Publishers can’t allow a situation where the retailer has no incentive to make sure every book leaving the store goes through the sales scan first.

4. Frequent replenishment is a critical component of successful VMI. No system can avoid the reality that predicting book sales on a per-title-per-outlet basis is impossible to do with a high degree of accuracy. The best antidote to this challenge is to ship frequently, which allows lower quantities without lost sales because new copies replace sold copies with little delay. Vendor-paid freight is a real restraint, because per-unit freight costs go down as shipment size goes up, but it should be the only limitation on shipment frequency, assuming the sales information is reported electronically on a daily basis as it should be. The publisher or distributor should always be itching to ship as frequently as an order large enough to provide tolerable picking and freight costs can be assembled. The retailer needs to be encouraged, or helped, to enable restocking as quickly and as frequently as cost-efficient shipments will allow.

5. If a store has no costs of inventory — either investment or freight — its only cost is the real estate the goods require. GMROII — gross margin return on inventory investment — is the best measurement of profitability for a retailer. With VMI, vendor-paid freight, and consignment, it is effectively infinite. Therefore, profitable margins can be achieved with considerably less than the 40 to 50 percent discounts that have prevailed historically. How that will play out in negotiations is a case-by-case problem, but publishers should really understand GMROII and its implications for retail profitability so they fully comprehend what enormous financial advantages this new way of framing the commercial relationship gives the retailer.

(The shift is not without its challenges for publishers to manage but what at first appears to be the biggest one — the delay in “recognizing” sales for the balance sheet — is actually much smaller than it might first appear. And that’s also a subject for another day.)
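The GMROII arithmetic behind point 5 can be made concrete with hypothetical figures:

```python
# GMROII = annual gross margin / average inventory investment.
# Under consignment with vendor-paid freight, the retailer's
# inventory investment is zero, so the ratio is unbounded.

def gmroii(annual_gross_margin: float, avg_inventory_investment: float) -> float:
    if avg_inventory_investment == 0:
        return float("inf")  # consignment: no money tied up in stock
    return annual_gross_margin / avg_inventory_investment

# Conventional terms: $40,000 of margin earned on $25,000 of owned inventory.
print(gmroii(40_000, 25_000))  # 1.6
# Consignment: the same margin with no inventory investment at all.
print(gmroii(40_000, 0))       # inf
```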

6. Actually, the store also saves the cost of buying, which is very expensive for books. The most important advantage VMI gives a publisher is removing the need for a buyer to get their books onto somebody’s shelves. The publisher with VMI overcomes what has been the insuperable barrier blocking them from many retail establishments: the store can’t bear the expense of the expertise and knowledge required to do the buying. It is harder to sell that advantage to existing book retailers who have invested in systems to enable buyers, even if some buyer time can be saved through the publisher’s or distributor’s efforts and expertise. But a non-book retailer looking for complementary merchandise that might also be a traffic builder will appreciate largely cost-free inventory that adds margin and will see profitability at margins considerably lower than the discounts publishers must provide today.

7. Within reasonable limits, the publisher or distributor should be happy to honor input from the retailer about books they want to carry. It is important to remember that most titles shipped to most stores don’t sell a single unit. Giving a store a title they’re requesting should have odds good enough to be worth the risk (although that will be proven true or not for each outlet by data over time). Taking the huge number of necessary decisions off a store’s hands is useful for everybody; it shouldn’t suggest their input is not relevant. Indeed, getting information from stores about price or topical promotions they are running, on books or other merchandise, and incorporating that into the rules around stocking books, will help any book supplier provide a better and more profitable service to its accounts. After all, having a store say “I’d like to sell this title for 20 percent off next week in a major promotion, would you mind sending me more copies?” opens up a conversation every publisher is happy to have.

Of course, in a variety of consulting assignments, we are working on this, including system design. It is staggering to contemplate how much more sophistication it is possible to build into the systems today than it was a decade-and-a-half ago when we last immersed ourselves in this. In the short run, a VMI-management system will provide a competitive edge, primarily because it will open up the opportunity to deliver to retail shelves that will simply not be accessible without it. That will lead to it becoming a requirement. As I’ve said here before, a prediction like that is not worth much without being attached to a time scale. I think we’ll see this cycle play out over the next ten years. That is: by 2025, just about all book distribution to retailers will be through a VMI system.


Better book marketing in the future depends a bit on unlearning the best practices of the past


[Note to subscribers. We have switched from Feedburner to MailChimp for email distribution to our list to improve our service. Please send us a note if you have any problems or think there’s anything we ought to know.]

*************************************************************************

A few years ago, publishers invented the position of Chief Digital Officer and many of the big houses hired one. The creation of a position with that title, reporting to the CEO, explicitly acknowledged the need to address digital change at the highest levels of the company.

Now we’re seeing new hires being put in charge of “audiences” or “audience development”. I don’t know exactly what that means (a good topic for Digital Book World 2016), but some conversations in the past couple of weeks are making clearer to me what marketing and content development in book publishing is going to have to look like. And audiences are, indeed, at the heart of it.

I’ve written before about Pete McCarthy’s conviction that unique research is needed into the audiences for every book and every author and that the flow of data about a book that’s in the marketplace provides continuing opportunities to sharpen the understandings of how to sell to those audiences. Applying this philosophy bumps up against two realities so long-standing in the trade book business that they’re very hard to change:

How the book descriptions which are the basis for all marketing copy get written
A generic lack of by-title attention to the backlist

The new skill set that is needed to address both of these is, indeed, the capability to do research, act on it, and, as Pete says, rinse and repeat. Research, analysis, action, observation. Rinse and repeat.

I had a conversation over lunch last week with an imprint-level executive at a Big House. S/he got my attention by expressing doubt about the value of “landing pages”, which are (I’ve learned through my work with Logical Marketing; I wouldn’t have known this a year ago) one of the most useful tools to improve discovery for books and authors. I have related one particularly persuasive anecdote about that here. This was a demonstration to me of how much basic knowledge about discovery and SEO is lacking in publishing. (The case for how widespread the ignorance of SEO is in publishing has been made persuasively in an ebook by British marketer Chris McVeigh of Fourfiftyone, a consultancy that seems to share a lot of the philosophy we employ at Logical Marketing.)

But then, my lunch companion made an important operational point. I was advocating research as a tool to decide what to acquire, or what projects might work. “But I could never get money to do research on a book we hadn’t signed,” s/he said, “except perhaps to use going after a big author who is with another house.” (Indeed, we’ve done extensive audits at Logical Marketing for big publishers who had exactly that purpose in mind.) “But, routinely? Impossible!”

The team Pete leads can do what would constitute useful research which would really inform an acquisition decision, for $1,000 a title. If the capability to do what we do — which probably requires the command of about two dozen analytical tools — were in-house, it would cost much less than that.

Park that thought.

I also had an exchange last week with Hugh Howey, my friend the incredibly successful indie author with whom I generally agree on very little concerning big publishers and their value to authors. But Hugh made a point that is absolutely fundamental, one which I learned and absorbed so long ago that I haven’t dusted it off for the modern era. And it is profoundly important.

Hugh says there are new authors he’s encountering every day who are achieving success after publishers failed with them. It is when he described the sales curve of the successful indie — “steadily growing sales” — that a penny dropped for me. An old penny.

We recognize in our business that “word of mouth” is the most effective means of growing the market for a book. If that were the way things really worked, books would tend to have a sales curve that was a relatively gentle upward slope to a peak and then a relatively gentle downward slope.

Of course, very few books have ever had that sales curve. Nothing about the way big publishers routinely market and sell would enable it to happen. Everything publishers do tries to impose a different sales curve on their books.

A gentle upward slope followed by a gentle downward slope would, in the physical world, require a broad and very shallow distribution with rapid replenishment where the first copy or two put at an outlet had sold. But widespread coordination of rapid replenishment of this kind for books selling at low volumes at any particular outlet (let alone most outlets) is, for the most part, a practical impossibility in the world of distributed retail.

In fact, distributed retail demands a completely out-of-synch sales curve. It wants a big sale the first week a book is out to give it the best chance of making the bestseller list and, even failing that, the best chance of being worthy of continuing attention by a publisher’s sales staff, and therefore, the marketing team. Books in retail distribution are seen as failures if they don’t catch on pretty quickly, if not in days or weeks, certainly within a couple of months. And if a store sells two copies, say, of a new book in the first three months, it probably doesn’t make the cut as a book to be retained. If they bought two, they’re glad they’re gone and not likely to re-order without some push by the publisher or attention-grabbing other circumstance. If they bought ten, they’ll want to get their dollars back by making returns so they can invest in the next potentially big thing.

But that’s not the case online, where there is no need for distributed inventory (especially of ebooks!). If the first copies sold lead to word-of-mouth recommendations, the book will still be available to the online shopper. And there will be nothing in the way it is presented — it won’t have a torn cover or be hidden in the back of the store, say — to indicate it isn’t successful. People can buy it and the chain can continue, building over time. Three months later, six months later, it really doesn’t matter; the book can keep selling. And, by the way, this will be true at any online retailer with an open account at Ingram (including for print-on-demand books), not just at Amazon.

But, in the brick and mortar world, the book will effectively be dead if it doesn’t catch on in the first three months. And the reality of staffing, focus, and the sales philosophy of most publishers means it won’t be getting any attention from the house’s digital marketers either.

If you live in the world of indie success like Hugh Howey does, you are repeatedly seeing authors breaking through months after a book’s publication, at a time when an experienced author knows a house would have given up on them.

Now park that.

I also had a chat last week with a former colleague of mine now at a periodical. He was explaining that one major conceptual challenge for his publication in the digital age was to see their readership as many pretty small and discrete audiences, not one big one at the level of the “subscriber”. No story in his publication is intended for “everybody”; what is important is for a newspaper or magazine to know whether particular stories are satisfying the needs of the particular niche of their audience that wants that topic, that kind of story. Talking to this former colleague about digital marketing and publishing was a variation on the themes I regularly explore with Pete.

One thing I learned in this conversation made another penny drop. Let’s say you have a story on any particular topic, from theater to rugby, my friend posited. Your total “theoretical market” within the publication’s readership is every person who ever read a single story on that subject. But your “core market” is every person who has read two stories on it. If a high percentage of those read it, the story succeeded. If not, the story failed.
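The “theoretical market” versus “core market” distinction my friend posited can be expressed as a simple calculation. The sketch below is purely illustrative — the reader names and story counts are invented, not drawn from any real publication’s data:

```python
# Hypothetical sketch of the "theoretical market" vs. "core market" idea.
# All names and numbers are invented for illustration.

def story_reach(read_counts, new_story_readers):
    """read_counts: {reader: number of prior stories read on this topic}
    new_story_readers: set of readers who read the new story."""
    theoretical = {r for r, n in read_counts.items() if n >= 1}  # read at least one
    core = {r for r, n in read_counts.items() if n >= 2}         # read at least two
    hit_rate = len(core & new_story_readers) / len(core) if core else 0.0
    return len(theoretical), len(core), hit_rate

# Five readers with prior rugby-story counts; three read the new rugby piece.
prior = {"ann": 4, "bob": 1, "cal": 2, "dee": 0, "eli": 3}
readers_of_new_story = {"ann", "cal", "bob"}
print(story_reach(prior, readers_of_new_story))  # (4, 3, 0.6666666666666666)
```

Here two of the three core readers read the new story, so by this yardstick the story reached two-thirds of the audience that mattered.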

And a further implication of this analysis is that seeing your audiences that way, and growing them that way, will also ultimately allow monetizing them more effectively. This wouldn’t be advertising-led, so much as harvesting the benefits of audience-informed content creation, but it is totally outside the way editorial creation at newspapers and magazines has always occurred.

And now park that.

We had a meeting two weeks ago with a fledgling publisher whose owner has a great deal of direct marketing expertise. As he heard Pete explaining what he did, looking for search terms that suggested opportunity (lots of use of the term and relatively few particularly good answers), he wondered if we could tell him through research what book to write. We’ve gotten some publishers in some circumstances to do marketing research early enough to influence titling and sub-titling. McVeigh in his ebook makes the same point under the rubric that SEO should be employed before titling any book.

Of course, we don’t sell that kind of help very often, or we haven’t so far. It would require getting marketing money committed early enough to pay for research like that. But we know it is useful.

And all of this together brings into sharper focus for me where trade publishing has to go, and how the marketing function, indeed, the whole publishing enterprise, needs to be about a constant process of audience segmentation, research, tweaks, analysis, and repeat. A persistently enhanced understanding of multiple audiences can productively inform title selection and creation. And systems and workflows need to be built to systematically apply what is being learned every day to every title which might benefit. Audience segmentation and constant research are really at the heart of the successful trade publishing enterprise of the future, even if we are only lurching toward them now with a primitive understanding of SEO, the occasional A-B test for a Facebook ad, and the gathering of some odd web traffic and email lists that don’t relate to any overall plan.

A publisher operating at scale ought to have the ability to provide those authors that want to build their audiences one reader at a time better analysis and tools than they would have to do it on their own. Publishers have always depended on the energy of authors to sell their books; the techniques just have to change. Instead of footing the bill for expensive and wasteful author tours, publishers should be providing tools, data, and helpful coaching to be force multipliers for the efforts authors are happy to expend on their own behalf. The publisher’s goal should be to have their authors saying “I don’t know how I could possibly be so effective without the help I get from my publisher.”

Publishers should also be doing the necessary research to examine the market for each book they might do before they bid on it. They should have audience groups with whom they’re in constant contact, and they also need the ability to quickly segment and analyze audiences “in the wild”. The dedicated research capabilities need to be applied to the opportunities surfaced by constant monitoring of both the sales of and the chatter about the backlist.

Size, scale, and a large number of titles about which a lot is known should give any publisher advantages over both indie authors and dominant retailers in building the biggest possible audience for the books it publishes. But getting there will require learning the techniques of the future, unlearning old concepts, and freeing themselves from the discipline of “pub date” timing that has always driven effective trade publishing.

The publishers creating new management positions with the word “audience” in the title would seem to be very much on the right track. It is worth recalling that my father, Leonard Shatzkin, carried the title of Director of Research at Doubleday in the 1950s. Research would be another function to glorify with a title and a budget assigned and monitored from the top of each company. Note to the CEOs: a budget for “research” for marketing and to inform acquisition should be explicit and it should be the job of somebody extremely capable to make sure it is productively invested.


Getting books more retail shelf space is going to require a new approach


That bookstore shelf space is disappearing is a reality that nobody denies. It makes sense that there are people trying to figure out how to arrest the decline. There has been some recent cheerleading about the “growth” of indie bookstores, but the hard reality is that they’re expanding shelf space more slowly than chains are shrinking it. No publisher today can make a living selling books just through brick-and-mortar bookstores. For straight text reading, it is rapidly becoming an ancillary channel, a special market. Illustrated book publishers, whose books don’t port so well to ebooks and whose printed books are more likely to be bought if they are seen and touched, are working “special” sales — those not made through outlets that primarily sell books — harder than ever. That means they’re trying to put books into retail stores that aren’t primarily bookstores.

A recent article in Publishing Perspectives, by a small publisher, envisions an expanded market for selling books through libraries. Deborah Emin of Sullivan Street Press imagines a world where libraries become book retailers liberated from the normal retailer’s concerns about “exorbitant rent and the dealings with landlords who can terminate a lease renewal at will”. But what really caught my attention was this statement:

What if bookstores could invest in what bookstores are best at — filling their shelves with books and taking chances on new authors rather than being concerned that their stock won’t move fast enough and they are wasting valuable space trying to sell what is more difficult to sell but that they know can be sold?

This stopped me because, in fact, I have precisely the opposite take on the problem. What I see is that the cost of buying books, and the impossibility of doing it “right” based on the sales and inventory data of a single store, is really the biggest barrier to profitable bookselling, even more of a challenge than the cost of the space.

One big component of the problem, in a nutshell, is that most books don’t sell enough copies to have a “sales rate” in any one store. Consider a little quick retail math. A store that does $1.2 million in sales a year ($100,000 a month) is selling 5 to 10 thousand books a month. Call it eight thousand. The chances are that store’s eight thousand sales will be more than 7,500 “ones”, with the balance made up mostly of “twos”, with a handful of titles — in the neighborhood of a dozen — that sell three or more. If the store turns its stock 4 times a year (which would be a very good performance), it is sitting on about 25,000 books at a time, also mostly “ones”, so let’s say they have 22,000 titles. So in the average month, 2/3 of their titles sell zero and more than 90 percent sell no more than one.
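The arithmetic above checks out, if we fill in the one figure the text leaves implicit (the number of “twos”). A back-of-the-envelope sketch, using the round numbers given — the 230 “twos” and the assumption of roughly four copies each for the dozen top sellers are my own plugs to make the example concrete:

```python
# Back-of-the-envelope check of the store math above, using the stated
# round numbers (8,000 books sold a month, ~22,000 titles in stock).
titles_in_stock = 22_000
ones = 7_500            # titles selling exactly one copy in the month
twos = 230              # assumed: most of the remaining ~500 copies
three_plus = 12         # "in the neighborhood of a dozen"

copies = ones * 1 + twos * 2 + three_plus * 4   # assume ~4 copies each
titles_that_sold = ones + twos + three_plus
zero_share = 1 - titles_that_sold / titles_in_stock
at_most_one_share = (titles_in_stock - titles_that_sold + ones) / titles_in_stock

print(copies)                       # 8008, close to the 8,000 monthly sales
print(round(zero_share, 2))         # 0.65, i.e. about 2/3 sell zero
print(round(at_most_one_share, 2))  # 0.99, i.e. well over 90% sell 0 or 1
```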

In the following month, the 7,500 titles that sell one will largely change.

There is no mathematician in the world who can make meaningful predictions for what any particular title will sell in a subsequent month with data like that. And there is no mathematician in the world who can tell you how the hundreds of thousands of titles not in the store would have done if they had been there, based on the store’s data on those titles (which is zilch).

In the past decade, indie stores have gotten some real help getting some indications about sales outside their four walls. Ingram ranks titles across a much broader universe. The store system provider Above the Treeline provides some title-level visibility across their client base. That’s a lot better than nothing, but the data is not provided in a form that would enable any automated use of it for reordering.

And that points to the second, and larger, component of the problem: automating the ordering. The human attention it takes to make the stocking decisions for a bookstore has not really been scaled. B. Dalton Booksellers, which was bought by, absorbed into, and then discarded by Barnes & Noble, pioneered automated models in the 1970s, the first real computer-assisted inventory management in bookstores. A buyer would set an inventory level and reorder point for a book in a store (“setting the model” or “modeling the title”) and the computer would take over from there, automatically reordering when inventory fell to or below the reorder point. This capability made Dalton grow faster than Walden, its chief competitor, which didn’t have this ability to keep backlist in the stores without buyer or store manager intervention. The shortcoming of the model system, of course, is that a buyer has to put it on, take it off, or change it. So we have a manual requirement to manage the automation.

When you think about the sheer number of store-title model combinations in a chain of hundreds of stores with hundreds, if not thousands, of modeled titles per store, that’s no trivial task.

Unfortunately, the art or science or technology (or all three) of inventory management for books in stores hasn’t progressed a whole lot since then. Barnes & Noble built a great internal supply chain with warehouses that could resupply its stores very quickly and that improved the efficiency of the models. But an unnoticed and largely uncommented-upon reality is that this internal supply chain will be hard to sustain, and increasingly costly, as the base of stores and sales it serves diminishes in size.

My father recognized this problem sixty years ago and created the Doubleday Merchandising Plan to solve it. That plan provided vendor-managed inventory for the stores. The reps walked out with an inventory count rather than an order. It was posted (manually) to a ledger by a roomful of workers at Doubleday’s home office, and an order was then created and sent to the store which had agreed in advance to accept it. Sales exploded, cost of sales shrank, and this program propelled Doubleday into the top echelon of book publishers. Leonard Shatzkin’s system was not automated, but it was a lot faster and more efficient than the store’s own efforts, particularly in those days when there was no computer assistance to track the inventory.

As stores gained the ability to track inventory through the 1980s, and were further assisted by a wholesale network led by Ingram that could restock them quickly, improved inventory management sharply increased bookstore profitability and the bookstore network grew. But with bookstores now heavily invested in systems to help them order more efficiently, the need for and receptiveness to publisher management of inventory declined.

But stores that don’t normally buy books and which can’t make the investments in book-oriented inventory tracking and buyers with the huge amounts of special knowledge that book buyers have still needed the help. Nearly two decades ago, I helped a client build an automated stocking system that could manage inventory on thousands of titles in thousands of stores with very little human intervention. It has run successfully to this day and is used to stock books in three of the largest chains in the country.

We used a pretty simple logic to build this system, limited as we were by what computers could do in 2000. The system calculates stock turn by title across the chain and then ranks the books by that metric. Then each store gets the highest-ranked books it doesn’t already have each week to replace the books it has sold. This automated system is crude, but extremely effective.
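The logic just described can be sketched in a few lines. This is a minimal illustration of the approach, not the actual system; the data shapes, function names, and numbers are all invented:

```python
# A minimal sketch of the replenishment logic described above: rank titles
# by chain-wide stock turn, then send each store the highest-ranked titles
# it lacks, replacing the units it sold. All data here is illustrative.

def rank_titles(chain_sales, chain_inventory):
    """Rank titles by chain-wide stock turn (units sold / average stock)."""
    turns = {t: chain_sales[t] / max(chain_inventory[t], 1) for t in chain_sales}
    return sorted(turns, key=turns.get, reverse=True)

def weekly_order(ranking, store_stock, units_sold_this_week):
    """One copy each of the best-ranked titles the store doesn't carry,
    up to the number of units the store sold this week."""
    order = []
    for title in ranking:
        if len(order) == units_sold_this_week:
            break
        if title not in store_stock:
            order.append(title)
    return order

ranking = rank_titles({"A": 120, "B": 400, "C": 60}, {"A": 30, "B": 50, "C": 40})
print(ranking)                          # ['B', 'A', 'C']
print(weekly_order(ranking, {"B"}, 2))  # ['A', 'C']
```

The crudeness the text admits to is visible here: the store’s own sales history plays no role at all, only the chain-wide ranking — which is exactly why it works for titles too slow-moving to have a per-store sales rate.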

Of course, persuading a bookstore to accept a publisher’s or wholesaler’s decisions about what titles to stock would be a very heavy lift. But as the retail book market shifts from dedicated bookstores to shelf-space-for-books in retailers with other specialties, it becomes easier for publishers or distributors to find shelf space that can be stocked on that basis.

Since I am now working on a more modern version of what we designed in 2000, it is easy to see that much more sophisticated ranking systems and stocking rules can be managed in an automated way than was possible then.

Changing the paradigm by which books find their way to store shelves is a way to meaningfully improve the efficiency of book sales in brick-and-mortar stores. Coupling it with true consignment terms (which sale-and-return is not) can make book sales viable for stores at lower discounts, which could meaningfully improve publishers’ margins.

There’s plenty of rent being paid for space books would sell well in. The problem is the cost of putting the right books into those spaces. We won’t get there presenting, ordering, and fulfilling title-by-title as we’ve always done. That’s the first place to look for a better answer. Reducing or eliminating rent would be helpful in the short run, probably not sustainable in the long run, and it would sidestep the real challenge of retail: presenting the most saleable possible mix to the consumers who will shop from it every single day.


What makes books different…


Before the digital age, retailers that tried to sell across media were pretty rare. Barnes & Noble added music CDs to their product mix when the era of records and cassettes had long passed. Record stores rarely sold books and, if they did, tended to sell books related to an interest in music. For those stores, it wasn’t so much about combining media as it was about offering a defined audience content related to their interest, like Home Depot selling home repair books. For the most part in pre-Internet times, books, music, and video each had its own retail network.

But when media became largely digital in the first decade of the 21st century, the digital companies that decided to establish consumer retail tried to erase the distinction that had grown up dividing reading (books) from listening (music) from watching (movies and TV). The three principal digital giants in the media retailing space — Amazon, Apple, and Google — all sell all these media in their “pure” form and maintain a separate market for “apps” as well that might contain any or all of the legacy media.

The retailing efforts for all of them are divided along legacy media lines, acknowledging the reality that people are usually shopping specifically for a book or music or a cinematic experience. Most are probably not, as some seem to imagine, choosing which they’ll do based on what’s available at what price across the media. (This is a popular meme at the moment: books “competing” with other media because they are consumed on the same devices. Of course, only a minority of books are consumed on devices, unlike the other media. Even though this cross-media competition might be intuitive logic to some people, it has scarcely been “proven” and, while it might be true to a limited extent, it doesn’t look like a big part of the marketing problem to me.)

It seems from here that Amazon and Barnes & Noble have a distinct advantage over all their other competitors in the ebook space because, with books — unlike movies and TV and music — the audience toggles between print and digital. And this might not change anytime soon. The stats are scattered and not definitive, but a recent survey in Australia found that ninety-five percent of Australians under 30 preferred paperbacks to ebooks! Other data seem to indicate that most ebook readers also read print. To the extent that is true, a book shopper — or searcher — would want to be searching the universe of book titles, print and digital, to make a selection.

It should be more widely understood that the physical book will not go the way of the Dodo nearly as fast as the shrink-wrapped version has for music or TV/film. It hasn’t and it won’t. There are very good, understandable, and really undeniable reasons for this, even though it seems like many smart people expect all the media to go all-digital in much the same way.

Making the case that “books are different” requires me to unlearn what I was brought up to believe. My father, Leonard Shatzkin, used to ridicule the idea that “books are different”, which was too often (he thought) invoked to explain why “modern” (in the 1950s and 1960s) business practices like planning and forecasting and measuring couldn’t be applied to books like they were to so many other businesses after World War II. In fact, Dad shied away from hiring people with book business experience, “because they would have learned the wrong things”.

But in the digital age, and as compared to other media, books are definitely different and success in books, whether print or digital, is dependent on understanding that.

First of all, the book — unlike its hard good counterparts the CD (or record or cassette) and DVD (or videotape) — has functionality that the ebook version does not. Quite aside from the fact that you don’t need a powered device (or an Internet connection) to get or consume it, the book allows you to flip through pages, write margin notes, dog-ear pages you want to get back to quickly, and easily navigate around back and forth through the text much more readily than with an ebook. There are no comparable capabilities that come with a CD or DVD.

Second, the book has — or can have — aesthetic qualities that the ebook will not. Some people flip for the feel of the paper or the smell of the ink, but you don’t have to be weirdly obsessed with the craft of bookmaking to appreciate a good print presentation.

But third, and most important, is the distinction about the content itself. When you are watching a movie or TV show or listening to music through any device, the originating source makes only the most nuanced difference to your consumption experience. Yes, there are audiophiles who really prefer vinyl records to CDs and there probably are also those who will insist that the iTunes-file-version is not as good as the CD. And everybody who has watched a streamed video has experienced times when the transmission was not optimal. There are almost certainly music and movie aficionados who will insist on a hard goods version to avoid those inferiorities.

But the differences between printed books and digital books are much more profound and they are not nuanced. In fact, there are categories of books that satisfy audiences very well in digital form and there are whole other categories of books that don’t sell at all well in digital. That is because while the difference between classical music and rock or the difference between a comedy and a thriller isn’t reflected in any difference between a streamed or hard-goods version, the difference between a novel and a travel guide or a book of knitting instruction is enormous when moving from a physical to digital format.

For one thing, the book — static words or images on a flat surface, whether printed or on a screen — is often a presentation compromise based on the limitations of “static”. The producer of a record doesn’t think “how would I present this content differently if it is going to be distributed as a file rather than a CD?” But the knitting stitch that is shown in eight captioned still pictures in a printed book could just as well be a video in an ebook. And it probably should be.

In fact, this might be the use case for which a consumer would make a media-specific decision. If you know what knitting stitch you need to learn, searching YouTube for a video might make more sense than trying to find instructions in a book!

Losing the 1-to-1 relationship between the printed version and the digital version adds expense and a whole set of creative decisions that are not faced by the music and movie/TV equivalents. And they are also not a concern for the publisher of a novel or a biography. But these are big concerns for everybody in the book business who doesn’t sell straight-text immersive reading. The point is that screen size and quality are not — and never were — the only barriers in the way of other books making the digital leap.

So even though fiction reading has largely moved to digital (maybe even more than half), most of the consumer book business, by far, is still print. Even eye-catching headlines, like the one from July when the web site AuthorEarnings (organized and run by indie author Hugh Howey, a man with a strong point of view about all this) said “one in three ebooks” sold by Amazon is self-published, might not be as powerful at a second glance.

Although Howey weeds out the ebooks that were given away free, the share of the consumer revenue earned by those indie ebooks would be a much smaller fraction than their unit sales. The new ebooks from big houses, which is a big percentage of the ebook sales they make (and that AuthorEarnings report in July said the Big Five still had an even bigger share of units than the indies), are routinely priced anywhere from 3 to 10 times what indie ebooks normally sell for. So that “share” if expressed as a “share of revenue” might be more like five or ten percent. It really couldn’t be more than 15%.
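That rough estimate is easy to sense-check with a deliberately simplified two-tier model — one indie price, one big-house price at a multiple of it, which is of course a caricature of real pricing:

```python
# Rough sense-check of the revenue-share claim: if indie ebooks are 1/3 of
# units but big-house ebooks sell for 3x to 10x the price, what is the
# indie share of revenue? (A deliberately simplified two-tier model.)

def indie_revenue_share(unit_share, price_multiple):
    indie = unit_share * 1.0                        # indie price normalized to 1
    traditional = (1 - unit_share) * price_multiple
    return indie / (indie + traditional)

for k in (3, 5, 10):
    print(k, round(indie_revenue_share(1/3, k), 3))
# 3x gives ~14.3%, 5x gives ~9.1%, 10x gives ~4.8%, in line with "five or
# ten percent" and the ceiling of 15%
```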

(In fairness to Howey, he tries to make the point that indie authors earn more from lower revenue because their cut is so much bigger and he makes the argument that they are actually earning more royalties than the big guys. He also tells me that he calls some S-corp and LLC publishers “uncategorized”, even though they are almost certainly indies, in his own attempt to be even-handed. In fairness to the industry, I will point out that his accounting doesn’t take unearned advances into consideration, and since most sales of big house ebooks are of authors who don’t earn out, that lack of information really moots the whole analysis about what authors earn. Another big shortcoming of the comparison is that most published authors are getting a much more substantial print sale than most indie authors.)

But indie authors on Amazon are the industry high-water mark of indie share and ebook share. They are almost entirely books without press runs or sales forces, so they are almost entirely absent from store shelves. And they are also entirely narrative writing.

The facts, apparently, are that even heavy ebook readers still buy and consume print. There is not a lot of clear data about whether “hybrid readers” make their print-versus-digital choice categorically or some other way. There is some anecdata suggesting that some people read print when it is convenient (when they’re home) and digital when it is not. There are a number of bundling offers to sell both (offered by publishers and one called “Matchbook” from Amazon), which certainly seems to say that publishers believe there’s a market of people who would read the same book both ways at the same time!

What that all would seem to say is that the retailer selling ebooks only is seriously disadvantaged from getting searches for books from the majority of readers.

Do we have any independent evidence that selling to the digerati only — selling ebooks only — might limit one’s ability to sell ebooks? I think we do. It would appear that B&N has sold roughly the same number of Nooks as Apple has iPads. (This equivalence will probably not last since Nook sales seem to be in sharp decline.) That is somewhat startling in and of itself, since Apple is perhaps the leading seller of consumer electronics and B&N was entirely new to that game. Nook also seems to have — at least for a while — sold more ebooks than Apple. (This “fact” may also be in the rear view mirror with the apparent collapse of Nook device sales.) I will be so bold as to suggest that this is not because Nook has superior merchandising to the iBookstore. More likely it is because the B&N customer is a heavier reader than the Apple customer and prefers to do his or her book shopping — and even his or her book device shopping — with a bookseller.

[Correction to the above paragraph made on 11 Sept. I misheard and therefore misreported something that was caught by a reader in the comments below, but I should also correct here.  Apple has sold ~200M iPads but are only roughly 12% of the ebook market whereas B&N has sold only about 1/20th the number of Nooks and are about 18% of the ebook market. That fact makes little sense to anyone in Silicon Valley but speaks to how book audiences really behave. We all know a very high % of Nook owners are active store buyers.]

There is one more huge distinction between books and the other media and it is around the motivation of the consumer. While sometimes TV or movies might be consumed for some educational purpose, most of the time the motivation is simply “entertainment”, as it is with music. While analysis of prior video or music consumed and enjoyed might provide clues to what should be next, figuring out what book should be next is a much more complex challenge.

And the clues don’t just come from prior books consumed and enjoyed. Books are bought because people are learning how to cook or do woodworking, or because they are traveling to a distant place and want to learn a new language or about distant local customs, or because they are going to buy a new house or have suddenly been awakened to the need to save for retirement. You can’t really suggest the next book to buy to many consumers without knowing much more about them than knowing their recent reading habits would tell you.

But not only do (most of) the ebook-only retailers not know whether you’re moving or traveling, they don’t even know what you searched for when you were looking for print. And, even if they did know, operating in an ebook-only environment would make many of the best suggestions for appropriate books to address everyday needs off limits, because many of those books either don’t exist in digital form or aren’t as good as a YouTube video to satisfy the consumer’s requirements.

Indeed, it is the sheer “granularity” of the book business — so many books, so many types of books, so many (indeed, innumerable) audiences for books — that makes it so different from the other media.

Of course, there is one company — Google — that is not only in the content business and the search business but which also handles “granularity” better than any company on earth, down to the level of the attributes and interests of each individual. Google not only would know if you were moving or traveling, they would be in a great position to sell targeted ads to publishers with books that would help consumers with those or a million other information needs. (They also know about all your searches on YouTube!) But because Google’s retailing ambitions are bounded by digital, they are walking past the opportunity to be the state-of-the-art book recommendation engine. They’re applying pretty much the same marketing and distribution strategy across digital media at Google Play. They aren’t seeing that book customers are both print and digital. They aren’t seeing that books are, indeed, different.

When the day comes that they do, this idea will look better to them than it might have at first glance.


New data on the Long Tail impact suggests rethinking history and ideas about the future of publishing


For most of my lifetime, the principal challenge a publisher faced to get a book noticed by a consumer and sold was to get it on the shelves in bookstores. Data was always scarce (I combed for it for years) but everything I ever saw reported confirmed that customers generally chose from what was made available through their retailers. Special orders — when a store ordered a particular book for a particular customer on demand, which meant the customer had to endure a gap between the visit when they ordered the book and one to pick it up — were a feature of the best stores and the subject of mechanisms (one called STOP in the 1970s and 1980s) that made it easier. But they constituted a very small percentage of any store’s sales, even when the wholesalers Ingram and Baker & Taylor made a vast number of books available to most stores within a day or two.

It was an article of faith, and one I accepted, that if you could expose most books to a broad public, they would “find their audience”. The challenge was overcoming the gatekeepers. Put another way, the aggregate effect of the gatekeepers (the store buyers) was to curate, to act as a filter, determining which books the public would actually see and from which they would choose what to buy.

There was also ample evidence over time that a large selection of books in a store acted as a magnet to draw customers. That fact was noted by my father, Leonard Shatzkin, in the early 1960s, when they doubled the inventory at the Short Hills, NJ, Brentano’s store (the chain reported to my father, who was a Vice-President of Crowell-Collier, the company that owned Brentano’s, Collier’s Encyclopedia, and Macmillan Publishers, among other things) and it went from the worst-performing store in the chain to the best. In the 1970s, BP Reports published a survey that said that nearly half of bookstore customers chose the store they were in on the basis of the selection they’d find and more than half reported their particular purchase decision was made in the store.

By the late 1980s, both of the big national bookstore chains — Barnes & Noble and Borders — were undergoing a massive expansion of “superstores”. Whereas chain bookstores (B&N’s B. Dalton and Borders’s Walden) carried 20,000 or 30,000 titles, and large independents carried as many as twice that, now the new superstores would carry 100,000 titles or more! Customers flocked to the massive bookstores and the ever-expanding chains ordered lots of the publishers’ backlists and everybody celebrated a new era, except the independent bookstores who were increasingly squeezed by their new large competitors. The era was less than 10 years old when it got disrupted.

In the 1970s, it was my responsibility for a couple of years to write the orders for stores that accepted vendor-managed inventory from Two Continents, my family’s distribution company. I was being careful to make sure that each store earned $2 gross margin per dollar of inventory investment, which was what you’d get from 40% discount with inventory turned 3 times a year. This gave me a hands-on look at how stock turn in the aggregate was affected by the inventory decisions on specific titles.

When you do this, you figure out pretty fast that you can produce very high stock turn on books that are moving consistently. If a store were selling five copies a month of a title on a sustained basis and I put in 10 and replenished monthly, they would be getting an annual turn of 10 or perhaps much more on those moving books. (Turn calculation: sales divided by average inventory for a period multiplied by the number of such periods in a year.) That would support a lot of single copies of books that moved very slowly or, as it turned out, not at all. Since very few stores managed a turn of 3 or 4 on their own (chain store turns were usually under 2), giving the stores on our Plan a good result with the advantage of shipping monthly was shooting fish in a barrel.
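The two pieces of arithmetic above — the turn on a consistently moving title and the $2-of-margin-per-inventory-dollar target — can be worked through directly. The inventory-swing assumption below (stock runs from the replenished level down by one month’s sales) is my own simplification of how such a store would behave:

```python
# The turn arithmetic above, worked through with the article's own example
# (5 copies a month sold, stocked at 10, replenished monthly), plus the
# $2-of-margin-per-inventory-dollar target from a 40% discount at 3 turns.

def annual_turn(monthly_sales, stock_after_replenish):
    # Assume inventory swings from the replenished level down by a month's sales.
    avg_inventory = (stock_after_replenish + (stock_after_replenish - monthly_sales)) / 2
    return monthly_sales * 12 / avg_inventory

print(annual_turn(5, 10))   # 8.0, in the neighborhood of the turn of 10 cited

def margin_per_inventory_dollar(discount, turns):
    retail_value_of_inventory = 1 / (1 - discount)  # $1 at cost, valued at retail
    annual_retail_sales = retail_value_of_inventory * turns
    return annual_retail_sales * discount           # margin dollars per year

print(round(margin_per_inventory_dollar(0.40, 3), 2))  # 2.0, the $2 target
```

With a tighter replenishment cycle (keeping average inventory closer to the monthly sales rate), the turn on moving titles climbs well past 10, which is what funded all those slow-moving single copies.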

But if you think about the turn you’re achieving with the titles that really move, know that the titles that move are a large percentage of the store sales, and take on board what stores’ overall turns tended to be, it leaves you with the uncomfortable feeling, or calculation, that a very high percentage of the titles each store ordered didn’t sell a single copy in that store. In fact, one big advantage of vendor-managed inventory is that it gives you the ability to use the high turn on your titles to stock the titles of yours that turn slowly or don’t sell at all, rather than having the store “waste” those margin dollars your books produce stocking somebody else’s slow-moving books.

Remember, in physical retail, selection was the magnet. The books that didn’t sell were helping to pull in the customers for the books that did sell. Stores knew that too. Later work I did demonstrated that there were whole store sections that turned at half or less of the rate of the store as a whole. But if you want, say, a philosophy section that “turns”, it would only have about ten titles in it. If you want a philosophy section people will browse and shop from, you have to carry a lot of slow-moving titles.

But just when the bookstores put the inventory in place to stimulate book buying all over the country, along came the Internet, Amazon.com, print-on-demand, and ebooks, in that order. All four were fully integrated into the book publishing ecosystem over a decade-and-a-half starting in 1995. As quickly as the magic of selection via the 100,000-title store was implemented, it was superseded by the “total” selection provided by Amazon’s, and then BN.com’s, “unlimited shelf space”. Now every book would have its full chance to sell, or so it seemed.

Unlike the period of superstore expansion, when substantial orders for deep backlist suddenly became commonplace in a continuing windfall for publishers, the new era with Amazon was characterized by things getting harder for many publishers. That wasn’t necessarily clear at first, but the impact of Amazon, and then Lightning (print on demand offered by Ingram), was to dramatically increase the number of titles competing for sales. It gave the Long Tail a real opportunity to reach customers, something that, through bookstores — even very big bookstores — only the top 100,000 titles had been able to do. Publishers were a bit like the metaphorical frog in heating water; the challenges imperceptibly became greater over time. In 1990, a new book competed with about 100,000 available titles. In 1997 it competed with many hundreds of thousands, and that number just kept growing. Today it competes with millions.

The challenges for conventional publishers got steeper again when ebooks became mainstream, pioneered by Amazon’s Kindle in late 2007. There had been a modest ebook business building for about a decade, but until Amazon committed its resources to creating a dedicated device, a repository of content, and audience awareness, it had a trivial impact. But a full-fledged ebook business unleashed a new wave of competition from self-publishing authors. Amazon fostered growth by creating an easy on-ramp for self-publishing, a move quickly copied by B&N, Apple, and Kobo. In the several years that ebooks have been commercially important, many — certainly hundreds and perhaps thousands — of authors have achieved meaningful sales. Many of those have been of backlist books originally published conventionally but there have also been thousands of successful original ebooks. Whether revived formerly-dead backlist or new titles, these are books that are competing with the output of the conventional publishers and wouldn’t have been a decade or two ago.

So the Long Tail for books has been a topic of conversation for most of the past 20 years. Amazon’s limitless shelves and Ingram’s Lightning contributed heavily to it before the turn of the century; self-publishing has accelerated it dramatically. The early expectations, including mine, were that the Long Tail would take sales from all the books being “currently” published. But it became evident pretty early that the big books were just getting bigger: the head of the sales curve wasn’t diminishing. In fact, both the head and the Long Tail took sales from the middle of the curve. This was particularly challenging for publishers because publishing the mid-list — those books they do that aren’t bestsellers — became much harder.

The Long Tail continues to grow. There are a limitless number of aspiring authors and their aspirations to self-publish successfully are fueled both by success stories and by a growing band of indie authors who tout their success and question the business models and practices of the majors. Because being conventionally published has its own set of hurdles and time requirements, it has seemed to many (and I haven’t been immune from this thought) that self-publishing would just continue inexorably to take share from the publishing business.

But now we have some data that calls that assumption into question. I encountered two examples of that in the past week.

In Toronto last Wednesday, Noah Genner of BookNet Canada presented information about the Canadian market showing that the number of ISBNs was expanding rapidly, but that the number of individual ISBNs selling at least one copy was about flat.

Then this week, Marcello Vena of RCS Libri in Italy published a White Paper based on his company’s data (link through to the White Paper from the DBW piece introducing it) which showed something similar. Sales of his company’s books were becoming increasingly concentrated in a small number of titles. Vena added an analysis using the Herfindahl-Hirschman Index (HHI). HHI measures the concentration in a market and is, according to Vena, used by the US Department of Justice to measure concentration in an industry. The HHI is calculated by adding the squares of the market shares (expressed as percentages) of the players. So if one company owned 100% of a market, the HHI would be 100 squared, or 10,000. But if 100 players each owned 1% of the market, the HHI would be 100 times 1 (1 squared), or 100. Using the market concentration and title concentration numbers in tandem, Vena finds that they’re linked. As market concentration increases, sales move to the head of the sales curve and the Long Tail flattens further.
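The HHI arithmetic is easy to verify. A sketch on the percentage scale used above; the three-firm share list is an invented illustration:

```python
def hhi(market_shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percent (so a monopoly scores 10,000)."""
    return sum(share ** 2 for share in market_shares_pct)

print(hhi([100]))         # 10000: one firm owns the whole market
print(hhi([1] * 100))     # 100: 100 firms at 1% each
print(hhi([50, 30, 20]))  # 3800: a concentrated three-firm market
```

The index rises as sales pile up in fewer hands, which is exactly the movement toward the head of the curve Vena describes.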

Of course, Italy and Canada are not the United States. Our market is bigger and richer. But Italy and Canada are not trivial samples, either.

One further point about Long Tail sales. In the aggregate, they can be very significant. But for each individual title, they are trivial. So the real commercial benefits flow to the aggregators — Amazon and Lightning — and much less to the publishers or authors of the individual titles. There certainly are situations where particular publishers have a lot of Long Tail books: the Oxford and Cambridge University Presses would be prime examples of this. For them, with thousands of titles in the Long Tail, the aggregate sales are probably commercially significant. But for a publisher with 100 titles, or even 1000 titles, selling a copy or two a year (or none), and that’s what we’re talking about here, it hardly makes any difference. I personally own several Long Tail titles. I get checks from somebody every month, but it adds up to three figures a year, not four.

The implications of this in the discussion of how the publishing industry might be affected by self-publishing disruption are interesting. It would suggest to me that the boosts publishers can give a book — even their catalogs provide more marketing lift than most self-published books start with — will become increasingly important as the market becomes increasingly flooded. If the data Vena has presented turns out to be the future trend, the increase in self-published titles will drive more and more sales to a smaller number of winners, and my hunch would be that the winners will most likely be from publishers. That would indeed be a paradox and a totally unintended consequence.

Of course, the publishing business isn’t one business; it is segmented. So far, the commercially successful self-published authors overwhelmingly, if not entirely, fall into two categories. There are authors who have reclaimed a backlist of previously published titles and self-published them. And there are authors of original genre fiction who write prolifically, putting many titles into the marketplace quickly. Successful self-publishing authors are often in both categories but very few are in neither. Those two categories are nearly 100% of the self-publishing success stories but a minority of the books from publishers. So, even before Vena published his White Paper, the idea that self-publishing would upset the commercial establishment was way overblown. If Vena’s data turns out to be prophetic, the road is going to get harder and harder for all books, but especially the self-published.

Two big items in the news today. On B&N’s decision to spin out Nook and college into a separate public company, I have little to say except to wish them all well. On Hachette’s and Ingram’s division of the two Perseus businesses, I’d say this. 1) The notion that this is about Hachette “bulking up” for the Amazon battle is almost certainly wildly wrong, and anybody saying that has disqualified themselves as an expert. 2) The titles Hachette gets here really change the character of their list, adding a non-fiction and academic dimension they never had. 3) Ingram has made a major leap in scale for their Ingram Publisher Services business, which now, in the aggregate, is Big Five-sized.

Once again, the Feedburner service failed to distribute my most recent post, which was a graf-by-graf disagreement with a post by Hugh Howey. The comment string of that post contains ample evidence that the fact contained in the last paragraph here is not widely acknowledged.


Marketing will replace editorial as the driving force behind publishing houses


One of the things my father, Leonard Shatzkin, taught me when I was first learning about book publishing a half-century ago was that “all publishing houses are started with an editorial inspiration”. What he meant by that is that what motivated somebody to start a book publisher was an idea about what to publish. That might be somebody who just believed in their own taste; it might be something like Bennett Cerf’s idea of a “Modern Library” of compendia organized by author; it might even be Sir Allen Lane’s insight that the public wanted cheaper paperback books. But Dad’s point was that publishing entrepreneurs were motivated by the ideas for books, not by a better idea for production efficiency or marketing or sales innovation.

In fact, those other functions were just requirements to enable somebody to pursue their vision or their passion and their fortune through their judgment about what content or presentation form would gain commercial success.

My father’s seminal insight was that sales coverage really mattered. When he recommended, on the basis of careful analysis of the sales attributable to rep efforts, that Doubleday build a 35-rep force in 1955, publishers normally had fewer than a dozen “men” (as they were, and were called, back then) in the field. The quantum leap in relative sales coverage that Doubleday gained by such a dramatic sales force expansion established them as a power in publishing for decades to come.

Over the first couple of decades of my time in the business — the 1960s and 1970s — the sales department grew in importance and influence. It became clear that the tools for the sales department — primarily the catalog, the book’s jacket, and a summary of sales points and endorsements that might be on a “title information sheet” that the sales reps used — were critical factors in a book’s success.

There was only very rarely a “marketing” department back then. There was a “publicity” function, aimed primarily at getting book reviews. There was often a “sales promotion” function, which prepared materials for sales reps, like catalogs. There might be an art department, which did the jackets. And there was probably an “advertising manager”, responsible for the very limited advertising budget spent by the house. Management of co-op advertising, the ads usually placed locally by retail accounts and partly paid for by the publishers, was another function managed differently in different houses.

But the idea that all of this, and more, might be pulled together as something called “marketing” — which, depending on one’s point of view, was either also in charge of sales or alternatively, viewed as a function that existed in support of sales — didn’t really arise until the 1980s. Before that, the power of the editors was tempered a bit by the opinions and needs of the sales department, but marketing was a support function, not a driver.

In the past decade, things have really changed.

While it is probably still true that picking the “right books” is the single most critical set of decisions influencing the success of publishers, it is increasingly true that a house’s ability to get those books depends on their ability to market them. As the distribution network for print shrinks, the ebook distribution network tends to rely on pull at least as much as on push. The retailers of ebooks want every book they can get in their store — there is no “cost” of inventory like there is with physical — so the initiative to connect between publisher and retailer comes from both directions now. That means the large sales force as a differentiator in distribution clout is not nearly as powerful as it was. Being able to market books better is what a house increasingly finds itself compelled to claim it can do.

In the past, the large sales force and the core elements that they worked with — catalog, jacket, and consolidated and summarized title information — were how a house delivered sales to an author. Today the distinctions among houses on that basis are relatively trivial. But new techniques — managing the opportunities through social networks, using Google and other online ads, keeping books and authors optimized for search through the right metadata, expanding audiences through the analysis of the psychographics, demographics, and behavior of known fans and connections — are still evolving.

Not only are they not all “learned” yet, the environment in which digital marketing operates is still changing daily. What worked two years ago might not work now. What works now might not work a year from now. Facebook hardly mattered five years ago; Twitter hardly mattered two years ago. Pinterest matters for some books now but not for most. Publishers using their own proprietary databases of consumer names with ever-increasing knowledge of how to influence each individual in them are still rare but that will probably become a universal requirement.

So marketing has largely usurped the sales function. It will probably before long usurp the editorial function too.

Fifty years ago, editors just picked the books and the sales department had to sell them. Thirty years ago, editors picked the books, but first checked in with the sales departments about what they thought of them. Ten years from now, marketing departments (or the marketing “function”) will be telling editors that the audiences the house can touch need or want a book on a particular subject or filling a particular need. Osprey and some other vertical publishers are already anticipating this notion by making editorial decisions in consultation with their online audiences.

Publishing houses went from being editorially-driven in my father’s prime to sales-driven in mine. Those that didn’t make that transition, expanding their sales forces and learning to reach more accounts with their books than their competitors, fell by the wayside. The new transition is to being marketing-driven. Those that develop marketing excellence will be the survivors as book publishing transitions more fully into the digital age.

A very smart and purposeful young woman named Iris Blasi, then a recently-minted Princeton graduate, worked for me for a few years a decade ago. She left because she wanted to be an editor and she had a couple of stops doing that, briefly at Random House and then working for a friend named Philip Turner in an editorial division at Sterling. From there Iris developed digital marketing chops working for Hilsinger-Mendelson and Open Road. She’s just taken a job at Pegasus Books, a small publisher in Manhattan, heading up marketing but doubling as an acquiring editor. I think many publishers will come to see the benefits of marketing-led acquisition in the years to come. Congratulations to Pegasus and Iris for breaking ground where I think many will follow.

Many of the topics touched on in the post will be covered at the Marketing Conference on September 26, a co-production of Publishers Launch Conferences and Digital Book World, with the help and guidance of former Penguin and Random House digital marketer Peter McCarthy. We’ve got two bang-up panels to close with — one on the new requirement of collaboration between editorial and marketing within a house and then in turn between the house and the author, and the other on how digital marketing changes how we must view and manage staff time allocations, timing, and budgeting. These panels will frame conversations that will continue in this industry for a very long time to come as the transition this post sketches out becomes tangible.


Vendor-managed inventory: why it is more important than ever


The idea of vendor-managed inventory has never become particularly popular in the book business, despite a few experiments over the years where it was implemented with great success. (And despite the fact that I was pushing for it back in 1997 and 1998.) But as the book business overall declines, with the print book business leading the slide and that portion of the print book business which takes place in retail stores falling off at an alarming rate, it is time for the industry to think about it again.

In fact, VMI for the book business began with the ID wholesalers and mass-market paperbacks right after World War II. The IDs — the initials stood for “independent distributors” — managed the distribution of magazines and newspapers at newsstands and other accounts within their geographical territory. The retailers had no interest in deciding how many copies of LIFE they got in relation to Ladies Home Journal; the ID made that determination. And since only the torn-off covers were necessary for confirmation of a “return”, the bulk of the cost of distribution was in putting the copies in, not taking back the overage. And because newspapers and magazines had a disciplined frequency, it was obvious that you had to clear out yesterday’s, or last week’s, or last month’s to make room for the next issue.

When the first mass-market paperback publishers started their activity right after World War II, providing books for, among others, returning servicemen who had had access to special servicemen’s editions of paperbacks (in a program created by the polymath Philip Van Doren Stern, a Civil War historian and friend of my father’s), they helped the jobbers along by having monthly lists. They also were comfortable with a book having only a one-month shelf life and with having the stripped covers serve as evidence the book hadn’t been sold.

For quite some time, the initial allocations to the ID wholesalers were really determined by the paperback publishers. Eventually, that freedom to put books into distribution choked the system, but there were a lot of other causes of the bloat. By the 1960s, many bookstores were carrying paperbacks and many other big outlets were served “direct” by the publishers, leaving the IDs with the least productive accounts. But VMI, even without any system and very little in the way of restraints on the publishers, was responsible for the explosive growth of mass-market paperbacks in the two decades following World War II.

In the late 1950s, Leonard Shatzkin, my father, introduced The Doubleday Merchandising Plan, which was VMI for bookstores on Doubleday books. For stores that agreed to the plan, reps reported the store’s inventory back to Doubleday headquarters rather than sending an order. Then a team posted the inventories, calculated the sales, and followed rules to generate an order of books for the store. Sales mushroomed, particularly of the backlist, and returns and cost of sales plummeted. Doubleday was launched into the top tier of publishing companies.
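The mechanics described here (post the reported inventory, calculate the sales, apply rules to produce an order) can be sketched in a few lines. This is an illustrative reconstruction, not Doubleday's actual rules; the base-stock logic and the sample numbers are my assumptions:

```python
def vmi_order(prev_on_hand: int, received_since: int,
              reported_on_hand: int, model_stock: int):
    """Derive sales from two successive inventory reports, then order
    back up to a model stock level (a simple base-stock rule)."""
    sales = prev_on_hand + received_since - reported_on_hand
    order = max(model_stock - reported_on_hand, 0)
    return sales, order

# A title the store should carry 10 of: 10 on hand at the last visit,
# nothing shipped since, 4 on the shelf now -> 6 sold, reorder 6.
sales, order = vmi_order(prev_on_hand=10, received_since=0,
                         reported_on_hand=4, model_stock=10)
print(sales, order)  # 6 6
```

The point of the rule is that no one in the store ever writes an order: the rep only counts, and the arithmetic at headquarters does the buying.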

In a much more modest way, a distributor that my father owned called Two Continents introduced a VMI plan in the 1970s. Even with a very thin list and no cachet, we (I was the Marketing Director) were able to get 500 stores on the Plan in a year. We achieved similarly dramatic results, but from a much more modest base.

Two Continents was undone by the loss of some distribution clients. The Doubleday plan was undermined by reps who convinced headquarters years after my father left that their stores would be more comfortable if they wrote the Plan orders rather than letting them be calculated at headquarters. And the rise of computerized record-keeping systems for inventory and national wholesalers who could replenish stock quickly improved inventory performance, and store profitability, without VMI. Although our client West Broadway Book Distribution has successfully operated VMI in specialty retail for more than a decade, and Random House has worked some version of VMI at Barnes & Noble for the past several years, the technique has hardly been considered by the book trade for a long time.

It is time for that to change. What can foster the change is a recognition about VMI that is readily apparent in West Broadway’s implementations in non-bookstores, but would not have been so obvious to the bookstores using Doubleday’s or Two Continents’ services.

From the publisher’s perspective, the requirement that there be a title-by-title, book-by-book buying function in the store in order for the store to stock books purely and simply reduces the number of stores that can stock books. The removal of that barrier was the key achievement of the ID wholesalers racking paperbacks after World War II. Suddenly there were thousands of points of sale that didn’t require a buyer.

From the store’s perspective, buying — and managing the supply chain to support the buying decisions — is expensive. VERY expensive. Books are hard to buy. New ones are coming all the time; the number of publishers from which they come (and who are the primary sources of information about the books, even if you could “source” them from wholesalers at a slight margin sacrifice for operational simplicity) is huge; the shelf life of any particular title is undeterminable; and the sales in any one outlet are very hard to read.

Consider this data provided by a friend who owns a pretty substantial bookstore.

Looking at the store’s records for a month, 65% of the units sold were singles: one copy of a title. Only 35% were of books that sold 2 or more. (I didn’t ask the question, but that would suggest that 80-90 percent of the titles that sold any copies sold only one.)
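That parenthetical inference can be checked with back-of-envelope arithmetic. A sketch under one stated assumption: every multi-copy title sold at least two copies (if they averaged more, the share of single-copy titles is even higher):

```python
def min_share_of_single_copy_titles(single_unit_share: float) -> float:
    """Lower bound on the fraction of selling titles that sold exactly
    one copy, given the fraction of UNITS sold that were singles."""
    # Per 100 units sold: each single accounts for one title;
    # multi-copy titles account for at most one title per two units.
    single_titles = single_unit_share * 100            # 65 titles
    max_multi_titles = (1 - single_unit_share) * 50    # at most 17.5 titles
    return single_titles / (single_titles + max_multi_titles)

print(round(min_share_of_single_copy_titles(0.65), 2))  # 0.79
```

So at least about 79% of the selling titles were singles, and the figure climbs toward the 80-90 percent guessed above as the multi-copy titles sell more deeply.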

Then, the following month, once again 65% of the units sold were singles. But only 20-30 percent of them were the same books as had sold as singles the prior month. Upwards of 70% of them were different titles. And upwards of 70% of the ones that sold one the prior month didn’t sell at all.

To further underscore how slowly book inventory moves, another report they do shows that more than 80% of the titles in the store do not sell a single copy in any particular month. So it is no surprise that an analysis of books from a major publisher that promotes heavily showed that more than half the new titles they receive from that publisher don’t sell a single copy within a month of their arrival in the store, which would include the promotion around publication date!

These data points demonstrate another compelling reason for VMI. When a store sells none of 80% of its titles in a month, and of the ones they do sell 80% of those sell one unit, they clearly need information about what is going on in other stores to know which ones to keep or reorder and which ones to return. Above the Treeline is an inventory service which provides its stores with broader sales data to address that issue, but the information is not as granular or as susceptible to analysis as what a publisher or aggregator could do with VMI.

Partly because of the high cost of buying and a supporting supply chain that a book outlet requires, publishers will see shelf space for books drop faster than retail demand. (The closure of Borders, which wiped out a big portion of the shelf space, is part of what is behind the recent good sales reports from many independents.) At the same time, retailers of all things will be under increased pressure to find more sales as the Internet — often, but not always, Amazon — keeps eating into their market.

This all adds up to VMI to me. We’ll see over the next couple of years whether industry players come to the same conclusion.


Full-service publishers are rethinking what they can offer


At lunch a few months ago, Brian Murray, the CEO of HarperCollins, expressed dissatisfaction with the term “legacy” to describe the publishers who had been successful since before the digital revolution began. For one thing, he felt that sounded too much like “the past”. “We need to come up with a different term,” was his assessment and he suggested that perhaps “full-service” was more apt.

I find I keep coming back to “full service” as an accurate description of the publisher’s relationship to an author. That’s what the long-established publishers have evolved to be.

It would be disingenuous to suggest that publishing organizations were deliberately created as service organizations for authors. They weren’t. In fact, as we shall see, the service component of a publisher’s DNA was developed in service to other publishers.

My Dad, Leonard Shatzkin, pointed out to me 40 years ago that all trade book publishing companies were started with an “editorial inspiration”: an idea of what they would publish. Sometimes that was a highly personal selection dictated by an individual’s taste, as suggested by so many of the great company and imprint names: Scribners, Knopf, Farrar, Straus and Giroux, for example. Random House was begun on the idea of the Modern Library series; Simon & Schuster was started to do crossword puzzle books.

That is: people had the idea that they knew what books would sell and built a company around finding them, developing them, and bringing them to market.

And the development and delivery to the market required building up a repertoire of capabilities that comprised a full-service offering.

The publisher would find a manuscript or the idea for one and then provide everything that was necessary — albeit largely by engaging and coordinating the activities of other contractors or companies — to make the manuscript or idea commercially productive for the author and themselves.

The list of these services describes the publishing value chain. It includes:

select the project (and assume a financial risk, sometimes relieving the author of any);

guide its editorial development (although the work is mostly done by the contracted author or packager);

execute the delivery of the content into transactable and consumable forms (which used to mean “printed books” but now also means as ebooks, apps, or web-viewable content);

put it into the world in a way that it will be found and bought (which used to mean “put it in a catalog widely distributed to opinion-makers or buyers” but now largely means “manage metadata”);

publicize and market it;

build awareness and demand among the people at libraries and bookstores and other distribution channels who can buy it;

process the orders;

manufacture and warehouse the actual books or files or other packaged product;

deliver;

collect;

and, along the way, sell rights to exploit the intellectual property in other forms and markets, including other languages.

It has long been customary for publishers to unbundle the components of their service offering. The most common form of unbundling is through “distribution deals” by which one publisher takes on some of the most scaleable activities on behalf of other smaller ones. It has reached the point where almost every publisher is either a distributor or a distributee. Many are depending on a third party, quite often a competing publisher, for warehousing, shipping, and billing and perhaps sales or even manufacturing. All the big ones and many others, along with a few companies dedicated to distribution, are providing that batch of services. It is not unheard of for one publisher to do both: offering distribution services to a smaller competitor while they are in turn actually being distributed by somebody larger than they.

An assumption which influenced the way things developed was that the key to competitive advantage for a publisher was in the selection and editorial development of books and in their marketing and publicity, which emerged organically from their editorial efforts. All the other functions were necessary, but were not where many editorially-conceived businesses wanted to put their attention or monopolize their own capabilities.

About 15 years ago, working on VISTA’s “Publishing in the 21st Century” program, I learned the concept of “parity functions” in an enterprise. They were defined as things which can’t give you much competitive advantage by doing them well but which can destroy your business if you screw them up. This led to the conclusion that these things were often best laid off on somebody else who specialized in them, leaving the publisher greater ability to focus on the things which truly and meaningfully differentiated them from competitors.

Another driving force here was the way that bigger and smaller publishers look at costs and scale. If you’re very big, it is attractive to handle parity functions as fixed costs: to own your own warehouse, have a salaried sales force, and to invest in having state-of-the-art systems that do exactly what you want them to do. If you’re smaller, you often can’t afford to own these things anyhow and, on a smaller base, fluctuations in sales could suddenly render those fixed costs much too high for commercial success.

It is therefore more attractive to smaller entities to have these costs become variable costs, a percentage of sales or activity, that go up when sales go up but, most importantly, that also go down if sales go down. And the larger entity, by pumping more volume through their fixed-cost capabilities, subsidizes its own overheads and improves the profitability and stability of its business.
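The fixed-versus-variable tradeoff in the last two paragraphs reduces to a simple break-even comparison. A sketch; the cost figures are invented purely for illustration:

```python
def cost_owning(sales: float, fixed_cost: float) -> float:
    """Parity functions handled in-house: cost is fixed regardless of sales."""
    return fixed_cost

def cost_renting(sales: float, fee_rate: float) -> float:
    """Parity functions bought as a service: cost is a share of sales."""
    return sales * fee_rate

FIXED = 2_000_000  # assumed annual cost of owning warehouse, systems, reps
FEE = 0.20         # assumed distributor fee as a share of net sales

for sales in (5_000_000, 10_000_000, 20_000_000):
    print(sales, cost_owning(sales, FIXED), cost_renting(sales, FEE))
# At $5M in sales the 20% fee costs less; at $20M owning does. The
# crossover (FIXED / FEE = $10M here) is why scale pulls big houses
# toward fixed costs while smaller houses prefer a fee that falls
# when their sales do.
```

The same arithmetic explains the subsidy running the other way: the big distributor pumping extra volume through its fixed costs lowers its own effective overhead rate.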

One of the things that is challenging the big publishers — the full-service publishers — today is that the unbundling of their, ahem, legacy full-service offering has accelerated. You need scale to cover the buyers and bill and ship to thousands of independent accounts. If you’re mainly focused on the top accounts — which today means Amazon, Barnes & Noble, Ingram, and Baker & Taylor for most general trade publishers — you might feel you can do it as well or better yourself with one dedicated person of your own.

And if you’re willing to confine your selling universe to sales that can be made online — print or digital — you can eliminate the need for a huge swath of the full-service offering. Obviously, you give up a lot of potential sales with that strategy. But the percentage of the market that can be reached that way, combined with the redivision of revenue enabled by cutting the publisher out of the chain, has made this a commercially viable option for some authors and a path to discovery for others.

So the consolidation of business in a smaller number of critical accounts as well as the shifting of business increasingly to online sales channels has been a challenge for some time that larger publishers and distributors like Perseus and Ingram have been dealing with.

But now the need for services and the potential for unbundling is moving further up the value chain. The first instances of this have been seen through the stream of publishing efforts coming directly from authors and content-driven businesses like newspapers, magazines, and websites.

To the extent that the new service requirements are for editorial development help and marketing, it gets complicated for the full-service publishers to deal with. The objective of organization design for large publishers for years has been to consolidate the functions that were amenable to scale and to “keep small” the more creative functions. So it is a point of pride that editorial decisions and the publicity and marketing efforts that follow directly from the content be housed in smaller editorial units — imprints — within the larger publishing house.

That means they are not designed to be scaleable and they’re not amenable to getting work from the outside. It’s much less of an imposition for somebody in a corporate business development role to ask a sales rep to pitch a book that had origins outside the house than it is to assign one to an editor in an imprint. The former is routine and the latter is extremely complicated.

But what does this mean? Should publishers have editorial services for rent? Should they try to scale and use technology to handle editorial functions — certainly proofreading and copy-editing but ultimately, perhaps, developmental editing — as a commodity, to assure themselves a competitive advantage on cost the way they do now for distribution? Should publishers try to scale digital marketing? Should they have teams that can map out and execute publishing programs for major brands?

The way Murray sees it, a major publisher applies a synthesis of market intelligence and skills that can only be delivered by publishing at scale. He believes that monitoring across markets and marketing channels along with sophisticated and integrated analysis of how they interact provide an unmatchable set of services.

The scale challenge for trade publishers, as they collaborate with what I'm envisioning will be an exploding number of potential partners, is to find ways to deliver the value of the synthesized pool of knowledge and experience efficiently to smaller units of creativity and marketing.

There is plenty of evidence that publishers are thinking along these lines. The most obvious recent event suggesting it is Penguin’s acquisition of Author Solutions. Penguin had shown prior interest in the author services market by creating Book Country, a community and commercial assistance site for genre fiction authors. Penguin suddenly has real scale in the self-publishing market. They have tools nobody else has now to explore where services for the masses provide efficiencies for the professional and how the expertise of the professionals can add value to the long tail.

There are initiatives that stretch the previous constraints of the publisher’s value chain that I know about in other big companies, and undoubtedly a good deal more that I don’t know about. Random House has a bookstore curation capability that they’ve coupled with editorial development in a deal with Politico that could be a prototype. Hachette has developed some software tools for sales and marketing that they’re making available as SaaS to the industry. Macmillan has a division that is developing educational platforms that might become global paths to locked-in student readers. Scholastic has a new platform for kids’ reading called Storia that involves teachers and parents and that they hope to make an industry standard. Penguin has a full-time operative in Hollywood forging connections with projects that can spawn licensing deals. Random House has both film and television production initiatives.

These developments are very encouraging. One of the reasons that Amazon has been so successful in our business is that our business is not the only thing they do. One of the elements of genius they have applied ubiquitously is that every capability they build for themselves has additional value if it can be delivered unbundled as well. Publishers were comfortable with that idea for the relatively low-value things that they do long before they ever heard of Amazon. It is a good time to think along the same lines for functions which formerly seemed closer to the core.

Speaking of which, many of publishing’s most creative executives will be speaking as “Publishing Innovators” at our Publishers Launch Frankfurt conference on Monday, October 8, 10:30-6:30, on the grounds of the Book Fair. 

We did a free webinar with a taste of the Frankfurt conference last week and it’s archived and available and worth a listen. Michael Cader and I were joined by Peter Hildick-Smith of The Codex Group, Rick Joyce of Perseus, and Marcello Vena of RCS Libri.

Dominique Raccah of Sourcebooks, Helmut Pesch of Lubbe, Rebecca Smart of Osprey, Anthony Forbes Watson of Pan Macmillan, Ken Michaels of Hachette, Stephen Page of Faber, and Charlie Redmayne of Pottermore (as well as Joyce and Vena) will all be talking about initiatives in their shops that you won’t find (yet) going on much elsewhere. And that’s just part of the program. There is a ton of other useful information — about developments in the Spanish language, the BRIC countries, the strategies of tech giants and how they affect publishing, and much more — that will make this the most useful single jam-packed day of digital change information you’ll ever have experienced. We hope to see you there.


Explaining my skepticism about the likelihood of success for a general subscription model for ebooks


In a prior post, I observed that the apparently-successful subscription offerings for books were in niches. And I said I believed that a more general subscription model wouldn’t work for ebooks the way it has seemed to work for music (Spotify), movies and TV shows (Netflix), and audiobooks (Audible).

By that I meant two things. First of all, it will be impossible for any aggregator to secure the rights to anything like enough of the most appealing titles to deliver an offering comparable to what’s succeeded in other media. But even if they did, that kind of offering wouldn’t deliver nearly as much value to the book reader as general subscription offerings do in other media.

The latter point is based on intuitive speculation. The former is based on an informed view of the commercial realities.

Let’s briefly reiterate the case about consumer appeal. The number of songs, movies, and even audiobooks a subscriber might use in a month (the normal billing period for any subscription, so a relevant unit of measurement) dwarfs the number of books most people would read or refer to. And the heaviest readers — people who read several books a month — are often in genres (romance, science fiction) that already have subscription offerings. They don’t need a more general one.

So the price a subscription offering can command for general ebooks is almost certainly lower in relation to an individual book purchase than the price that can be charged in other media in relation to purchase. That was reflected in the thinking of the fledgling company that got me started writing these posts. They wanted to go to market with a subscription price of about $5 a month, which is less than Spotify, Netflix, or Audible!
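To make that per-unit value gap concrete, here is a back-of-the-envelope comparison in Python. Every number in it is my own guess for illustration — monthly consumption and à la carte prices vary enormously — but the orders of magnitude are what matter:

```python
# Rough, invented numbers: how much a-la-carte purchasing value a heavy
# subscriber replaces each month in each medium. Illustrative only.
media = {
    # medium: (units consumed per month, typical per-unit purchase price)
    "music (songs)":       (300, 1.29),
    "movies/TV (rentals)": (20, 4.00),
    "general ebooks":      (2, 12.99),
}

for name, (units, unit_price) in media.items():
    value = units * unit_price
    print(f"{name}: roughly ${value:,.2f}/month of purchases replaced")
```

A heavy music listener may displace hundreds of dollars of purchases a month; a two-book-a-month reader displaces about $26. That asymmetry is why a general ebook subscription has to be priced so low relative to buying books.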

(I may disagree with them about the overall viability of the subscription idea, but at least they recognize the necessity of a truly bargain price point.)

But it will be very hard for them, or anybody else, to put together a title base sufficiently appealing for that offering to work commercially.

Big books that consumers know about and want drive them to the points of acquisition for the title. When bookstores talk about how sales are going, they almost always cite the particular books that are driving traffic to their stores (or bemoan the fact that there haven’t been enough of them). That’s why booksellers heavily discounted Harry Potter titles the day they came out and why Book-of-the-Month Club and Literary Guild promoted the availability of the biggest bestsellers they had rights for in their advertising.

Everybody in trade publishing understands this effect. Publishers “overpay” for big books because they know the control of them provides critical leverage dealing with bookstores and wholesalers. BOMC and Literary Guild would bid up the prices for rights to predictable bestsellers beyond what the books would “earn” in royalties on book club sales to gain the value those books had bringing members into the Clubs.

When consumers tie themselves into a subscription service, the power equation shifts for those people. Some of the power of the titles that brought in the consumer is transferred to the owner of the subscription service. If there is enough of value to keep the consumer from looking elsewhere for more content, that can provide great leverage.

It creates enough leverage that Audible can flip the 70-30 model and pay publishers 30% of the attributable revenue for digital downloads of their audiobooks. Since they are the content providers for both iTunes (Apple) and Amazon (their parent company), they have an effective monopoly on audiobooks sold that way. Any publisher that doesn’t want to agree to that split for the subscription business, and I know of at least one very big one that doesn’t, effectively has to live without most of the digital download market for their audio titles.

There have been expressions of dissatisfaction with the payment formula by which Spotify compensates the owners of the songs in their service. But how could there not be? With a combination of free and very low-cost offerings, Spotify is delivering music for far less cost to the consumer than purchasing a collection would require. (There is, theoretically, compensation on the back end because the subscription fee has to continue to be paid to maintain access, whereas older consumers — like me — get a lot of “free” listening to the music we purchased years ago.)

But less cost to consumers means less revenue to be divided by creators. And book authors can’t expect to collect on “repeat reads” the way music creators can collect on “repeat plays”.

So, from an author’s perspective, putting content into a general subscription service threatens to build up the leverage for a market channel that will almost certainly find it less necessary in the future to pay high prices for incremental content.

Simon Lipskar at Writers House, which represents a significant number of major bestselling writers, sees subscriptions as an inherently bad deal for successful writers. In our conversation about this, he echoed my thinking by saying, “Subscriptions by definition transfer the brand value of the author to the brand of the subscription service.”

Users of subscription services, he explained, are attracted to the services by the presence of authors they want to read. But once they are members and paying a monthly fee, their dollars are earmarked for the service rather than to the acquisition of individual discrete books by individual authors.

From Lipskar’s perspective, which is the author’s perspective, “these services act as a very expensive distribution model, inserting themselves between the publishers who license books from authors and the readers who read them, often taking a much bigger piece of the pie than traditional retailers.”

(This point by Lipskar makes me recall my Dad’s — Leonard Shatzkin’s — disdain 50 years ago for the “other” methods of selling consumer books — book clubs and direct mail — because they did, indeed, require more of the consumer’s dollar to execute than selling through stores did. Dad liked “efficient” and he’d argue until the cows came home that bookstores, including returns, were a remarkably efficient mechanism for distributing consumer books. This, of course, was long before the Internet. He started saying it before there were bookstore chains or national wholesalers.)

Lipskar can imagine a subscription service more along the lines of the traditional Book-of-the-Month-Club, in which readers are aided in their discovery of titles by a curatorial/editorial process that helps to select quality titles and, even more important from a commercial perspective, in which the reader’s monthly fee just funds a discount on a discrete monthly purchase.

Lipskar says that for a subscription service to be embraced by authors and publishers, the economics would have to favor authors and their publishers to a much greater extent than the models currently on the market do. On that note, the one thing he said he simply could not imagine being good for authors (or publishers) on any level is the “all you can eat” model like Spotify’s, which he believes is widely regarded within the music business as a very effective means of transferring the financial value of music from its creators to Spotify.

All the big publishers know that continuing to sign up the authors is what provides the oxygen that keeps them alive. The biggest threat from Amazon is not that they’ll extract another point or three of margin — although that is definitely a continuing concern — but that they’ll reach a point where their market share is large enough to enable them to start signing up really desirable authors on a regular basis and pull them from the rest of the distribution ecosystem. (It is worth noting that Barnes & Noble and Kobo and Apple have as much at stake in that regard as the major publishers do.)

Because of that, major publishers will never do anything that would distress the major agents. It doesn’t really matter whether a close reading of a contract would give a publisher the “right” to put an author’s work into a subscription service. If the publisher believes the author’s agent would react adversely to their doing that, they’ll be very disinclined to do it. And some agents might well react adversely to a publisher doing that for any book, not just one under contract to that agent, because agents for big authors who think the way I’m describing don’t want to see subscription services enabled at all!

So that’s why I believe that fledgling subscription services have practically zero chance to get major publishers to commit major books to their pool of available titles.

Of course, there is one entity that might make subscription for general books work and that’s Amazon. They are actually already trying to pull this off even though their efforts have apparently been unanimously rebuffed by the biggest publishers.

The Kindle Owners Lending Library (KOLL) is offered to “subscribers” to Amazon Prime, the retailer’s overall package of “loyalty” benefits that starts with free shipping. KOLL allows a loan of unlimited length, so it is, in effect, a cat’s paw for an ebook subscription program.

Amazon is only now able to offer a robust selection in that program because of a combination of its willingness to spend, the ebook contracts it has with most publishers aside from the Big Six, and a very large pool of self-published titles in Kindle Direct Publishing. KOLL has not — so far — noticeably damaged the ability of the publishers to sell their “branded author” ebooks successfully. The ebooks from successful authors are still benefiting from a “power law” distribution of sales (things tend to move that way in the Internet world) that favors the biggest SKUs.

Amazon has marketplace clout that dwarfs that of any fledgling with a great idea and they went to great lengths to build up a robust title repository for the KOLL debut. Still, when they launched in November 2011 they only had 5,157 titles which they said included “over 100 current and former New York Times Best Sellers”. It wasn’t an impressive selection.

But the wholesale purchasing terms under which Amazon acquires the ebooks of all publishers except the Big Six apparently enable Amazon to lend any title it wants to, as long as it purchases a copy to lend each time it does so. And it is in the ether that Amazon offered publishers a lot of money to put titles into this program. They have an impressive list of publishers whose work they are offering — including Scholastic, Norton, Bloomsbury, Grove/Atlantic, Workman/Algonquin, F+W Media, Lonely Planet, Rosetta Books as well as their own publishing imprints — but there’s no way to know how many of them went for the deals being offered or which ones are included simply because Amazon is buying a copy of any ebook from them each time a customer wants to borrow one.

And while agency pricing rules are definitely a barrier that makes it more difficult for a Big Six publisher to participate, there seemed to be no burst of creativity on any publisher’s part to figure out a way around it.

So Amazon is, in effect, conducting an experiment testing my theory that a general subscription offering won’t be a powerful magnet. For now, the test is to see how many of the Prime customers find it possible to live largely or entirely within the selection of titles that KOLL offers them, and particularly whether they are weaning those customers away from the higher-profile offerings of the Big Six. Perhaps we’ll see Amazon extend the reach of KOLL sometime by offering a Kindle feature package that is cheaper than what Prime has to be to offer free shipping. I’d sort of expect that. Wouldn’t you?

Will Amazon have an argument to make in a year or two, to publishers or to authors, that there is a substantial pool of desirable readers that they can only reach by participating in KOLL?

They might.

But can anybody but Amazon put together the combination of the audience and title base they have, piggybacking as they are on Prime and willing as they are to buy an ebook just to lend it once to demonstrate that they can?

I doubt it.

It has been called to my attention by Pam Boiros of Books24x7 that in my prior piece I gave Safari Books Online credit for pioneering both the subscription model and payment by metered usage, and that credit for both should actually go to Books24x7. Safari came along a few short years after Books24x7 had started the model, which Books24x7 operates today across a wide range of verticals, serving a mostly institutional customer base. I thank Pam for refreshing my memory, which was the source of the information. Safari is still a great service and the closest thing to a trade subscription model outside the single-publisher efforts, but it followed a path that was originally cut by Books24x7.
