New data on the Long Tail impact suggests rethinking history and ideas about the future of publishing
For most of my lifetime, the principal challenge a publisher faced in getting a book noticed by a consumer and sold was getting it onto the shelves in bookstores. Data was always scarce (I combed for it for years), but everything I ever saw reported confirmed that customers generally chose from what was made available through their retailers. Special orders — when a store ordered a particular book for a particular customer on demand, which meant the customer had to endure a gap between the visit when they ordered the book and a later visit to pick it up — were a feature of the best stores and the subject of mechanisms (one called STOP in the 1970s and 1980s) that made them easier. But they constituted a very small percentage of any store’s sales, even when the wholesalers Ingram and Baker & Taylor made a vast number of books available to most stores within a day or two.
It was an article of faith, and one I accepted, that if you could expose most books to a broad public, they would “find their audience”. The challenge was overcoming the gatekeepers. Put another way, the aggregate effect of the gatekeepers (the store buyers) was to curate, acting as a filter that determined which worthwhile books the public would actually see and from which they would choose what to buy.
There was also ample evidence over time that a large selection of books in a store acted as a magnet to draw customers. That fact was noted by my father, Leonard Shatzkin, in the early 1960s, when the inventory was doubled at the Short Hills, NJ, Brentano’s store (the chain reported to my father, who was a Vice-President of Crowell-Collier, the company that owned Brentano’s, Collier’s Encyclopedia, and Macmillan Publishers, among other things) and the store went from the worst-performing in the chain to the best. In the 1970s, BP Reports published a survey finding that nearly half of bookstore customers chose the store they were in on the basis of the selection they’d find there, and more than half reported that their particular purchase decision was made in the store.
By the late 1980s, both of the big national bookstore chains — Barnes & Noble and Borders — were undergoing a massive expansion of “superstores”. Whereas chain bookstores (B&N’s B. Dalton and Borders’s Walden) carried 20,000 or 30,000 titles, and large independents carried as many as twice that, the new superstores would carry 100,000 titles or more! Customers flocked to the massive bookstores, the ever-expanding chains ordered lots of the publishers’ backlists, and everybody celebrated a new era — except the independent bookstores, which were increasingly squeezed by their new, larger competitors. The era was less than 10 years old when it got disrupted.
In the 1970s, it was my responsibility for a couple of years to write the orders for stores that accepted vendor-managed inventory from Two Continents, my family’s distribution company. I was being careful to make sure that each store earned $2 gross margin per dollar of inventory investment, which was what you’d get from 40% discount with inventory turned 3 times a year. This gave me a hands-on look at how stock turn in the aggregate was affected by the inventory decisions on specific titles.
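The arithmetic behind that $2-per-dollar target can be checked with a short sketch. The numbers come from the paragraph above; the only assumption is the standard one that inventory is valued at the store’s cost:

```python
# Check: a 40% discount off retail, with inventory turned 3 times a year,
# yields $2.00 of gross margin per $1.00 of inventory investment.
discount = 0.40          # store buys at 40% off the retail price
turns_per_year = 3       # inventory (valued at cost) sells through 3x a year

cost_share = 1 - discount                        # cost is 60% of retail
annual_cost_of_goods = 1.00 * turns_per_year     # $1 of inventory moves $3 at cost
annual_retail_sales = annual_cost_of_goods / cost_share   # $5.00 at retail
gross_margin = annual_retail_sales * discount    # $2.00 of margin
print(round(gross_margin, 2))  # 2.0
```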
When you do this, you figure out pretty fast that you can produce very high stock turn on books that are moving consistently. If a store were selling five copies a month of a title on a sustained basis and I put in 10 and replenished monthly, they would be getting an annual turn of 10 or perhaps much more on those moving books. (Turn calculation: sales divided by average inventory for a period multiplied by the number of such periods in a year.) That would support a lot of single copies of books that moved very slowly or, as it turned out, not at all. Since very few stores managed a turn of 3 or 4 on their own (chain store turns were usually under 2), giving the stores on our Plan a good result with the advantage of shipping monthly was shooting fish in a barrel.
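The turn formula in the parenthetical above can be sketched as a one-liner. The example figures are hypothetical: a title selling five copies a month reaches a turn of 10 if its average on-hand stock works out to six copies, which is one plausible reading of the five-a-month case described above:

```python
def annual_turn(period_sales, avg_inventory, periods_per_year=12):
    """Stock turn: sales divided by average inventory for a period,
    multiplied by the number of such periods in a year."""
    return period_sales / avg_inventory * periods_per_year

# Hypothetical: 5 copies sold per month, average of 6 copies on hand.
print(annual_turn(5, 6))  # 10.0
```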
But if you think about the turn you’re achieving on the titles that really move, recognize that those titles account for a large percentage of the store’s sales, and take on board what stores’ overall turns tended to be, you’re left with the uncomfortable calculation that a very high percentage of the titles each store ordered never sold a single copy in that store. In fact, one big advantage of vendor-managed inventory is that it lets you use the high turn on your fast-moving titles to support stocking the titles of yours that turn slowly or don’t sell at all, rather than having the store “waste” the margin dollars your books produce stocking somebody else’s slow-moving books.
Remember, in physical retail, selection was the magnet. The books that didn’t sell were helping to pull in the customers for the books that did sell. Stores knew that too. Later work I did demonstrated that there were whole store sections that turned at half or less of the rate of the store as a whole. But if you wanted, say, a philosophy section that “turned” at the store’s overall rate, it could only hold about ten titles. If you want a philosophy section people will browse and shop from, you have to carry a lot of slow-moving titles.
But just when the bookstores put the inventory in place to stimulate book buying all over the country, along came the Internet, Amazon.com, print-on-demand, and ebooks, in that order. All four were fully integrated into the book publishing ecosystem over a decade-and-a-half starting in 1995. As quickly as the magic of selection via the 100,000-title store was implemented, it was superseded by the “total” selection provided by Amazon’s, and then BN.com’s, “unlimited shelf space”. Now every book would have its full chance to sell, or so it seemed.
Unlike the period of superstore expansion, when substantial orders for deep backlist suddenly became commonplace in a continuing windfall for publishers, the new era with Amazon was characterized by things getting harder for many publishers. That wasn’t necessarily clear at first, but the impact of Amazon, and then Lightning (print on demand offered by Ingram), was to dramatically increase the number of titles competing for sales. It gave the Long Tail a real opportunity to reach customers, something that, through bookstores — even very big bookstores — only the top 100,000 titles had been able to do. Publishers were a bit like the metaphorical frog in slowly heating water; the challenges grew imperceptibly greater over time. In 1990, a new book competed with about 100,000 available titles. In 1997 it competed with many hundreds of thousands, and that number just kept growing. Today it competes with millions.
The challenges for conventional publishers got steeper again when ebooks became mainstream, pioneered by Amazon’s Kindle in late 2007. There had been a modest ebook business building for about a decade, but until Amazon committed its resources to creating a dedicated device, a repository of content, and audience awareness, it had a trivial impact. But a full-fledged ebook business unleashed a new wave of competition from self-publishing authors. Amazon fostered growth by creating an easy on-ramp for self-publishing, a move quickly copied by B&N, Apple, and Kobo. In the several years that ebooks have been commercially important, many — certainly hundreds and perhaps thousands — of authors have achieved meaningful sales. Many of those successes have been with backlist books originally published conventionally, but there have also been thousands of successful original ebooks. Whether revived formerly-dead backlist or new titles, these are books that are competing with the output of the conventional publishers and wouldn’t have been a decade or two ago.
So the Long Tail for books has been a topic of conversation for most of the past 20 years. Amazon’s limitless shelves and Ingram’s Lightning contributed heavily to this before the turn of the century; self-publishing has accelerated it dramatically. The early expectations, including mine, were that the Long Tail would take sales from all the books being “currently” published. But it became evident pretty early that the big books were just getting bigger: the head of the sales curve wasn’t diminishing. In fact, both the head and the Long Tail took sales from the middle of the curve. This was particularly hard on publishers because publishing the mid-list, those books they do that aren’t bestsellers, became much more challenging.
The Long Tail continues to grow. There are a limitless number of aspiring authors and their aspirations to self-publish successfully are fueled both by success stories and by a growing band of indie authors who tout their success and question the business models and practices of the majors. Because being conventionally published has its own set of hurdles and time requirements, it has seemed to many (and I haven’t been immune from this thought) that self-publishing would just continue inexorably to take share from the publishing business.
But now we have some data that calls that assumption into question. I encountered two examples of that in the past week.
In Toronto last Wednesday, Noah Genner of Booknet Canada presented information about the Canadian market showing that the number of ISBNs was expanding rapidly, but that the number of individual ISBNs selling at least one copy was roughly flat.
Then this week, Marcello Vena of RCS Libri in Italy published a White Paper based on his company’s data (link through to the White Paper from the DBW piece introducing it) which showed something similar. Sales of his company’s books were becoming increasingly concentrated in a small number of titles. Vena added an analysis using the Herfindahl-Hirschman Index (HHI). HHI measures the concentration in a market and is, according to Vena, used by the US Department of Justice to measure concentration in an industry. The HHI is calculated by adding the squares of the market shares of the players, with shares expressed in percentage points. So if one company owned 100% of a market, the HHI would be 100 squared, or 10,000. But if 100 players each owned 1% of the market, the HHI would be 100 times 1 squared, or 100. Using the market concentration and title concentration numbers in tandem, Vena finds that they’re linked. As market concentration increases, sales move to the head of the sales curve and the Long Tail flattens further.
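The HHI calculation can be sketched directly from the definition, with market shares expressed in percentage points as in the examples above (the four-firm market at the end is a hypothetical illustration, not from Vena’s data):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([100]))             # one monopolist: 10000
print(hhi([1] * 100))         # 100 equal 1% players: 100
print(hhi([40, 30, 20, 10]))  # hypothetical four-firm market: 3000
```

The lower bound (many tiny players) approaches zero and the upper bound (monopoly) is 10,000, which is why rising HHI tracks the shift of sales toward the head of the curve.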
Of course, Italy and Canada are not the United States. Our market is bigger and richer. But Italy and Canada are not trivial samples, either.
One further point about Long Tail sales. In the aggregate, they can be very significant. But for each individual title, they are trivial. So the real commercial benefits flow to the aggregators — Amazon and Lightning — and much less to the publishers or authors of the individual titles. There certainly are situations where particular publishers have a lot of Long Tail books: the Oxford and Cambridge University Presses would be prime examples of this. For them, with thousands of titles in the Long Tail, the aggregate sales are probably commercially significant. But for a publisher with 100 titles, or even 1000 titles, selling a copy or two a year (or none), and that’s what we’re talking about here, it hardly makes any difference. I personally own several Long Tail titles. I get checks from somebody every month, but it adds up to three figures a year, not four.
The implications of this in the discussion of how the publishing industry might be affected by self-publishing disruption are interesting. It would suggest to me that the boosts publishers can give a book — even their catalogs provide more marketing lift than most self-published books start with — will become increasingly important as the market becomes increasingly flooded. If the data Vena has presented turns out to be the future trend, the increase in self-published titles will drive more and more sales to a smaller number of winners, and my hunch would be that the winners will most likely be from publishers. That would indeed be a paradox and a totally unintended consequence.
Of course, the publishing business isn’t one business; it is segmented. So far, the commercially successful self-published authors overwhelmingly, if not entirely, fall into two categories. There are authors who have reclaimed a backlist of previously published titles and self-published them. And there are authors of original genre fiction who write prolifically, putting many titles into the marketplace quickly. Successful self-publishing authors are often in both categories but very few are in neither. Those two categories are nearly 100% of the self-publishing success stories but a minority of the books from publishers. So, even before Vena published his White Paper, the idea that self-publishing would upset the commercial establishment was way overblown. If Vena’s data turns out to be prophetic, the road is going to get harder and harder for all books, but especially the self-published.
Two big items in the news today. On B&N’s decision to spin out Nook and college into a separate public company, I have little to say except to wish them all well. On Hachette’s and Ingram’s division of the two Perseus businesses, I’d say this. 1) The notion that this is about Hachette “bulking up” for the Amazon battle is almost certainly wildly wrong, and anybody saying that has disqualified themselves as an expert. 2) The titles Hachette gets here really change the character of their list, adding a non-fiction and academic dimension they never had. 3) Ingram has made a major leap in scale for its Ingram Publisher Services business, which now, in the aggregate, is Big Five sized.
Once again, the Feedburner service failed to distribute my most recent post, which was a graf-by-graf disagreement with a post by Hugh Howey. The comment string of that post contains ample evidence that the fact contained in the last paragraph here is not widely acknowledged.