There is a fitting irony in the fact that no big publisher on either side of the Atlantic published one of the biggest Anglo-American bestsellers of the 1990s, Dava Sobel’s “Longitude”. The irony is that the true story of “Longitude”, in which John Harrison struggles for decades to persuade the powers-that-were that he has solved the problem of accurately determining longitude at sea, provides an apt analogy for the real reasons that consumer book publishers don’t make money.
Before Harrison’s chronometer permitted accurate seafaring navigation, ships foundered on shoals that might even have been accurately mapped, just because they couldn’t tell how far east or west they were. Seafarers tried a variety of intricate measurements, many having to do with the moon and the stars, to determine their location. But nothing worked. And it wasn’t the accuracy of their readings and computations that led them astray, it was that the things they measured bore no real relationship to the information they were seeking.
And that is exactly why book publishing enterprises seldom have clear sailing for very long. Almost all the meters and gauges publishers use to monitor their business are irrelevant at best, wildly misleading at worst.
This morning I want to discuss those meters. We’ll start with some common computations and tallies publishers use in the day-to-day operations of their business which are, at best, irrelevant to finding the direction they really want to go. And we’ll go on, trying to offer a service rather like Harrison’s but one which we hope won’t take so long to sell, by suggesting some meters that could be constructed that would provide much more useful information.
The first useless meter which is almost universally employed is the Unit Cost. All publishers use it, compute it, make decisions by it. Except to the accountant who may need to use it to comply with a government’s or shareholder’s need for information, it has absolutely no value.
The initial struggle with the concept of a unit cost at most companies begins with this question: “over how many copies do we amortize the pre-press costs?” The answer usually is: “over the first printing.” This leads to the most destructive version of the Unit Cost Fallacy, because it tempts the publisher to overprint in pursuit of a notional, and fictional, improvement in the book’s economics.
But even in the case of a reprint, or of a publisher clear-sighted enough to ignore plant costs and look only at running costs, what does the unit cost really mean? The way most publishers do their figures, it is the unit cost of what you print. But if unit cost has any constructive value in running a business, it would be the unit cost of what you sell. And when you print too many copies in pursuit of a lower unit cost, you may lower the cost of each one you print while raising the cost of each one you sell.
And, either way, you are ignoring the value of cash.
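The arithmetic behind the fallacy is easy to sketch. The figures below are invented for illustration, not drawn from any real title:

```python
# A sketch of the Unit Cost Fallacy described above: a larger printing
# lowers the cost of each copy you PRINT while raising the cost of each
# copy you SELL. All numbers are hypothetical.

def unit_costs(plant_cost, run_cost_per_copy, copies_printed, copies_sold):
    """Return (cost per copy printed, cost per copy sold)."""
    total = plant_cost + run_cost_per_copy * copies_printed
    return total / copies_printed, total / copies_sold

# Suppose $10,000 of pre-press (plant) cost, $1.50 per copy to run,
# and demand for only 6,000 copies no matter how many are printed.
per_printed_small, per_sold_small = unit_costs(10_000, 1.50, 8_000, 6_000)
per_printed_big, per_sold_big = unit_costs(10_000, 1.50, 15_000, 6_000)

print(f"Print  8,000: ${per_printed_small:.2f} per copy printed, "
      f"${per_sold_small:.2f} per copy sold")
print(f"Print 15,000: ${per_printed_big:.2f} per copy printed, "
      f"${per_sold_big:.2f} per copy sold")
```

The bigger printing makes each printed copy cheaper, yet makes each copy actually sold more expensive, which is the point of the paragraphs above.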
Am I saying, “ignore unit printing costs” in making printing and publishing decisions? Except for those few books, usually heavily illustrated ones, which are enabled by a book club deal where the publisher’s own customer has a cost-to-price ratio to enforce, the answer is “yes”.
The second too-commonly applied useless computation in publishing is the application of a company’s overhead as a percentage associated with each transaction. Although its use is nearly universal, this notion is so full of holes that it is hard for me to describe it in terms as respectful as I prefer to be when I entertain widely held beliefs to which I do not subscribe.
The principle is very simple. If a company’s budgeted overhead expenses, by which we mean basically the costs of doing business that are not clearly attributable to any particular title, amount to, say, 40 percent of its budgeted sales, then the computations attendant to each unit sold assume it must carry its 40 percent share of those costs.
This apparently logical extension of the macro reality to micro computations, however, rests on a false premise and regularly leads to bad business judgments.
A perfectly meaty 45-minute speech, or even a 45-minute stand-up comedy routine, could be delivered enumerating the errors caused by this approach, from turning down perfectly good projects to overprinting to overpricing. But I think the point is made best in an insight offered by my father, Leonard Shatzkin. He has pointed out that it is a GOOD THING that publishers don’t actually succeed in weeding out the projects in advance that ultimately don’t produce a 40 percent gross margin to cover overheads.
Why? Because if publishers never realized the contributions to their overheads made by projects that contribute less than 40 percent, they wouldn’t have enough cash, or enough of a business to pay the bills.
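A toy calculation, with invented numbers, shows how the per-title overhead test can reject a title that would in fact have helped pay the bills:

```python
# A hypothetical illustration of the overhead-allocation fallacy.
# House budget: overhead is 40% of budgeted sales, so every title is
# asked to show a 40% margin on paper. All figures are invented.

OVERHEAD_RATE = 0.40  # a house-wide ratio, not a per-title cost

def paper_verdict(revenue, direct_costs):
    """The conventional title test: margin after a 40% overhead charge."""
    margin = revenue - direct_costs
    return margin - OVERHEAD_RATE * revenue

def cash_contribution(revenue, direct_costs):
    """What the title actually adds toward overhead and profit."""
    return revenue - direct_costs

# A title grossing $100,000 with $70,000 of direct costs:
revenue, direct = 100_000, 70_000
print(paper_verdict(revenue, direct))      # negative: "rejected" on paper
print(cash_contribution(revenue, direct))  # the cash forgone by rejecting it
```

On paper the title “loses” money once 40 percent of its revenue is charged against it; in cash terms it would have contributed $30,000 toward the overheads that will be incurred anyway.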
It really shouldn’t be surprising that attributing overhead, sale by sale, book by book, leads to business decisions that make no sense. The concept makes no sense. Only the very smallest part of a publisher’s cash outlays is actually caused by a particular sale, not even the printing, by the way, which is incurred in batches independently of any sale. Selling a book doesn’t make you hire a sales or customer service rep; acquiring a book doesn’t make you hire an editor or a production person; neither activity affects your rent or travel expense. So what is the logic of attributing the costs that way when you make business decisions?
A third totally useless barometer in too-common use in our trade is a comparison of returns percentage on a year-to-year basis. Ask any publisher if returns are up or down this year, and you’ll probably get an answer. He or she will know.
But what will they know? They’ll know that the returns they have credited this year, as a percentage of the shipments they have billed this year, are X, and that last year they were Y. It is fairly simple to see that if X is larger, returns are up; if Y is larger, returns are down.
Are they really? No.
The logical problem here is that returns in any one year have more to do with sales in the prior year than in the current one. Even the returns that come back quickly, within as little as six months, tend to occur early in the calendar year on books that were shipped the previous Fall. Periodic large purges of inventory, such as those made by several major US trade accounts in 1996, include books that were originally shipped one, two, three years or even longer before.
This flawed analysis takes two largely unrelated numbers, this year’s shipments and this year’s returns, and draws a relationship between them. So a poor current list, resulting in low shipments right now, will make any level of returns from old books seem higher than if the current list were strong.
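A toy model, with invented figures, makes the distortion concrete: hold the pipeline of returns from prior years constant, and only the strength of the current list changes the apparent percentage:

```python
# A hypothetical sketch of the returns-percentage distortion described
# above: the returns credited this year mostly stem from LAST year's
# shipments, so a weak current list inflates the apparent rate.

def apparent_returns_rate(this_years_shipments, returns_from_old_books):
    """The conventional (and misleading) year-to-year computation."""
    return returns_from_old_books / this_years_shipments

old_book_returns = 200_000  # a fixed pipeline of returns from prior years

strong_list = apparent_returns_rate(1_000_000, old_book_returns)
weak_list = apparent_returns_rate(600_000, old_book_returns)

print(f"Strong current list: {strong_list:.0%} apparent returns")
print(f"Weak current list:   {weak_list:.0%} apparent returns")
# Identical returns in both cases; only current shipments changed.
```

The returns are the same in both scenarios; only the denominator moved. Yet the conventional meter reads twenty percent in one year and a third in the other.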
Analyzing returns percentage is a very tricky business under the best of circumstances. You can certainly do it reliably on an individual title when sales have effectively ceased and all the returns are in. You can approximate it for any title mature enough to have settled into a fairly reliable re-order pattern some months after publication. But for any group of titles, accurate measurement of returns requires interpreting which returns relate to which shipments.
I have seen the logical extreme of year-to-year across-the-list computations. A veteran sales executive of our acquaintance used to figure the returns percentage for the two big American chains every MONTH: this month’s sales was the denominator, this month’s returns the numerator. The percentage for each chain jumped around a lot, but none of these numbers was worth a thing to help manage the business or to enlighten the dialog with the account.
A fourth inaccurate meter over-applied by too many publishers is the units “advanced” per title. Here, we must concede, tallying the total number on order can have utility in determining the quantity to print. Unfortunately, too few publishers use it for that purpose. Instead, they look at this gross units measure as a way to evaluate whether the sales force has done its job well placing the book into the marketplace.
The problem with working from a total is that the profile of a book’s advance matters much more than the total number. Looking at the total reveals nothing about the profile: how many stores? how many books in wholesalers? And while the total number needed to fill advance orders is a good superficial guide to what to print, knowing the shape of the advance, not just its size, might lead a publisher, if he knew the facts, to cut orders and cut the printing.
Publishers also pay close attention, particularly on big books, to what sale is required to “earn out” an advance against royalties to the author. This is another indicator that, at best, is meaningless, except perhaps as a scorecard item for an editor’s performance. And, even then, it doesn’t tell you much on a book-by-book basis.
As US publishing sage and former St. Martin’s Chairman Tom McCormack has pointed out in numerous writings, publishers are earning profits on books long before an author’s advance has been paid back. In fact, the recent Stephen King deal with Simon & Schuster, where he receives a “profit-sharing” royalty reported as 27 percent, highlights that the “maximum” royalty percentage of 15 percent of retail in the US market is routinely ignored in the cases of very commercial authors. In King’s case, he is being paid a lower advance and this escalated royalty. But this is little different from what has occurred with many authors: paying them an advance so large it will never “earn out”, so that the effective royalty percentage is much higher than the contractual 15 percent.
The danger in focusing on this “earn out” number is that it presents the temptation to overprint and overadvance, and then to take the returns that follow. In fact, the advance is a “sunk cost.” Once a publisher has written those checks, he can’t get the money back. Sunk costs should normally be ignored in making operating decisions about printing, pricing, selling, and promoting. Devoting more effort to selling Book X because it has not earned out its advance than to Book Y which has, rather than putting effort where it will produce the most margin for the house, is a costly, and ubiquitous, mistake.
Our sixth irrelevant gauge, used in every publishing house I know except one, is what is called in most places the “title P&L”. This is an odd name for the calculation, actually, because it is almost always employed BEFORE the book is even acquired and almost never after it has run its course. The motivation behind the “title P&L” is reasonable: trying to gauge the economic effect of a contemplated publication before the house commits to it.
The “title P&L” has two fundamental conceptual weaknesses. The first is that it employs meters we have already identified as flawed: the “unit cost” and the “percentage overhead.” The second is that the “title P&L”, as it is practiced, almost always relies on a single estimated sales number, wherever it comes from. Many times the editor makes up the number (knowing what it will take to get a “go” to buy the book) and tries to get a marketing or sales colleague to “buy in” and support it. Fortunately for the egos and reputations of everybody in publishing, those projections are seldom revisited; only editors with huge pools of unearned advances, themselves a secondary indicator of inflated sales expectations, are ever made to recall their sales projections.
Because the fact is that very few books sell anything close to their estimated number, and the estimates miss on both the high side and the low. This is nothing anybody should be ashamed of; it simply indicates the difficulty of calculating the real sales potential of most books in advance of publishing them. But it does indicate that putting great stock in those sales projections is probably a mistake.
But even after the fact, when all the numbers on a book are in, trying to create a “P&L” for it, as if it were a free-standing business, is a futile exercise. While assigning constant percentages of overhead to each book sold is nonsensical, it is still incontrovertible that each title draws on the capabilities of the house in its publication.
The correct way to visualize the business is to look for the dollars in contribution made by each title. That is a much simpler number to arrive at: the difference between actual revenue for the title and actual disbursements against it. If the number is actually negative, then the title clearly “lost money”. In fact, if overadvancing and overprinting, which is often a consequence of overadvancing, are eliminated, very few titles suffer this fate. VERY few.
If the book takes in more than is spent on it, the difference is a contribution to overhead and profit. When a company’s margins have covered its overheads, it makes a profit. If they haven’t, it doesn’t. So it might be said that whether a book is profitable depends on whether it was published before overheads were covered or after. But that would be silly, of course.
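The contribution arithmetic the last few paragraphs describe can be sketched in a few lines; all figures are invented for illustration:

```python
# Per-title contribution as described above: actual revenue minus actual
# disbursements, with no allocated overhead. The HOUSE is profitable
# when the summed contributions exceed its overhead. Numbers invented.

def contribution(revenue, disbursements):
    """Dollars a title adds toward overhead and profit (or drains, if negative)."""
    return revenue - disbursements

titles = {
    "Title A": contribution(120_000, 80_000),  # a solid contributor
    "Title B": contribution(60_000, 45_000),   # a modest contributor
    "Title C": contribution(30_000, 33_000),   # negative: "lost money" (rare)
}

overhead = 40_000
profit = sum(titles.values()) - overhead

print(titles)
print(f"House profit: {profit}")
```

Note that Title B, which a 40 percent overhead allocation would flag as a loser, still adds real dollars toward the overhead bill; only Title C actually took in less than was spent on it.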
Our hero John Harrison in “Longitude” didn’t just trash the established techniques for determining a ship’s position at sea; he built an alternative. There are meters publishers could create that would really help steer their businesses in a profitable direction.
The first one I would start with is very simple: each year, count the new titles that survive their initial launch to become backlist. Most books don’t. Mostly as a result of the way books are marketed and distributed, rather than because of any inherently dated element of the books themselves, the ordering action on most titles is virtually nil within a month or two of publication.
But there is backlist attrition each year as reliable backlist titles decline in sales to a point perhaps below the numbers needed to keep them in print. Although new technologies for short-run, or even make-one, printing are developing that may change this, right now it is still true that when a title falls below the threshold of sales required to reprint, the publisher loses ALL the sales on that book.
But before it even gets to that point, a book must be re-ordered repeatedly by a variety of customers to even make the backlist. And most books don’t.
How to define the point at which a book has survived to become part of the active backlist is not a simple question to answer. The best way to judge is some combination of tallying reorders from stores and sales at retail. Or, taking a longer view, any book that is being maintained in a substantial number of stores a year after its publication would certainly qualify.
A second count publishers would be wise to keep is a negative one: what I would call the overmanufactures.
Overmanufactures come in two varieties: books that were printed that were never needed; and books that were manufactured much too soon. For this purpose, we are talking about copies that never made it into the marketplace, not copies that were shipped and then returned, although it is certainly true that the tendency to overmanufacture leads to the tendency to overdistribute.
When a book dies or is remaindered, it is easy to know that every copy that never shipped was an overmanufacture. But they can be counted earlier than that as well. For any straight text book selling more than a thousand copies a month, copies manufactured more than six months before they are shipped are also certainly an overmanufacture.
The tally of overmanufactures is most useful as a percentage of total books manufactured as a barometer of waste, potential waste, and unnecessary interest costs that a publisher can avoid through better decisions.
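As a sketch, with invented counts, the barometer is simply the overmanufactured share of the total print bill:

```python
# A hypothetical sketch of the overmanufacture barometer described
# above: the share of all copies printed that were either never needed
# or manufactured far too soon. All counts are invented.

def overmanufacture_rate(printed, never_needed, printed_too_soon):
    """Fraction of the total print bill that was waste or idle inventory."""
    return (never_needed + printed_too_soon) / printed

rate = overmanufacture_rate(printed=500_000,
                            never_needed=60_000,
                            printed_too_soon=40_000)

print(f"Overmanufacture rate: {rate:.0%} of copies printed")
```

Tracked year over year across the list, a falling rate would signal that printing decisions are wasting less cash and incurring less unnecessary interest expense.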
A third tally that seems very basic, but which is rare if not non-existent, is a count of the number of on-sale points for each title on its date of publication.
This is not a number which any publisher’s systems can simply “spit out”. It is a number that requires information that often isn’t even fed to the system today. When the big American chains order books, the sales rep often doesn’t even know how many stores the order will be placed in. There are other accounts besides Barnes & Noble and Borders that take central shipments and distribute them to their own outlets themselves. It requires investigation that often is not done today to know how many on-sale points each title will get from those accounts.
The fourth barometer we’d recommend is a variation on the third: track the percentage of distributed books actually on-sale versus being warehoused by customers, by title and across the board.
Both the number of on-sale points per title and the percentage of units shipped that are in stores and not warehouses, by title and for the list, are key indices of the effectiveness and efficiency of a publisher’s distribution. Publishers who do this kind of analysis will probably see that differences in sales results beyond the initial lay-down of books that were once attributed to the book’s appeal might actually be the result of the shape of its distribution. Books that have more on-sale points and books whose distributions are more to stores and less to wholesalers will have better sell-through and lower returns than books that might have equivalent numbers shipped to a less productive mix of recipients.
The fifth new meter we’ll suggest this morning is, by my reckoning, the single most important measurement of the health of a publisher’s business. It is a very simple one: count the number of customers doing over a certain amount of business.
In the past several years, consolidation of sales into fewer and fewer large accounts has been a definite trend. There is obvious danger in this, but hidden danger as well. And the hidden danger is that the concentration on the big accounts that do most of the business will lead publishers to neglect their overall account base.
A healthy publishing business always has new worlds to conquer. There are trade accounts which present knotty problems of access, inventory management, or credit headaches that can only be overcome with disproportionate effort. Most publishers have opportunities with retailers outside the book trade, or in foreign markets, or with direct mail sellers, that can be nurtured with more focus and concentration.
No matter how much the large accounts dominate, a publisher can grow the number of profitable accounts year after year. Doing this will add to the profitability of any publisher, but more important, it will add to its stability.
Of course, in the years since Harrison’s chronometer became standard equipment for all seafarers, shipwrecks have not been completely eliminated. Captains can make mistakes, even with the best data available to them. And storms can sink a ship even if the captain and every member of the crew know its latitude and longitude to three decimal places. And publishers who discard meaningless meters and gauges and install relevant ones will not be commercially infallible either. But if you were a passenger, you know which boat you’d rather be on.