
The reality of publishing economics has changed for the big players

A veteran agent who was formerly a publisher confirmed a point for me about how trade publishing has changed over the past two decades, particularly for the big houses. This challenges a fundamental tenet of my father’s understanding of the business. (And that’s still the source of most of mine.) I had long suspected this gap had opened up between “then” and “now”; it was really great to have it confirmed by a smart and experienced industry player.

One of the things that I took from my father’s experience — he was active in publishing starting in the late 1940s — was that just about every book issued by a major publisher recovered its direct costs and contributed some margin. There were really only two ways a book could fail to recover its costs:

1. if the advance paid to the author was excessive, or

2. if the quantity of the first printing far exceeded the advance copy laydown.

In other words, books near the bottom of the list didn’t actually “lose” money; they just didn’t make much as long as the publisher avoided being too generous with the advance or overly optimistic about what they printed. (Actually, overprinting was and is not as often driven by optimism as by trying to achieve a unit cost that looks acceptable, which is a different standard fallacy of publishing thinking.)

The insight that just about every book contributed to overhead and profit was obscured by the common practice of doing “title P&Ls” that assigned each book a share of company overheads. Whatever that number was, when it was calculated into the mix it reduced the contribution of each sale and showed many books to be “unprofitable”. That led publishers to a misunderstanding: perhaps they could make more money doing fewer books, if only they could pick them a little bit better. Trying to do that, of course, raised the overhead, which was neither the objective nor any help in making money.

(Raised the overhead? I can hear some people asking…Yes, two ways. One is that publishing fewer books would mean that each one now had to cover a larger piece of the overhead. The other is that being “more careful” about acquisition implies more time and effort for each book that ends up on the list, and that costs overhead dollars too.)
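The contribution-versus-allocation point can be sketched with a toy calculation. All the numbers here are invented for illustration; they are not from any actual title P&L.

```python
# Toy illustration (all numbers invented) of why allocating overhead
# to each title can make a contributing book look "unprofitable".

def title_pnl(revenue, direct_costs, allocated_overhead=0.0):
    """Profit for one title: revenue minus direct costs minus any
    share of company overhead assigned to it."""
    return revenue - direct_costs - allocated_overhead

# A modest book near the bottom of the list:
revenue = 30_000   # net sales receipts
direct = 24_000    # advance, printing, royalties, etc.

contribution = title_pnl(revenue, direct)             # 6,000: positive
fully_loaded = title_pnl(revenue, direct, 10_000)     # -4,000: "loses money"

print(f"Contribution to overhead and profit: {contribution:,}")
print(f"After overhead allocation:           {fully_loaded:,}")

# Cutting this book forfeits its 6,000 contribution, while the 10,000
# of allocated overhead doesn't go away -- it just gets spread across
# the remaining titles, making each of them look worse.
```

The point of the sketch is that the overhead line is an allocation, not a cost the book caused, so dropping the book changes nothing but the denominator.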

For years, this “reduce the list and focus more” strategy was seen by my father, and those who learned from him, as a bad idea.

One of the young publishers my father mentored was Tom McCormack, who — a decade after Len worked with him — became the CEO of St. Martin’s Press. There, McCormack applied Len’s insight with a vengeance, increasing St. Martin’s title output steadily over time. And, just as Len would have expected they would, St. Martin’s profits grew as well.

All of this was taking place in a book retailing world that was still dominated by stores making stocking decisions independently from most other stores. In the 1970s, the two big chains (Walden and B. Dalton) accounted for about 20 percent of the book trade. The other 80 percent comprised nearly as many decision-makers as there were outlets. So while it took a really concerted effort (or a very high-profile book or author) to get a title in every possible store location, just about every book went into quite a few. With five thousand individuals making the decision about which books to take, even a small minority of the buyers could put a book into 500 or 1000 stores.

But two big things have conspired to change that reality. The larger one is the consolidation of the retail trade. Now there are substantially fewer than 1000 decision-makers that matter. Amazon is half the sales. Barnes & Noble is probably in the teens. Publishers tell us that there are about 500 independent stores that are significant and that all the indies combined add up to 6 to 8 percent of the retail potential. The balance of the trade — about 25 percent — is the wholesalers, libraries, and specialty accounts. The wholesalers are feeding the entire ecosystem, but the libraries and specialty accounts are both very much biased as to the books they take and very unevenly covered by the publishers. In any case, ten percent of the indie bookstores today gets you 50 on-sale points, not 500. That’s a big difference.

The other thing that has happened is that the houses are much better organized about which books they are “getting behind”. This has the beneficial effect of making sure the books seen to have the biggest potential get full distribution. But it also has the impact of reducing the chances that the “other” books will get full attention from Barnes & Noble (able to deliver more outlets with a single buyer than one would customarily get from the entire indie store network). And, without that, it takes a lot of luck or online discovery to rescue a book from oblivion.

The agent who was confirming my sense of these things agreed that the big houses used to be able to count on a sale of 1500 or 2000 copies for just about any title they published. Now it is not uncommon for books to sell in the very low triple digits, even on a big publisher’s list.

Even before any overhead charge and with a paltry advance, that isn’t going to cover a house’s cost of publication. So there definitely are books today — lots of books — coming from major houses that are not recovering even their direct costs.

This is a fundamental change in big publisher economics from what it was two decades ago. While the potential wins have become exponentially bigger than they were in bygone days, the losses have become increasingly common. And while it is still an open question how well anybody can predict sales for a book that isn’t even written yet (which is the case for most books publishers acquire), there is a real cost to getting it wrong, even when the advance being paid is minimal.

So it is no longer irrational to cut the list and focus. Obviously, every book published is a lottery ticket for a big win, and the odds in a lottery are never good. But the world most general trade publishers have long believed in, where the big hits pay for the rest of the books, is really now the one they inhabit.

I am proud to be part of the organizing committee for Publishing People for Hillary. We’re staging a fundraiser for her in midtown Manhattan on Friday, September 30, at which Senator Cory Booker and Senator Amy Klobuchar will be the featured speakers. You can sign up to join us here. Contribution levels for the event range from $250 to $2500, with a special opportunity to meet the Senators at the higher levels.

And, having NOTHING to do with publishing, but for all baseball fans in the crowd, please check out this story about Yogi’s mitt and Campy’s mitt that you will not have seen anywhere else.


Barnes & Noble faces a challenge that has not been clearly spelled out

The sudden dismissal of Ron Boire, the CEO of Barnes & Noble, follows the latest financial reporting from Barnes & Noble and has inspired yet another round of analysis about their future. When the financial results were released last month, there was a certain amount of celebrating over the fact that store closings are down compared to prior years. But Publishers Lunch makes clear that store closings are primarily a function of lease cycles, not overall economics, and we have no guarantees that they won’t rise again this year and in the years to follow when a greater number of current leases expire.

With B&N the only remaining single large source of orders that puts most published titles into retail locations, publishers see an increasing tilt toward their biggest and most vexing (but also still their most profitable) trading partner, Amazon.

Although PW reported immediate dismay from publishers over Boire’s departure, there has been plenty of second-guessing and grumbling in the trade about B&N’s strategy and execution. Indeed, getting their dot com operation to work properly is a sine qua non that they haven’t gotten right in two decades of trying. But one thing Boire did was to bring in a seasoned digital executive to address the problem. This is presumably not rocket science — it isn’t even particularly new tech — so perhaps they will soon have their online offering firing on all cylinders.

The big new strategy they revealed, one they’re going to try in four locations this year, is what they call “concept stores” that include restaurants. And, although it was a bit unclear from their last call whether the store-size reduction they’re planning extends to these restaurant-including stores, they have said that the overall store footprint they’re planning will be 20-25 percent smaller than their current standard. These two facts both make the point that B&N is facing a reality which has become evident over the last decade, and which questions a strategy and organizational outlook that was formulated in another time. If this new challenge is properly understood, and I haven’t seen it clearly articulated anywhere, it would make the restaurant play more comprehensible. (Note: I have to admit that my own recent post, where I traced the history of bookstores in the US since World War II, failed, along with everybody else, to pinpoint the sea change that makes B&N’s historical perspective its enemy while trying to survive today.)

Here’s the change-that-matters in a nutshell. A “bookstore” doesn’t have the power it did 25 years ago to make customers visit a retail location. Selection, which means a vast number of titles, doesn’t in and of itself pull traffic sufficient to support a vast number of large locations anymore. This changes the core assumption on which the B&N big store buildout since the late 1980s was based.

This has been true before. One hundred years ago, the solution to the problem was the department store book department. Post-war prosperity grew shelf space for books, but the department stores remained the mainstays of book retail. The first big expansion of bookstores started in the 1960s when the malls were built out, which put Waldens and Daltons in every city and suburb in America. The mall substituted for the department store; it delivered the traffic. In fact, department stores “anchored” all the malls to be sure they’d get that traffic!

(Here are a couple of additional factoids to illustrate the importance of the department store channel in the mid-20th century. When Publishers Weekly did an article about the Doubleday Merchandising Plan in 1957, the stores they used as examples were the book departments of Wanamakers and Gimbels! When I came into the business fulltime in the 1970s, there were two significant “chain” accounts in Chicago: the bookstore chain Kroch’s & Brentano’s and the Marshall Field department stores.)

Bookstore customers came in many flavors, but they all benefited from a store with greater selection. My father, Leonard Shatzkin, first noticed that selection was a powerful magnet when he was overseeing the Brentano’s chain (no relation to K&B in Chicago) in the 1960s. Their Short Hills, New Jersey store was an underperformer. They doubled the number of titles in it and it became their best performer. Whether the bookstore customer knew what they wanted or just wanted to shop, the store with more titles gave them a better chance of a satisfying result.

Over time, that understanding was followed to a logical conclusion.

By the late 1980s, it appeared that standalone bookstores outside of malls could become “destinations” if their selections were large enough, and that created the superstore expansion: B&Ns and Borders. But when Amazon opened in 1995, only a few years later, its universal selection mooted the value of the big-selection store, especially for customers who knew what book they wanted before they shopped. Selection as a traffic magnet stopped working pretty quickly after that, although it was not immediately obvious to anybody.

I had some experience with B&N data that demonstrated pretty emphatically by 2002 that the action on slow-selling university press titles had shifted overwhelmingly to Amazon. (At that time, the late Steve Clark, the rep for Cambridge University Press, told me that Amazon was a bigger account for CUP than all other US retail combined.) It took the further hit of expanded Internet shopping at the consumer level, which grew with increased connectivity even before ebooks, to make what had been a great business obviously difficult. Then, as if to emphasize the point, we lost Borders…

What just doesn’t make it anymore, at least not nearly as frequently, is the “big bookstore”. Although there is no scientific way to prove this, most observers I’ve asked agree that the new indie stores popping up over the past few years tend to be smaller than the Borders and older indie stores they are replacing. We are seeing book retailing become a mix of pretty small book-and-literary-centric stores and an add-on in many places: museums, gift shops, toy stores. These have always existed but they will grow. And true “bookstore” shelf space will shrink, as has space for “general” books in mass merchants. The indie bookstore share will definitely continue to grow, but whether their growth will replace what is lost at B&N and the mass merchant chains is doubtful. Every publisher I’ve asked acknowledges significant indie store growth in the past couple of years, but they are also unanimous in saying the growth has not replaced the sales and shelf space lost when Borders closed.

Barnes & Noble is clearly rethinking its strategies, but this is one component that I have never seen clearly articulated. Back when I had my “aha!” moment about what was happening with the university press books, I suggested to one B&N executive that they had to figure out how to make the 25,000-title store work.

He said, “that’s not where we are. We’re thinking about the million-title store!” In other words, “we want to manage big retail locations”. This is thinking shaped by what we can now see is an outdated understanding of what the value of a big store is. So now they’re trying to sustain slightly-smaller big locations with things other than books. (Whether they plan to go as low as 25,000 titles in stores that used to stock four or five times that many is not clear. But they did say in their recent earnings call that the new concept stores would get 60 percent of their revenues from books, rather than the 67 percent they get now.) They have added non-book merchandise; now they’re thinking about restaurants. All of that is to increase traffic and to increase sales from the traffic they already get.

But there is another way to attack the challenge that “books alone” doesn’t work the way it used to. Barnes & Noble’s core competency is book supply to retail locations anywhere in the United States. Nobody, except Ingram, does this as well. (Although Amazon clearly is now planning to give it a try.)

Other retailers are suffering the same Internet sales erosion as booksellers, and a properly-curated selection of books can work for just about any store’s customer profile. Might Barnes & Noble complement its own stores by offering branded B&N Book Departments to other retailers? Let them bring in the traffic (although the books will undoubtedly bring in some more) and then B&N could manage those departments. (This is a variation of a tactic I suggested for Penguin Random House some years ago.) Let other retailers play the role the department stores and then the malls played for books in the past 100 years. Let’s not require the retail customer to come to a location strictly to shop for books.

The “trick” would be for B&N merchandisers to adjust their book selection to suit the specific customer base each store attracts. But is that a harder challenge than going into the restaurant business? And isn’t extending the B&N brand for books a more sensible tactic than trying to extend it to food? Or to create a new brand for food? And wouldn’t it be a good idea to get started on this tactic to expand book retail shelf space before Amazon, which keeps showing signs of wanting a retail presence, does?

This is not an easy market to just walk in and take over. There are already wholesalers providing books to retailers who don’t support a full-fledged buying effort for them. Those wholesalers are often getting more margin from the publishers than B&N is now, but that’s actually more of an opportunity than an obstacle. Presumably, a B&N-branded book section is worth something. (If it isn’t, that’s another problem.) Presumably, B&N has buying expertise and domain knowledge that would enable them to fine-tune a selection of books for each outlet’s customer base. And, presumably, B&N’s supply chain efficiency would be superior to anybody else’s in the industry, except Amazon’s and perhaps Ingram’s.

The big bookstore model is an anachronism. Just making it big doesn’t pull in the customers anymore. So a new strategy is definitely called for. B&N is going part of the way to one by recognizing that they need to do more to bring in customers and, at the same time, they can’t profitably shelve 100,000 titles across hundreds of stores. Taking their capabilities to where the customers already are would seem like an idea worth exploring.

It should be noted that the Indigo chain in Canada, under the leadership of owner Heather Reisman, has apparently successfully transitioned to a “culture” store where books are the key component of the offering. She has apparently found a product mix, or an approach to creating one, that is working for Indigo. Every large book retailer in the world is going to school on what Indigo has done. Because Amazon and online purchasing in general have not taken hold in Canada the way they have in the United States, we can’t jump to the conclusion that the Indigo formula could be successfully applied here. But it sure wouldn’t be a crazy idea for B&N to buy Indigo to gain the benefit of Reisman’s insights and expertise, assuming that a) Canadian law would permit U.S. ownership of such an important cultural asset and b) Reisman herself would sell and then work for somebody else. Two very big assumptions.

It is also worth noting that the Pocket Shop chain, the small-bookstore concept chain that we’ve written about previously, is going to start opening stores in the UK.


On Amazon stores and publishers accepting standardization; two unrelated commentaries

When the “Amazon-opening-400-stores” rumor landed a week ago, many people were gobsmacked. It took me a minute to get past that, which also required getting past my firm conviction when they opened the Seattle store last year that it was an information-gathering exercise, not the opening move of a bigger retail play.

But, when you think it through, it not only doesn’t seem crazy that Amazon would open stores, it seems like an obviously compelling move.

Other retailers that started strictly online have opened retail locations, most notably the eyeglasses shop Warby Parker. (This New Yorker story mentions that. It also has an interesting disclaimer at the end because “Amazon Studios is producing a New Yorker series in partnership with Condé Nast Entertainment”. Wow.)

“Omni-channel”, which is really a new-fangled fancy term for selling both online and through a brick store, is the buzzword du jour of retailing. Actually, the online piece of that is the harder part and Amazon already had that licked.

Barnes & Noble “beat” Borders largely because they had a network of distribution centers that made stocking their retail locations extremely efficient. Amazon’s network of distribution centers is complicated because it isn’t just books, but they have many times the number of points of inventory storage as B&N. In fact, they have many times the number of storage points as B&N and Ingram and Baker & Taylor combined!

Amazon has tons of information that nobody else does that would inform their stocking decisions if they harnessed it. They know where searches are coming from for particular book titles or for generic needs, both geographically and psychographically. And they probably can detect early lifts for particular books faster than anybody else, simply because they have more data.

It is possible that if B&N and the indies had responded differently to Amazon Publishing, agreeing to stock the books rather than boycotting them, this could have played out differently. (No stronger argument could be made for the efficacy of that strategy than this post arguing that stores should stock Amazon titles to punish them because the returns would make them unprofitable! You can’t beat logic like that.) If the stores had stocked their titles, Amazon might have chosen to use their distribution center advantage to start wholesaling, rather than to support their own retail locations (as they appear to be doing).

But the determination of the brick retailers to boycott Amazon was spelled out loudly and clearly. So opening Amazon retail locations — as it increasingly appears they have every intention to do — has two strategic payoffs for them. One is that it gives them access to at least some brick-and-mortar retail locations for their publishing output, which otherwise they can only sell online. And the other is that it capitalizes on their distribution centers, delivering additional sales and margin for investments already made.

In a recent post, I suggested one specific way Amazon could get very disruptive if they had more than a handful of stores. There’s another. They are a tech company that likes to have computers make decisions that in other companies and in other times have been made by humans. I suspect they’ll figure out pretty fast that they will want to have some sort of vendor-managed inventory system to streamline and optimize the stocking decisions for what will almost certainly be a growing network of retail locations. (The part of a trade book person’s DNA that is most out-of-step with the digital age is that we like to make decisions case-by-case, rather than living with decisions made by rules we create. That’s the key to the second half of this post.)
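A rules-driven stocking decision of the kind described above can be surprisingly simple at its core. Here is a hedged sketch; the function name, parameters, and thresholds are all invented for illustration, not anything Amazon has disclosed:

```python
# A toy "rules, not case-by-case" restocking decision: the kind of
# simple policy a vendor-managed inventory system might automate.
# All names and thresholds are hypothetical.

def restock_order(on_hand, weekly_sales, weeks_of_cover=4, min_display=2):
    """Order enough copies to hold `weeks_of_cover` weeks of demand,
    but never let a stocked title fall below a minimum display level."""
    target = max(weekly_sales * weeks_of_cover, min_display)
    return max(target - on_hand, 0)

print(restock_order(on_hand=3, weekly_sales=2))   # demand-driven: orders 5
print(restock_order(on_hand=0, weekly_sales=0))   # slow title: orders 2
print(restock_order(on_hand=10, weekly_sales=1))  # overstocked: orders 0
```

The point is not the particular rule but that, once written down, it applies itself identically across thousands of titles and hundreds of stores, with humans tuning the rule rather than making each decision.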

Sophisticated but automated stocking and restocking decisions are not part of the toolkit at B&N or of any other retailer or wholesaler we know. Could that be the next battleground that Amazon retail stores create? That would certainly be disruptive, but at least in this corner of the world it would not be a surprise.


One mantra of the book publishing world is “every book is different”. We sometimes refer to that fact as reflecting the “granularity” of the book business compared to other kinds of consumer goods businesses or other media. Even if you think in terms of categories, there are just more of them in publishing than there are for other products or media.

Perhaps, then, it isn’t surprising that publishers are often inclined to encourage that uniqueness beyond where it is required. And, frankly, it is only required for editorial development and for targeting the marketing. The objective at every place in the value chain in between should be to standardize and, as much as possible, to treat many different books the same. That’s not a creative imperative; it is a commercial imperative.

My father first experienced the tension that this insight can create at Doubleday in the 1950s when he persuaded the company to standardize the trim sizes of their books for maximum printing efficiency. That didn’t require radical changes. It simply meant that books would be an eighth- or quarter-inch longer or shorter, wider or narrower. These were differences that were really not perceptible to most people, yet it was a real internal corporate battle to wrest control from designers who believed “every book is different” and that this mystery (or cookbook) had to be published as a 6 by 9 inch book while that one had to be 6-1/2 inches by 9.

In fact, the trivial differences in trim size were not important at all to the books’ chances of success. There were other decisions — the specific paper or type face among them — that also had no discernible commercial impact on each individual book but were, nonetheless, intentionally made book-by-book as though they did. In many houses, and (admittedly I’m saying this without any supporting data) probably more in smaller houses than larger ones, they still are. And that’s true even though whether the paper is 55 pound or 60 pound or the type face is Times Roman or Baskerville can’t be shown to have any impact at all on a book’s sales.

Now the University of North Carolina Press has been funded by the Mellon Foundation to put Dad’s theory to use in the university press and academic publishing world. They’ve created a service offering through their Longleaf distribution platform that takes the design, pre-press, production, and distribution burden off the hands of university press and academic publishers so they can focus on what makes them distinctive: the books they choose to publish and the skill with which they edit them.

This fits an industry reality I identified a couple of years ago that I called “unbundling”.

On one hand, UNC Press Director John Sherer reports real success, expecting to grow that part of their business by 50 percent in the coming year. But he also reports resistance by some presses who believe that making these design and production decisions adds so significantly to the “quality” of their output that they’re comfortable losing money doing it.

My own hunch is that many directors just don’t have the heart (or courage) to get rid of staff that, with all the best intentions and capabilities but without the advantages of technology and scale, provide them with no better than average quality at a much higher cost than they need to spend. This was a battle for Leonard Shatzkin when he fought it at Doubleday in the early 1950s and apparently it is still being fought hard six decades later.


An obituary last week reminded me of some family history we are proud of

Normally what is written here is about publishing’s present with a look to its future. An obituary notice last week recalled some personal family history about publishing’s past and shed some light on how much has changed in the past six decades. It’s publishing history from a highly personal point of view, but it seems an appropriate story with which to end the year.

Last week there were several reports, including from The New York Times and from Publishers Weekly, of the death of Charles F. Harris at the age of 81. Harris had been the founder of Amistad Press, now an imprint of HarperCollins, and was for a time the director of Howard University Press. He was very unusual, if not unique, being an African-American executive in the world of trade and university press publishing. It was noted that he began his career at Doubleday in 1956.

This brought back to me that my father, Leonard Shatzkin, had hired Harris at Doubleday back then. The obit triggered a partial memory: that Dad had hired two black men at Doubleday in the 1950s and was then told, by somebody in authority: “that’s enough, Len”. Dad died in May of 2002 and Mom, Eleanor Shatzkin, in January of 2007, so I pinged my sisters Karen and Nance to help me piece together more of the story. Turns out there was more in my memory that my sisters helped me to dig out (but memory turned out to be only a secondary authority).

I had met the second of the two hires in the 1970s; I believe that at that time he owned a printer of book jackets. I couldn’t remember his name — which was Ed Simmons — and neither could the one veteran of Doubleday from that era with whom I was able to check, until I dug it up in a way you’ll be told in the postscript. Nance and I had some professional interaction with Harris in his Howard University Press days.

The family account had always been that, after he got the word from higher-ups to stop his personal integration movement, our Mom had to wrestle with him to cooperate so he wouldn’t lose his job. That shook loose another slightly-off memory, which was that Dad corrected that account sometime before he died. I recalled that he had defied both Mom and his bosses and did offer a third black man a position at Doubleday. “How come you didn’t lose your job, Dad?” The answer? Because the person to whom the job was offered turned it down!

Karen recalled that Dad had gotten help from the Urban League to find worthy candidates who would pass muster in as genteel (and in this case gentile as well, Len very much excepted) an environment as this major New York publisher in the 1950s.

Reflecting on this made me think a bit harder about Dad’s career. He left Doubleday in 1961 for Crowell-Collier Macmillan, a company that would have been presumably more hospitable to him since it was headed by a Jew. (Jews were vanishingly rare at Doubleday in the 1950s.) But Dad found reasons to object ethically to the Macmillan management too, and he resigned from there in 1963. He went on to McGraw-Hill in a much less exalted (and substantially lower-paid) position and was there for about five years before starting his own businesses: first a book production service and then a trade book distributor. During that same period, he quit a lucrative consulting gig in a company originally started by a mentor of his because he objected to the Board decisions of the founder’s children.

In other words, Dad wasn’t real good at working for other people.

He apparently passed that along to us. Sister Karen runs her own law firm and sister Nance runs her own consultancy doing data management in the health care business. But at least they have worked for other people, at least for a while. Since I left my Dad’s employ in 1978, I never have. Fortunately, the independent temperament we inherited from Dad and which was nurtured by both our parents was augmented by training in basic business skills that we got mostly from our mother. (She started as a physicist but then became a management consultant.)

We now live in times where we would not be told by any employer that would conceivably have us that we couldn’t hire a person of any particular color. There are other aspects of corporate and organizational life that would prevent a child of our parents from being a happy component of somebody’s larger scheme, but that particular problem is a relic of history. I’m so glad that our Dad’s courage in the 1950s gave Charlie Harris an opportunity that, with Harris’s own talent and application, turned into a remarkable career.

And, as I was finishing this post, I searched “Urban League” within my blog and found out I had told the story once before, back in April, 2009. Turns out I remembered back then the name of the second hire, Ed Simmons, so I was able to put it in here. And I also got the story straight about how I learned that Dad had hired a third person, which I left mangled in this recollection. AND I got at least part of the straight scoop on exactly how the word came down from Doubleday. This post adds some insight that the first didn’t, but I would also refer you to the original. And let this one and the gaps between them stand as testimony to how much we can forget in nearly seven years.


It is being proven that smaller bookstores can work commercially

Sometimes it takes a decade or more for an insight to be validated, but it is always nice when it happens.

Around the turn of the century, I was developing a business called “Supply Chain Tracker”, which had a nice client base for a few years. What we did was take the data feeds — Excel spreadsheets — provided by publishers’ major accounts and find the nuggets of insight within them that enabled better inventory decisions.

This followed the logic of one of Shatzkin’s Laws, which in this case is “every spreadsheet is one calculation short of useful”. We added some calculations to make meaningful metrics out of raw data. For B&N’s spreadsheets reporting inventory and sales activity to publishers, two of these were calculating the “percentage of store inventory sold” from the “on hand” and “sales this week” columns and “the percentage of total stock in the warehouse” derived from “on-hand in the stores” and “on-hand in the warehouse”.

My first client for this work was Sterling, in the final year they were independently owned before being bought by B&N, which still owns them. When we showed our first prototype of a Supply Chain Tracker report to Sterling, we sorted by “the percentage of total stock in the warehouse” and two books popped to the top: 5,000 copies with 100 percent of them in the warehouse! When Sterling’s then-Sales VP (later CEO) Charles Nurnberg saw that, he said, “those books have been there since October!” This analysis was taking place the following February.

It turns out that B&N at the time had no systematic check of this metric in their workflow. If a B&N buyer bought five thousand copies and didn’t order a “store distribution”, the books would go into the warehouse and just sit there. It was a hole in their system. And since publishers tended to eyeball the spreadsheets in order of “sales”, looking for books that needed to be replenished, they just never caught this.

When Sterling showed the problem to the responsible execs at B&N, it bolstered the view of one of them that having the publishers intelligently reviewing inventory was useful support for the chain’s buying activity. They became supporters of our Supply Chain Tracker reporting (which we then extended to other accounts: Borders, Books-A-Million, Amazon, Ingram, and Baker & Taylor). But Barnes & Noble was everybody’s biggest account at the time and they offered the most robust reporting, so they were the primary focus of our work.

Let’s recall that the early years of this century were still the years of superstore expansion. B&N and Borders were proudly featuring stores that had 120,000 titles or more. It was precisely because they stocked so many titles and that the great majority of them turned very slowly that they wanted the additional publisher help in inventory tracking, particularly further down the sales ranks. And no publishers seemed more logical candidates for that help than university presses. B&N wanted to stock them more heavily, but their books were predominantly in the slow-turning majority. Distinguishing the books that would sell a copy or two in a store versus the ones that wouldn’t demanded the deep title knowledge of the publisher combined with the insight of well-structured reporting. Our work seemed to fit, so B&N subsidized our relatively expensive engagements providing our reports and tutorials on how to use them to university presses.

What we found as we started analyzing, though, was disappointing and initially surprising to all of us. But, as we thought about it, it was intuitively logical.

The university press titles had effectively stopped selling, even in B&N stores that were near university campuses. Why? Those sales had all moved to Amazon, which, at the time, was barely more than five years old. Think about it…

The university professor would hear about a book. S/he’d go down to the local bookshop — could be a B&N or another store, didn’t matter — and look for it. It would almost always not be there. So s/he’d “special order” it and wait. It didn’t take long for this to become the expectation, so ordering online became a very sensible default behavior. By 2002 and 2003, when we were doing this work, the battle to sell the obscure book through a brick-and-mortar store, even to an audience that knew it existed and wanted it, was already lost. None of us anticipated that when we started, but once you thought about it, it was intuitive.

Cambridge University Press at the time had a sales representative (since deceased) named Steve Clark. He was one of my most engaged B&N-subsidized clients. As we were doing this work and analysis, Clark told me that Amazon was already a bigger account for CUP than all other US retail outlets combined! That was a “wow”. But it underscored the degree to which Amazon had captured market share from the stores on hard-to-find books.

B&N still operated smaller stores that had been part of the B. Dalton chain, and Borders had a similar chain called Waldenbooks. While B&N and Borders were building out the 100,000-plus title stores, their mostly-mall chains were 20,000- and 30,000-title stores, which they were in the process of shutting down as leases expired.

With full knowledge of the strategy that governed their activity in those days, I said to my principal contact at B&N, “you guys should be figuring out how to use your infrastructure to make the twenty-thousand title store work”. He said to me, “Mike, we’re thinking about the million title store!” In other words, there was no appetite to take on board what we had all just learned to make a big change to the overall strategy. They had fully absorbed and couldn’t rapidly unlearn the lesson first discovered by my father, Leonard Shatzkin, when he was running Brentano’s in the 1960s: a big selection of books is a huge magnet for customers.

Unfortunately, Amazon had already changed that reality in a few short years after their inception. The huge selection was not as powerful a magnet as the online marketplace when the customer knew exactly what they wanted, particularly if it wasn’t a bestseller.

Now, flash forward to the present day. I’ve been fishing for lessons from retailers around the world that might constitute useful insight for the Digital Book World audience. My friend Lorraine Shanley of Market Partners suggested I talk to Anna Borne Minberger, the CEO of the Pocket Shop chain of stores, owned by the Swedish publisher, Bonniers. I got to meet Minberger for a conversation at the Frankfurt Book Fair in the last fortnight.

And, lo and behold, Pocket Shop has taken the suggestion I made to Barnes & Noble well over a decade ago and made it work at an extreme I didn’t imagine. Their tiny bookstores stock only about TWO thousand titles, but they are a thriving chain in Sweden and Finland now expanding into Germany. Their formula is a very small title selection placed in very-high-traffic locations (of particular interest here in New York City where both our main railroad stations are losing somewhat larger bookstores) with highly knowledgeable and helpful staff. I didn’t get into the details of buying, inventory management, and centralized infrastructure support in our Frankfurt conversation.

But, near as I can tell, Barnes & Noble still needs a solution to grow their book business; the strategy today only seems to be about how to profitably manage shrinking it. Particularly if Pocket Shop continues to work in Germany, a market (unlike Sweden and Finland) where online buying is strong and Amazon is a real presence, one would think the formula would be even more effective supported by B&N’s infrastructure and branding in the United States. Of course, making a strategic shift of this nature is probably a heavier lift for B&N now than it would have been when I first suggested it many years ago.

But I don’t discern any other strategy that leads to growth in what B&N is doing now. If they don’t try copying Pocket Shop’s strategy in the US, maybe somebody else will. One could execute on this leaning on Ingram’s infrastructure rather than creating one’s own supply chain. Who knows? Maybe Pocket Shop itself would like to give it a try.


Market research used to be a silly idea for publishers but it is not anymore

When my father, Leonard Shatzkin, was appointed Director of Research at Doubleday in the 1950s, it was a deliberate attempt to give him license to use analytical techniques to affect how business was done across the company. He had started out heading up manufacturing, with a real focus on streamlining the number of trim sizes the company manufactured. (They were way ahead of their time doing that. Pete McCarthy has told me about the heroic work Andrew Weber and his colleagues did at Random House doing the same thing in the last decade, about a half-century later!)

Len Shatzkin soon thereafter was using statistical techniques to predict pre-publication orders from the earliest ones received (there were far fewer major accounts back then, so the pre-pub orders lacked the few sizable pieces that comprise a huge chunk of the total today), enabling timely and efficient first printings. Later he took a statistically-based approach to figure out how many sales reps Doubleday needed and how to organize their territories. When the Dolphin Books paperback imprint was created (a commercial imprint to join the more academic Anchor Books line created a few years before by Jason Epstein), research and analytical techniques were used to decide which public domain classics to do first.
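The early-order projection technique can be sketched roughly like this (a hedged illustration of the general idea only; the fractions and the simple scaling method are my assumptions, not a record of the actual Doubleday math):

```python
# A hedged illustration of projecting a final advance total from early
# orders: if history says a publisher typically has fraction f of the
# final total in hand d days after solicitation, the orders so far can
# be scaled up by 1/f. The fractions below are invented for the example.

HISTORICAL_FRACTION_IN_HAND = {7: 0.15, 14: 0.35, 21: 0.55, 28: 0.70}

def project_advance_total(orders_so_far, days_since_solicitation):
    """Scale the orders received so far by the fraction typically in hand."""
    f = HISTORICAL_FRACTION_IN_HAND[days_since_solicitation]
    return round(orders_so_far / f)

# 2,100 copies ordered two weeks in suggests a final laydown near 6,000.
print(project_advance_total(2100, 14))  # 6000
```

The point of the parenthetical above is that this kind of estimate works best when the order flow is made up of many small pieces; a few huge accounts arriving early or late would swamp the historical fractions.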

In the many years I’ve been around the book business, I have often heard experts from other businesses decry the lack of “market research” done by publishers. In any other business (recorded music might be an exception), market research is a prerequisite to launching any new product. Movies use it. Hotel chains use it. Clothing manufacturers use it. Software companies use it. Online “content producers” use it. Sports teams use it. Politicians use it. It is just considered common sense in most businesses to acquire some basic understandings of the market you’re launching a new product into before you craft messages, select media, and target consumers.

In the past, I’ve defended the lack of consumer market research by publishers. For one thing, publishers (until very recently) didn’t “touch” consumers. Their interaction was with intermediaries who did. The focus for publishers was on the trade, not the reader, and the trade was “known” without research. To the extent that research was necessary, it was accomplished by phone calls to key players in the trade. The national chain buyer’s opinion of the market was the market research that mattered. If the publisher “knew different”, it wouldn’t do them any good if the gatekeeper wouldn’t allow the publisher’s books on his shelves.

And there were other structural impediments to applying what worked for other consumer items. Publishers did lots of books; the market for each one was both small and largely unique. The top line revenue expected for most titles was tiny by other consumer good standards. The idea of funding any meaningful market research for the output of a general trade publisher was both inappropriate and impractical.

But over the past 20 years, because a very large percentage of the book business’s transaction base has moved online and an even larger part of book awareness has as well, consumers have also been leaving lots of bread crumbs in plain digital sight. So two things have shifted which really change everything.

Publishers are addressing the reader directly through publisher, book, and author websites; through social media, advertising, and direct marketing; and through their copy — whether or not they explicitly acknowledge that fact — because the publisher’s copy ends up being returned as a search result to many relevant queries.

The audience research itself is now much more accessible than it ever was: cheaper and easier to do in ways that are cost-effective and really could not be imagined as recently as ten years ago.

We’ve reached a point where no marketing copy for any book should be written without audience research having been done first. But no publisher is equipped to do that across the board. They don’t have the bodies; they don’t have the skill sets; and a process enabling that research doesn’t fit the current workflow and toolset.

So when the criticism that publishers should be doing “market research” was offered before 2005, making that observation itself demonstrated a failure to understand the book business. But that changed in the past 10 years. Not recognizing the value of research now demonstrates a failure to understand how much the book business has changed.

What publishers need to do is to recognize “research” as a necessary activity, which, like Len Shatzkin’s work at Doubleday in the 1950s, needs to cut across functional lines. Publishers are moving in that direction, but mostly in a piecemeal way. One head of house pointed us to the fact that they’ve hired a data scientist for their team. We’ve seen new appointments with the word “audience” in their title or job description, as well as “consumer”, “data”, “analytics”, and “insight”, but “research” — while it does sometimes appear — is too often notable by its absence in the explicit description of their role.

Audience-centric research calls for a combination of an objective data-driven approach, the ability to use a large number of listening and analytical tools, and a methodology that examines keywords, terms, and topics looking to achieve particular goals or objectives. A similar frame of mind is required to perform other research tasks needed today: understanding the effect of price changes, or how the markets online and for brick stores vary by title or genre, or what impact digital promotion has on store sales.

The instincts to hire data scientists and to make the “audience” somebody’s job are good ones, but without changing the existing workflows around descriptive copy creation, they are practices that might create more distraction than enlightenment. Publishers need to develop the capability to understand what questions need to be asked and what insights need to be gained to craft copy that will accomplish specific goals with identified audiences.

Perhaps they are moving faster on this in the UK than we are in the US. One high-ranking executive in a major house who has worked on both sides of the Atlantic told me a story of research the Audience Insight group at his house delivered that had significant impact. They wanted to sign a “celebrity” author. Research showed that this author’s fan base was not as large as they anticipated, but that the fans placed a high degree of belief and faith in the author’s opinions about food. A food-oriented book by that author was the approach taken, and a bestseller was the result. This is a great example of how useful research can be, but even this particular big company doesn’t have the same infrastructure to do this work on the west side of the Atlantic.

What most distinguishes our approach at Logical Marketing from other digital marketing agencies and from most publishers’ own efforts is our emphasis on research. We’ve seen clearly that it helps target markets more effectively, even if you don’t write the book to specs suggested by the research. But it also helps our clients skip the pain and cost of strategic assumptions or tactics that are highly unlikely to pay off: trying to compete on search terms a book could never rank high for; chasing a YouTube or Pinterest audience that might be large but will be hard or impossible to convert to book sales; or trying to capture sales directly from prospects who would be much more likely to convert through Amazon.

With the very high failure rate and enormous staff time suck that digital marketing campaigns are known for, research that avoids predictable failures pays for itself quickly in effort not wasted.

McCarthy tells me from his in-house experience that marketers — especially less-senior marketers — often know they’re working on a campaign that in all probability won’t work. We believe publishers often go through with these to show the agent and author — and sometimes their own editor — that they’re “trying” and that they are “supporting the book”. But good research is also something that can be shown to authors and agents to impress them, particularly in the months and years still left when not everybody will be doing it (and the further months and years when not everybody will be doing it well). Good research will avoid inglorious failures as well as point to more likely paths to success.

Structural changes can happen in organic ways. Len Shatzkin became Director of Research at Doubleday by getting the budget to hire a mathematician (the term “data scientist” didn’t exist in 1953), using statistical knowledge to solve one problem (predicting advance sales from a small percentage of the orders), and then building on the company’s increasing recognition that analytical research “worked”.

If the research function were acknowledged at every publisher, it would be usefully employed to inform acquisition decisions (whether to bring in a title and how much it is worth), list development, pricing, backlist marketing strategies, physical book laydowns to retailers, geographical emphasis in marketing, and the timing of paperback edition release.

Perhaps the Director of Research — with a department that serves the whole publishing company — is an idea whose time has come again.

But, in the meantime, Logical Marketing can help.

Remember, you can help us choose the topics for Digital Book World 2016 by responding to our survey at this link.


Seven key insights about VMI for books and why it is becoming a current concern

Vendor-managed inventory (VMI) is a supply paradigm for retailers by which the distributor makes the individual stocking decisions rather than having them determined by “orders” from an account. The most significant application of it for books was in the mass-market paperback business in its early days, when most of the books went through the magazine wholesalers to newsstands, drug stores, and other merchants that sold magazines. The way it worked, originally, was that mass-market publishers “allocated” copies to each of several hundred “independent distributors” (also known as I.D. wholesalers), who in turn allocated them to the accounts.

Nobody thought of this as “vendor-managed inventory”. It was actually described as “forced distribution”. And since there was no ongoing restocking component built into the thinking, that was the right way to frame it.

The net result was that copies of a title could appear in tens of thousands of individual locations without a publisher needing to show up at, or even ship to, each and every one.

To make this system functional at the beginning, the books, like magazines, had a predictable monthly cycle through the system. The copies that didn’t sell in their allotted time were destroyed, with covers returned to the publisher for credit.

Over time, the system became inefficient (the details of which are a story for another day, but the long story short is that publishers couldn’t resist the temptation to overload the system with more titles and copies than it could handle) and mass-market publishing evolved into something quite different which today, aside from mostly sticking to standard rack-sized books, works nothing like it did at the beginning.

My father, Leonard Shatzkin, introduced a much more sophisticated version of VMI for bookstores at Doubleday in 1957 called the Doubleday Merchandising Plan. In the Doubleday version, reps left the store with a count of the books on-hand rather than a purchase order. The store had agreed in advance to let Doubleday use that inventory count to calculate sales and determine what should then be shipped in. In 18 months, there were 800 stores on the Plan, Doubleday’s backlist sales had quadrupled and the cost of sales had quartered. VMI was much more efficient and productive — for Doubleday and for the stores — than the “normal” way of stocking was. That “normal” way — the store issues orders and the publisher then ships them — was described as “distribution by negotiation” by my father in his seminal book, “In Cold Type”, and it is still the way most books find their way to most retail shelves.

After my Dad left Doubleday in 1960, successor sales executives — who, frankly, didn’t really understand the power and value of what Dad had left them — allowed the system to atrophy. This started in a time-honored way, with reps pleading that some stores in their territory would rather just write their own backlist orders. Management gave undue deference to the reps who managed those accounts and allowed exceptions. The exceptions, over time, became more prevalent than the real VMI, and within a decade or so the enormous advantage of having hundreds of stores so efficiently stocked with backlist was gone.

And so, for the most part, VMI was gone from the book business by the mid-1970s. Since then, there have been substantial improvements in the supply chain: PCs in stores that can manage vast amounts of data; powerful service offerings from the wholesalers (primarily Ingram and Baker & Taylor, but others too); information through services like Above the Treeline; and consolidation of the trade business at both ends, so that the lion’s share of a store’s supply comes from a handful of major publishers and distributors (compared to my Dad’s day) and lots of the books go to a relatively small number of accounts. All of these have combined to make efficient inventory management for books at retail at least appear not to need the advantages of VMI the way it did 60 years ago.

And since so many bookstores really like to make the book-by-book stocking decisions, or at least to control them through the systems they have invested in and the title-specific knowledge they work hard to develop, there has been little motivation for publishers or wholesalers to invest in developing the capability to execute VMI.

Until recently. Now two factors are changing that.

One is that non-bookstore distribution of books is growing. And non-bookstores don’t have the investments in book-specific inventory management and knowledge that bookstores do, let alone the emotional investment that makes them want to decide what the books are. Sometimes they simply can’t do it: they don’t have the bandwidth or expertise to buy books.

And the other is that the two largest book chains, Barnes & Noble and Books-A-Million, are seeing virtue in transferring some of the stocking decisions to suppliers. B&N, at least, has been actively encouraging publishers to think about VMI for several years. These discussions have reportedly revolved around a concept similar to one the late Borders chain was trying a decade or more ago: finding “category captains” that know a subject well enough to relieve the chain of the need for broad knowledge of all the books that fall under that rubric.

This is compelling. Finding that you are managing business that could be made more efficient with a system to help you, while at the same time some of your biggest accounts are asking for services that could benefit from the same automation, is a far more persuasive goad to pursue an idea than the more abstract notion that you could create a beneficial paradigm shift.

As a result, many publishing sales departments today are beginning to grapple with defining VMI, thinking about how to apply it, and confronting the questions around how it affects staffing, sales call patterns, and commercial terms. This interest is likely to grow. A well-designed VMI system for books (and buying one off-the-shelf that was not specifically designed for books is not a viable solution) will have applications and create opportunities all over the world. Since delivering books globally is an increasingly prevalent framework for business thinking, the case to invest in this capability gets easier to make in many places with each passing day.

VMI is a big subject and there’s a lot to know and think through about it. I’ve had the unusual — probably unique — opportunity to contemplate it with all its nuances for 50 years, thanks to my Dad’s visionary insight into the topic and a father-son relationship that included a lot of shop talk from my very early years. So here’s my starter list of conceptual points that I hope would be helpful to any publisher or retailer thinking about an approach to VMI.

1. Efficient and commercially viable VMI requires managing with rules, not with cases. Some of the current candidates to develop a VMI system have been drawn into it by servicing planograms or spinner racks in non-book retailers. These restocking challenges are simpler than stocking a store because the title range is usually predetermined and confined and the restocking quantity is usually just one-for-one replenishment. We have found that even in those simple cases, the temptation to make individual decisions — swapping out titles or increasing or decreasing quantities in certain stores based on rates of movement — is hard to resist and adds complications that can rapidly overwhelm manual efforts to manage it.

2. VMI is based on data-informed shipments and returns. It must include returns, markdowns, or disposals to clear inventory. Putting books in quickly and efficiently to replace sold books is, indeed, the crux of VMI. But that alone is “necessary but not sufficient”. Most titles do not sell a single copy in most stores to which they are introduced. (This fact will surprise many people, but it is mathematically unavoidable and confirmed through data I have gotten from friends with retail data to query.) And many books will sell for a while and then stop, leaving copies behind. Any inventory management depending on VMI still requires periodic purging of excess inventory. That is, the publisher or distributor determining replenishment must also, from time to time, identify and deal with excess stock.
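A quick illustration of why the zero-sale outcome is close to arithmetic necessity (the numbers are invented, and the Poisson assumption is mine, used only to show the shape of the problem, not drawn from the actual retail data mentioned above):

```python
# Why "most titles do not sell a single copy in most stores": even a
# title with respectable total sales spreads thinly across many outlets.
# The Poisson model and all the numbers here are illustrative.
import math

def share_of_stores_selling_zero(total_copies_sold, store_count):
    """Expected share of stores selling zero copies, if sales are Poisson."""
    mean_per_store = total_copies_sold / store_count
    return math.exp(-mean_per_store)

# A title that sells 400 copies across 1,000 stocking stores still sells
# nothing at all in roughly two-thirds of them.
print(round(share_of_stores_selling_zero(400, 1000), 2))  # 0.67
```

However the per-store averages are modeled, low means guarantee many zeroes, which is why the purge-and-return side of VMI can never be skipped.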

3. VMI sensibly combines with consignment and vendor-paid freight. The convention that books are invoiced to the account when they are shipped and that the store pays the shipping cost of returns (and frequently on incoming shipments as well) makes sense when the store holds the order book and decides what titles and quantities are coming in. But if the store isn’t deciding the titles and quantities, it obviously shouldn’t be held accountable for freight costs on returns; that would be license for the publisher or distributor to take unwise risks. The same is really true for the carrying cost of the inventory between receipt and sale. If the store’s deciding, it isn’t crazy for that to be their lookout. But if the publisher or distributor is deciding, then the inventory risk should be transferred to them. The simplest way to do that is for the commercial arrangement to shift so that the publisher offers consignment and freight paid both ways. The store should pay promptly — probably weekly — when the books are sold. (Publishers: before you get antsy about what all this means to your margins, read the post to the end.)

Aside from being fairer, commercially more logical, and an attractive proposition that should entice the store rather than a risky one that will discourage participation, this arrangement sets up a much more sensible framework for other discussions that need to take place. With publisher prices marked on all the books, the retailer knows there is a guaranteed margin on every sale for the store to capture (or to offer as discounts to customers). And because the publisher is clearly taking all the inventory risk, it also makes it clear that the account must take responsibility for inventory “shrink” (books that disappear from the shelves without going through the cash register).

Obviously, shrink is entirely the retailer’s problem in a sale-and-return arrangement; whatever they can’t return they will have paid for. But it is also obvious that retailers in consignment arrangements try to evade that responsibility. Publishers can’t allow a situation where the retailer has no incentive to make sure every book leaving the store goes through the sales scan first.

4. Frequent replenishment is a critical component of successful VMI. No system can avoid the reality that predicting book sales on a per-title-per-outlet basis is impossible to do with a high degree of accuracy. The best antidote to this challenge is to ship frequently, which allows lower quantities without lost sales because new copies replace sold copies with little delay. Vendor-paid freight is a real restraint, because per-unit freight costs go down as shipment size goes up, but it should be the only limitation on shipment frequency, assuming the sales information is reported electronically on a daily basis, as it should be. The publisher or distributor should always be itching to ship as soon as an order large enough to provide tolerable picking and freight costs can be assembled. The retailer needs to be encouraged, or helped, to enable restocking as quickly and as frequently as cost-efficient shipments will allow.
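A rough sketch of why shipment frequency matters so much (the simple mean-plus-buffer model and every number here are my assumptions for illustration, not a real forecasting method):

```python
# The stock a store must carry to avoid lost sales grows with the
# replenishment cycle: mean demand over the cycle plus a safety buffer
# that scales with the square root of that mean. Illustrative only.
import math

def copies_needed(weekly_demand, weeks_between_shipments, safety_factor=1.65):
    """Mean demand over the cycle plus a buffer scaled to its square root."""
    mean = weekly_demand * weeks_between_shipments
    return math.ceil(mean + safety_factor * math.sqrt(mean))

# The same one-copy-a-week title needs far more shelf stock on a
# six-week cycle than on a weekly one.
print(copies_needed(1.0, 1))  # 3
print(copies_needed(1.0, 6))  # 11
```

Whatever the exact model, shortening the cycle lowers the copies that must sit on the shelf per title, which is the arithmetic behind shipping as often as freight costs allow.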

5. If a store has no costs of inventory — either investment or freight — its only cost is the real estate the goods require. GMROII — gross margin return on inventory investment — is the best measurement of profitability for a retailer. With VMI, vendor-paid freight, and consignment, it is infinite. Therefore, profitable margins can be achieved with considerably less than the 40 to 50 percent discounts that have prevailed historically. How that will play out in negotiations is a case-by-case problem, but publishers should really understand GMROII and its implications for retail profitability so they fully comprehend what enormous financial advantages this new way of framing the commercial relationship gives the retailer.
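The GMROII arithmetic can be made concrete with invented numbers (this is an illustration of the standard definition, not anyone’s actual P&L):

```python
# GMROII = gross margin dollars / average inventory investment.
# Under consignment with vendor-paid freight, the store has no money
# tied up in stock, so the ratio is unbounded -- the arithmetic behind
# the claim that stores can be profitable at much lower discounts.
# All numbers are invented for illustration.

def gmroii(annual_gross_margin, avg_inventory_investment):
    if avg_inventory_investment == 0:
        return float("inf")  # consignment: zero investment in inventory
    return annual_gross_margin / avg_inventory_investment

# Conventional terms: the store owns the stock it hasn't yet sold.
print(gmroii(40_000, 25_000))  # 1.6

# Consignment plus vendor-paid freight: same margin, zero investment.
print(gmroii(40_000, 0))       # inf
```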

(The shift is not without its challenges for publishers to manage but what at first appears to be the biggest one — the delay in “recognizing” sales for the balance sheet — is actually much smaller than it might first appear. And that’s also a subject for another day.)

6. Actually, the store also saves the cost of buying, which is very expensive for books. The most important advantage VMI gives a publisher is removing the need for a buyer to get their books onto somebody’s shelves. The publisher with VMI overcomes what has been the insuperable barrier blocking them from many retail establishments: the store can’t bear the expense of the expertise and knowledge required to do the buying. It is harder to sell that advantage to existing book retailers who have invested in systems to enable buyers, even if some buyer time can be saved through the publisher’s or distributor’s efforts and expertise. But a non-book retailer looking for complementary merchandise that might also be a traffic builder will appreciate largely cost-free inventory that adds margin and will see profitability at margins considerably lower than the discounts publishers must provide today.

7. Within reasonable limits, the publisher or distributor should be happy to honor input from the retailer about books they want to carry. It is important to remember that most titles shipped to most stores don’t actually sell one single unit. Giving a store a title they’re requesting should have odds good enough to be worth the risk (although that will be proven true or not for each outlet by data over time). Taking the huge number of necessary decisions off a store’s hands is useful for everybody; it shouldn’t suggest their input is not relevant. Indeed, getting information from stores about price or topical promotions they are running, on books or other merchandise, and incorporating that into the rules around stocking books, will help any book supplier provide a better and more profitable service to its accounts. After all, having a store say “I’d like to sell this title for 20 percent off next week in a major promotion, would you mind sending me more copies?” opens up a conversation every publisher is happy to have.

Of course, in a variety of consulting assignments, we are working on this, including system design. It is staggering to contemplate how much more sophistication can be built into the systems today than was possible a decade-and-a-half ago, when we last immersed ourselves in this. In the short run, a VMI-management system will provide a competitive edge, primarily because it will open up the opportunity to deliver to retail shelves that will simply not be accessible without it. That will lead to it becoming a requirement. As I’ve said here before, a prediction like that is not worth much without being attached to a time scale. I think we’ll see this cycle play out over the next ten years. That is: by 2025, just about all book distribution to retailers will be through a VMI system.


Better book marketing in the future depends a bit on unlearning the best practices of the past



A few years ago, publishers invented the position of Chief Digital Officer and many of the big houses hired one. The creation of a position with that title, reporting to the CEO, explicitly acknowledged the need to address digital change at the highest levels of the company.

Now we’re seeing new hires being put in charge of “audiences” or “audience development”. I don’t know exactly what that means (a good topic for Digital Book World 2016), but some conversations in the past couple of weeks are making clearer to me what marketing and content development in book publishing is going to have to look like. And audiences are, indeed, at the heart of it.

I’ve written before about Pete McCarthy’s conviction that unique research is needed into the audiences for every book and every author and that the flow of data about a book that’s in the marketplace provides continuing opportunities to sharpen the understandings of how to sell to those audiences. Applying this philosophy bumps up against two realities so long-standing in the trade book business that they’re very hard to change:

1. how the book descriptions which are the basis for all marketing copy get written, and

2. a generic lack of by-title attention to the backlist.

The new skill set that is needed to address both of these is, indeed, the capability to do research, act on it, and, as Pete says, rinse and repeat. Research, analysis, action, observation. Rinse and repeat.

I had a conversation over lunch last week with an imprint-level executive at a Big House. S/he got my attention by expressing doubt about the value of “landing pages”, which are (I’ve learned through my work with Logical Marketing; I wouldn’t have known this a year ago) one of the most useful tools to improve discovery for books and authors. I have related one particularly persuasive anecdote about that here. This was a demonstration to me of how much basic knowledge about discovery and SEO is lacking in publishing. (The case that ignorance of SEO is widespread in publishing has been made persuasively in an ebook by British marketer Chris McVeigh of Fourfiftyone, a marketing consultancy that seems to share a lot of the philosophy we employ at Logical Marketing.)

But then, my lunch companion made an important operational point. I was advocating research as a tool to decide what to acquire, or what projects might work. “But I could never get money to do research on a book we hadn’t signed,” s/he said, “except perhaps to use in going after a big author who is with another house.” (Indeed, we’ve done extensive audits at Logical Marketing for big publishers who had exactly that purpose in mind.) “But, routinely? Impossible!”

The team Pete leads can do research that would genuinely inform an acquisition decision for $1,000 a title. If the capability to do what we do — which probably requires the command of about two dozen analytical tools — were in-house, it would cost much less than that.

Park that thought.

I also had an exchange last week with Hugh Howey, my friend the incredibly successful indie author with whom I generally agree on very little concerning big publishers and their value to authors. But Hugh made a point that is absolutely fundamental, one which I learned and absorbed so long ago that I haven’t dusted it off for the modern era. And it is profoundly important.

Hugh says there are new authors he’s encountering every day who are achieving success after publishers failed with them. It is when he described the sales curve of the successful indie — “steadily growing sales” — that a penny dropped for me. An old penny.

We recognize in our business that “word of mouth” is the most effective means of growing the market for a book. If that were the way things really worked, books would tend to have a sales curve that was a relatively gentle upward slope to a peak and then a relatively gentle downward slope.

Of course, very few books have ever had that sales curve. Nothing about the way big publishers routinely market and sell would enable it to happen. Everything publishers do tries to impose a different sales curve on their books.

A gentle upward slope followed by a gentle downward slope would, in the physical world, require a broad and very shallow distribution with rapid replenishment where the first copy or two put at an outlet had sold. But widespread coordination of rapid replenishment of this kind for books selling at low volumes at any particular outlet (let alone most outlets) is, for the most part, a practical impossibility in the world of distributed retail.

In fact, distributed retail demands a completely out-of-synch sales curve. It wants a big sale the first week a book is out to give it the best chance of making the bestseller list and, even failing that, the best chance of being worthy of continuing attention by a publisher’s sales staff, and therefore, the marketing team. Books in retail distribution are seen as failures if they don’t catch on pretty quickly, if not in days or weeks, certainly within a couple of months. And if a store sells two copies, say, of a new book in the first three months, it probably doesn’t make the cut as a book to be retained. If they bought two, they’re glad they’re gone and not likely to re-order without some push by the publisher or some other attention-grabbing circumstance. If they bought ten, they’ll want to get their dollars back by making returns so they can invest in the next potentially big thing.

But that’s not the case online, where there is no need for distributed inventory (especially of ebooks!). If the first copies sold lead to word of mouth recommendations, the book will still be available to the online shopper. And there will be nothing in the way it is presented — it won’t have a torn cover or be hidden in the back of the store, say — to indicate it isn’t successful. People can buy it and the chain can continue, building over time. Three months later, six months later, it really doesn’t matter; the book can keep selling. And, by the way, this will be true at any online retailer with an open account at Ingram (including for print-on-demand books), not just at Amazon.

But, in the brick and mortar world, the book will effectively be dead if it doesn’t catch on in the first three months. And the reality of staffing, focus, and the sales philosophy of most publishers means it won’t be getting any attention from the house’s digital marketers either.

If you live in the world of indie success like Hugh Howey does, you are repeatedly seeing authors breaking through months after a book’s publication, at a time when an experienced author knows a house would have given up on them.

Now park that.

I also had a chat last week with a former colleague of mine now at a periodical. He was explaining that one major conceptual challenge for his publication in the digital age was to see their readership as many pretty small and discrete audiences, not one big one at the level of the “subscriber”. No story in his publication is intended for “everybody”; what is important is for a newspaper or magazine to know whether particular stories are satisfying the needs of the particular niche of their audience that wants that topic, that kind of story. Talking to this former colleague about digital marketing and publishing was a variation on the themes that are topics with Pete.

One thing I learned in this conversation made another penny drop. Let’s say you have a story on any particular topic, from theater to rugby, my friend posited. Your total “theoretical market” within the publication’s readership is every person who ever read a single story on that subject. But your “core market” is every person who has read two stories on it. If a high percentage of those read it, the story succeeded. If not, the story failed.

And a further implication of this analysis is that seeing your audiences that way, and growing them that way, will also ultimately allow monetizing them more effectively. This wouldn’t be advertising-led, so much as harvesting the benefits of audience-informed content creation, but it is totally outside the way editorial creation at newspapers and magazines has always occurred.

And now park that.

We had a meeting two weeks ago with a fledgling publisher whose owner has a great deal of direct marketing expertise. As he heard Pete explaining what he did, looking for search terms that suggested opportunity (lots of use of the term and relatively few particularly good answers), he wondered if we could tell him through research what book to write. We’ve gotten some publishers in some circumstances to do marketing research early enough to influence titling and sub-titling. McVeigh in his ebook makes the same point under the rubric that SEO should be employed before titling any book.

Of course, we don’t sell that kind of help very often — or we haven’t so far. It would require getting marketing money committed early enough to pay for research like that. But we know it is useful.

And all of this together brings into sharper focus for me where trade publishing has to go, and how the marketing function, indeed, the whole publishing enterprise, needs to be about a constant process of audience segmentation, research, tweaks, analysis, and repeat. A persistently enhanced understanding of multiple audiences can productively inform title selection and creation. And systems and workflows need to be built to systematically apply what is being learned every day to every title which might benefit. Audience segmentation and constant research are really at the heart of the successful trade publishing enterprise of the future, even if we are only lurching toward them now with a primitive understanding of SEO, the occasional A-B test for a Facebook ad, and the gathering of some odd web traffic and email lists that don’t relate to any overall plan.

A publisher operating at scale ought to have the ability to provide those authors that want to build their audiences one reader at a time better analysis and tools than they would have on their own. Publishers have always depended on the energy of authors to sell their books; the techniques just have to change. Instead of footing the bill for expensive and wasteful author tours, publishers should be providing tools, data, and helpful coaching to be force multipliers for the efforts authors are happy to expend on their own behalf. The publisher’s goal should be to have their authors saying “I don’t know how I could possibly be so effective without the help I get from my publisher.”

Publishers should also be doing the necessary research to examine the market for each book they might do before they bid on it. They should have audience groups with whom they’re in constant contact, and they also need the ability to quickly segment and analyze audiences “in the wild”. The dedicated research capabilities need to be applied to the opportunities surfaced by constant monitoring of both the sales of and the chatter about the backlist.

Size, scale, and a large number of titles about which a lot is known should give any publisher advantages over both indie authors and dominant retailers in building the biggest possible audience for the books it publishes. But getting there will require both learning the techniques of the future and unlearning the concepts — including the discipline of “pub date” timing — that have always driven effective trade publishing.

The publishers creating new management positions with the word “audience” in the title would seem to be very much on the right track. It is worth recalling that my father, Leonard Shatzkin, carried the title of Director of Research at Doubleday in the 1950s. Research would be another function to glorify with a title and a budget assigned and monitored from the top of each company. Note to the CEOs: a budget for “research” for marketing and to inform acquisition should be explicit and it should be the job of somebody extremely capable to make sure it is productively invested.


Getting books more retail shelf space is going to require a new approach

That bookstore shelf space is disappearing is a reality that nobody denies. It makes sense that there are people trying to figure out how to arrest the decline. There has been some recent cheerleading about the “growth” of indie bookstores, but the hard reality is that they’re expanding shelf space more slowly than chains are shrinking it. No publisher today can make a living selling books just through brick-and-mortar bookstores. For straight text reading, it is rapidly becoming an ancillary channel, a special market. Illustrated book publishers, whose books don’t port so well to ebooks and whose printed books are more likely to be bought if they are seen and touched, are working “special” sales — those not made through outlets that primarily sell books — harder than ever. That means they’re trying to put books into retail stores that aren’t primarily bookstores.

A recent Publishing Perspectives brings us an article by a small publisher envisioning an expanded market for selling books through libraries. Deborah Emin of Sullivan Street Press imagines a world where libraries become book retailers liberated from the normal retailer’s concerns about “exorbitant rent and the dealings with landlords who can terminate a lease renewal at will”. But what really caught my attention was this statement:

What if bookstores could invest in what bookstores are best at — filling their shelves with books and taking chances on new authors rather than being concerned that their stock won’t move fast enough and they are wasting valuable space trying to sell what is more difficult to sell but that they know can be sold?

This stopped me because, in fact, I have precisely the opposite take on the problem. What I see is that the cost of buying books, and the impossibility of doing it “right” based on the sales and inventory data of a single store, is really the biggest barrier to profitable bookselling, even more of a challenge than the cost of the space.

One big component of the problem, in a nutshell, is that most books don’t sell enough copies to have a “sales rate” in any one store. Consider a little quick retail math. A store that does $1.2 million in sales a year ($100,000 a month) is selling 5 to 10 thousand books a month. Call it eight thousand. The chances are that store’s eight thousand sales will be more than 7,500 “ones”, with the balance made up mostly of “twos”, with a handful of titles — in the neighborhood of a dozen — that sell three or more. If the store turns its stock 4 times a year (which would be a very good performance), it is sitting on about 25,000 books at a time, also mostly “ones”, so let’s say they have 22,000 titles. So in the average month, 2/3 of their titles sell zero and more than 90 percent sell no more than one.
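The quick retail math above can be laid out explicitly. This is just the paragraph’s own back-of-envelope arithmetic in runnable form; every figure is the rough assumption stated above, not real store data:

```python
# Back-of-envelope store math from the paragraph above.
# All figures are the post's rough assumptions, not real store data.

monthly_sales_dollars = 100_000   # $1.2 million a year
monthly_units = 8_000             # "call it eight thousand" books a month

singles = 7_500                   # titles selling exactly one copy
threes_or_more = 12               # "in the neighborhood of a dozen"
# the balance of units is made up mostly of "twos"
remaining_units = monthly_units - singles            # ~500 units
twos = remaining_units // 2                          # ~250 titles

stock_turn = 4                    # a very good performance
units_on_hand = monthly_units * 12 // stock_turn     # ~24,000-25,000 books
titles_on_hand = 22_000           # mostly "ones" on the shelf

titles_that_sold = singles + twos + threes_or_more
share_selling_zero = 1 - titles_that_sold / titles_on_hand
print(f"~{share_selling_zero:.0%} of titles sell zero copies in a month")
```

Run as written, this lands at roughly two-thirds of titles selling nothing in a given month, which is the point: per-store, per-title data is far too sparse to predict anything.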

In the following month, the 7,500 titles that sell one will largely change.

There is no mathematician in the world that can make meaningful predictions for what any particular title will sell in a subsequent month with data like that. And there is no mathematician in the world that can tell you how the hundreds of thousands of titles not in the store would have done if they had been there, based on the store’s data on those titles (which is zilch).

In the past decade, indie stores have gotten some real help getting some indications about sales outside their four walls. Ingram ranks titles across a much broader universe. The store system provider Above the Treeline provides some title-level visibility across their client base. That’s a lot better than nothing, but the data is not provided in a form that would enable any automated use of it for reordering.

And that points to the second, and larger, component of the problem: automating the ordering. The human attention it takes to make the stocking decisions for a bookstore has not really been scaled. B. Dalton Booksellers, which was bought by, absorbed into, and then discarded by Barnes & Noble, pioneered automated models in the 1970s, the first real computer-assisted inventory management in bookstores. A buyer would set an inventory level and reorder point for a book in a store (“setting the model” or “modeling the title”) and the computer would take over from there, automatically reordering when inventory fell to or below the reorder point. This capability made Dalton grow faster than Walden, its chief competitor, which didn’t have this ability to keep backlist in the stores without buyer or store manager intervention. The shortcoming of the model system, of course, is that a buyer has to put it on, take it off, or change it. So we have a manual requirement to manage the automation.

When you think about the sheer number of store-title model combinations in a chain of hundreds of stores with hundreds, if not thousands, of modeled titles per store, that’s no trivial task.
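The Dalton “model” mechanism described above is a classic min/max reorder rule. Here is a minimal sketch — the class and field names are mine, purely illustrative, not anything from Dalton’s actual system:

```python
from dataclasses import dataclass

@dataclass
class TitleModel:
    """A buyer-set model for one title in one store (hypothetical names)."""
    stock_level: int     # target on-hand quantity set by the buyer
    reorder_point: int   # reorder when inventory falls to or below this

def replenishment_order(model: TitleModel, on_hand: int) -> int:
    """Classic min/max rule: order back up to the model's stock level
    once inventory hits the reorder point; otherwise do nothing."""
    if on_hand <= model.reorder_point:
        return model.stock_level - on_hand
    return 0
```

The manual burden is exactly what the sketch implies: the computer only executes the rule, so a buyer still has to create, change, and retire a model for every store-title combination.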

Unfortunately, the art or science or technology (or all three) of inventory management for books in stores hasn’t progressed a whole lot since then. Barnes & Noble built a great internal supply chain with warehouses that could resupply its stores very quickly and that improved the efficiency of the models. But an unnoticed and uncommented-upon current reality is that the internal supply chain will be hard to sustain and increasingly costly as the base of stores and sales it serves diminishes in size.

My father recognized this problem sixty years ago and created the Doubleday Merchandising Plan to solve it. That plan provided vendor-managed inventory for the stores. The reps walked out with an inventory count rather than an order. It was posted (manually) to a ledger by a roomful of workers at Doubleday’s home office, and an order was then created and sent to the store which had agreed in advance to accept it. Sales exploded, cost of sales shrank, and this program propelled Doubleday into the top echelon of book publishers. Leonard Shatzkin’s system was not automated, but it was a lot faster and more efficient than the store’s own efforts, particularly in those days when there was no computer assistance to track the inventory.

As stores gained the ability to track inventory through the 1980s, and were further assisted by a wholesale network led by Ingram that could restock them quickly, improved inventory management sharply increased bookstore profitability and the bookstore network grew. But with bookstores now heavily invested in systems to help them order more efficiently, the need for and receptiveness to publisher management of inventory declined.

But stores that don’t normally buy books, and that can’t make the investments in book-oriented inventory tracking or in buyers with the huge amounts of special knowledge that book buyers have, still needed the help. Nearly two decades ago, I helped a client build an automated stocking system that could manage inventory on thousands of titles in thousands of stores with very little human intervention. It has run successfully to this day and is used to stock books in three of the largest chains in the country.

We used a pretty simple logic to build this system, limited as we were by what computers could do in 2000. The system calculates stock turn by title across the chain and then ranks the books by that metric. Then each store gets the highest-ranked books it doesn’t already have each week to replace the books it has sold. This automated system is crude, but extremely effective.
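That logic can be sketched in a few lines. This is a simplification under my own naming, not the client’s actual code; the real system obviously has to handle much more (returns, seasonality, shipment batching):

```python
def rank_titles_by_turn(chain_sales: dict, chain_stock: dict) -> list:
    """Rank titles by chain-wide stock turn (annual sales divided by
    on-hand inventory), best performers first. Names are illustrative."""
    turns = {
        title: chain_sales[title] / max(chain_stock.get(title, 0), 1)
        for title in chain_sales
    }
    return sorted(turns, key=turns.get, reverse=True)

def weekly_shipment(ranked: list, store_titles: set, slots_open: int) -> list:
    """Each week, send the store the highest-ranked titles it doesn't
    already carry -- one for each slot opened by a sale."""
    picks = [t for t in ranked if t not in store_titles]
    return picks[:slots_open]
```

Because stock turn rewards titles that sell through, the chain’s shelf space continuously migrates toward its best-performing titles with no buyer intervention at all.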

Of course, persuading a bookstore to accept a publisher’s or wholesaler’s decisions about what titles to stock would be a very heavy lift. But as the retail book market shifts from dedicated bookstores to shelf-space-for-books in retailers with other specialties, it becomes easier for publishers or distributors to find shelf space that can be stocked on that basis.

Since I am now working on a more modern version of what we designed in 2000, it is easy to see that much more sophisticated ranking systems and stocking rules can be managed in an automated way than was possible then.

Changing the paradigm by which books find their way to store shelves is a way to meaningfully improve the efficiency of book sales in brick-and-mortar stores. Coupling it with true consignment terms (which sale-and-return is not) can make book sales viable for stores at lower discounts, which could meaningfully improve publishers’ margins.

There’s plenty of rent being paid for space books would sell well in. The problem is the cost of putting the right books into those spaces. We won’t get there presenting, ordering, and fulfilling title-by-title as we’ve always done. That’s the first place to look for a better answer. Reducing or eliminating rent would be helpful in the short run, probably not sustainable in the long run, and it would sidestep the real challenge of retail: presenting the most saleable possible mix to the consumers who will shop from it every single day.


What makes books different…

Before the digital age, retailers that tried to sell across media were pretty rare. Barnes & Noble added music CDs to their product mix when the era of records and cassettes had long passed. Record stores rarely sold books and, if they did, tended to sell books related to an interest in music. For those stores, it wasn’t so much about combining media as it was about offering a defined audience content related to their interest, like Home Depot selling home repair books. For the most part in pre-Internet times, books, music, and video each had its own retail network.

But when media became largely digital in the first decade of the 21st century, the digital companies that decided to establish consumer retail tried to erase the distinction that had grown up dividing reading (books) from listening (music) from watching (movies and TV). The three principal digital giants in the media retailing space — Amazon, Apple, and Google — all sell all these media in their “pure” form and maintain a separate market for “apps” as well that might contain any or all of the legacy media.

The retailing efforts for all of them are divided along legacy media lines, acknowledging the reality that people are usually shopping specifically for a book or music or a cinematic experience. Most are probably not, as some seem to imagine, choosing which they’ll do based on what’s available at what price across the media. (This is a popular meme at the moment: books “competing” with other media because they are consumed on the same devices. Of course, only a minority of books are consumed on devices, unlike the other media. Even though this cross-media competition might be intuitive logic to some people, it has scarcely been “proven” and, while it might be true to a limited extent, it doesn’t look like a big part of the marketing problem to me.)

It seems from here that Amazon and Barnes & Noble have a distinct advantage over all their other competitors in the ebook space because, with books — unlike movies and TV and music — the audience toggles between print and digital. And this might not change anytime soon. The stats are scattered and not definitive, but a recent survey in Australia found that ninety-five percent of Australians under 30 preferred paperbacks to ebooks! Other data seem to indicate that most ebook readers also read print. To the extent that is true, a book shopper — or searcher — would want to be searching the universe of book titles, print and digital, to make a selection.

It should be more widely understood that the physical book will not go the way of the Dodo nearly as fast as the shrink-wrapped version has for music or TV/film. It hasn’t and it won’t. There are very good, understandable, and really undeniable reasons for this, even though it seems like many smart people expect all the media to go all-digital in much the same way.

Making the case that “books are different” requires me to unlearn what I was brought up to believe. My father, Leonard Shatzkin, used to ridicule the idea that “books are different”, which was too often (he thought) invoked to explain why “modern” (in the 1950s and 1960s) business practices like planning and forecasting and measuring couldn’t be applied to books like they were to so many other businesses after World War II. In fact, Dad shied away from hiring people with book business experience, “because they would have learned the wrong things”.

But in the digital age, and as compared to other media, books are definitely different and success in books, whether print or digital, is dependent on understanding that.

First of all, the book — unlike its hard good counterparts the CD (or record or cassette) and DVD (or videotape) — has functionality that the ebook version does not. Quite aside from the fact that you don’t need a powered device (or an Internet connection) to get or consume it, the book allows you to flip through pages, write margin notes, dog-ear pages you want to get back to quickly, and easily navigate around back and forth through the text much more readily than with an ebook. There are no comparable capabilities that come with a CD or DVD.

Second, the book has — or can have — aesthetic qualities that the ebook will not. Some people flip for the feel of the paper or the smell of the ink, but you don’t have to be weirdly obsessed with the craft of bookmaking to appreciate a good print presentation.

But third, and most important, is the distinction about the content itself. When you are watching a movie or TV show or listening to music through any device, the originating source makes only the most nuanced difference to your consumption experience. Yes, there are audiophiles who really prefer vinyl records to CDs, and there are probably also those who will insist that the iTunes-file version is not as good as the CD. And everybody who has watched a streamed video has experienced times when the transmission was not optimal. There are almost certainly music and movie aficionados who will insist on a hard-goods version to avoid those inferiorities.

But the differences between printed books and digital books are much more profound and they are not nuanced. In fact, there are categories of books that satisfy audiences very well in digital form and there are whole other categories of books that don’t sell at all well in digital. That is because while the difference between classical music and rock or the difference between a comedy and a thriller isn’t reflected in any difference between a streamed or hard-goods version, the difference between a novel and a travel guide or a book of knitting instruction is enormous when moving from a physical to digital format.

For one thing, the book — static words or images on a flat surface, whether printed or on a screen — is often a presentation compromise based on the limitations of “static”. The producer of a record doesn’t think “how would I present this content differently if it is going to be distributed as a file rather than a CD?” But the knitting stitch that is shown in eight captioned still pictures in a printed book could just as well be a video in an ebook. And it probably should be.

In fact, this might be the use case for which a consumer would make a media-specific decision. If you know what knitting stitch you need to learn, searching YouTube for a video might make more sense than trying to find instructions in a book!

Losing the 1-to-1 relationship between the printed version and the digital version adds expense and a whole set of creative decisions that are not faced by the music and movie/TV equivalents. And they are also not a concern for the publisher of a novel or a biography. But these are big concerns for everybody in the book business who doesn’t sell straight-text immersive reading. The point is that screen size and quality are not — and never were — the only barriers in the way of other books making the digital leap.

So even though fiction reading has largely moved to digital (maybe even more than half), most of the consumer book business, by far, is still print. Even eye-catching headlines like the one from July when the web site AuthorEarnings (organized and run by indie author Hugh Howey, who is a man with a strong point of view about all this) said “one in three ebooks” sold by Amazon is self-published, might not be as powerful at a second glance.

Although Howey weeds out the ebooks that were given away free, the share of the consumer revenue earned by those indie ebooks would be a much smaller fraction than their unit sales. The new ebooks from big houses, which are a big percentage of the ebook sales they make (and that AuthorEarnings report in July said the Big Five still had an even bigger share of units than the indies), are routinely priced anywhere from 3 to 10 times what indie ebooks normally sell for. So that “share”, if expressed as a “share of revenue”, might be more like five or ten percent. It really couldn’t be more than 15%.
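The revenue estimate follows from simple arithmetic on the two rough inputs above — a one-in-three unit share and a 3x-to-10x price multiple. The function name and the price normalization are mine:

```python
def indie_revenue_share(indie_unit_share: float, price_multiple: float) -> float:
    """Indie share of ebook dollars, given indies' share of units and how
    many times the indie price a big-house ebook typically costs.
    Indie price is normalized to 1; all inputs are rough assumptions."""
    trad_unit_share = 1 - indie_unit_share
    indie_dollars = indie_unit_share * 1.0
    trad_dollars = trad_unit_share * price_multiple
    return indie_dollars / (indie_dollars + trad_dollars)

# "One in three" units, big-house ebooks at 3x to 10x the indie price:
for m in (3, 5, 10):
    print(f"{m}x price multiple -> {indie_revenue_share(1/3, m):.0%} of revenue")
```

At a 3x multiple the indie revenue share is about 14 percent; at 10x it falls under 5 percent — which is why the “share of revenue” couldn’t be much more than 15%.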

(In fairness to Howey, he tries to make the point that indie authors earn more from lower revenue because their cut is so much bigger and he makes the argument that they are actually earning more royalties than the big guys. He also tells me that he calls some S-corp and LLC publishers “uncategorized”, even though they are almost certainly indies, in his own attempt to be even-handed. In fairness to the industry, I will point out that his accounting doesn’t take unearned advances into consideration, and since most sales of big house ebooks are of authors who don’t earn out, that lack of information really moots the whole analysis about what authors earn. Another big shortcoming of the comparison is that most published authors are getting a much more substantial print sale than most indie authors.)

But indie authors on Amazon are the industry high-water mark of indie share and ebook share. They are almost entirely books without press runs or sales forces, so they are almost entirely absent from store shelves. And they are also entirely narrative writing.

The facts, apparently, are that even heavy ebook readers still buy and consume print. There is not a lot of clear data about whether “hybrid readers” make their print-versus-digital choice categorically or some other way. There is some anecdata suggesting that some people read print when it is convenient (when they’re home) and digital when it is not. There are a number of bundling offers to sell both (offered by publishers and one called “Matchbook” from Amazon), which certainly seems to say that publishers believe there’s a market of people who would read the same book both ways at the same time!

What that all would seem to say is that a retailer selling ebooks only is at a serious disadvantage in capturing book searches from the majority of readers.

Do we have any independent evidence that selling to the digerati only — selling ebooks only — might limit one’s ability to sell ebooks? I think we do. It would appear that B&N has sold roughly the same number of Nooks as Apple has iPads. (This equivalence will probably not last since Nook sales seem to be in sharp decline.) That is somewhat startling in and of itself, since Apple is perhaps the leading seller of consumer electronics and B&N was entirely new to that game. Nook also seems to have — at least for a while — sold more ebooks than Apple. (This “fact” may also be in the rear view mirror with the apparent collapse of Nook device sales.) I will be so bold as to suggest that this is not because Nook has superior merchandising to the iBookstore. More likely it is because the B&N customer is a heavier reader than the Apple customer and prefers to do his or her book shopping — and even his or her book device shopping — with a bookseller.

[Correction to the above paragraph made on 11 Sept. I misheard and therefore misreported something that was caught by a reader in the comments below, but I should also correct it here. Apple has sold ~200M iPads but has only roughly 12% of the ebook market, whereas B&N has sold only about 1/20th that number of Nooks and has about 18% of the ebook market. Those facts make little sense to anyone in Silicon Valley but speak to how book audiences really behave. We all know a very high percentage of Nook owners are active store buyers.]
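The corrected numbers imply a striking per-device gap. As a rough sketch (using the approximate figures cited above, which are themselves estimates), the ebook market share each platform holds per million devices sold can be compared directly:

```python
# Rough arithmetic behind the Nook/iPad comparison above.
# All figures are the approximate ones cited in the correction:
# ~200M iPads at ~12% of the ebook market, vs ~1/20th as many
# Nooks at ~18% of the ebook market.
ipads_sold = 200_000_000
nooks_sold = ipads_sold // 20          # ~10M devices
apple_ebook_share = 0.12
bn_ebook_share = 0.18

# Market-share points held per million devices sold
apple_per_million = apple_ebook_share / (ipads_sold / 1_000_000)
bn_per_million = bn_ebook_share / (nooks_sold / 1_000_000)

ratio = bn_per_million / apple_per_million
print(f"Apple: {apple_per_million:.4f} share-points per million iPads")
print(f"B&N:   {bn_per_million:.4f} share-points per million Nooks")
print(f"Each Nook carries roughly {ratio:.0f}x the ebook share of an iPad")
```

On these admittedly rough numbers, a Nook accounts for about thirty times as much ebook market share as an iPad, which is exactly the point about heavy readers preferring to buy their reading device from a bookseller.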

There is one more huge distinction between books and the other media, and it is around the motivation of the consumer. While TV or movies might sometimes be consumed for an educational purpose, most of the time the motivation is simply “entertainment”, as it is with music. While analysis of the video or music a consumer has previously enjoyed might provide clues to what should come next, figuring out what book should be next is a much more complex challenge.

And the clues don’t just come from prior books consumed and enjoyed. Books are bought because people are learning how to cook or do woodworking, or because they are traveling to a distant place and want to learn a new language or something about the local customs, or because they are going to buy a new house or have suddenly been awakened to the need to save for retirement. You can’t really suggest the next book to buy to many consumers without knowing much more about them than their recent reading habits would tell you.

But not only do (most of) the ebook-only retailers not know whether you’re moving or traveling, they don’t even know what you searched for when you were looking for print. And even if they did know, operating in an ebook-only environment would make many of the best suggestions for appropriate books to address everyday needs off limits, because many of those books either don’t exist in digital form or aren’t as good as a YouTube video at satisfying the consumer’s requirements.

Indeed, it is the sheer “granularity” of the book business — so many books, so many types of books, so many (indeed, innumerable) audiences for books — that makes it so different from the other media.

Of course, there is one company — Google — that is not only in the content business and the search business but which also handles “granularity” better than any company on earth, down to the level of the attributes and interests of each individual. Google not only would know if you were moving or traveling, they would be in a great position to sell targeted ads to publishers with books that would help consumers with those or a million other information needs. (They also know about all your searches on YouTube!) But because Google’s retailing ambitions are bounded by digital, they are walking past the opportunity to be the state-of-the-art book recommendation engine. They’re applying pretty much the same marketing and distribution strategy across digital media at Google Play. They aren’t seeing that book customers are both print and digital. They aren’t seeing that books are, indeed, different.

When the day comes that they do, this idea will look better to them than it might have at first glance.