
In an indie-dominant world, what happens to the high-cost non-fiction?


I first learned about and wrote about Hugh Howey about four years ago. At the time, he was one of the first real breakthrough successes as an indie author, making tens of thousands of dollars a month exclusively through Amazon for his self-published futurist novel, “Wool”. As soon as I could track him down, I invited Hugh and his agent, Kristin Nelson, to speak at the next Digital Book World, which they did several months later, in January 2013.

In the years since, Hugh has had a very public profile as a champion of indie publishing and as a critic of big publishers. When I first encountered Howey, he and his agent had already turned down more than one six-figure publishing deal. Nelson ultimately did a print-only deal for “Wool” with Simon & Schuster, a deal consummated before the big publishers made the apparently universal decision that they would not sign books for which they didn’t get electronic rights.

This week the DBW blog published a lengthy interview with Howey conducted by DBW editor Daniel Berkowitz. In this piece, Howey reviews many of his complaints against publishers. According to him, their royalty rates are too low and they pay too infrequently and on too much of a delay. Their authors are excluded from Kindle’s subscription revenue at Kindle Unlimited. Their ebook prices to consumers are too high. And, on top of that, they pay too much rent to be in New York City and they pay their big advances to wealthy authors who don’t really need the money, while aspiring authors get token advance payments that aren’t enough to give them time off to write.

Howey’s observations are not particularly welcomed by publishers, but he has a deep interest in indie authors and, by his lights, is always trying to help them by encouraging them to indie-publish through Amazon rather than seeking a traditional deal through an agent. He has organized the AuthorEarnings website and data repository along with Data Guy, the games-business data analyst who has turned his analytical skills to the book business and whom we featured at the most recent Digital Book World this past March.

Howey and I have had numerous private conversations over the years. He’s intelligent and sincere in his beliefs and truly devotes his energy to “industry education” motivated by his desire to help other authors. Yet there are holes in his analysis of the industry and where it is going that he doesn’t fill. Given his substantial following and obvious comfort level doing the marketing (such as it is, and it appears Howey’s success as an author hasn’t required much) for his own books as well as his commercial performance, it is easy to understand why he would never consider publishing any other way but as he has, as an indie author who is “all in” with Amazon. But he seems to think what worked well for him would work best for anybody.

In this interview, Howey says that any author would be better off self-publishing his or her first book than going the route of selling it to a publisher. And he actually dismisses the marketing effort required to do that. Howey says the best marketing is publishing your next book. He thinks the best strategy is for authors to write several books a year to gain success. In fact, he says taking time away from writing to do marketing is a bad choice. Expecting most writers, or even many writers, to do several books a year strikes me as a highly dubious proposition.

It is impossible to quarrel with the fact of Howey’s success. But he makes a big mistake assuming that what worked effectively for him makes self-publishing the right path for anybody else, let alone everybody else.

Howey also has an unrealistically limited view of the output of big publishing. If you read this interview (and I would encourage anybody interested in the book business to do so), you see that he thinks almost exclusively about fiction or, as he puts it, “storytelling”. Books come, like his did, out of an author’s imagination and all the author needs is the time to write. Exposure through Amazon does the rest.

He gives publishers credit for putting books into stores (although he would have them eliminate returns, which would cut down sharply on how effectively they accomplished that). But he thinks stores will be of diminishing importance. (We certainly agree on that.) He gives credit for the indie bookstore resurgence to Amazon, which would be true if you credit Amazon with the demise of Borders that wiped out over 400 big bookstores and created new opportunities for indies. But the idea that Amazon is allied with indie bookstores is contradicted by two realities. One is that the indie stores won’t stock Amazon-published books. The other is that Amazon, now in the process of opening its second retail store, may plan dozens, hundreds, or thousands more to come! We really don’t know. Certainly, very few indie bookstores would be applauding that.

Here’s how Howey sums up his advice to authors.

“Too few successful self-pubbed authors talk about the incredible hours and hard work they put in, so it all seems so easy and attainable. The truth is, you’ve got to outwork most other authors out there. You’ve got to think about writing a few novels a year for several years before you even know if you’ve got what it takes. Most authors give up before they give themselves a chance. It’s similar to how publishers give up on authors before they truly have a chance.”

This seems like sound advice, but it isn’t how it appeared to work for Howey. He published a novella which was the start of “Wool” and his Amazon audience asked for more. Three more novellas later, over a period of just a few months, and the four combined became his bestselling novel. Six months after he started, he was making $50,000 a month or more and had an agent selling his film rights. Then his agent started selling his book rights in non-US territories and in other languages. Meanwhile, Howey continued to earn 70 percent of the revenues from his ebooks, in a deal Amazon offered that matched what they paid to agency publishers, the biggest publishers. (Would Amazon be paying authors 70 percent if publishers hadn’t come up with that number for agency? Should big publishers get some of the credit for the very good deal indie authors are getting?)

The logic that Howey offers about how self-publishing stacks up against doing deals with a big house is very persuasive, but there are two pieces of reality that contradict it.

One is that, at this time, four years after Howey did “Wool” and eight years after the launch of Kindle, there are no noteworthy authors who have abandoned their publishing deals for self-publishing. (It appeared briefly that Barry Eisler was the first such author, except that it turned out he signed an Amazon Publishing deal after turning down a Big Six contract; he didn’t go indie. And, frankly, while he’s somewhat successful, he’s not a show-stopper author for any publisher.) In fact, Amazon’s own publishing strategy has apparently switched away from trying to persuade big commercial fiction authors to do that and is focused on the genre fiction that is the core of the self-publishing done through them. Howey has been offering the same analysis for quite a few years now but so far, the publishers have lost hardly anybody they care to keep to self-publishing. And we’re now in a period where the split of books sold online (ebooks and print) to books sold in stores (where publishers are beyond helpful; they’re necessary) appears to have stabilized — at least for the time being — after years of stores losing share.

The other is that Howey’s analysis totally leaves out one of the biggest categories of publishing: big non-fiction like history or biographies or industry analyses that take years of research and dedication to complete. Unlike a lot of fiction, those books not only take time, they require serious help and expense to research. In an imagined future world where all books are self-published, aspiring fiction writers give up very little (small advances) and successful fiction authors have the money to eat while they write the next book, on which they can make even more money doing it the Howey way (even though none have). But big non-fiction books like Jane Mayer’s “Dark Money” (or anything by David McCullough) took years of research to put together. “Dark Money” was undoubtedly financed at a very high level by the Doubleday imprint at Penguin Random House. How books like that will be funded in the future is not covered by Howey’s analysis.

Now, that’s not to say they must be. Economic realities do rule. Howey’s thesis that things are shifting in Amazon’s direction and away from the ecosystem that has sustained big book publishers is correct. He predicts that there will be three big publishers where once there were six and now there are five. I concur with that. As that happens, maybe the big fiction writers will take Howey’s advice.

But that solution is no solution for authors like Jane Mayer or David McCullough. A world without publishers where authors do the writing and the publishing might give us an output of fiction comparable to what we have now. But the biggest and best non-fiction would need another model if publishers weren’t able to take six-figure investment risks to support them. Amazon’s not offering it and neither is Howey. If the future unfolds as Howey imagines it, we’ll never know what books we’re missing.


On Amazon stores and publishers accepting standardization; two unrelated commentaries


When the “Amazon-opening-400-stores” rumor landed a week ago, many people were gobsmacked. It took me a minute to get past that myself, which also required getting past my firm conviction, formed when they opened the Seattle store last year, that it was an information-gathering exercise, not the opening move of a bigger retail play.

But, when you think it through, it not only doesn’t seem crazy that Amazon would open stores, it seems like an obviously compelling move.

Other retailers that started strictly online have opened retail locations, most notably the eyeglasses shop Warby Parker. (This New Yorker story mentions that. It also has an interesting disclaimer at the end because “Amazon Studios is producing a New Yorker series in partnership with Condé Nast Entertainment”. Wow.)

“Omni-channel”, which is really just a new-fangled term for selling both online and through a brick store, is the buzzword du jour of retailing. Actually, the online piece of that is the harder part, and Amazon already had that licked.

Barnes & Noble “beat” Borders largely because they had a network of distribution centers that made stocking their retail locations extremely efficient. Amazon’s network of distribution centers is complicated because it handles far more than books, but they have many times the number of inventory storage points that B&N has. In fact, they have more storage points than B&N, Ingram, and Baker & Taylor combined!

Amazon has tons of information that nobody else does that would inform their stocking decisions if they harnessed it. They know where searches are coming from for particular book titles or for generic needs, both geographically and psychographically. And they probably can detect early lifts for particular books faster than anybody else, simply because they have more data.

It is possible that if B&N and the indies had responded differently to Amazon Publishing, agreeing to stock the books rather than boycotting them, this could have played out differently. (No stronger argument could be made for the efficacy of that strategy than this post arguing that stores should stock Amazon titles to punish them because the returns would make them unprofitable! You can’t beat logic like that.) If the stores had stocked their titles, Amazon might have chosen to use their distribution center advantage to start wholesaling, rather than to support their own retail locations (as they appear to be doing).

But the determination of the brick retailers to boycott Amazon was spelled out loudly and clearly. So opening Amazon retail locations — as it increasingly appears they have every intention to do — has two strategic payoffs for them. One is that it gives them access to at least some brick-and-mortar retail locations for their publishing output, which otherwise they can only sell online. And the other is that it capitalizes on their distribution centers, delivering additional sales and margin for investments already made.

In a recent post, I suggested one specific way Amazon could get very disruptive if they had more than a handful of stores. There’s another. They are a tech company that likes to have computers make decisions that in other companies and in other times have been made by humans. I suspect they’ll figure out pretty fast that they will want to have some sort of vendor-managed inventory system to streamline and optimize the stocking decisions for what will almost certainly be a growing network of retail locations. (The part of a trade book person’s DNA that is most out-of-step with the digital age is that we like to make decisions case-by-case, rather than living with decisions made by rules we create. That’s the key to the second half of this post.)

Sophisticated but automated stocking and restocking decisions are not part of the toolkit at B&N or of any other retailer or wholesaler we know. Could that be the next battleground that Amazon retail stores create? That would certainly be disruptive, but at least in this corner of the world it would not be a surprise.

****************************************

One mantra of the book publishing world is “every book is different”. We sometimes refer to that fact as reflecting the “granularity” of the book business compared to other kinds of consumer goods businesses or other media. Even if you think in terms of categories, there are just more of them in publishing than there are for other products or media.

Perhaps, then, it isn’t surprising that publishers are often inclined to encourage that uniqueness beyond where it is required. And, frankly, it is only required for editorial development and for targeting the marketing. The objective at every place in the value chain in between should be to standardize and, as much as possible, to treat many different books the same. That’s not a creative imperative; it is a commercial imperative.

My father first experienced the tension that this insight can create at Doubleday in the 1950s when he persuaded the company to standardize the trim sizes of their books for maximum printing efficiency. That didn’t require radical changes. It simply meant that books would be an eighth- or quarter-inch longer or shorter, wider or narrower. These were differences that were really not perceptible to most people, yet it was a real internal corporate battle to wrest control from designers who believed “every book is different” and that this mystery (or cookbook) had to be published as a 6 by 9 inch book while that one had to be 6-1/2 inches by 9.

In fact, the trivial differences in trim size were not important at all to the books’ chances of success. There were other decisions — the specific paper or typeface among them — that also had no discernible commercial impact on each individual book but were, nonetheless, intentionally made book-by-book as though they did. In many houses, and (admittedly I’m saying this without any supporting data) probably more in smaller houses than larger ones, they still are. And that’s true even though whether the paper is 55 pound or 60 pound or the typeface is Times Roman or Baskerville can’t be shown to have any impact at all on a book’s sales.

Now the University of North Carolina Press has been funded by the Mellon Foundation to put Dad’s theory to use in the university press and academic publishing world. They’ve created a service offering through their Longleaf distribution platform that takes the design, pre-press, production, and distribution burden off the hands of university press and academic publishers so they can focus on what makes them distinctive: the books they choose to publish and the skill with which they edit them.

This fits an industry reality I identified a couple of years ago that I called “unbundling”.

On one hand, UNC Press Director John Sherer reports real success, expecting to grow that part of their business by 50 percent in the coming year. But he also reports resistance by some presses who believe that making these design and production decisions adds so significantly to the “quality” of their output that they’re comfortable losing money doing it.

My own hunch is that many directors just don’t have the heart (or courage) to get rid of staff that, with all the best intentions and capabilities but without the advantages of technology and scale, provide them with no better than average quality at a much higher cost than they need to spend. This was a battle for Leonard Shatzkin when he fought it at Doubleday in the early 1950s and apparently it is still being fought hard six decades later.


An obituary last week reminded me of some family history we are proud of


Normally what is written here is about publishing’s present with a look to its future. An obituary notice last week recalled some personal family history about publishing’s past and shed some light on how much has changed in the past six decades. It’s publishing history from a highly personal point of view, but it seems an appropriate story with which to end the year.

Last week there were several reports, including from The New York Times and from Publishers Weekly, of the death of Charles F. Harris at the age of 81. Harris had been the founder of Amistad Press, now an imprint of HarperCollins, and was for a time the director of Howard University Press. He was very unusual, if not unique, being an African-American executive in the world of trade and university press publishing. It was noted that he began his career at Doubleday in 1956.

This brought back to me that my father, Leonard Shatzkin, had hired Harris at Doubleday back then. The obit triggered a partial memory: that Dad had hired two black men at Doubleday in the 1950s and was then told, by somebody in authority: “that’s enough, Len”. Dad died in May of 2002 and Mom, Eleanor Shatzkin, in January of 2007, so I pinged my sisters Karen and Nance to help me piece together more of the story. Turns out there was more in my memory that my sisters helped me to dig out (but memory turned out to be only a secondary authority).

I had met the second of the two hires in the 1970s. I believe that at that time he owned a company that printed book jackets. I couldn’t remember his name — which was Ed Simmons — until I dug it up in a way I’ll explain in the postscript; even the one veteran of Doubleday from that era with whom I was able to check couldn’t recall it. Nance and I had some professional interaction with Harris in his Howard University Press days.

The family account had always been that, after he got the word from higher-ups to stop his personal integration campaign, our Mom had to wrestle with him to get him to cooperate so he wouldn’t lose his job. That shook loose another slightly-off memory, which was that Dad corrected that account sometime before he died. I recalled that he had defied both Mom and his bosses and did offer a third black man a position at Doubleday. “How come you didn’t lose your job, Dad?” The answer? Because the person to whom the job was offered turned it down!

Karen recalled that Dad had gotten help from the Urban League to find worthy candidates who would pass muster in as genteel (and in this case gentile as well, Len very much excepted) an environment as this major New York publisher in the 1950s.

Reflecting on this made me think a bit harder about Dad’s career. He left Doubleday in 1961 for Crowell-Collier Macmillan, a company that would have been presumably more hospitable to him since it was headed by a Jew. (Jews were vanishingly rare at Doubleday in the 1950s.) But Dad found reasons to object ethically to the Macmillan management too, and he resigned from there in 1963. He went on to McGraw-Hill in a much less exalted (and substantially lower-paid) position and was there for about five years before starting his own businesses: first a book production service and then a trade book distributor. During that same period, he quit a lucrative consulting gig in a company originally started by a mentor of his because he objected to the Board decisions of the founder’s children.

In other words, Dad wasn’t real good at working for other people.

He apparently passed that along to us. Sister Karen runs her own law firm and sister Nance runs her own consultancy doing data management in the health care business. But at least they have worked for other people, at least for a while. Since I left my Dad’s employ in 1978, I never have. Fortunately, the independent temperament we inherited from Dad and which was nurtured by both our parents was augmented by training in basic business skills that we got mostly from our mother. (She started as a physicist but then became a management consultant.)

We now live in times where we would not be told by any employer that would conceivably have us that we couldn’t hire a person of any particular color. There are other aspects of corporate and organizational life that would prevent a child of our parents from being a happy component of somebody’s larger scheme, but that particular problem is a relic of history. I’m so glad that our Dad’s courage in the 1950s gave Charlie Harris an opportunity that, with Harris’s own talent and application, turned into a remarkable career.

And, as I was finishing this post, I searched “Urban League” within my blog and found out I had told the story once before, back in April, 2009. Turns out I remembered back then the name of the second hire, Ed Simmons, so I was able to put it in here. And I also got the story straight about how I learned that Dad had hired a third person, which I left mangled in this recollection. AND I got at least part of the straight scoop on exactly how the word came down from Doubleday. This post adds some insight that the first didn’t, but I would also refer you to the original. And let this one and the gaps between them stand as testimony to how much we can forget in nearly seven years.


Market research used to be a silly idea for publishers but it is not anymore


When my father, Leonard Shatzkin, was appointed Director of Research at Doubleday in the 1950s, it was a deliberate attempt to give him license to use analytical techniques to affect how business was done across the company. He had started out heading up manufacturing, with a real focus on streamlining the number of trim sizes the company manufactured. (They were way ahead of their time doing that. Pete McCarthy has told me about the heroic work Andrew Weber and his colleagues did at Random House doing the same thing in the last decade, about a half-century later!)

Len Shatzkin soon thereafter was using statistical techniques to predict pre-publication orders from the earliest ones received (there were far fewer major accounts back then so the pre-pub orders lacked the few sizable big pieces that comprise a huge chunk of the total today) to enable timely and efficient first printings. Later he took a statistically-based approach to figure out how many sales reps Doubleday needed and how to organize their territories. When the Dolphin Books paperback imprint was created (a commercial imprint to join the more academic Anchor Books line created a few years before by Jason Epstein), research and analytical techniques were used to decide which public domain classics to do first.
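
The prediction technique described above can be sketched in a few lines. This is a hypothetical illustration, not Doubleday’s actual method: the accounts, their historical shares of the advance, and the order figures are all invented. The idea is simply that if you know from past titles what fraction of the final advance each early-reporting account typically represents, each early order implies a total, and you can combine those implied totals into an estimate.

```python
# Historical share of total advance orders for accounts that report early
# (hypothetical figures for illustration)
early_account_shares = {
    "account_a": 0.08,   # this account typically places 8% of the advance
    "account_b": 0.05,
    "account_c": 0.12,
}

# Early orders actually received for a new title (units)
early_orders = {
    "account_a": 400,
    "account_b": 260,
    "account_c": 630,
}

def predict_total_advance(orders, shares):
    """Scale each early order up by that account's historical share of
    the advance, then average the implied totals."""
    implied_totals = [orders[a] / shares[a] for a in orders]
    return sum(implied_totals) / len(implied_totals)

estimate = predict_total_advance(early_orders, early_account_shares)
print(round(estimate))  # an estimated first-printing target
```

With far fewer big accounts dominating the advance back then, a handful of early orders carried real predictive weight, which is what made timely, efficient first printings possible.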

In the many years I’ve been around the book business, I have often heard experts from other businesses decry the lack of “market research” done by publishers. In any other business (recorded music might be an exception), market research is a prerequisite to launching any new product. Movies use it. Hotel chains use it. Clothing manufacturers use it. Software companies use it. Online “content producers” use it. Sports teams use it. Politicians use it. It is just considered common sense in most businesses to acquire some basic understandings of the market you’re launching a new product into before you craft messages, select media, and target consumers.

In the past, I’ve defended the lack of consumer market research by publishers. For one thing, publishers (until very recently) didn’t “touch” consumers. Their interaction was with intermediaries who did. The focus for publishers was on the trade, not the reader, and the trade was “known” without research. To the extent that research was necessary, it was accomplished by phone calls to key players in the trade. The national chain buyer’s opinion of the market was the market research that mattered. If the publisher “knew different”, it wouldn’t do them any good if the gatekeeper wouldn’t allow the publisher’s books on his shelves.

And there were other structural impediments to applying what worked for other consumer items. Publishers did lots of books; the market for each one was both small and largely unique. The top line revenue expected for most titles was tiny by other consumer good standards. The idea of funding any meaningful market research for the output of a general trade publisher was both inappropriate and impractical.

But over the past 20 years, because a very large percentage of the book business’s transaction base has moved online and an even larger part of book awareness has as well, consumers have been leaving lots of breadcrumbs in plain digital sight. So two things have shifted which really change everything.

Publishers are addressing the reader directly through publisher, book, and author websites; through social media, advertising, and direct marketing; and through their copy — whether or not they explicitly acknowledge that fact — because the publisher’s copy ends up being returned as a search result to many relevant queries.

The audience research itself is now much more accessible than it ever was: cheaper and easier to do in ways that are cost-effective and really could not be imagined as recently as ten years ago.

We’ve reached a point where no marketing copy for any book should be written without audience research having been done first. But no publisher is equipped to do that across the board. They don’t have the bodies; they don’t have the skill sets; and a process enabling that research doesn’t fit the current workflow and toolset.

So before 2005, the criticism that publishers should be doing “market research” itself demonstrated a failure to understand the book business. But that changed in the past 10 years. Not recognizing the value of it now demonstrates a failure to understand how much the book business has changed.

What publishers need to do is to recognize “research” as a necessary activity, which, like Len Shatzkin’s work at Doubleday in the 1950s, needs to cut across functional lines. Publishers are moving in that direction, but mostly in a piecemeal way. One head of house pointed us to the fact that they’ve hired a data scientist for their team. We’ve seen new appointments with the word “audience” in their title or job description, as well as “consumer”, “data”, “analytics”, and “insight”, but “research” — while it does sometimes appear — is too often notable by its absence in the explicit description of their role.

Audience-centric research calls for a combination of an objective data-driven approach, the ability to use a large number of listening and analytical tools, and a methodology that examines keywords, terms, and topics looking to achieve particular goals or objectives. A similar frame of mind is required to perform other research tasks needed today: understanding the effect of price changes, or how the markets online and for brick stores vary by title or genre, or what impact digital promotion has on store sales.

The instincts to hire data scientists and to make the “audience” somebody’s job are good ones, but without changing the existing workflows around descriptive copy creation, they are practices that might create more distraction than enlightenment. Publishers need to develop the capability to understand what questions need to be asked and what insights need to be gained to craft copy that will accomplish specific goals with identified audiences.

Perhaps they are moving faster on this in the UK than we are in the US. One high-ranking executive in a major house who has worked on both sides of the Atlantic told me a story of research the Audience Insight group at his house delivered that had significant impact. They wanted to sign a “celebrity” author. Research showed that this author’s fan base was not as large as they anticipated, but that there was among those fans a high degree of belief and faith in the author’s opinions about food. A food-oriented book by that author was the approach taken, and a bestseller was the result. This is a great example of how useful research can be, but even this particular big company doesn’t have the same infrastructure to do this work on the west side of the Atlantic.

What most distinguishes our approach at Logical Marketing from other digital marketing agencies and from most publishers’ own efforts is our emphasis on research. We’ve seen clearly that it helps target markets more effectively, even if you don’t write the book to specs suggested by the research. But it also helps our clients skip the pain and cost of strategic assumptions or tactics that are highly unlikely to pay off: trying to compete on search terms a book could never rank high for; chasing a YouTube or Pinterest audience that might be large but will be hard or impossible to convert to book sales; or trying to capture sales directly from prospects that would be much more likely to convert through Amazon.

With the very high failure rate and enormous staff time suck that digital marketing campaigns are known for, research that avoids predictable failures pays for itself quickly in effort not wasted.

McCarthy tells me from his in-house experience that marketers — especially less-senior marketers — often know they’re working on a campaign that in all probability won’t work. We believe publishers often go through with these to show the agent and author — and sometimes their own editor — that they’re “trying” and that they are “supporting the book”. But good research is also something that can be shown to authors and agents to impress them, particularly in the months and years still left when not everybody will be doing it (and the further months and years when not everybody will be doing it well). Good research will avoid inglorious failures as well as point to more likely paths to success.

Structural changes can happen in organic ways. Len Shatzkin became Director of Research at Doubleday by getting the budget to hire a mathematician (the term “data scientist” didn’t exist in 1953), using statistical knowledge to solve one problem (predicting advance sales from a small percentage of the orders), and then building on the company’s increasing recognition that analytical research “worked”.

If the research function were acknowledged at every publisher, it would be usefully employed to inform acquisition decisions (whether to bring in a title and how much it is worth), list development, pricing, backlist marketing strategies, physical book laydowns to retailers, geographical emphasis in marketing, and the timing of paperback edition release.

Perhaps the Director of Research — with a department that serves the whole publishing company — is an idea whose time has come again.

But, in the meantime, Logical Marketing can help.

Remember, you can help us choose the topics for Digital Book World 2016 by responding to our survey at this link.


Seven key insights about VMI for books and why it is becoming a current concern


Vendor-managed inventory (VMI) is a supply paradigm for retailers by which the distributor makes the individual stocking decisions rather than having them determined by “orders” from an account. The most significant application of it for books was in the mass-market paperback business in its early days, when most of the books went through the magazine wholesalers to newsstands, drug stores, and other merchants that sold magazines. The way it worked, originally, was that mass-market publishers “allocated” copies to each of several hundred “independent distributors” (also known as I.D. wholesalers), who in turn allocated them to the accounts.

Nobody thought of this as “vendor-managed inventory”. It was actually described as “forced distribution”. And since there was no ongoing restocking component built into the thinking, that was the right way to frame it.

The net result was that copies of a title could appear in tens of thousands of individual locations without a publisher needing to show up at, or even ship to, each and every one.

To make this system functional at the beginning, the books, like magazines, had a predictable monthly cycle through the system. The copies that didn’t sell in their allotted time were destroyed, with covers returned to the publisher for credit.

Over time, the system became inefficient (the details are a story for another day, but the short version is that publishers couldn’t resist the temptation to overload the system with more titles and copies than it could handle) and mass-market publishing evolved into something quite different which today, aside from mostly sticking to standard rack-sized books, works nothing like it did at the beginning.

My father, Leonard Shatzkin, introduced a much more sophisticated version of VMI for bookstores at Doubleday in 1957 called the Doubleday Merchandising Plan. In the Doubleday version, reps left the store with a count of the books on-hand rather than a purchase order. The store had agreed in advance to let Doubleday use that inventory count to calculate sales and determine what should then be shipped in. In 18 months, there were 800 stores on the Plan, Doubleday’s backlist sales had quadrupled and the cost of sales had quartered. VMI was much more efficient and productive — for Doubleday and for the stores — than the “normal” way of stocking was. That “normal” way — the store issues orders and the publisher then ships them — was described as “distribution by negotiation” by my father in his seminal book, “In Cold Type”, and it is still the way most books find their way to most retail shelves.

After my Dad left Doubleday in 1960, successor sales executives — who, frankly, didn’t really understand the power and value of what Dad had left them — allowed the system to atrophy. This started in a time-honored way, with reps pleading that some stores in their territory would rather just write their own backlist orders. Management gave undue deference to the rep who managed the account and allowed exceptions. The exceptions, over time, became more prevalent than the real VMI, and within a decade or so the enormous advantage of having hundreds of stores so efficiently stocked with backlist was gone.

And so, for the most part, VMI was gone from the book business by the mid-1970s. Since then, there have been substantial improvements in the supply chain: PCs in stores that can manage vast amounts of data; powerful service offerings from the wholesalers (primarily Ingram and Baker & Taylor, but others too); information through services like Above the Treeline; and consolidation of the trade business at both ends, so that the lion’s share of a store’s supply comes from a handful of major publishers and distributors (compared to my Dad’s day) and most of the books go to a relatively small number of accounts. All of these have combined to make efficient inventory management for books at retail at least appear not to need the advantages of VMI the way it did 60 years ago.

And since most bookstores really like to make the book-by-book stocking decisions, or at least to control them through the systems they have invested in and the title-specific knowledge they work hard to develop, there has been little motivation for publishers or wholesalers to invest in developing the capability to execute VMI.

Until recently. Now two factors are changing that.

One is that non-bookstore distribution of books is growing. And non-bookstores don’t have the same investments in book-specific inventory management and knowledge, let alone the emotional investments that make them want to decide what the books are, that bookstores do. Sometimes they simply can’t do it: they don’t have the bandwidth or expertise to buy books.

And the other is that both of the two largest book chains, Barnes & Noble and Books-a-Million, are seeing virtue in transferring some of the stocking decisions to suppliers. B&N, at least, has been actively encouraging publishers to think about VMI for several years. These discussions have reportedly revolved around a concept similar to one the late Borders chain was trying a decade or more ago, finding “category captains” that know a subject well enough to relieve the chain of the need for broad knowledge of all the books that fall under that rubric.

This is compelling. Finding that you are managing business a system could make more efficient, while at the same time some of your biggest accounts are asking for services that would benefit from the same automation, is a far more persuasive goad to pursue an idea than the more abstract notion that you could create a beneficial paradigm shift.

As a result, many publishing sales departments today are beginning to grapple with defining VMI, thinking about how to apply it, and confronting the questions around how it affects staffing, sales call patterns, and commercial terms. This interest is likely to grow. A well-designed VMI system for books (and buying one off-the-shelf that was not specifically designed for books is not a viable solution) will have applications and create opportunities all over the world. Since delivering books globally is an increasingly prevalent framework for business thinking, the case to invest in this capability gets easier to make in many places with each passing day.

VMI is a big subject and there’s a lot to know and think through about it. I’ve had the unusual — probably unique — opportunity to contemplate it with all its nuances for 50 years, thanks to my Dad’s visionary insight into the topic and a father-son relationship that included a lot of shop talk from my very early years. So here’s my starter list of conceptual points that I hope would be helpful to any publisher or retailer thinking about an approach to VMI.

1. Efficient and commercially viable VMI requires managing with rules, not with cases. Some of the current candidates to develop a VMI system have been drawn into it by servicing planograms or spinner racks in non-book retailers. These restocking challenges are simpler than stocking a store because the title range is usually predetermined and confined and the restocking quantity is usually just one-for-one replenishment. We have found that even in those simple cases, the temptation to make individual decisions — swapping out titles or increasing or decreasing quantities in certain stores based on rates of movement — is hard to resist and adds complications that can rapidly overwhelm manual efforts to manage it.

2. VMI is based on data-informed shipments and returns. It must include returns, markdowns, or disposals to clear inventory. Putting books in quickly and efficiently to replace sold books is, indeed, the crux of VMI. But that alone is “necessary but not sufficient”. Most titles do not sell a single copy in most stores to which they are introduced. (This fact will surprise many people, but it is mathematically unavoidable and confirmed through data I have gotten from friends with retail data to query.) And many books will sell for a while and then stop, leaving copies behind. Any inventory management depending on VMI still requires periodic purging of excess inventory. That is, the publisher or distributor determining replenishment must also, from time to time, identify and deal with excess stock.

3. VMI sensibly combines with consignment and vendor-paid freight. The convention that books are invoiced to the account when they are shipped and that the store pays the shipping cost of returns (and frequently on incoming shipments as well) makes sense when the store holds the order book and decides what titles and quantities are coming in. But if the store isn’t deciding the titles and quantities, it obviously shouldn’t be held accountable for freight costs on returns; that would be license for the publisher or distributor to take unwise risks. The same is really true for the carrying cost of the inventory between receipt and sale. If the store’s deciding, it isn’t crazy for that to be their lookout. But if the publisher or distributor is deciding, then the inventory risk should be transferred to them. The simplest way to do that is for the commercial arrangement to shift so that the publisher offers consignment and freight paid both ways. The store should pay promptly — probably weekly — when the books are sold. (Publishers: before you get antsy about what all this means to your margins, read the post to the end.)

Aside from being fairer, commercially more logical, and an attractive proposition that should entice the store rather than a risky one that will discourage participation, this arrangement sets up a much more sensible framework for other discussions that need to take place. With the publisher’s prices marked on all the books, the retailer can see a guaranteed margin on every sale for the store to capture (or to offer as discounts to customers). And because the publisher is clearly taking all the inventory risk, it also makes it clear that the account must take responsibility for inventory “shrink” (books that disappear from the shelves without going through the cash register).

Obviously, shrink is entirely the retailer’s problem in a sale-and-return arrangement; whatever they can’t return they will have paid for. But it is also obvious that retailers in consignment arrangements try to elide that responsibility. Publishers can’t allow a situation where the retailer has no incentive to make sure every book leaving the store goes through the sales scan first.

4. Frequent replenishment is a critical component of successful VMI. No system can avoid the reality that predicting book sales on a per-title-per-outlet basis is impossible to do with a high degree of accuracy. The best antidote to this challenge is to ship frequently, which allows lower quantities without lost sales because new copies replace sold copies with little delay. Vendor-paid freight is a real restraint, because per-copy freight costs fall as shipments get larger, but it should be the only limitation on shipment frequency, assuming the sales information is reported electronically on a daily basis, as it should be. The publisher or distributor should always be itching to ship as frequently as an order large enough to provide tolerable picking and freight costs can be assembled. The retailer needs to be encouraged, or helped, to enable restocking quickly and as frequently as cost-efficient shipments will allow.
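The rule-driven logic described in these points — replace what sold, purge what sits — can be sketched in a few lines. This is a toy illustration, not a real system’s design; the thresholds and field names are invented assumptions.

```python
# Minimal sketch of a per-title, per-outlet restocking rule. Two rules
# only: (1) replenish one-for-one against recorded sales; (2) purge a
# title that has gone too long without selling. A real VMI system would
# layer many more rules on top, but never case-by-case decisions.

def restock_decision(on_hand, sold_last_period, weeks_without_sale,
                     purge_after_weeks=26):
    """Return (copies_to_ship, copies_to_pull) for one title at one outlet."""
    # Purge rule: a title that has sat unsold too long comes out,
    # at the vendor's expense under consignment terms.
    if weeks_without_sale >= purge_after_weeks:
        return 0, on_hand
    # Replenishment rule: replace what sold so the shelf count is restored.
    return sold_last_period, 0

# Two copies sold last week: ship two replacements, pull nothing.
print(restock_decision(on_hand=1, sold_last_period=2, weeks_without_sale=0))
# Dead stock after 30 weeks without a sale: ship nothing, pull all three.
print(restock_decision(on_hand=3, sold_last_period=0, weeks_without_sale=30))
```

The point of keeping it this mechanical is exactly the one made in point 1: the moment someone starts hand-tuning individual stores and titles, the workload stops scaling.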

5. If a store has no costs of inventory — either investment or freight — its only cost is the real estate the goods require. GMROII — gross margin return on inventory investment — is the best measurement of profitability for a retailer. With VMI, vendor-paid freight, and consignment, it is effectively infinite. Therefore, profitable margins can be achieved with considerably less than the 40 to 50 percent discounts that have prevailed historically. How that will play out in negotiations is a case-by-case problem, but publishers should really understand GMROII and its implications for retail profitability so they fully comprehend what enormous financial advantages this new way of framing the commercial relationship gives the retailer.
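The arithmetic behind the infinite GMROII claim above is simple to show. The dollar figures here are invented for illustration only.

```python
# GMROII = gross margin dollars / average inventory investment.
# Under consignment with vendor-paid freight, the retailer has no
# money tied up in stock, so the denominator is zero and the ratio
# is unbounded -- which is the sense of "infinity" in the text.

def gmroii(gross_margin, avg_inventory_investment):
    if avg_inventory_investment == 0:
        return float("inf")  # consignment: nothing invested in inventory
    return gross_margin / avg_inventory_investment

# Conventional terms: $40,000 of margin earned on $50,000 of owned stock.
print(gmroii(40_000, 50_000))  # → 0.8
# Consignment/VMI terms: the same margin with zero inventory investment.
print(gmroii(40_000, 0))       # → inf
```

This is why a retailer on consignment terms can be profitable at a discount well below the traditional 40 to 50 percent: the margin no longer has to pay for capital tied up in unsold books.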

(The shift is not without its challenges for publishers to manage but what at first appears to be the biggest one — the delay in “recognizing” sales for the balance sheet — is actually much smaller than it might first appear. And that’s also a subject for another day.)

6. Actually, the store also saves the cost of buying, which is very expensive for books. The most important advantage VMI gives a publisher is removing the need for a buyer to get their books onto somebody’s shelves. The publisher with VMI overcomes what has been the insuperable barrier blocking them from many retail establishments: the store can’t bear the expense of the expertise and knowledge required to do the buying. It is harder to sell that advantage to existing book retailers who have invested in systems to enable buyers, even if some buyer time can be saved through the publisher’s or distributor’s efforts and expertise. But a non-book retailer looking for complementary merchandise that might also be a traffic builder will appreciate largely cost-free inventory that adds margin and will see profitability at margins considerably lower than the discounts publishers must provide today.

7. Within reasonable limits, the publisher or distributor should be happy to honor input from the retailer about books they want to carry. It is important to remember that most titles shipped to most stores don’t actually sell one single unit. Giving a store a title they’re requesting should have odds good enough to be worth the risk (although that will be proven true or not for each outlet by data over time). Taking the huge number of necessary decisions off a store’s hands is useful for everybody; it shouldn’t suggest their input is not relevant. Indeed, getting information from stores about price or topical promotions they are running, on books or other merchandise, and incorporating that into the rules around stocking books, will help any book supplier provide a better and more profitable service to its accounts. After all, having a store say “I’d like to sell this title for 20 percent off next week in a major promotion, would you mind sending me more copies?” opens up a conversation every publisher is happy to have.

Of course, in a variety of consulting assignments, we are working on this, including system design. It is staggering to contemplate how much more sophistication it is possible to build into the systems today than it was a decade-and-a-half ago when we last immersed ourselves in this. In the short run, a VMI-management system will provide a competitive edge, primarily because it will open up the opportunity to deliver to retail shelves that will simply not be accessible without it. That will lead to it becoming a requirement. As I’ve said here before, a prediction like that is not worth much without being attached to a time scale. I think we’ll see this cycle play out over the next ten years. That is: by 2025, just about all book distribution to retailers will be through a VMI system.


Better book marketing in the future depends a bit on unlearning the best practices of the past


[Note to subscribers. We have switched from Feedburner to Mail Chimp for email distribution to our list to improve our service. Please send us a note if you have any problems or think there’s anything we ought to know.]

*************************************************************************

A few years ago, publishers invented the position of Chief Digital Officer and many of the big houses hired one. The creation of a position with that title, reporting to the CEO, explicitly acknowledged the need to address digital change at the highest levels of the company.

Now we’re seeing new hires being put in charge of “audiences” or “audience development”. I don’t know exactly what that means (a good topic for Digital Book World 2016), but some conversations in the past couple of weeks are making clearer to me what marketing and content development in book publishing is going to have to look like. And audiences are, indeed, at the heart of it.

I’ve written before about Pete McCarthy’s conviction that unique research is needed into the audiences for every book and every author and that the flow of data about a book that’s in the marketplace provides continuing opportunities to sharpen the understandings of how to sell to those audiences. Applying this philosophy bumps up against two realities so long-standing in the trade book business that they’re very hard to change:

1. How the book descriptions which are the basis for all marketing copy get written
2. A generic lack of by-title attention to the backlist

The new skill set that is needed to address both of these is, indeed, the capability to do research, act on it, and, as Pete says, rinse and repeat. Research, analysis, action, observation. Rinse and repeat.

I had a conversation over lunch last week with an imprint-level executive at a Big House. S/he got my attention by expressing doubt about the value of “landing pages”, which are (I’ve learned through my work with Logical Marketing; I wouldn’t have known this a year ago) one of the most useful tools to improve discovery for books and authors. I have related one particularly persuasive anecdote about that here. This was a demonstration to me of how much basic knowledge about discovery and SEO is lacking in publishing. (The case that ignorance of SEO is widespread in publishing has been made persuasively in an ebook by British marketer Chris McVeigh of Fourfiftyone, a marketing consultancy in the UK that seems to share a lot of the philosophy we employ at Logical Marketing.)

But then, my lunch companion made an important operational point. I was advocating research as a tool to decide what to acquire, or what projects might work. “But I could never get money to do research on a book we hadn’t signed,” s/he said, “except perhaps to use going after a big author who is with another house.” (Indeed, we’ve done extensive audits at Logical Marketing for big publishers who had exactly that purpose in mind.) “But, routinely? impossible!”

The team Pete leads can do what would constitute useful research, research that would really inform an acquisition decision, for $1,000 a title. If the capability to do what we do (which probably requires command of about two dozen analytical tools) were in-house, it would cost much less than that.

Park that thought.

I also had an exchange last week with Hugh Howey, my friend the incredibly successful indie author with whom I generally agree on very little concerning big publishers and their value to authors. But Hugh made a point that is absolutely fundamental, one which I learned and absorbed so long ago that I haven’t dusted it off for the modern era. And it is profoundly important.

Hugh says there are new authors he’s encountering every day who are achieving success after publishers failed with them. It is when he described the sales curve of the successful indie — “steadily growing sales” — that a penny dropped for me. An old penny.

We recognize in our business that “word of mouth” is the most effective means of growing the market for a book. If that were the way things really worked, books would tend to have a sales curve that was a relatively gentle upward slope to a peak and then a relatively gentle downward slope.

Of course, very few books have ever had that sales curve. Nothing about the way big publishers routinely market and sell would enable it to happen. Everything publishers do tries to impose a different sales curve on their books.

A gentle upward slope followed by a gentle downward slope would, in the physical world, require a broad and very shallow distribution with rapid replenishment where the first copy or two put at an outlet had sold. But widespread coordination of rapid replenishment of this kind for books selling at low volumes at any particular outlet (let alone most outlets) is, for the most part, a practical impossibility in the world of distributed retail.

In fact, distributed retail demands a completely out-of-synch sales curve. It wants a big sale the first week a book is out to give it the best chance of making the bestseller list and, even failing that, the best chance of being worthy of continuing attention by a publisher’s sales staff, and therefore, the marketing team. Books in retail distribution are seen as failures if they don’t catch on pretty quickly, if not in days or weeks, certainly within a couple of months. And if a store sells two copies, say, of a new book in the first three months, it probably doesn’t make the cut as a book to be retained. If they bought two, they’re glad they’re gone and not likely to re-order without some push by the publisher or attention-grabbing other circumstance. If they bought ten, they’ll want to get their dollars back by making returns so they can invest in the next potentially big thing.

But that’s not the case online, where there is no need for distributed inventory (especially of ebooks!). If the first copies sold lead to word-of-mouth recommendations, the book will still be available to the online shopper. And there will be nothing in the way it is presented — it won’t have a torn cover or be hidden in the back of the store, say — to indicate it isn’t successful. People can buy it and the chain can continue, building over time. Three months later, six months later, it really doesn’t matter; the book can keep selling. And, by the way, this will be true at any online retailer with an open account at Ingram (including for print-on-demand books), not just at Amazon.

But, in the brick and mortar world, the book will effectively be dead if it doesn’t catch on in the first three months. And the reality of staffing, focus, and the sales philosophy of most publishers means it won’t be getting any attention from the house’s digital marketers either.

If you live in the world of indie success like Hugh Howey does, you are repeatedly seeing authors breaking through months after a book’s publication, at a time when an experienced author knows a house would have given up on them.

Now park that.

I also had a chat last week with a former colleague of mine now at a periodical. He was explaining that one major conceptual challenge for his publication in the digital age was to see their readership as many pretty small and discrete audiences, not one big one at the level of the “subscriber”. No story in his publication is intended for “everybody”; what is important is for a newspaper or magazine to know whether particular stories are satisfying the needs of the particular niche of their audience that wants that topic, that kind of story. Talking to this former colleague about digital marketing and publishing was a variation on the themes Pete and I discuss all the time.

One thing I learned in this conversation made another penny drop. Let’s say you have a story on any particular topic, from theater to rugby, my friend posited. Your total “theoretical market” within the publication’s readership is every person who ever read a single story on that subject. But your “core market” is every person who has read two stories on it. If a high percentage of those read it, the story succeeded. If not, the story failed.

And a further implication of this analysis is that seeing your audiences that way, and growing them that way, will also ultimately allow monetizing them more effectively. This wouldn’t be advertising-led, so much as harvesting the benefits of audience-informed content creation, but it is totally outside the way editorial creation at newspapers and magazines has always occurred.

And now park that.

We had a meeting two weeks ago with a fledgling publisher whose owner has a great deal of direct marketing expertise. As he heard Pete explaining what he did, looking for search terms that suggested opportunity (lots of use of the term and relatively few particularly good answers), he wondered if we could tell him through research what book to write. We’ve gotten some publishers in some circumstances to do marketing research early enough to influence titling and sub-titling. McVeigh in his ebook makes the same point under the rubric that SEO should be employed before titling any book.

Of course, we don’t sell that kind of help very often, or at least we haven’t so far. It would require committing marketing money early enough to pay for research like that. But we know it is useful.

And all of this together brings into sharper focus for me where trade publishing has to go, and how the marketing function, indeed, the whole publishing enterprise, needs to be about a constant process of audience segmentation, research, tweaks, analysis, and repeat. A persistently enhanced understanding of multiple audiences can productively inform title selection and creation. And systems and workflows need to be built to systematically apply what is being learned every day to every title which might benefit. Audience segmentation and constant research are really at the heart of the successful trade publishing enterprise of the future, even if we are only lurching toward them now with a primitive understanding of SEO, the occasional A-B test for a Facebook ad, and the gathering of some odd web traffic and email lists that don’t relate to any overall plan.

A publisher operating at scale ought to be able to provide those authors who want to build their audiences one reader at a time with better analysis and tools than they would have on their own. Publishers have always depended on the energy of authors to sell their books; the techniques just have to change. Instead of footing the bill for expensive and wasteful author tours, publishers should be providing tools, data, and helpful coaching to be force multipliers for the efforts authors are happy to expend on their own behalf. The publisher’s goal should be to have their authors saying “I don’t know how I could possibly be so effective without the help I get from my publisher.”

Publishers should also be doing the necessary research to examine the market for each book they might do before they bid on it. They should have audience groups with whom they’re in constant contact, and they also need the ability to quickly segment and analyze audiences “in the wild”. The dedicated research capabilities need to be applied to the opportunities surfaced by constant monitoring of both the sales of and the chatter about the backlist.

Size, scale, and a large number of titles about which a lot is known should give any publisher advantages over both indie authors and dominant retailers in building the biggest possible audience for the books it publishes. But getting there will require both learning the techniques of the future and unlearning the concepts and freeing themselves of the discipline of “pub date” timing that have always driven effective trade publishing.

The publishers creating new management positions with the word “audience” in the title would seem to be very much on the right track. It is worth recalling that my father, Leonard Shatzkin, carried the title of Director of Research at Doubleday in the 1950s. Research would be another function to glorify with a title and a budget assigned and monitored from the top of each company. Note to the CEOs: a budget for “research” for marketing and to inform acquisition should be explicit and it should be the job of somebody extremely capable to make sure it is productively invested.


Amazon channels Orwell in its latest blast


Anybody who reads Amazon’s latest volley in the Amazon-Hachette war and then David Streitfeld’s takedown of it on the New York Times’s web site will know that Amazon — either deliberately or with striking ignorance — distorted a George Orwell quote to make it appear that he was against low-priced paperbacks when he was actually for them.

This recalls the irrelevant but delicious irony that the one time Amazon exercised its ability to claw back ebooks it had sold was when they discovered that they were selling unauthorized ebooks of Orwell’s “1984”. The right thing to do was exactly what they did: pull back the copyright-violating ebooks and refund the money to the purchasers. This (apparently) one-time event has often been cited as some sort of generic fault with ebooks, as though ebook vendors would make a practice of taking back what they had sold their customers. This was a case where Amazon was vilified in some quarters for doing the right thing, which simply adds to the irony.

However, the most misleading aspect of the Amazon piece is not the Orwellian treatment of Orwell, but the twisted metaphor in which the low-priced ebook is the low-priced paperback of today’s world. (The analogy was one I wrote about three years ago with, I think, somewhat more care for the facts.) Yes, they were both new formats with a lower cost basis that enabled a lower retail price to yield positive margins. And there’s one other striking similarity: they both unleashed a spate of genre fiction to satisfy the demand for the format, largely because the rights to higher-value books were not available for the cheaper format, but also because lower prices attract some readers more than others. But that is where the similarities end.

This argument against Hachette, using authors as proxies and lower-prices-for-consumers as the indisputable public good, once again employs two logical fallacies that are central to their argument that Hachette (and its parent company, invoked to give the appearance of relative equality of size between the combatants, which is still nowhere near the case) is craven and muleheaded and that Amazon is merely engaged in a fight for right.

1. Amazon’s logic is entirely internal to Amazon. It does not attempt to take into account, or even acknowledge, that publishers and their authors are dependent on other channels besides Amazon. And, in fact, the publishers and authors know for sure that the more the sales do concentrate within Amazon, the more their margins will be reduced.

2. The price elasticity statistics they invoke (for the second time in as many public statements), which are also entirely internal to Amazon, are averages. They don’t even offer us a standard deviation so we can get a sense of what share of the measured titles are near the average, let alone a genre- and topic-specific breakdown which would show, beyond the shadow of a doubt, that many Hachette books would not achieve the average elasticity rate. See if you can find anybody with an ounce of statistical sophistication who thinks a book by Malcolm Gladwell has the same price elasticity as a romance or sci-fi novel by a relatively unknown author.
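The statistical point above — that an average hides the spread — can be illustrated with a toy calculation. Every number here is invented purely for illustration; no real elasticity data is implied.

```python
# An average elasticity can mask wide variation across genres. If the
# sample is dominated by price-sensitive genre fiction, the average
# says little about how a big-name nonfiction title would respond.

from statistics import mean, stdev

# Hypothetical elasticity figures by category (invented numbers).
elasticity_by_genre = {
    "romance": 2.4,              # highly price-sensitive
    "sci-fi": 2.1,
    "literary fiction": 1.1,
    "big-name nonfiction": 0.6,  # buyers want the author, not the price
}

values = list(elasticity_by_genre.values())
print(f"average: {mean(values):.2f}, std dev: {stdev(values):.2f}")
# With a standard deviation this large relative to the mean, applying
# the average to any single title is close to meaningless.
```

This is exactly why a genre- and topic-specific breakdown, or at least a standard deviation, would be needed before Amazon’s average could tell us anything about a particular Hachette title.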

The actual history of the paperback in America contains elements of what Amazon claims. It actually begins after World War II, not before (although Penguin began in this country in 1939). During World War II, under the leadership of historian and renaissance man Philip Van Doren Stern, the military made 25 cent paperbacks available to the troops. That introduced the idea to the masses and after the war several mass-market paperback houses started.

They distributed through the magazine distribution network: local wholesalers that “pushed” copies of printed material to newsstands and other intermediaries who took their distribution of copies, displayed them until the next edition of the magazine would come out, and then sent back the covers to get credit for what was not sold. The first paperback books had a similar short shelf life in that distribution environment.

What made the cheap prices possible were several factors:

1. The books themselves were frequently formulaic and short and therefore cheap for the publisher to buy. The universe of titles for the first several years was, aside from classics from the public domain, a different set of titles than those sold by mainline publishers through bookstores.

2. There was no expensive negotiation between publishers and the accounts over an order for each shipment of books. The wholesaler simply decided how many copies each outlet would get and, in the beginning, the wholesaler pretty much distributed what the publisher asked them to. The “check and balance” was that the publisher would get worthless covers back for the unsold books and that was their constraint against oversupplying the system. Over time, that aspect of things broke down and the publisher had to work the wholesalers to get the distributions they wanted.

3. The books themselves were cheaper to manufacture: less paper, cheaper paper, and much less expensive binding.

4. The adoption of the magazine system of covers-only for returns created a big saving compared to the trade book practice that required returns of the whole book in saleable condition to get credit.

5. The retailer took a considerably smaller share of the retail price than bookstores got on trade books.

At the same time that the mass-market revolution was beginning, conventional trade publishers also started experimenting with the paperback format. The first extensive foray of this kind was by Doubleday in the early 1950s, when wunderkind Jason Epstein (later one of the founders of The New York Review of Books and still active as one of the founding visionaries behind the Espresso Book Machine) created the Anchor Books line.

My father, Leonard Shatzkin, was Director of Research at Doubleday (today they would call it “New Business Development” or “Change Management”) at the time. He often talked about a sales conference at Bear Mountain where Sid Gross, who headed the Doubleday bookstores, railed against the cheap paperbacks on which the stores couldn’t make any money! So, it was true that the established publishing industry and the upstart paperback business had a period of almost two decades of very separate development.

It took until the 1960s — a decade-and-a-half after the paperback revolution started — before the two businesses really started to coalesce into one. And the process of integrating the two businesses really took another decade-and-a-half, finally concluding in the late 1970s when Penguin acquired Viking, Random House acquired Ballantine and Fawcett, and Bantam started to publish hardcover books.

My own first job in trade publishing was in 1962, working on the sales floor of the brand new, just-opened paperback department of Brentano’s Bookstore on 5th Avenue. Even then, the two businesses operated separately. The floor of the department had chin-high shelves all around with what we’d call “trade paperbacks” today, arranged by topic. They were mostly academic. On a wall were the racks of mass-market paperbacks and they were organized by publisher. If you wanted to find the paperbacks of a famous author whose rights had gone to a mass-market house, you had to know which house published that author to find the book. (That was good; it made work for sales clerks!)

There was a simple reason for that. The two kinds of paperbacks worked with different economics and distribution protocols. The trade paperbacks were bought like hardcovers; everything that was shipped in was because a buyer for Brentano’s had ordered it. The mass-markets were “rack-jobbed” by the publisher. They sent their own reps in to check stock on a weekly basis and they decided what new books went into the racks and what dead stock was pulled. It was to make the work of the publishers efficient that the mass-markets were grouped by publisher.

The highly successful commercial books that became mass-market paperbacks got there because the hardcover publisher, after it had booked most of the revenue it expected to get for the book, then sold mass-market rights to get another bite of the apple.

Little of this bears much resemblance to what is happening today. Little of this is comparable to the challenges trade publishers face keeping alive a multi-channel distribution system and a printed book market that still accounts for most of the sales for most of the books.

But the most striking difference today is that a single retailer controls so much of the commerce that it can, on its own, influence pricing for the entire industry. The mere fact that one single retailer can try that is itself a signal that we have an imbalance in the value chain that is unprecedented in the history of publishing.

One other aspect of this whole discussion which is mystifying (or revealing) is Amazon’s success at getting indie authors to cheer them on as they pound the publishers to lower prices. (The new Amazon statement is made in a letter sent to KDP authors.) This is indisputably against the interests of the self-published authors themselves, who are much better off if the branded books have higher prices and leave the lower price tiers to them. That seemed obvious to me years ago. Yet Amazon still successfully invokes the indie author militia to support them as they fight higher prices for the indies’ competition! You will undoubtedly see evidence of that in the comment string for this post (if history is any guide).

The tactic of publishing Michael Pietsch’s name and email address with a clear appeal for the indie authors to flood his inbox is an odious tactic, but, in fairness to Amazon, that odious tactic was initiated by the Authors United advertisement headed by Douglas Preston which gave Bezos’s email address. This is something that both sides should refrain from and, in this case, Amazon didn’t start it.


Marketing will replace editorial as the driving force behind publishing houses


One of the things my father, Leonard Shatzkin, taught me when I was first learning about book publishing a half-century ago was that “all publishing houses are started with an editorial inspiration”. What he meant by that is that what motivated somebody to start a book publisher was an idea about what to publish. That might be somebody who just believed in their own taste; it might be something like Bennett Cerf’s idea of a “Modern Library” of compendia organized by author; it might even be Sir Allen Lane’s insight that the public wanted cheaper paperback books. But Dad’s point was that publishing entrepreneurs were motivated by the ideas for books, not by a better idea for production efficiency or marketing or sales innovation.

In fact, those other functions were just requirements to enable somebody to pursue their vision or their passion and their fortune through their judgment about what content or presentation form would gain commercial success.

My father’s seminal insight was that sales coverage really mattered. When he recommended, on the basis of careful analysis of the sales attributable to rep efforts, that Doubleday build a 35-rep force in 1955, publishers normally had fewer than a dozen “men” (as they were, and were called, back then) in the field. The quantum leap in relative sales coverage that Doubleday gained by such a dramatic sales force expansion established them as a power in publishing for decades to come.

Over the first couple of decades of my time in the business — the 1960s and 1970s — the sales department grew in importance and influence. It became clear that the tools for the sales department — primarily the catalog, the book’s jacket, and a summary of sales points and endorsements that might be on a “title information sheet” that the sales reps used — were critical factors in a book’s success.

There was only very rarely a “marketing” department back then. There was a “publicity” function, aimed primarily at getting book reviews. There was often a “sales promotion” function, which prepared materials for sales reps, like catalogs. There might be an art department, which did the jackets. And there was probably an “advertising manager”, responsible for the very limited advertising budget spent by the house. Management of coop advertising, the ads usually placed locally by retail accounts that were partly supported by the publishers, was another function managed differently in different houses.

But the idea that all of this, and more, might be pulled together as something called “marketing” — which, depending on one’s point of view, was either also in charge of sales or alternatively, viewed as a function that existed in support of sales — didn’t really arise until the 1980s. Before that, the power of the editors was tempered a bit by the opinions and needs of the sales department, but marketing was a support function, not a driver.

In the past decade, things have really changed.

While it is probably still true that picking the “right books” is the single most critical set of decisions influencing the success of publishers, it is increasingly true that a house’s ability to get those books depends on their ability to market them. As the distribution network for print shrinks, the ebook distribution network tends to rely on pull at least as much as on push. The retailers of ebooks want every book they can get in their store — there is no “cost” of inventory like there is with physical — so the initiative to connect between publisher and retailer comes from both directions now. That means the large sales force as a differentiator in distribution clout is not nearly as powerful as it was. Being able to market books better is what a house increasingly finds itself compelled to claim it can do.

In the past, the large sales force and the core elements that they worked with — catalog, jacket, and consolidated and summarized title information — were how a house delivered sales to an author. Today the distinctions among houses on that basis are relatively trivial. But new techniques — managing the opportunities through social networks, using Google and other online ads, keeping books and authors optimized for search through the right metadata, expanding audiences through the analysis of the psychographics, demographics, and behavior of known fans and connections — are still evolving.

Not only are they not all “learned” yet, the environment in which digital marketing operates is still changing daily. What worked two years ago might not work now. What works now might not work a year from now. Facebook hardly mattered five years ago; Twitter hardly mattered two years ago. Pinterest matters for some books now but not for most. Publishers using their own proprietary databases of consumer names with ever-increasing knowledge of how to influence each individual in them are still rare but that will probably become a universal requirement.

So marketing has largely usurped the sales function. It will probably before long usurp the editorial function too.

Fifty years ago, editors just picked the books and the sales department had to sell them. Thirty years ago, editors picked the books, but checked in with the sales departments about what they thought about them first. Ten years from now, marketing departments (or the marketing “function”) will be telling editors that the audiences the house can touch need or want a book on this subject or filling that need. Osprey and some other vertical publishers are already anticipating this notion by making editorial decisions in consultation with their online audiences.

Publishing houses went from being editorially-driven in my father’s prime to sales-driven in mine. Those that didn’t make that transition, expanding their sales forces and learning to reach more accounts with their books than their competitors, fell by the wayside. The new transition is to being marketing-driven. Those that develop marketing excellence will be the survivors as book publishing transitions more fully into the digital age.

A very smart and purposeful young woman named Iris Blasi, then a recently-minted Princeton graduate, worked for me for a few years a decade ago. She left because she wanted to be an editor and she had a couple of stops doing that, briefly at Random House and then working for a friend named Philip Turner in an editorial division at Sterling. From there Iris developed digital marketing chops working for Hilsinger-Mendelson and Open Road. She’s just taken a job at Pegasus Books, a small publisher in Manhattan, heading up marketing but doubling as an acquiring editor. I think many publishers will come to see the benefits of marketing-led acquisition in the years to come. Congratulations to Pegasus and Iris for breaking ground where I think many will follow.

Many of the topics touched on in the post will be covered at the Marketing Conference on September 26, a co-production of Publishers Launch Conferences and Digital Book World, with the help and guidance of former Penguin and Random House digital marketer Peter McCarthy. We’ve got two bang-up panels to close with — one on the new requirement of collaboration between editorial and marketing within a house and then in turn between the house and the author, and the other on how digital marketing changes how we must view and manage staff time allocations, timing, and budgeting. These panels will frame conversations that will continue in this industry for a very long time to come as the transition this post sketches out becomes tangible.


Vendor-managed inventory: why it is more important than ever


The idea of vendor-managed inventory has never become particularly popular in the book business, despite a few experiments over the years where it was implemented with great success. (And despite the fact that I was pushing for it back in 1997 and 1998.) But as the book business overall declines, with the print book business leading the slide and that portion of the print book business which takes place in retail stores falling off at an alarming rate, it is time for the industry to think about it again.

In fact, VMI for the book business began with the ID wholesalers and mass-market paperbacks right after World War II. The IDs — the initials stood for “independent distributors” — managed the distribution of magazines and newspapers at newsstands and other accounts within their geographical territory. The retailers had no interest in deciding how many copies of LIFE they got in relation to Ladies Home Journal; the ID made that determination. And since only the torn off covers were necessary for confirmation of a “return”, the “bulk” cost of distribution was in putting the copies in, not taking back the overage. And because newspapers and magazines had a disciplined frequency, it was obvious that you had to clear out yesterday’s, or last week’s, or last month’s to make room for the next issue.

When the first mass-market paperback publishers started their activity right after World War II, providing books for, among others, returning servicemen who had access to special servicemen’s editions of paperbacks (in a program created by the polymath Philip Van Doren Stern, a Civil War historian and friend of my father’s), they helped the jobbers along by having monthly lists. They also were comfortable with a book having only a one-month shelf life and with the stripped covers serving as evidence that the book hadn’t been sold.

For quite some time, the initial allocations to the ID wholesalers were really determined by the paperback publishers. Eventually, that freedom to put books into distribution choked the system, but there were a lot of other causes of the bloat. By the 1960s, many bookstores were carrying paperbacks and many other big outlets were served “direct” by the publishers, leaving the IDs with the least productive accounts. But VMI, even without any system and with very little in the way of restraints on the publishers, was responsible for the explosive growth of mass-market paperbacks in the two decades following World War II.

In the late 1950s, Leonard Shatzkin, my father, introduced The Doubleday Merchandising Plan, which was VMI for bookstores on Doubleday books. For stores that agreed to the plan, reps reported the store’s inventory of Doubleday books back to headquarters rather than sending an order. Then a team posted the inventories, calculated the sales, and followed rules to generate an order of books to the store. Sales mushroomed, particularly of the backlist, and returns and cost of sales plummeted. Doubleday was launched into the top tier of publishing companies.
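The mechanics behind a plan like that reduce to simple arithmetic on the rep's stock count. Here is a hedged sketch of the kind of rule involved; the function name, the "model stock" target, and all the numbers are my invention for illustration, not the Plan's actual rules.

```python
# A sketch of rule-based VMI replenishment: from a rep's stock count,
# derive sales since the last visit and an order to restore a target level.

def replenishment_order(prior_on_hand, shipped_since, current_on_hand, model_stock):
    """Return (implied sales, copies to ship) for one title in one store."""
    # Copies that left the shelf = what we knew was there, plus what we
    # shipped since, minus what the rep just counted.
    sales = prior_on_hand + shipped_since - current_on_hand
    # Ship enough to bring the store back up to its target ("model") stock.
    order = max(model_stock - current_on_hand, 0)
    return sales, order

# One title: we last knew of 5 copies, shipped 4 since, rep counted 3,
# and headquarters' rules call for keeping 6 on hand.
sales, order = replenishment_order(prior_on_hand=5, shipped_since=4,
                                   current_on_hand=3, model_stock=6)
print(sales, order)  # 6 sold since the last visit; ship 3 more
```

The store never writes an order; the publisher's rules do, which is the whole point of vendor-managed inventory.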

In a much more modest way, a distributor that my father owned called Two Continents introduced a VMI plan in the 1970s. Even with a very thin list and no cachet, we (I was the Marketing Director) were able to get 500 stores on the Plan in a year. We achieved similarly dramatic results, but from a much more modest base.

Two Continents was undone by the loss of some distribution clients. The Doubleday plan was undermined by reps who convinced headquarters years after my father left that their stores would be more comfortable if they wrote the Plan orders rather than letting them be calculated at headquarters. And the rise of computerized record-keeping systems for inventory and national wholesalers who could replenish stock quickly improved inventory performance, and store profitability, without VMI. Although our client West Broadway Book Distribution has successfully operated VMI in specialty retail for more than a decade, and Random House has worked some version of VMI at Barnes & Noble for the past several years, the technique has hardly been considered by the book trade for a long time.

It is time for that to change. What can foster the change is a recognition about VMI that is readily apparent in West Broadway’s implementations in non-bookstores, but would not have been so obvious to the bookstores using Doubleday’s or Two Continents’ services.

From the publisher’s perspective, the requirement that there be a title-by-title, book-by-book buying function in the store in order for the store to stock books purely and simply reduces the number of stores that can stock books. The removal of that barrier was the key achievement of the ID wholesalers racking paperbacks after World War II. Suddenly there were thousands of points of sale that didn’t require a buyer.

From the store’s perspective, buying — and managing the supply chain to support the buying decisions — is expensive. VERY expensive. Books are hard to buy. New ones are coming all the time; the number of publishers from which they come (and who are the primary sources of information about the books, even if you could “source” them from wholesalers at a slight margin sacrifice for operational simplicity) is huge; the shelf life of any particular title is undeterminable; and the sales in any one outlet are very hard to read.

Consider this data provided by a friend who owns a pretty substantial bookstore.

Looking at the store’s records for a month, 65% of the units sold were singles: one copy of a title. Only 35% were of books that sold 2 or more. (I didn’t ask the question, but that would suggest that 80-90 percent of the titles that sold any copies sold only one.)
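The parenthetical inference can be checked with back-of-envelope arithmetic. The only assumption here is mine, not the store's data: how many copies the average multi-copy title sells in the month.

```python
# Normalize to 100 units sold in the month: 65 were singles (one copy of
# a title), 35 were units of titles that sold two or more copies.
single_units = 65
multi_units = 35

# Assumption for illustration only: a multi-copy title averages 2.5 copies.
avg_copies_per_multi_title = 2.5

single_titles = single_units  # each single unit is its own title
multi_titles = multi_units / avg_copies_per_multi_title  # 14 titles
single_share_of_titles = single_titles / (single_titles + multi_titles)
print(f"{single_share_of_titles:.0%}")  # 82%, inside the suggested 80-90% range
```

Varying the assumed average between 2 and 4 copies keeps the answer between roughly 79% and 88%, so the 80-90 percent guess is robust.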

Then, the following month, once again 65% of the units sold were singles. But only 20-30 percent of them were the same books as had sold as singles the prior month. Upwards of 70% of them were different titles. And upwards of 70% of the ones that sold one the prior month didn’t sell at all.

To further underscore how slowly book inventory moves, another report the store runs shows that more than 80% of the titles in the store do not sell a single copy in any given month. So it is no surprise that an analysis of books from a major publisher that promotes heavily showed that more than half the new titles the store receives from that publisher don’t sell a single copy within a month of their arrival, a window that includes the promotion around publication date!

These data points demonstrate another compelling reason for VMI. When a store sells none of 80% of its titles in a month, and of the ones they do sell 80% of those sell one unit, they clearly need information about what is going on in other stores to know which ones to keep or reorder and which ones to return. Above the Treeline is an inventory service which provides its stores with broader sales data to address that issue, but the information is not as granular or as susceptible to analysis as what a publisher or aggregator could do with VMI.

Partly because of the high cost of buying and a supporting supply chain that a book outlet requires, publishers will see shelf space for books drop faster than retail demand. (The closure of Borders, which wiped out a big portion of the shelf space, is part of what is behind the recent good sales reports from many independents.) At the same time, retailers of all things will be under increased pressure to find more sales as the Internet — often, but not always, Amazon — keeps eating into their market.

This all adds up to VMI to me. We’ll see over the next couple of years whether industry players come to the same conclusion.


Peering into the future and seeing more value in the Random Penguin merger


So now in addition to the Random House and Penguin merger that is being reviewed by governments far and wide, we have the news that HarperCollins is exploring a tie-up with Simon & Schuster in a deal that hasn’t been made yet. That leaves Hachette and Macmillan, among the so-called Big Six, still on the outside as the general trade publishing behemoths rearrange themselves for whatever is the next stage of book publishing’s existence.

I am not sure we really need an “explanation” for what is the resumption of a perfectly natural phenomenon. Big publishers have been merging with each other for several decades in a process that suddenly stopped after Bertelsmann acquired Random House (to add to its holding of Bantam Doubleday Dell) in 1998. We didn’t know it at the time, but that concluded a long string of mergers that had recently included Penguin’s acquisition of Putnam-Berkley, but which stretched back to the 1970s when pursuit of the paperback-hardcover synergy had driven Viking and Penguin; Doubleday and Dell; and Random House-Ballantine and Fawcett into each other’s arms.

(Perhaps HarperCollins should get credit for the resumption of the era of consolidation. Their acquisition of Christian publisher Thomas Nelson, combined with their holding of Zondervan, created a powerful position in one of publishing’s biggest vertical markets shortly before Penguin and Random House announced their plans.)

But consequential events always get an explanation, whether they deserve one or not, and this merger appears to many to be driven by consolidation among the retail intermediaries and the rational concern — amply documented by recent experience — that the retailers would use their leverage to press for more and more margin. This is complicated by the fact that both of the dominant retailers — Amazon in the online world and Barnes & Noble in the brick-and-mortar space — have small publishing operations of their own that are always available to put additional pressure on publishers at the originating end of the value chain.

There is an important asymmetry to take note of here. The retailers publish and are always a threat to acquire manuscripts directly and cut the publishers out but the publishers, particularly the biggest ones, don’t do retail and there is no obvious path for them to enter retailing in any significant way. (That last sentence was written with full cognizance that we await the debut of Bookish, which is an attempt by three of the Big Six to enter retailing in a significant way. Maybe when concrete plans for it are announced there will be some reasons provided to amend that thought.)

In my opinion, the dominant position that Amazon holds in online retailing and that B&N owns in shops are impregnable on their own terms in ways that the positions of each of the big publishers are not.

The threat to Barnes & Noble is that bookstores will become unsustainable: that a retailer trying to exist at scale with books as its primary product offering will, because of ebooks and online purchasing of print, simply become unviable. The threat to Amazon is more nuanced and more distant. One can imagine a world developing where content retailing evolves into niches by subject or tastemaker. But that world is not around the corner (an environment toxic to bookstore chains appears to be much closer) and it would be far easier to imagine how Amazon could adapt to niche online retailing than to see B&N adapting to deliver retail book selections that are only viable at a fraction of their current size.

(I consulted for them a decade ago and suggested just that, to no interest. They were shutting down their mall stores at the time and the idea seemed totally counterintuitive. I’ve also written about it.)

I saw recent data (sorry, can’t remember where…) suggesting that something like 38% of the book business is now done online, taking both ebooks and sales of print into account. This seems to be confirmed by a chart built on BookStats data by reporter Laura Owen of PaidContent, if you take “institutional sales” out of the equation and assume that wholesalers sold books to online and store retailers as well as libraries.

Whatever the percentage is, it is almost certainly higher for immersive reading than for illustrated or reference books because immersive works for ebooks and the others mostly don’t. So it would appear that something like 60% of the book business is still a bricks-and-mortar game, with the number being somewhat lower for straight text and higher for illustrated.

That, in a nutshell, explains why the big publishers are still extremely powerful. The 60% sold at retailers is what they’re uniquely skilled at getting and what Amazon is uniquely challenged to penetrate.

But the one thing we know for sure is that the shift to online purchasing — while it has slowed down — will continue to progress for a long time. The increased ubiquity of devices; the always-larger selection from an online merchant; the increase in availability of appealing and useful content that is either too short or too specialized for print; the steadily increasing cost and hassle of shopping by car rather than by computer; the natural results of birth, death, and demography; and the increase in online word-of-mouth and recommendation sources are among the many factors that assure that.

As the percentage of a publisher’s sales made through retail stores decreases, the cost of covering those stores increases. This has already become an issue as the big publishers view their overheads and come to the conclusion that they can’t afford to pay ebook royalties greater than 25% of receipts. Surely, some of the cost basis they see driving that necessity is really print-based (creation and distribution), which makes them calculate what’s affordable differently than a more new-fangled publisher planning primarily on digital and online distribution.

The publishers who are merging or thinking about merging are not doing so out of immediate desperation. The financial reports we see from trade publishers are not frightening. Top line sales are challenged — there is little or no growth — but margins have been maintained through the seismic marketplace shifts of the past few years and the pace of change is slowing. So it is probably preparing for a world a few years off that drives publishers to merge today. What will that world look like?

The world of publishing we’re going to see five or ten years from now will probably look quite different. Even if store sales only decline 10% a year against the industry total, what is a 60% share today will be about a third after five years have passed and around 20% in ten. Those are sales well worth having, of course, but they’ll be a lot more expensive to get. And if I were predicting rather than just speculating, I’d expect the erosion of retail sales to be a bit faster than that.
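The compounding arithmetic behind those figures is quick to sketch. The 10%-a-year erosion rate is the paragraph's hypothetical, not a forecast.

```python
# A 60% store share eroding 10% a year, relative to itself.
share = 0.60
for year in range(1, 11):
    share *= 0.90
    if year in (5, 10):
        print(f"year {year}: {share:.1%}")
# year 5: 35.4% (about a third); year 10: 20.9% (about a fifth)
```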

My expectation is that freestanding bookstores will be less and less common, and smaller book sections in other retailers (the way they’re in mass merchants today) will proliferate. We already see this in “specialty” retail: stores stock books that fit alongside their other product offerings. But as bookstores get scarcer, it will probably begin to make sense for general book selections — bestsellers, classics, and the cream of popular categories like cooking and current affairs — to be offered by other merchants. Part of the reason that doesn’t happen now is that it is too hard for the retailer not in the book business to do. A representative selection either requires dealing with many publishers or buying from a wholesaler. And the wholesalers are working on tight margins, not allowing them much room to offer expensive services (like inventory management) unless they really cut into the store’s margin.

But you don’t have to have every book — or even every bestseller — to deliver a compelling consumer offering. Book-of-the-Month Club and The Literary Guild proved that half a century ago when they competed for the general book club market. They demanded exclusives on the bestsellers, so they tended to split them. And they each had enough to pull a very large audience.

Well, the combination of Random House and Penguin has damn near half the bestsellers too. And Random House, at least, has already developed vendor-management capabilities that they can apply at the store level. So as the bookstores disappear from town after town, a Random Penguin combination (they really ought to call it that!) becomes able to offer any local retailer a selection of books that will look pretty good to the average consumer.

In addition, they’ll find that the combined lists give them a great head start on having enough titles to deliver retailers other vertical selections — cooking, crafts, home improvement — that their VMI skills will also help them serve.

Right now the challenge Amazon is having is that they’re trying to publish with a grip on no more than half the market. That’s great, as far as it goes, because that’s where they have a real margin advantage when they cut the publisher out of the chain. But because there is so much Amazon fear-and-loathing around the rest of the industry, they’re not able to build out beyond their proprietary position. (See the recent frustrations expressed by their author, Tim Ferriss, to appreciate how that’s working out in the market today.)

But if Amazon could reach 75% of the market — that is, if store purchasing declined below 25% of the total, which is in the cards for the next ten years — leverage would be reversed. (I’m eliding the format and proprietary reader device issues around ebooks here, but I’m guessing they’ll mostly go away in the next five or ten years.) Then Amazon wouldn’t want or need distribution to the stores or other online outlets. In fact, chances are they’d see it in their best interests to withhold those titles from other retailers and use them as tools to compel shopping with Amazon.

(This would not be a peculiar selfishness of Amazon if they did it. I remember well the battles my friends at Sterling had when they were first acquired by Barnes & Noble trying to convince their new owners that it was necessary to distribute the books as broadly as possible or they would start finding it impossible to sign new titles. B&N’s instinct was to want what they published available only from their stores, an instinct they acted on with SparkNotes.)

But if I’m right about where Random Penguin might go, they could play this same game. As the cost of running book departments rises as a percentage of sales, as it surely will while store sales decline, the mass merchants will diminish their presence. If Random Penguin has half the bestsellers, they will be able to use VMI to build secondary locations to keep their print books available. Those locations will be theirs and theirs alone. Maybe they’ll only be making 10% or 15% of the total sales this way, but those sales will be unavailable to other publishers (unless they go through RP at diminished margins).

The proprietary distribution will give RP an advantaged position signing up the biggest books. In time, they might even have enough of the biggest books to pursue one of the current active fantasies of Amazon and a bunch of entrepreneurs: creating a value proposition for big authors that will enable a subscription library with headline titles. And that would be another proprietary distribution channel that this next generation of scale might make possible.

The resistance of the bookstores to doing anything that helps Amazon will make it difficult for Amazon the publisher to build a general trade list of bestsellers until a much bigger chunk of the market has moved online. Barnes & Noble, which had a chance to become the one dominant trade publisher if they’d played their Sterling card differently, seems not to be interested in that role. So it will be one or two of the incumbents that will be left standing ten years from now managing the most commercial titles in the marketplace. The odds are very good that one of them will be Random Penguin.

I (usually) resist the temptation to make political observations on the blog, because that’s not what people come here for. But I have to make an exception because I think one of the most important points to be made about the results of November 6 has not been made anywhere else. And it is, ultimately, a non-partisan point.

Among the many reasons that President Obama convincingly defeated Governor Romney was the superior execution of the Obama campaign around data and operations. They were simply better analysts and managers and they executed better than the Romney campaign.

So can we please put to rest the notion that “getting rich” or “running a business” is a proxy for “management skill”? The most frequently offered argument from Romney was “I’m a successful businessman, so I can run things better than this guy who is a community organizer turned public official.” Actually, Governor, you couldn’t. You didn’t.

The last presidents we had with business experience were (working backwards) George W. Bush, Jimmy Carter, Herbert Hoover, Calvin Coolidge, and Warren Harding. Nothing in that record suggests that business success correlates with the ability to run the United States government. Or even, as we’ve just been shown, to run an effective national campaign.
