New Models

The reality of publishing economics has changed for the big players


A veteran agent who was formerly a publisher confirmed a point for me about how trade publishing has changed over the past two decades, particularly for the big houses. This challenges a fundamental tenet of my father’s understanding of the business. (And that’s still the source of most of mine.) I had long suspected this gap had opened up between “then” and “now”; it was really great to have it confirmed by a smart and experienced industry player.

One of the things that I took from my father’s experience — he was active in publishing starting in the late 1940s — was that just about every book issued by a major publisher recovered its direct costs and contributed some margin. There were really only two ways a book could fail to recover its costs:

1. if the advance paid to the author was excessive, or

2. if the quantity of the first printing far exceeded the advance copy laydown.

In other words, books near the bottom of the list didn’t actually “lose” money; they just didn’t make much as long as the publisher avoided being too generous with the advance or overly optimistic about what they printed. (Actually, overprinting was and is not as often driven by optimism as by trying to achieve a unit cost that looks acceptable, which is a different standard fallacy of publishing thinking.)

The insight that just about every book contributed to overhead and profit was obscured by the common practice of doing “title P&Ls” that assigned each book a share of company overheads. Whatever that number was, when it was calculated into the mix it reduced the contribution of each sale and showed many books to be “unprofitable”. That led publishers to a misunderstanding: perhaps they could make more money doing fewer books, if only they could pick them a little bit better. Trying to do that, of course, raised the overhead, which was neither the objective nor any help in making money.

(Raised the overhead? I can hear some people asking…Yes, two ways. One is that publishing fewer books would mean that each one now had to cover a larger piece of the overhead. The other is that being “more careful” about acquisition implies more time and effort for each book that ends up on the list, and that costs overhead dollars too.)
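The distinction between “contribution” and a fully allocated title P&L, and the way a shorter list raises each book’s overhead share, is easy to make concrete with invented numbers. This is purely a sketch; none of these figures come from any actual publisher.

```python
# Hypothetical numbers, purely illustrative: a modest title that earns back
# its direct costs and contributes margin, but looks "unprofitable" once a
# share of company overhead is allocated to it.

def title_pnl(revenue, direct_costs, overhead_allocation):
    contribution = revenue - direct_costs            # what the book really adds
    fully_allocated = contribution - overhead_allocation
    return contribution, fully_allocated

# A bottom-of-the-list book: $40,000 in net sales, $32,000 in direct costs
# (printing, royalties, a modest advance), allocated $12,000 of overhead.
contribution, fully_allocated = title_pnl(40_000, 32_000, 12_000)
print(contribution)      # 8000 -- the book genuinely helps pay overhead
print(fully_allocated)   # -4000 -- the "title P&L" calls it a loser

# Cutting the list doesn't shrink the overhead; it just spreads the same
# fixed dollars over fewer books:
overhead = 1_200_000
print(overhead / 100)    # 12000.0 per title on a 100-title list
print(overhead / 60)     # 20000.0 per title on a 60-title list
```

The point of the sketch is that the same book is a positive contributor in the first calculation and a “loser” in the second, even though nothing about the book itself changed.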

For years, this “reduce the list and focus more” strategy was seen by my father, and those who learned from him, as a bad idea.

One of the young publishers my father mentored was Tom McCormack, who — a decade after Len worked with him — became the CEO of St. Martin’s Press. There, McCormack applied Len’s insight with a vengeance, increasing St. Martin’s title output steadily over time. And, just as Len would have expected they would, St. Martin’s profits grew as well.

All of this was taking place in a book retailing world that was still dominated by stores making stocking decisions independently from most other stores. In the 1970s, the two big chains (Walden and B. Dalton) accounted for about 20 percent of the book trade. The other 80 percent comprised nearly as many decision-makers as there were outlets. So while it took a really concerted effort (or a very high-profile book or author) to get a title in every possible store location, just about every book went into quite a few. With five thousand individuals making the decision about which books to take, even a small minority of the buyers could put a book into 500 or 1000 stores.

But two big things have conspired to change that reality. The larger one is the consolidation of the retail trade. Now there are substantially fewer than 1000 decision-makers that matter. Amazon is half the sales. Barnes & Noble is probably in the teens. Publishers tell us that there are about 500 independent stores that are significant and that all the indies combined add up to 6 to 8 percent of the retail potential. The balance of the trade — about 25 percent — is the wholesalers, libraries, and specialty accounts. The wholesalers are feeding the entire ecosystem, but the libraries and specialty accounts are both very much biased as to the books they take and very unevenly covered by the publishers. In any case, ten percent of the indie bookstores today gets you 50 on-sale points, not 500. That’s a big difference.
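The consolidation arithmetic can be sketched in back-of-the-envelope form, using the rough shares cited above; the store counts are illustrative, not a census.

```python
# Rough shares of the retail trade as cited above (approximate figures).
market_share = {
    "Amazon": 0.50,
    "Barnes & Noble": 0.15,   # "probably in the teens"
    "independents": 0.07,     # 6 to 8 percent combined
}
balance = 1 - sum(market_share.values())
print(round(balance, 2))      # ~0.28 -- the wholesaler/library/specialty balance

# What winning over a small minority of buyers delivers, then versus now:
then_stores, now_stores = 5_000, 500
minority = 0.10
print(int(then_stores * minority))   # 500 on-sale points in the 1970s
print(int(now_stores * minority))    # 50 on-sale points today
```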

The other thing that has happened is that the houses are much better organized about which books they are “getting behind”. This has the beneficial effect of making sure the books seen to have the biggest potential get full distribution. But it also has the impact of reducing the chances that the “other” books will get full attention from Barnes & Noble (able to deliver more outlets with a single buyer than one would customarily get from the entire indie store network). And, without that, it takes a lot of luck or online discovery to rescue a book from oblivion.

The agent who was confirming my sense of these things agreed that the big houses used to be able to count on a sale of 1500 or 2000 copies for just about any title they published. Now it is not uncommon for books to sell in the very low triple digits, even on a big publisher’s list.

Even before any overhead charge and with a paltry advance, that isn’t going to cover a house’s cost of publication. So there definitely are books today — lots of books — coming from major houses that are not recovering even their direct costs.

This is a fundamental change in big publisher economics from what it was two decades ago. While the potential wins have become exponentially bigger than they were in bygone days, the losses have become increasingly common. And while it is still an open question how well anybody can predict sales for a book that isn’t even written yet (which is the case for most books publishers acquire), there is a real cost to getting it wrong, even when the advance being paid is minimal.

So it is no longer irrational to cut the list and focus. Obviously, every book published is a lottery ticket for a big win, and the odds in a lottery are never good. But the world most general trade publishers have long believed in, where the big hits pay for the rest of the books, is really now the one they inhabit.

I am proud to be part of the organizing committee for Publishing People for Hillary. We’re staging a fundraiser for her in midtown Manhattan on Friday, September 30, at which Senator Cory Booker and Senator Amy Klobuchar will be the featured speakers. You can sign up to join us here. Contribution levels for the event range from $250 to $2500, with a special opportunity to meet the Senators at the higher levels.

And, having NOTHING to do with publishing, but for all baseball fans in the crowd, please check out this story about Yogi’s mitt and Campy’s mitt that you will not have seen anywhere else.


eBook pricing resembles three dimensional chess


The current round of reporting from major publishers contains some danger signs. Their ebook sales are declining (in dollars and even more dramatically in units) in an ebook market that is probably not declining. The “good” news for the publishers is that print sales are pretty much holding their own, or even growing. And profits are being maintained, which is probably the most important metric in their board rooms. But the bad news is that total revenues are down. And print sales have been buoyed by the consumer excitement for adult coloring books (now spreading to adult “activity” books), so the combined results for many author-driven titles don’t necessarily reflect growth, and total unit sales of print plus digital for many titles are almost certainly falling behind expectations.

In a complicated marketplace with large unknowns around indie authors and indie books, particularly those that are Amazon-only, it is hard to be definitive about what the cause of this is. (Author Earnings does yeoman work trying to put the two overlapping markets in context.) Certainly, barriers to entry have come down and there are many more books in the marketplace competing for readers that don’t come from the companies the publishers think they’re competing against. But the publishers’ “success” in establishing agency pricing — where the price they set is the price the consumer pays — combined with Amazon’s decision to “respect” agency (at first with no choice but subsequently, after contracts were renegotiated, with apparent enthusiasm) and offer no pricing relief from their share of the book’s sales revenue is almost certainly a major component of the emerging problem.

Amazon doesn’t need big publisher books to offer lots of pricing bargains to their Kindle shoppers; they have tens of thousands of indie-published books (many of which are exclusive to them) and a growing number of Amazon-published books, all offered at prices far below where the big houses price their offerings. That probably explains how Amazon’s Kindle sales can be rising while publishers are universally reporting that their sales of digital texts, including Kindle, are falling. (Digital audio sales are rising for just about everybody, but that is not an analogous market.)

This is putting agency publishers in a very uncomfortable place. It has been an article of faith for the past few years that there is revenue to unlock from ebook sales if only the pricing could be better understood. Just a bit more revenue per unit times all those ebook sales units is a very enticing prospect for publishers. After the agency settlements liberated publishers from the price limitations Apple had originally insisted on, the immediate tendency was for publishers to push ebook prices even higher.

And since ebooks are sold in a less price-competitive market than we had before agency, Amazon can devote its marketing dollars to cutting prices on the print editions. This undercuts the publishers’ intention to support a diverse (and store-based) retail network and, at the same time, often embarrasses them by making the print book price (set by Amazon) lower than the ebook price (which Amazon makes very clear was set by the publisher).

The fact that this is reducing publisher revenue and each title’s unit sales is concerning. But it is also making it much more difficult to establish new authors, because lots of competing indies are still being launched at low price points that encourage readers to sample them.

It is maintained by many people that there has been a reduction in the rate of surprise breakout books over the past few years because of this pricing as well. This perception would be explained by the fact that price attracts readers to try new authors, and so the new rising talent would more frequently come from the lower-priced indies. Higher ebook prices reduce the speed with which a book can catch on in the marketplace. It feels like there is a consensus in the big houses now that it is harder to create the “surprise” breakouts. (This is a very difficult thing to actually measure.) The “Girl on the Train” phenomenon is always unpredictable, but big publishers still could count on it coming along often enough to keep the sales revenue trend line rising. That doesn’t seem to be the case anymore.

High ebook prices — and high means “high relative to lots of other ebooks available in the market” — will only work with the consumer when the book is “highly branded”, meaning already a bestseller or by an author that is well-known. And word-of-mouth, the mysterious phenomenon that every publisher counts on to make books big, is lubricated by low prices and seriously handicapped by high prices. If a friend says “read this” and the price is low, it can be an automatic purchase. Not so much if the price makes you stop and think.

This puts publishers in a very painful box. When they cut their ebook prices, they not only reduce sales revenue for each ebook they sell; they also hobble print sales. (Although if they cut prices as a promotion, and they market the promotion, apparently higher-priced print will also benefit from the promotion and see a resulting sales lift.) And singling out some of their ebooks for an ebook price reduction strategy could also raise a red flag with an agent. It is easy to understand a temporary price reduction that is promoted; as an overall pricing strategy it could be seen as a bite out of the author’s ebook earnings at the same time their print sale is threatened with the low-price ebook competition. And while an ebook price-reduction strategy would probably make at least Amazon and Apple, very important trading partners, quite happy, it risks angering others, including perhaps Barnes & Noble but certainly including all the indie bookstores.

On the other hand, the current “strategy” has plenty of risk.

An unpleasant underlying reality seems inescapable: revenues for publishers and authors will be going down on a per-unit basis. This can most simply be attributed to the oldest law there is: the law of supply and demand. Digital change means a lot more book titles are available to any consumer to choose from at any time. Demand can’t possibly rise as fast and, in fact, based on competition from other media through devices people carry with them every day, might even fall (if it hasn’t already). So publishers are facing one set of challenges with their high ebook prices; they’ll create another set if they lower them.

But, unfortunately, lower them they almost certainly must. With more data, we may learn that developing new authors absolutely requires it, particularly in fiction.

Here’s a suggestion for a new pricing routine that might be worth trying in the near term, recalling a prior practice from quite a while ago.

There was a period earlier in my career, probably ending in the 1980s, when publishers priced new hardcovers like this: $22.95 until October 1, $24.95 thereafter. The books had the price on a corner of the jacket that could be snipped diagonally on October 1, so that only the $24.95 price would show.

Frankly, in this case the pricing device was not primarily intended to entice the consumer to buy the book before the up-pricing deadline. It was really designed to get the store to place a bigger advance order, for which the applicable discount would be based on the promotional price.

Now big advance orders are not nearly as important as they used to be, nor nearly as common. But there is still a huge dependence on consumers taking a risk on an author, particularly in the first moments after a book comes out. Two or three decades ago, this was the “secret” behind publishers moving an author from a star doing “mass-market originals” (low prices) to a hardcover bestselling author.

So what might be worth a try from the big publishers now would be “promotional ebook pricing” on launch. Make the ebook $3.99 until date X, and then raise it to the “normal” level (which for major publishers, when the hardcover is in the marketplace, would be $12.99 and up.) This is a very painful experiment to try because it will compete against the hardcover at launch, when the publisher is trying to pile up sales to make the bestseller list. It will annoy print booksellers as well.
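As a sketch, the launch-pricing routine described above might look like this; the launch date, the length of the promotional window, and the prices are all hypothetical choices, not anything a publisher has announced.

```python
from datetime import date

def ebook_price(on_date, launch, promo_weeks=4,
                promo_price=3.99, regular_price=12.99):
    """Return the ebook price in effect on a given date: a low promotional
    price through the launch window, the "normal" agency price thereafter."""
    promo_ends = date.fromordinal(launch.toordinal() + promo_weeks * 7)
    return promo_price if on_date < promo_ends else regular_price

launch = date(2016, 9, 6)
print(ebook_price(date(2016, 9, 20), launch))   # 3.99 -- inside the window
print(ebook_price(date(2016, 11, 1), launch))   # 12.99 -- back to normal
```

The mechanical part is trivial; the hard part, as the text says, is absorbing the competition with the hardcover during the launch window.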

But publishers have to find a way to put new authors into the market without a millstone of pricing that requires a significant commitment by the reader before they know the author.

Of course, that strategy suggests an even more disruptive reality about ebook pricing: it doesn’t have to remain “set” the way print book pricing does. Because of our convention of printing the publisher’s suggested retail price right on the book’s jacket or paperback cover, it is not really practical to change a book’s price except, occasionally (and less often in these low-inflation times) when a book is reprinted. (In higher-inflation times, we did sometimes employ the practice of “stickering” to increase price, but that was clumsy and impossible to conceal.) But with ebooks, prices can change pretty much as often as you like: up, down, and up again.

In fact, that already happens with promotional pricing such as has been pioneered by the email service, BookBub. The BookBub idea — emailing a subscriber list with notice of price promotions on ebooks — has been copied highly successfully by HarperCollins with their proprietary version, BookPerk, and to a lesser extent by other publishers as well. It is becoming established practice to temporarily lower the price of a title to get it ranked higher and then to raise the price and try to capture higher-revenue sales with the hyped “branding” the promotion created. So far, this is done with a clear game plan, such as discounting the first book in a series, or the most recent book in a series when a new title is about to come out.

But uncoupling the ebook pricing completely from print pricing, which seems to be where we will inevitably go, may also mean — it certainly can mean — all ebook pricing becomes dynamic. All of this definitely raises the bar for publisher knowledge of how consumers react to prices in different situations. It has been a widespread article of faith that retailers “understand” this behavior and publishers don’t. To the extent that retailers do understand it, they see it through a different lens; they almost never care about the impact of price changes on the overall sales curve for a single title. Titles are interchangeable for retailers and not for publishers. So while it is true that publishers have a lot to learn, it is probably not true that retailers already know it.

The points I wanted to make in this post were that publishers should contemplate uncoupling ebook pricing from print pricing, learn more about consumer behavior around pricing, and master the skill of managing (strategically and operationally) LOTS of ebook price changes all the time. There is another point herein, made in passing, that is worth deeper consideration on another day. Big publishers are seeing their revenue decline but their profits rise. Does that point to a strategy? For how long can publishers cut costs faster than revenues, particularly per-unit revenues, decline? Maybe for quite a while…


Barnes & Noble faces a challenge that has not been clearly spelled out


The sudden dismissal of Ron Boire, the CEO of Barnes & Noble, follows the latest financial reporting from Barnes & Noble and has inspired yet another round of analysis about their future. When the financial results were released last month, there was a certain amount of celebrating over the fact that store closings are down compared to prior years. But Publishers Lunch makes clear that store closings are primarily a function of lease cycles, not overall economics, and we have no guarantees that they won’t rise again this year and in the years to follow when a greater number of current leases expire.

With B&N now the only single large source of orders putting most published titles into retail locations, publishers see the business tilting increasingly toward their biggest and most vexing (but also still most profitable) trading partner, Amazon.

Although PW reported immediate dismay from publishers over Boire’s departure, there has been plenty of second-guessing and grumbling in the trade about B&N’s strategy and execution. Indeed, getting their dot com operation to work properly is a sine qua non that they haven’t gotten right in two decades of trying. But one thing Boire did was to bring in a seasoned digital executive to address the problem. This is presumably not rocket science — it isn’t even particularly new tech — so perhaps they will soon have their online offering firing on all cylinders.

The big new strategy they revealed, one they’re going to try in four locations this year, is what they call “concept stores” that include restaurants. And, although it was a bit unclear from their last call whether the store-size reduction they’re planning extends to these restaurant-including stores, they have said that the overall store footprint they’re planning will be 20-25 percent smaller than their current standard. These two facts both make the point that B&N is facing a reality which has become evident over the last decade, and which questions a strategy and organizational outlook that was formulated in another time. If this new challenge is properly understood, and I haven’t seen it clearly articulated anywhere, it would make the restaurant play more comprehensible. (Note: I have to admit that my own recent post, where I traced the history of bookstores in the US since World War II, failed, along with everybody else, to pinpoint the sea change that makes B&N’s historical perspective its enemy while trying to survive today.)

Here’s the change-that-matters in a nutshell. A “bookstore” doesn’t have the power it did 25 years ago to make customers visit a retail location. Selection, which means a vast number of titles, doesn’t in and of itself pull traffic sufficient to support a vast number of large locations anymore. This changes the core assumption on which the B&N big store buildout since the late 1980s was based.

This has been true before. One hundred years ago the solution to the problem became the department store book department. Post-war prosperity grew shelf space for books, but the department stores remained the mainstays for book retail. The first big expansion of bookstores started in the 1960s when the malls were built out, which put Waldens and Daltons in every city and suburb in America. The mall substituted for the department store; it delivered the traffic. In fact, department stores “anchored” all the malls to be sure they’d get that traffic!

(Here are a couple of additional factoids to illustrate the importance of the department store channel in the mid-20th century. When Publishers Weekly did an article about the Doubleday Merchandising Plan in 1957, the stores they used as examples were the book departments of Wanamakers and Gimbels! When I came into the business fulltime in the 1970s, there were two significant “chain” accounts in Chicago: the bookstore chain Kroch’s & Brentano’s and the Marshall Field department stores.)

Bookstore customers came in many flavors, but they all benefited from a store with greater selection. My father, Leonard Shatzkin, first noticed that selection was a powerful magnet when he was overseeing the Brentano’s chain (no relation to K&B in Chicago) in the 1960s. Their Short Hills, New Jersey store was an underperformer. They doubled the number of titles in it and it became their best performer. Whether the bookstore customer knew what they wanted or just wanted to shop, the store with more titles gave them a better chance of a satisfying result.

Over time, that understanding was followed to a logical conclusion.

By the late 1980s, it appeared that standalone bookstores outside of malls could become “destinations” if their selections were large enough, and that created the superstore expansion: B&Ns and Borders. But when Amazon opened in 1995, its universal selection mooted the value of the big-selection store, especially for customers who knew what book they wanted before they shopped. Selection as a traffic magnet stopped working pretty quickly after that, although it was not immediately obvious to anybody.

I had some experience with B&N data that demonstrated pretty emphatically by 2002 that the action on slow-selling university press titles had shifted overwhelmingly to Amazon. (At that time, the late Steve Clark, the rep for Cambridge University Press, told me that Amazon was a bigger account for CUP than all other US retail combined.) It took the further hit of expanded Internet shopping at the consumer level, which grew with increased connectivity even before ebooks, to make what had been a great business obviously difficult. Then, as if to emphasize the point, we lost Borders…

What just doesn’t make it anymore, at least not nearly as frequently, is the “big bookstore”. Although there is no scientific way to prove this, most observers I’ve asked agree that the new indie stores popping up over the past few years tend to be smaller than the Borders and older indie stores they are replacing. We are seeing book retailing become a mix of pretty small book-and-literary-centric stores and an add-on in many places: museums, gift shops, toy stores. These have always existed but they will grow. And true “bookstore” shelf space will shrink, as has space for “general” books in mass merchants. The indie bookstore share will definitely continue to grow, but whether their growth will replace what is lost at B&N and the mass merchant chains is doubtful. Every publisher I’ve asked acknowledges significant indie store growth in the past couple of years, but they are also unanimous in saying the growth has not replaced the sales and shelf space lost when Borders closed.

Barnes & Noble is clearly rethinking its strategies, but this is one component that I have never seen clearly articulated. Back when I had my “aha!” moment about what was happening with the university press books, I suggested to one B&N executive that they had to figure out how to make the 25,000-title store work.

He said, “that’s not where we are. We’re thinking about the million-title store!” In other words, “we want to manage big retail locations”. This is thinking shaped by what we can now see is an outdated understanding of what the value of a big store is. So now they’re trying to sustain slightly-smaller big locations with things other than books. (Whether they plan to go as low as 25,000 titles in stores that used to stock four or five times that many is not clear. But they did say in their recent earnings call that the new concept stores would get 60 percent of their revenues from books, rather than the 67 percent they get now.) They have added non-book merchandise; now they’re thinking about restaurants. All of that is to increase traffic and to increase sales from the traffic they already get.

But there is another way to attack the challenge that “books alone” doesn’t work the way it used to. Barnes & Noble’s core competency is book supply to retail locations anywhere in the United States. Nobody, except Ingram, does this as well. (Although Amazon clearly is now planning to give it a try.)

Other retailers are suffering the same Internet sales erosion as booksellers, and a properly-curated selection of books can work for just about any store’s customer profile. Might Barnes & Noble complement its own stores by offering branded B&N Book Departments to other retailers? Let them bring in the traffic (although the books will undoubtedly bring in some more) and then B&N could manage those departments. (This is a variation of a tactic I suggested for Penguin Random House some years ago.) Let other retailers play the role the department stores and then the malls played for books in the past 100 years. Let’s not require the retail customer to come to a location strictly to shop for books.

The “trick” would be for B&N merchandisers to adjust their book selection to suit the specific customer base each store attracts. But is that a harder challenge than going into the restaurant business? And isn’t extending the B&N brand for books a more sensible tactic than trying to extend it to food? Or to create a new brand for food? And wouldn’t it be a good idea to get started on this tactic to expand book retail shelf space before Amazon, which keeps showing signs of wanting a retail presence, does?

This is not an easy market to just walk in and take over. There are already wholesalers providing books to retailers who don’t support a full-fledged buying effort for them. Those wholesalers are often getting more margin from the publishers than B&N is now, but that’s actually more of an opportunity than an obstacle. Presumably, a B&N-branded book section is worth something. (If it isn’t, that’s another problem.) Presumably, B&N has buying expertise and domain knowledge that would enable them to fine-tune a selection of books for each outlet’s customer base. And, presumably, B&N’s supply chain efficiency would be superior to anybody else’s in the industry, except Amazon’s and perhaps Ingram’s.

The big bookstore model is an anachronism. Just making it big doesn’t pull in the customers anymore. So a new strategy is definitely called for. B&N is going part of the way to one by recognizing that they need to do more to bring in customers and, at the same time, they can’t profitably shelve 100,000 titles across hundreds of stores. Taking their capabilities to where the customers already are would seem like an idea worth exploring.

It should be noted that the Indigo chain in Canada, under the leadership of owner Heather Reisman, has apparently successfully transitioned to a “culture” store where books are the key component of the offering. She has apparently found a product mix, or an approach to creating one, that is working for Indigo. Every large book retailer in the world is going to school on what Indigo has done. Because Amazon and online purchasing in general have not taken hold in Canada the way they have in the United States, we can’t jump to the conclusion that the Indigo formula could be successfully applied here. But it sure wouldn’t be a crazy idea for B&N to buy Indigo to gain the benefit of Reisman’s insights and expertise, assuming that a) Canadian law would permit U.S. ownership of such an important cultural asset and b) Reisman herself would sell and then work for somebody else. Two very big assumptions.

It is also worth noting that Pocket Shop, the small-bookstore concept chain that we’ve written about previously, is going to start opening stores in the UK.


Book publishers do not do SEO like the big guys do although they could


Partner Pete McCarthy pointed me to an article a couple of weeks ago that also introduced me to a website called ViperChill and its gifted, self-promoting SEO/marketing creator, Glen Allsopp. The linked post, which I strongly urge you to read, enumerates quite painstakingly the techniques used by 16 online media companies with large portfolios of brands that enable them to dominate specific search results in Google across a very wide range of topics and categories.

The example ViperChill explained in detail was how Hearst created a lot of traffic very quickly to a new site and business it had created called BestProducts.com. Judicious placement of content and links to BestProducts from the very big brands that Hearst controls (Cosmopolitan, Woman’s Day, Marie Claire, Esquire, Elle) resulted in Google placing BestProducts startlingly high in search results.

This is a result of three elements Google values a great deal: “domain authority” and “inbound links”, nested in “content” that seems “natural”. “Natural” suggests that Google believes the content is genuine information, not a ruse to point to an otherwise irrelevant link.

This is tricky and problematic stuff for Google, as the story makes clear. Google’s objective is to deliver the most relevant search results for a user. While Woman’s Day’s editorial opinion about the best nail polish would seem worthy of high “authority” (which ultimately translates into an elevated position in the search results), Google does not intend to confer that authority on a nail polish suggestion that is motivated by BestProducts’ commercial interests. How can Google tell what motivates the placement of content and a link on Woman’s Day’s website? They may still be figuring that out.

In other words, what is working so effectively for these brands, enabling them to use the collective authority of many powerful domains to drive traffic to something new and different, may not work forever without some serious adjustments. But it sure is working now!

This information wouldn’t be appearing on this blog if it didn’t have application to book publishers. It demonstrates a very large opportunity for many of them. The precise size of the opportunity depends entirely on the number of individual web domains that publisher controls or influences, the authority of each of the domains according to Google, and the judicious placement of content and links among those sites to push the desired result to a specific search term.
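The exercise for a publisher might be sketched like this: inventory the domains it controls, score each one, and place links from the strongest properties first. The domain names and authority scores below are invented for illustration; real scores would come from an SEO tool, not from this snippet.

```python
# Hypothetical inventory of publisher-controlled web properties with rough
# authority scores (all names and numbers invented for illustration).
owned_domains = {
    "publisher-main-site.com": 72,
    "bestselling-author-site.com": 55,
    "series-fan-site.com": 48,
    "old-campaign-site.com": 40,
    "new-imprint-site.com": 12,   # the new property we want to boost
}

target = "new-imprint-site.com"

# Rank every property except the target by authority, highest first:
# these are the candidates for hosting content that links to the target.
candidates = sorted(
    ((score, domain) for domain, score in owned_domains.items()
     if domain != target),
    reverse=True,
)

# Place links from the highest-authority properties first.
for score, domain in candidates[:3]:
    print(domain, score)
```

The real work, of course, is producing link-bearing content that reads as “natural” on each of those sites; the ranking step is the easy part.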

These powerful multi-brand content organizations have such massive traffic and authority that they can influence Google search for the most searched terms on the Internet. No book publisher would have comparable capability. But for terms that are more publishing-specific — those that reference books or reading groups or book genres or authors — the larger book publishing organizations have the ability to influence search results exactly the way these big outfits do.

And so would some smaller publishers, particularly vertical/niche publishers with any meaningful consumer-facing brands.

Probably the first big insight that created the success of Google was the recognition that links to content or a website told you something valuable about the worth of that content or website. So from the very beginning of SEO two decades ago, domain owners have understood that getting links is a way to improve their rank in search and increase their discoverability. What is documented in this article is that when one entity controls a large number of authoritative domains, they can constitute an ad hoc “network” that gets them the power of inbound links without having to persuade somebody outside their family of their worth. That’s particularly important when you’re trying to launch something new, as Hearst was with BestProducts.
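That original insight can be illustrated with a toy version of the PageRank computation. This is a simplified sketch with hypothetical domain names, not Google’s actual (and long-since-evolved) ranking system:

```python
# Toy PageRank: pages that attract links from other well-linked pages
# score higher. Illustrative only; domain names are hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outbound links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A hypothetical "family of domains" all pointing at one new site:
web = {
    "cosmo": ["bestproducts"],
    "elle": ["bestproducts"],
    "esquire": ["bestproducts", "elle"],
    "bestproducts": [],
}
ranks = pagerank(web)
top = max(ranks, key=ranks.get)
print(top)  # → bestproducts
```

Note how the site every other domain points to ends up with the highest score even though it links to nothing itself, which is essentially what an in-family “network” of authoritative domains exploits.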

And which publishers do every day with new books and debut authors.

There are two big steps publishers need to take in order to put themselves in position to execute this strategy effectively. The first is that they have to enumerate and understand all the web presences they own and control. Obviously, that includes the main domain for the publisher. But it also includes individual book sites, author sites, series sites, topical sites, or any other sites that have been created and which are regularly used and posted to.

In fact, any site that has meaningful domain authority can be helpful. We’ve worked with sites that are long since defunct but that still have “weight” in the Google-verse. Those can be revived and used to impact SEO for current projects.

The second is to enumerate and understand all the related sites, owned or controlled by others, where there is a mutual interest in some property between the publisher and the website owner. These will largely be sites for titles or authors, but might also include corporate sites for some authors and movie sites for others. If a movie is made of a publisher’s book and there is no link from the movie site to the publisher’s information on the book, nobody benefits. The publisher misses out on referrals that could lead to sales, as well as additional discovery “juice”. The movie’s fans would want to know it came from a book; that information belongs on the movie website, and its absence is an unfortunate omission. If the publisher’s sites can also influence the sites promoting the UK or international editions of the same title, so much the better.

Once the roster of usable sites is complete, the next step is to make sure there are relatively easy ways to add “natural” content to tie to links. Authors can really help each other here. They each have “authority” and they can, in combination, add a lot of power to the site for a new author or a new book. The more an author participates in useful reciprocal linking, the more that author helps their own cause and adds search power to the other authors in the publisher’s stable.
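As a concrete illustration of managing that roster, here is a minimal sketch of a reciprocal-link audit: given each site’s homepage HTML, it reports which roster domains fail to link to which others. The site names and HTML snippets are hypothetical; in practice the pages would be fetched live, and real SEO tooling is considerably more sophisticated:

```python
# Minimal reciprocal-link audit over a hypothetical roster of sites.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects the domains of all <a href="..."> links in a page."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    host = urlparse(value).netloc
                    if host:
                        self.domains.add(host.lower())

def audit(roster):
    """roster: dict of domain -> homepage HTML.
    Returns {domain: [roster domains it does NOT link to]}."""
    missing = {}
    for domain, html in roster.items():
        collector = LinkCollector()
        collector.feed(html)
        absent = [d for d in roster
                  if d != domain and d not in collector.domains]
        if absent:
            missing[domain] = absent
    return missing

roster = {
    "author-one.example": '<a href="https://publisher.example/books">Books</a>',
    "publisher.example": '<a href="https://author-one.example/">Author One</a>',
}
print(audit(roster))  # → {} (each site links to the other)
```

An empty result means every site in the roster already links to every other; any entries it returns are the gaps a publisher’s marketing group would want to close.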

And the more the publisher can orchestrate these links, from their own sites and by tethering their authors to each other on the web, the more the publisher adds otherwise unobtainable value for the author that costs nothing but a little administrative effort.

Indeed, the value added for authors, which would be tangible and visible, is one of the most important strategic reasons why publishers should heed the advice in this post.

The complete roster of useful websites, which grows all the time as new titles and authors are added to the publisher’s lists, should be centrally managed for maximum impact on the new titles publishers will launch. An SEO plan for every new book or author should be created from this roster of sites, with each new site predictably adding another useful authoritative domain that can be put behind future titles.

An understanding of this opportunity also makes clear why authors having their own websites with their own domains is an important marketing component. Each site needs to be competently rendered and, of course, it should be linked to from the publisher’s own site. (And it should link back as well.) But assuring that those things get done should also be part of the standard book publisher playbook for maximum discovery. In the overwhelming majority of cases, it doesn’t appear to be now.

So, why is there that hole, universally (as far as we can see) across the industry? We’ve asked ourselves that question. The likely answer is that there is no one person or already-organized group of people within any house who can both do meaningful analysis and deliver the necessary execution with expertise, and who has the credibility and authority with all the stakeholders to make it happen. What’s essential is somebody who can corral parent companies, imprints, technology groups, authors, and agents and 1) get them to understand the value we’re pointing to here; 2) persuade them to participate; and 3) provide a convincing roadmap on how it will work.

Perhaps it is not surprising that we think it will take a powerful outside consulting team to make that happen, at least for the first house to do it. (After the value is more obvious, and as quickly evident to authors as it will be, others will figure out how to follow.) That’s the kind of thing that publishers would typically go to an outfit like Boston Consulting Group (there are others, like McKinsey or PwC) to get done. Of course, a smaller firm more focused on publishing might well do the job better, faster, and cheaper.


Big data matters but textual analysis really does not


I was honored today with a lengthy response, posted on the Digital Book World blog by Neil Balthasar, to a recent Shatzkin Files post. Balthasar apparently uses techniques similar to those in a forthcoming book, “The Bestseller Code: Anatomy of a Blockbuster Novel”. My post had been a response to a PW article announcing the upcoming publication of that book. I reacted strongly to this sentence near the top of the story:

“In the forthcoming The Bestseller Code: Anatomy of The Blockbuster Novel (St. Martin’s Press, Sept. 20), authors Jodie Archer and Matthew L. Jockers claim they created an algorithm that identifies the literary elements that guarantee a book a spot on the bestseller lists.”

It was the “guarantee a book a spot on the bestseller lists” that got my juices flowing.

In his response to me, Balthasar moves the goalposts completely. To him, textual analysis is just one of a number of inputs he uses in his company (called Intellogo, an entity with which I am not familiar at all). In fact, he says “The Bestseller Code” does not claim what the sentence I quoted above clearly does claim. And then he goes on to argue that my post betrays a lack of appreciation for “machine learning” and “big data” and succumbs meekly to the “romance of publishing”.

It seems pretty clear that he doesn’t know much about Pete McCarthy or me, nor is he much aware that we have spent our careers arguing to the romantics in publishing that they need to be more data-centric.

Balthasar claims an overlap in our viewpoints but creates a total straw man by saying “…I agree in theory with Shatzkin that an algorithm alone cannot predict whether a book will be a bestseller or not, that isn’t precisely what The Bestseller Code claims, nor what our experience working with machine learning at Intellogo defines”. (Of course, it is precisely what the sentence quoted from the PW story does claim for the book!)

While we apparently agree that big data is an essential analytical tool for publishers marketing books today, where we emphatically part company is on the relative importance of textual analysis. Compared to research into the audience, segmenting it, understanding its search behaviors and social activities, and understanding the competitive environment for a book at the moment that it is published, the analysis of the book’s content adds very little, even when it is deeply analyzed and bounced against other sources.

Or, let’s put it this way. We do lots of projects designing digital strategies for books without performing textual analysis. Maybe some of those plans would be improved if we also used a book’s text as seed data for portions of our analysis. But there’s no way we’d try doing any meaningful marketing planning without the other things we do, no matter how rigorous or skilled a textual analysis was.

I’m glad a fan of “The Bestseller Code” is moved to put the textual analysis in the category of “among the things we do” rather than “we can predict a bestseller from the text”. But that wasn’t the proposition I was reacting to when I wrote the post that provoked his response.

When Balthasar says (as he does), “Imagine a day when we take all our data about what people are reading and provide publishers (and authors) ideas of what people want to read, where to find those audiences, and better ways to reach them”, he is pretty much stating the nature of our work at Logical Marketing. We do precisely what he’s suggesting today for a wide variety of clients. Textual analysis has almost nothing to do with it.


The “Big Change” era in trade book publishing ended about four years ago


Book publishing is still very much in a time of changing conditions and circumstances. There are a host of unknowables about the next several years that affect the shape of the industry and the strategies of all the players in it. But as publishers, retailers, libraries, and their ecosystem partners prepare for whatever is next, it becomes increasingly evident that — from the perspective of trade publishing at least — we have already lived through the biggest period of transition. It took place from sometime in 2007 through 2012.

At the beginning of 2007, there was no Kindle. By the end of 2011, there was no Borders. And by the end of 2012, five of America’s biggest publishers were defending themselves from the US Department of Justice. The arrival of Kindle and the exit of Borders are the two most earthshaking events in the recent history of book publishing and its ecosystem. The Justice Department suit first distracted and then ultimately strait-jacketed the big publishers, making it difficult first to focus on and then to react to further marketplace changes.

My close attention to what we then called “electronic publishing” began in the early 1990s, with a conference other consulting colleagues and I organized for Publishers Weekly which we called “Electronic Publishing and Rights”. This was before Amazon existed. It was when the big transition taking place was from diskettes to CD-ROMs as the means of storage. And it was even before Windows, so the only device on which you could view on a screen anything that looked at all like a book was a Macintosh computer, which had only a sliver of the market. The most interesting ebook predecessor was the Voyager Expanded Book, and it could only be used on a Mac.

In this speech I gave in 1995, I put my finger on the fact that online would change all this and that publishers shouldn’t spend too much energy on CD-ROMs.

The period from then until when it was clear Kindle was establishing itself — the awareness that it was for real slowly dawned on people throughout the year 2008 — was one where the inevitability of some big digital change was generally acknowledged. But dealing with it was the province of specialists operating alongside the “real business” and largely performing experiments, or getting ready for the day when it might matter. There was a slow (and inexorable) shift from store-purchasing to online purchasing. And the online purchasing almost all went to Amazon. But even that wasn’t seen as particularly disruptive. Neither ebooks nor online purchasing called for drastic changes in the way publishers saw their business or deployed their resources.

The first important new device for books in 2007 didn’t start out as one at all. It was the iPhone, first released in June of that year. Although Palm Pilots were the ebook reader of choice for a big chunk of the then-tiny ebook community, they lacked connectivity. The iPhone was not seen as an ereader when it came out — indeed, Apple head Steve Jobs still believed at that point that ebooks were not a market worth pursuing — but it could, and did, rapidly become one when it was demonstrated that there was a market. And it vastly expanded the universe of people routinely paying for downloaded content, in this case music from the iTunes store.

Then Kindle launched in November of 2007. A still-unannounced number of Kindles sold out in a few hours, and Amazon remained out of stock of them for several months! Because the original Kindle was $399, it was a “good deal” only for consumers who read enough books to recover the cost through cheaper electronic editions. What this meant was that Kindle owners bought ebooks in numbers much greater than the relatively small number of devices placed would have suggested. Throughout 2008, the awareness dawned on the industry that ebooks were going to be a significant business.

And that awareness rapidly shook loose a raft of competition. Barnes & Noble saw that they had to compete in this arena and started a crash program to deliver the Nook, which first appeared almost precisely two years after the first Kindle, in November 2009. Months earlier, Amazon had released the app that put Kindle on the iPhone. Meanwhile, Jobs had become persuaded to take ebooks seriously, and, anyway, he had a store selling content downloads to devices like crazy. Now, about to launch his new tablet format, the iPad, he had what looked like the perfect vehicle with which to launch ebooks. The iPad and the iBookstore debuted in April 2010. A month later, Kobo entered the market as a low-priced alternative with their first device. And by the end of the year, Google reorganized and rebranded what had been Google Editions into Google eBooks. The original concept was that they would populate the readers that were using epub, which meant Nook and Kobo at that time.

All of this change within three calendar years — 2008 through 2010 — created a blizzard of strategic decisions for the publishers. Remember, before all this, ebooks were an afterthought. Amazon had applied pressure to get publishers into the Kindle launch in 2007. Before that, no publisher that I can recall made any effort to have ebooks available at the time a book was initially launched. There were workflow and production changes (XML FIRST!) being contemplated that would make doing both print and digital editions a less onerous task, but they were seldom fast-tracked and doing ebooks meant taking on and managing a book-by-book conversion project.

During the period when Amazon was pretty much alone in the game (the pre-Amazon market leaders, Sony and Palm, faded very quickly), they started pricing Kindle titles aggressively, even willing to take losses on each sale to promote device sales and the ecosystem. This alarmed publishers, who were seeing small Kindle sales grow at what were frightening rates and raising the spectre of undermining their hardcovers. It didn’t hurt that the retailers with whom they (still, then, though not now) did most of their business were also alarmed. Nook arrived and Barnes & Noble would never have been as comfortable as Amazon with selling these new products at a loss. But B&N also worried about the impact that cheap ebooks might have on more expensive print book sales. Amazon didn’t.

So when Apple proposed in late 2009 and early 2010 that there could be a new way to sell called “agency” which would put retail pricing power for ebooks into the publishers’ hands, it met a very receptive audience of publishers.

And that, in turn, led to the Department of Justice’s lawsuit against the big publishers which was instituted in April of 2012.

Coinciding with and enabled by all of this was the huge growth in author-initiated publishing. Amazon had bought CreateSpace, which gave them the ability to offer print-on-demand as well as Kindle ebooks. The combination meant that a huge audience could be reached through them without any help from anybody else. When agency happened (2010), they started to offer indie authors what amounted to agency terms: 70 percent of the selling price for ebooks. This was a multiple of the percentage an author would get through a publisher.

Agency pricing played right into the hands of Amazon and the self-published. Getting 70 percent of the selling price, the indie author earned about $2.10 per copy on a $2.99 ebook and about $2.80 at $3.99, royalties comparable to what they’d get from full-priced print. Many bestselling indie ebooks were priced at $0.99. The very cheap ebooks indie authors would offer, juxtaposed against the publishers’ agency up-priced (many at $14.99) and undiscounted branded books, created a market opening that allowed the Kindle audience to sample cheap ebook authors for peanuts (beyond the free chapter that is standard in ebooks). Suddenly, names nobody had heard before were on the map, selling millions of ebooks and taking mindshare away from the industry’s output. And it also handed the publishers’ authors an alternative path to market, which could only improve their negotiating position with the publishers.
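The arithmetic works out as follows. In this back-of-envelope sketch, the 25-percent-of-net-receipts rate for traditionally published ebooks is my assumption about the then-standard big-house contract, not a figure from the post:

```python
# Back-of-envelope ebook royalty comparison under agency terms,
# where the retailer keeps 30% and the publisher (or indie author)
# receives 70% of the retail price.

def indie_royalty(price):
    """Indie author on agency-style terms keeps 70% of retail."""
    return round(0.70 * price, 2)

def trad_ebook_royalty(price, author_share=0.25):
    """Traditionally published author: an assumed 25% of the
    publisher's net receipts, which are 70% of retail."""
    publisher_net = 0.70 * price
    return round(author_share * publisher_net, 2)

print(indie_royalty(2.99))        # ≈ $2.09 (the post rounds to $2.10)
print(indie_royalty(3.99))        # ≈ $2.79 (the post rounds to $2.80)
print(trad_ebook_royalty(14.99))  # ≈ $2.62 on a $14.99 agency ebook
```

Under these assumptions, an indie author at $2.99 earned per copy nearly what a traditionally published author earned on a $14.99 agency-priced ebook, which is why the cheap-ebook opening described above was so potent.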

Meanwhile, Borders sent the most persuasive possible signal that the shift in sales from stores to online, accelerated by the ebook phenomenon, was really damaging. They went out of business in 2011. That took the account that sold upwards of 10 percent of most publishers’ books, and a far greater percentage of the bookstore shelf space for backlist, off the board. Or, viewed another way, publishers went from two national retailers who could place a big order and put books in front of the core book-buying audience to one.

So the authors’ negotiating position was stronger and so was Barnes & Noble’s.

And all of those events — the devices, the ebook surge, the introduction of the agency business model, the Department of Justice suing most of the big publishers, a very noticeable rise in successful independent publishing, and the increased leverage of the trading partners with whom publishers negotiate their revenues and their costs — were head and body blows to the titans of the industry. Every one of them threatened the legacy practices and challenged the legacy organizations and resource allocations.

During this period, Random House (the number one publisher) merged with Penguin (the number two publisher) and created a super-publisher that is not far from being as big as the four remaining members of what were called “The Big Six” in 2007. If you are viewing the world from the perspective of HarperCollins, Simon & Schuster, Hachette, or Macmillan, that might have been the biggest development of all.

Compared to the sweeping changes of that era, what has happened since and what is likely to happen in the next couple of years is small beer. There are certainly clear trends that will change things markedly over time.

Amazon continues to grow its share, and they are around 50 percent of the business or more for many publishers these days.

Barnes & Noble is troubled but in no immediate jeopardy and is still, by far, the number one brick-and-mortar account for publishers. But the optimistic view is that their book sales will remain flat in the near future.

Independent bookselling continues to grow, but even with their growth since Borders went down, they are less than 10 percent of the sales for most publishers. It is true that ebook sales for publishers have flattened (we don’t know the overall trend for sure because we don’t really know the indie sales at Amazon, and they’re substantial) and don’t seem likely to grow their share against print anytime soon.

These things seem likely to be as true two years from now as they are today. Nothing felt that way from 2008 to 2012.

Digital marketing, including social network presence, is an important frontier. The industry has a successful digital catalog, called Edelweiss, which has obviated the need for printed catalogs, a cost saving many publishers have captured. And another start-up, NetGalley (owned by Firebrand), has organized the reviewer segment of the industry so that publishers can get them digital advance copies of books, which is cheaper and much more efficient for everybody.

Owning and mining email lists is a new skill set that can pay off more each year. Pricing in digital seems to offer great opportunity for improved revenue, if its effects can be better understood. International sales of American-originated books are more accessible than they’ve ever been as the global network created by Ingram creates sales growth opportunities for just about every publisher. That should continue and requires new thinking and processes. Special, or non-traditional, markets increase in importance, abetted by digital marketing. That will continue as well.

Audio, which has been one of the big beneficiaries of digital downloading, will continue to grow too. The problem from the publishers’ perspective is that Audible, owned by Amazon, owns most of that market. So they have a sophisticated and unsentimental trading partner with a lot of leverage controlling a market segment that is probably taking share from print and ebooks.

And with all of this, what will also continue to grow is relentless margin pressure from the publishers’ two biggest accounts: Amazon and Barnes & Noble.

But the challenges of today aren’t about change of the magnitude that was being coped with in the period that ended about four years ago. They’re more about improving workflows and processes, learning to use new tools, and integrating new people with new skill sets into the publishing business. And there are a lot of new people with relevant skills up and down the trade publishing organizations now. That wasn’t so much the case when things were changing the fastest, 2007-2012.

It isn’t that there aren’t still many new things to work on, new opportunities to explore, or long-term decisions to make. But the editor today can sign a book and expect a publishing environment when it comes out in a year or two roughly like the one we have today. The editor in 2010 couldn’t feel that confidence. The marketer can plan something when the book first comes up for consideration and find the plan will still make sense six months later. And while things are still very much in flux in sales, a blow comparable to the loss of Borders isn’t on the horizon.

Of course, there could always be a black swan about to announce itself.

This post explains why, among other reasons, I will no longer be programming the Digital Book World Conference, as I did for seven years starting with its debut in 2010. At its best, DBW anticipated the changes that were coming in the industry and gave its attendees practical ways to think about and cope with them. Future vision was a key perspective in the programming, although we always strived to give the audience things they could “take back to the office and use”.

It has been harder and harder over the past couple of years to find the big strategic questions the industry needed answers to. The writing was on the wall last year when most of the publishers I talked to felt confident they understood where books were going; they wanted to hear from other segments of the digital world. That was a sign to me that the educational mission I had in mind for DBW since I started it was no longer in demand.

To their credit, the DBW management, as I understand it, is trying a new vision for the show, more focused on the immediately practical and the hands-on challenges of today. I wish them the best of luck with it.


The sea change that comes with the latest iteration of the book ecosystem


In the past 10 years (since the mid-2000s), the ebook has arrived and the amount of shelf space for books in physical retail has declined, as book purchasing has continued to move to the Internet. This has put pressure on publishers’ distribution costs, as we discussed in a prior post.

In the 10 years before that (mid-1990s to mid-2000s), online bookselling began at what was, we now know, the very peak of book retailing, when the superstore chains B&N and Borders had built out hundreds of 100,000+-title stores and still owned mall chains Dalton and Walden that had many hundreds of smaller stores. And that was on top of the largest-ever network — many thousands — of independent bookstores, many of which were themselves superstores.

In the 10 years before that (mid-1980s to mid-1990s), Wall Street cash enabled the two big bookstore chains to build out their superstore networks, stocking publishers’ backlists deeply. With so many enormous stores opening, publishers received a bonanza of store-opening orders that went deep into their lists and were relatively lightly returned (until the store-opening process reversed itself 20 years later).

In the 10 years before that (mid-1970s to mid-1980s), the two mall chains (Dalton and Walden) rode the growth of shopping centers to a position of great importance in selling books to the public. They became the drivers of the bestseller lists. In the same decade, Ingram and Baker & Taylor built reliable national wholesaling networks, enabling the chains and a growing number of independents to replenish stock of unsold books quickly, increasing stock turn and profitability for booksellers (and lowering returns, to everybody’s benefit).

In the 10 years before that (mid-1960s to mid-1970s), the department stores started to yield their strong position in book sales, victims both of their own structured discipline (open-to-buy rules) about inventory control that reduced their title selection and of the growth of the malls. The malls inadvertently doomed the department store concept (even though department stores were the “anchors” that made malls possible) by enabling specialty retailers of all kinds, including bookstores, to provide a better shopping experience than the department that sold those goods in the department store.

In the 10 years before that (mid-1950s to mid-1960s), an increasingly affluent society saw an ever-expanding number of bookstores while, in that era, mass-market paperbacks became ubiquitous in drug stores and newsstands, vastly increasing the number of places where Americans could find and buy books.

And the 10 years before that, which takes us back to the end of World War II, saw the birth of mass-market paperbacks and the development of modern publishing sales forces in trade houses. This was, in retrospect, the beginning of a half-century of uninterrupted growth for American book publishing. It has not necessarily now come to an end, but the growth of the segment controlled by big publishers may have ended.

What happens now? The online book market is likely more than half of the total book market. That is, books purchased online — print and digital — exceed the number sold in retail stores (obviously all print). Amazon is the single most powerful retailer, and they have also made themselves the first stop for any self-publishing or small-publishing entity that wants to reach readers. Ingram and the bigger publishers offer full-line distribution services to the most ambitious of those and to everybody else who wants to reach the whole book market.

Until the last 10 years, all the developments that affected book publishing tended to grow the availability of books relative to the availability of other media. When mall stores or superstores grew, there was no associated lift for television shows or movies (although recorded music also benefited from the malls). That is no longer the case. For the first time, really, books are competing with everything else you might read or watch or listen to in a way they never did before. Online doesn’t care what is in the file it displays. This is a qualitative difference in the nature of book availability growth compared to everything else that has happened in the lifetimes of anybody in the business.

The fact is that books used to live in a moated ecosystem, independent of what was going on in other media and book readers’ communication streams. Since ubiquitous broadband, that is no longer true. This presents publishers with two challenges they never faced before.

One is to take advantage of the opportunity to promote books to readers by tying them in to other media and events in ways that were never before possible. This is digital marketing promoting discovery. In fact, a greater percentage of the potential audience for any book should know about it within a few months of its publication than ever before, if publishers do their jobs right.

But the other is that publishers need to be alert to changes in book reading habits that are bound to occur because of integrated media. Yes, the publisher can promote the book to somebody watching a related video or reading an email on a related subject. But it is also true that promotion for movies and emails from friends can interrupt a reader in the middle of a chapter if they’re reading online. This is probably changing the way people read books and might even change how they want their books edited and shaped. Publishers who pay attention will see those changes as they occur.

We can interrupt people doing something else now to tell them about a book. But they now, in turn, can easily be interrupted while they’re reading the book. Digital change and media integration cut both ways. It is very early days for this reality. It only really occurs because of the combination of broadband and reading on an Internet-enabled device. That’s a relatively recent and still-growing circumstance. We don’t know where it will lead.


Things are calmer than they were in the book business, but change is a constant


Among the shifts that have been taking place in publishing houses over the past decade is an increase in the head count dedicated to marketing and a decrease in head count dedicated to sales. This reflects the reduction in the number of bookstore accounts and the transfer of “discovery” from store shelves to digital search.

The reduction in bookstores and the concurrent and related reduction in print books sold in stores also affects how publishers view the economics of the sales departments and the entire support system for print distribution. The big houses still need sales forces and warehouses and sophisticated systems to track inventories and payments and returns, but the “throughput” of print from their own publishing programs is declining. For many, that means that distribution clients are increasingly important. They provide the volume to support scaled operations without requiring the publisher to invest in publishing more titles. For at least four of the big five (HarperCollins being an apparent exception), distribution of other publishers’ books, with or without providing the sales force effort, is a critical component of maintaining the volume that keeps unit costs in line.

But that adds risk. Distribution contracts vary in length, but they generally extend only two or three years out. With four major publishers on the prowl for clients, plus Ingram, which effectively has five different full-distribution options to offer, there is a plethora of choices for any publisher seeking to shed its own fixed-cost distribution or to switch distributors. Indeed, the percentages being charged for distribution services have dropped drastically over the past two decades. The competitive environment is likely to perpetuate that trend.

While the big publishers doing distribution have (so far) tended to insist on fairly large clients, Ingram is using its multiple configurations to try to serve publishers of all sizes and entities that aren’t primarily publishers at all. Today a publisher that is really a literary agency or, before long if not already, a bank, an advertising agency, or a not-for-profit with a mission, can put a book or a list of its own into the book publishing arena with sales and distribution capabilities competitive with the biggest and most experienced publishers. So a revolution that began about ten years ago, with Amazon enabling indie authors to reach a big percentage of the total book market through Kindle and CreateSpace, is being dramatically extended. Going after real bookstore distribution definitely requires incremental investment and marketing savvy, even with the machinery in place to help.

But incremental investment and marketing savvy were always far easier to come by than the machinery has ever been for the small or occasional publisher.

While this levels the playing field in a major way, there are still distinct advantages to size and a B2B publishing brand. The diminishing bookstore shelf space has made the also-diminishing mass merchant (Walmart, Target) shelf space relatively more important. Between the chains — primarily Barnes & Noble and Books-A-Million — and independent stores, there are only about 1,000 to 1,200 points of purchase for books provided by bookstores. There were three to five times that many two decades ago. So the additional thousands of opportunities to put a book in front of the public through the mass merchants are critical, particularly to move bestseller quantities.

But relatively few titles can make the cut for those outlets and the pressure on them to perform quickly is immense. Returns are high. These slots are simply not available to publishers who aren’t recognizable B2B brands with a solid reputation for backing their books effectively. These outlets represent the competitive advantage that remains for the Big Five publishers.

For the past few years, pretty much since the demise of Borders in 2011, the number of bookstores has been going up a bit each year. (It is not clear that bookstore shelf space has been going up; indie stores seem to be smaller, on average, today than they were two decades ago, or at least there are fewer mammoth ones.) It could well be that, beyond filling the gap Borders left, the indie revival is also fueled by the reduction in shelf space for books at the mass merchants. If so, that is good for smaller publishers and good for backlist, both of which are seriously challenged in getting in front of the public through mass merchants.

So, while it is definitely true that the dizzying pace of change we saw during the early years of ebooks has subsided, and it is true that the print format has not yielded much share, if any, to ebooks in the past couple of years, it is not time to celebrate a new stability. The marketplace itself is still changing: the online share, combining print and digital, is still growing, and the shelf space available for backlist and slower-sellers is still declining. The smallest publishers are getting better and better market access, while the biggest publishers are seeing escalating risk in how they place the books they publish and in the danger that they’ll face a sudden decrease in distribution volume that would turn their fixed costs into a burden.

This is a great time in the book business to be very big (among your peer group) or very small and focused. It is a challenging time to be anything else.

A very frequent point of contention when negotiating distribution arrangements is how Amazon will be handled and compensated. Amazon is almost always the single largest account and it is not uncommon for it to represent — on many books and even some publishers — 50 percent or more of the sales. Although sophistication definitely helps in dealing with Amazon, it is also true that Amazon provides incentives to give up the “other half” of the market and just work through them. Any sophisticated businessperson is likely to get more money out of Amazon working it themselves than any distributor can get for them, even before distribution fees. (IF, and this is a big if, you discount the marketing value of books throughout the supply chain which, counterintuitively but frequently, will raise the level of sales at Amazon from what they would have been without books broadly distributed.) In any case, being able to really add value to Amazon sales would be a Holy Grail. Right now, most of the time, distributing publishers really have to make the argument that you can’t effectively split things and that they will add so much value in the rest of the world, and do the work around Amazon, that the overall relationship is worth the trade-off.


A great step forward by Sourcebooks which we expect other publishers will imitate


Since I started working with Peter McCarthy, he has been impressing me with the importance of publishers doing “research” in the digital age, by which he means “audience research” done with a variety of online tools. That audience research should inform what publishers do to market their books by identifying, segmenting, locating, and understanding the potential buyers for those books. That enables publishers to “aim” their marketing efforts where they are likely to do the most good.

Indeed, everything we do at Logical Marketing, the suite of services we have built around Pete’s unique knowledge and talent, is informed by the research we do. Sometimes it is clear that the deliverable really is the research itself. At one point in the course of my learning from Pete, we published a piece in this space suggesting that every publisher really needs to have a dedicated research function.

What we were already beginning to see then (and more since) is that many publishers, and by now most of the big ones, have created an executive position with the word “audience” in the title or job description. Addressing audiences requires research as a prerequisite, but it has seldom been framed that way.

This week we were delighted to see that Sourcebooks, a legitimate contender for the title of “most innovative company in book publishing”, has created a “data and analysis” department. As reported by Shelf Awareness in its newsletter (and also reported by Publishers Marketplace and Publishers Weekly):

Sourcebooks has created a data and analysis department that brings together “experts from supply chain, editorial, and sales” to streamline data functions and offer a higher level of analytical support to departments, partners and customers.

The only part about this that is disappointing is that the word “research” is not in the department name or description. But the separate department to specialize in “data and analysis” is exactly what we were advocating when we called for creation of research departments.

It is important to keep the connection between “data and analysis” and “research” in mind because, historically, “data and analysis” in publishing have meant “post mortem analysis” of specific marketing efforts. Indeed, many publishers have “analytics” roles already, but they are not cross-functional and they tend to be focused on analysis of time-honored activities, not applying new techniques on audiences as is enabled in the digital age.

As an industry, we have usually used “data and analysis” to measure the effectiveness of prior activities rather than to understand what we’re aiming at in the future. Being explicit about the fact that “research” is the core function means you are also being explicit that the primary purpose of that function is to aim future efforts, not evaluate the successes or failures of prior ones. Research is seeking to be predictive as well as to inform rapid response to an ever-changing landscape. With most of their existing capabilities and activities, in Pete’s words, “publishers don’t look out; they don’t look forward; and they don’t look ‘big'”.

This is not to say that it isn’t worth knowing whether an ad or a promotion that was tried last week paid off. Indeed, knowing that could influence whether you try that same promotion again. But it is far more useful to be better informed before money and effort are expended than after. And what useful audience identification and segmentation research delivers is the knowledge that enables marketing efforts to be aimed at the right audiences and with the right messages to have a greater possibility of succeeding, and doing so more efficiently.

Publishers will always be interested in knowing whether the front-of-store placement they bought or the author tour they paid for moved the needle on sales. But it is actually more important to figure out before they spend the money whether the customers they’re looking at are good candidates for an impulse buy at Barnes & Noble or likely to be affected by the media exposure an author tour would bring. And the same research that will uncover answers to those questions will also tell the publisher what messages to stress on their cover copy or in media opportunities. And it will tell them which search terms are both revealing of “intent” (to buy, to learn, to know) and occur in enough volume to be worth going to extra efforts to rank for them.

We applaud the Sourcebooks approach to staffing their data and analysis group, which acknowledged that “editorial, sales, and supply chain” needed to participate. (We would, emphatically, add “marketing and publicity” to the list.) Audience research and understanding can be used productively across a range of publishing house activities: acquiring the rights in the first place; shaping the book from proposal to completion; creating all the marketing copy, from that on the book itself to what’s in the catalogs or ads; the geographical placement of physical copies in the retail channels; the timing of reducing stock levels in the supply chain; and the identification and execution of newly-arising opportunities on the backlist.

All this covers the “who”s (staff members with what skills and what in-house knowledge) and the “what”s (the tasks research can inform), but not the “how”s of doing this work. The research itself is done with a set of digital tools. Some — like Google Trends, Moz, SimilarWeb, and Facebook Audience Insights — are known to a lot of marketers and we could almost say they are “commonly” used. (They should be.) But a super-expert digital marketer like my colleague Pete McCarthy works with many more. Pete uses over 150 tools that help him get insights from just about every platform and understand search in a highly nuanced and targeted way.

Educational seminars are a component of our Logical Marketing suite of offerings and we are comfortable introducing fledgling audiences to very sophisticated digital tools. But learning more than 100 of them — that they’re there, what they do, and how to use them — is not something that is done quickly or casually. It might not require the 20-plus years of industry experience Pete has, but it’s not something you do in a month, or even a year. And then understanding how all these tools and insights are best applied to the book business is another important requirement that also takes time and application to achieve.

We’re delighted to see Sourcebooks taking the lead in recognizing the cross-functional requirement of data and analysis, and we fully expect their effort to be a leading indicator of where the industry will go.

The Logical Marketing team has worked with just about all the biggest publishers and, of course, that includes Sourcebooks. We have done a seminar on how to think “audience-first” with them. Currently we’re working on a project helping them create landing pages to improve traffic to two of their websites. We had absolutely nothing to do with their decision to create a department for data and analysis, but we’re not surprised they’ve taken that initiative. We’ve seen up close how seriously they take both digital change and innovation. We’re proud of the fact that the companies we work the most with are the most sophisticated and advanced at digital marketing. Sourcebooks is a prime example of that.


In an indie-dominant world, what happens to the high-cost non-fiction?


I first learned of, and wrote about, Hugh Howey about four years ago. At the time, he was one of the first real breakthrough successes as an indie author, making tens of thousands of dollars a month exclusively through Amazon for his self-published futurist novel, “Wool”. As soon as I could track him down, I invited Hugh and his agent, Kristin Nelson, to speak at the next Digital Book World, which they did several months later, in January 2013.

In the years since, Hugh has had a very public profile as a champion of indie publishing and as a critic of big publishers. When I first encountered Howey, he and his agent had already turned down more than one six-figure publishing deal. Nelson ultimately did a print-only deal for “Wool” with Simon & Schuster, a deal consummated before the big publishers made the apparently-universal decision that they would not sign books for which they didn’t get electronic rights.

This week the DBW blog published a lengthy interview with Howey conducted by DBW editor Daniel Berkowitz. In this piece, Howey reviews many of his complaints against publishers. According to him, their royalty rates are too low and they pay too infrequently and on too much of a delay. Their authors are excluded from Kindle’s subscription revenue at Kindle Unlimited. Their ebook prices to consumers are too high. And, on top of that, they pay too much rent to be in New York City and they pay their big advances to wealthy authors who don’t really need the money, while aspiring authors get token advance payments that aren’t enough to give them time off to write.

Howey’s observations are not particularly welcomed by publishers, but he has a deep interest in indie authors and, by his lights, is always trying to help them by encouraging them to indie-publish through Amazon rather than seeking a traditional deal through an agent. He has organized the AuthorEarnings website and data repository along with Data Guy, the games-business data analyst who has turned his analytical skills to the book business and whom we featured at the most recent Digital Book World this past March.

Howey and I have had numerous private conversations over the years. He’s intelligent and sincere in his beliefs and truly devotes his energy to “industry education” motivated by his desire to help other authors. Yet there are holes in his analysis of the industry and where it is going that he doesn’t fill. Given his substantial following, his commercial performance, and his obvious comfort level doing the marketing for his own books (such as it is; his success apparently hasn’t required much), it is easy to understand why he would never consider publishing any other way but as he has, as an indie author who is “all in” with Amazon. But he seems to think what worked well for him would work best for anybody.

In this interview, Howey says that any author would be better off self-publishing his or her first book than going the route of selling it to a publisher. And he actually dismisses the marketing effort required to do that. Howey says the best marketing is publishing your next book. He thinks the best strategy is for authors to write several books a year to gain success. In fact, he says taking time away from writing to do marketing is a bad choice. Expecting most writers, or even many writers, to do several books a year strikes me as a highly dubious proposition.

It is impossible to quarrel with the fact of Howey’s success. But he makes a big mistake assuming that what worked effectively for him makes self-publishing the right path for anybody else, let alone everybody else.

Howey also has an unrealistically limited view of the output of big publishing. If you read this interview (and I would encourage anybody interested in the book business to do so), you see that he thinks almost exclusively about fiction or, as he puts it, “storytelling”. Books come, like his did, out of an author’s imagination and all the author needs is the time to write. Exposure through Amazon does the rest.

He gives publishers credit for putting books into stores (although he would have them eliminate returns, which would cut down sharply on how effectively they accomplished that). But he thinks stores will be of diminishing importance. (We certainly agree on that.) He gives credit for the indie bookstore resurgence to Amazon, which would be true if you credit Amazon with the demise of Borders that wiped out over 400 big bookstores and created new opportunities for indies. But the idea that Amazon is allied with indie bookstores is contradicted by two realities. One is that the indie stores won’t stock Amazon-published books. The other is that Amazon, now in the process of opening its second retail store, may plan dozens, hundreds, or thousands more to come! We really don’t know. Certainly, very few indie bookstores would be applauding that.

Here’s how Howey sums up his advice to authors.

“Too few successful self-pubbed authors talk about the incredible hours and hard work they put in, so it all seems so easy and attainable. The truth is, you’ve got to outwork most other authors out there. You’ve got to think about writing a few novels a year for several years before you even know if you’ve got what it takes. Most authors give up before they give themselves a chance. It’s similar to how publishers give up on authors before they truly have a chance.”

This seems like sound advice, but it isn’t how it appeared to work for Howey. He published a novella that was the start of “Wool” and his Amazon audience asked for more. Three more novellas followed over a period of just a few months, and the four combined became his bestselling novel. Six months after he started, he was making $50,000 a month or more and had an agent selling his film rights. Then his agent started selling his book rights in non-US territories and in other languages. Meanwhile, Howey continued to earn 70 percent of the revenues from his ebooks, in a deal Amazon offered that matched what they paid to agency publishers, the biggest publishers. (Would Amazon be paying authors 70 percent if publishers hadn’t come up with that number for agency? Should big publishers get some of the credit for the very good deal indie authors are getting?)

The logic that Howey offers about how self-publishing stacks up against doing deals with a big house is very persuasive, but there are two pieces of reality that contradict it.

One is that, at this time, four years after Howey did “Wool” and eight years after the launch of Kindle, there are no noteworthy authors who have abandoned their publishing deals for self-publishing. (It appeared briefly that Barry Eisler was the first such author, except that it turned out he signed an Amazon Publishing deal after turning down a Big Six contract; he didn’t go indie. And, frankly, while he’s somewhat successful, he’s not a show-stopper author for any publisher.) In fact, Amazon’s own publishing strategy has apparently switched away from trying to persuade big commercial fiction authors to do that and is focused on the genre fiction that is the core of the self-publishing done through them. Howey has been offering the same analysis for quite a few years now, but so far the publishers have lost hardly anybody they care about keeping to self-publishing. And we’re now in a period where the split of books sold online (ebooks and print) to books sold in stores (where publishers are beyond helpful; they’re necessary) appears to have stabilized — at least for the time being — after years of stores losing share.

The other is that Howey’s analysis totally leaves out one of the biggest categories of publishing: big non-fiction like history or biographies or industry analyses that take years of research and dedication to complete. Unlike a lot of fiction, those books not only take time, they require serious help and expense to research. In an imagined future world where all books are self-published, aspiring fiction writers give up very little (small advances) and successful fiction authors have the money to eat while they write the next book, on which they can make even more money doing it the Howey way (even though none have). But big non-fiction books like Jane Mayer’s “Dark Money” (or anything by David McCullough) took years of research to put together. “Dark Money” was undoubtedly financed at a very high level by the Doubleday imprint at Penguin Random House. How books like that will be funded in the future is not covered by Howey’s analysis.

Now, that’s not to say they must be. Economic realities do rule. Howey’s thesis that things are shifting in Amazon’s direction and away from the ecosystem that has sustained big book publishers is correct. He predicts that there will be three big publishers where once there were six and now there are five. I concur with that. As that happens, maybe the big fiction writers will take Howey’s advice.

But that solution is no solution for authors like Jane Mayer or David McCullough. A world without publishers where authors do the writing and the publishing might give us an output of fiction comparable to what we have now. But the biggest and best non-fiction would need another model if publishers weren’t able to take six-figure investment risks to support them. Amazon’s not offering it and neither is Howey. If the future unfolds as Howey imagines it, we’ll never know what books we’re missing.
