April, 2009

Have we got a show for you!


Here is the lineup for this year’s Making Information Pay session on “Shifting Sales Channels”, to take place at McGraw-Hill on May 7. 

The first half of the show is about “the state of the market.” The second half is about “what publishers are doing about it.”

We’ll start off with a report on the state of publishing from Leigh Watson Healy, Chief Analyst of Outsell, based on conversations she has had with many leading publishing CEOs in the past several months. Leigh is the lead for Outsell on the BISG Publishing Trends report, which will be coming out in a couple of weeks.

Then I’ll review what we learned from the industry survey and interviews Ted Hill and I did in preparation for this conference.

The highlight of the first half of the show will be a coordinated presentation from Jim King of BookScan and Kelly Gallagher of Bowker, showing how BookScan’s POS data and Bowker’s consumer data can be used in tandem for greater analytical insight. This is, we believe, the first coordinated presentation ever by these two companies, which in many spheres are competitors.

The second half of the show will kick off with BISG co-chair Dominique Raccah, CEO of Sourcebooks, talking about the many changes that have taken place in her company over the past couple of years. Dominique herself has become a dedicated online marketer (which anybody following her on Twitter knows very well), but she has also installed a digital workflow, changed her company’s title mix, and, in general, tried to react quickly to the changes of the digital age.

One of the trends we’ve found is a de-emphasizing of printed trade catalogs. A leader in this effort is HarperCollins, and the President of Harper’s sales division, Josh Marwell, will describe his company’s moves toward its own e-catalog as well as its participation in an industry effort called Edelweiss. Harper will not be issuing a printed catalog this Fall.

We have also observed big changes in how publishers are spending their marketing dollars, but none are changing more than Sterling. CEO Marcus Leaver will describe those changes: where he has put additional spending and where he found the dollars to cut to pay for the growth he saw was necessary.

The program will conclude with a lesson from Random House’s VP of Sales Analysis, Dave Thompson, who will pick up where King and Gallagher left off. Thompson’s focus will be on using the available data — and he sees great value in the combination of POS and consumer data — to educate buyers in accounts that are not steeped in the book business, like mass merchants. This is one of the growth areas some publishers have identified for the years to come.

All in all, a jam-packed program that should be of value to any publisher trying to improve its sales in difficult times. I’m proud of what we’ve put together and I hope any of you who haven’t signed up yet will grab one of the remaining tickets.


Ideas triggered by Amazon buying Lexcycle


The acquisition of Lexcycle by Amazon sure got all the digerati’s creative juices flowing. What is becoming increasingly clear is that general trade publishers have a card to play here that the niche publishers can only join in on: creating a collectively-owned ebook “store” that can provide an economic baseline for the emerging ebook marketplace.

Michael Cairns suggests this possibility in his piece this morning. But his focus is on epub and interoperability standards. Mine is, and I believe the publishers’ will be, on pricing throughout the supply chain.

Through laziness, thoughtlessness, carelessness, or inertia (or whatever combination thereof), the ebook supply chain has adopted discount structures that imitate the physical book supply chain. This is daft. There is no comparison between the retailers’ costs and risks associated with physical books and those associated with ebooks. There is no economic justification for providing the same level of discounts. But that’s where we are. Amazon may be arm-twisting to enforce this discount equivalence, but they didn’t think it up. It’s pretty much universal and it came from the publishers in the first place.

As I suggested in my ebook post from London last week, now is the time to change this, before ebook revenues become too great. The college publishers with CourseSmart have mapped out the way to do something about this legally, and the play is pretty obvious. The publishers need to jointly fund and substantially own a virtual retailer whose mission would be to deliver all conceivable ebook formats (whether epub or not!). The store should be competitive with other offerings as to interoperability, lightness of DRM (I favor social only), and customer service.

Establishing such a business would force publishers to figure out how much discount off retail is required to enable the retailer to be profitable. I suspect that number is about 20% and, at that level, would allow modest discounting (5% or 10%) on some titles to the consumer. To stay on the right side of the law, publishers would sell to the new entity on the same terms they sold to everybody else. But the objective here is to limit the ability of retailers to force higher discounts by boycotting publishers or titles with impunity. That is what is happening now. Sometimes the book you’re looking for on Kindle isn’t there because the publisher won’t agree to Amazon’s discount schedule. I know specifically of one medium-sized trade house for which that is true and, if there’s one, there are probably more.
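To make the arithmetic concrete, here is a back-of-the-envelope sketch; every number in it is an illustrative assumption of mine, not anyone’s actual terms:

```python
# Back-of-the-envelope math for a publisher-owned ebook store.
# Every number is an illustrative assumption, not anyone's actual terms.

list_price = 10.00      # publisher-set retail price for the ebook
store_discount = 0.20   # my suspected discount needed for the store to profit

wholesale_cost = list_price * (1 - store_discount)   # $8.00 goes to the publisher

for consumer_discount in (0.00, 0.05, 0.10):
    selling_price = list_price * (1 - consumer_discount)
    margin = selling_price - wholesale_cost
    print(f"{consumer_discount:.0%} off to the consumer: "
          f"sells at ${selling_price:.2f}, store keeps ${margin:.2f}")

# Even discounting 10% to the consumer, the store clears $1.00 a copy,
# with no rent, no inventory, and no returns risk to cover. That is why
# a 20% discount can suffice where physical retail needed 40-50%.
```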

If publishers don’t do this, the excessive discounts they offer retailers will turn into high standard discounts for consumers that will create inexorable downward price pressure. Amazon may be subsidizing that $9.99 price point they like, but the publishers are subsidizing it too.

This idea can work because six publishers control the lion’s share of bestsellers, which is a big chunk of ebook sales in the short run. Bestsellers is the one “niche” in which the general trade houses have critical mass.

And if this idea can work, another one waits in the wings.

It has been bemoaned that Google and Amazon are on a path to control both discovery and delivery of books in the future. This isn’t even a particularly competitive situation between the two of them, since Google is much more interested in discovery and Amazon is much more interested in delivery.

Because Google is more interested in discovery, they are also not particularly interested in books. They are about “all the world’s information”, not “all the world’s books.” So as robust as Google Book Search is, the company is not focused on making it a competitive book discovery tool compared to Amazon. They are about incorporating the book information to make a superior information discovery tool.

That leaves another opening for publishers where the Big Six have a strong collective position: the metadata associated with the biggest books.

What if the Big Six also jointly owned a book discovery site: Allaboutthebookyouwant.com? The play there would definitely not be to act as a retailer, but rather to help consumers find the book and the retailer that is best for them. All retailers that “play” would have their inventory and pricing made transparent on the site, which would contain a publisher-assisted best possible aggregate of enriched metadata (excerpts, stories behind the book, video support, etc.)

This initiative could solve a number of problems for the big publishers. It would create a “Switzerland” for enriched metadata: a place to make it available which would help all the online booksellers. To the extent that it grew in consumer acceptance, it would reduce the danger of being buried or victimized by bad data on a retailer site (think of the recent brouhaha about the “adult” books on Amazon). It would enable the consumer to shop across many book retailers at the same time. And the referral fees the site could earn would reduce the degree to which it needed to be subsidized as a marketing expense.

The current effort by several general trade publishers to drive traffic to their own house-branded web sites is misguided and doomed. But Amazon (and Shelfari, GoodReads, LibraryThing, and our new entrant, Filedby.com) have demonstrated that sites with information across the trade book spectrum have real consumer appeal. With the support of the big publishers from the earliest possible moment to make the high-profile general trade books visible, at least a large portion of the discovery traffic could be liberated from being captive to Amazon, Google, or anybody else. And the consumer could be assured that the information she is getting on purchasing was being offered in her best interest, not based on what a retailer is trying to push.

Worth noting: the ebook site Smashwords already sells ebooks at 15% margin, returning 85% of the publisher- or author-set retail price to the content owner. Up until now, Smashwords has been about author-generated ebooks; it has not pushed out an offer to publishers. And there are elements of Smashwords’s solution — DRM free, working from PDFs and doc files — that might not be exactly what publishers would want. But they might be the ebook solution, and it might be close to being in place. Smashwords may be the game-changer, but the publishers and public haven’t discovered it yet.


From a book to a 1.0 website: the story of BaseballLibrary, part 1


This is the first of what will be 3 or 4 posts about the birth and development of BaseballLibrary.com, a sterling Internet 1.0 site still chugging along (barely) deep in the Internet 2.0 era. It shows that a good idea can sustain itself for a long time, even in the face of erratic and sometimes incompetent management (and both the idea and the mismanagement are mostly mine.)  This first post tells the story of the creation of the book The Ballplayers, which was the key building block of Baseball Library. The next installment relates some interesting history about how the model for compensating for content changed in the late 1990s, but this foundation is needed first.

In late 1985, at what was one of the more difficult times of my consulting career, I was invited to a meeting to brainstorm the commercial possibilities for “The World Classics of Golf”, a book club. One person who was supposed to come to the meeting couldn’t make it. “Oh, Rodney’s working on his baseball encyclopedia.”

I pondered that as I walked home. What could that be? And then an idea hit me (although I still don’t know what Rodney’s idea was!).  The Baseball Encyclopedia, then published by Macmillan and also known as Big Mac, was the complete statistical compendium, player by player, of baseball history. And what struck me was that, because of Big Mac and its power, nobody had created a normal, regular, plain vanilla baseball encyclopedia: one where you could look up a player (or a team or an umpire or a baseball announcer) and read about him.

The idea of creating such a thing fascinated me and felt like something I could do. I had spent more hours of my waking life on baseball than on any other single thing. I knew (and know) a lot. I was also a member of a young organization called SABR, the Society for American Baseball Research, and I knew there were lots of people who knew even more than I did who could be rounded up to help. But I also knew this was a big project and I’d need help to figure it out.

So I went to Jim Charlton, an experienced book packager and a fellow baseball aficionado, with the idea. My startlingly naive notion was that Jim would just execute my idea for half the take. Jim probably just didn’t believe that I meant he’d do all the work, so he agreed. And together, we planned out the book that was later called (somewhat misleadingly) The Ballplayers.

At that time, if memory serves, there were about 14,000 people who had been active major leaguers in the 20th century (which is when the “modern era” of baseball begins.) By eliminating hitters who had fewer than 500 at-bats and pitchers with fewer than 100 innings of work, we got the number down to a manageable one, about 5,000. With the addition of some other players (from the 19th century and some who were worthy of inclusion despite having missed the cut-off), teams, leagues, announcers, sportswriters, owners, and umpires, we developed a list of 6,000 individuals who would get bios. By ranking all of them for their historical value (arbitrarily), we divided them into groups so that the most important players would get proportionately longer listings. And that enabled us to estimate the total length of the book, which was around 700,000 words and (we thought this was smart) about 500 photographs.
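For what it’s worth, that kind of length budgeting is simple to sketch. The tiers and entry lengths below are invented for illustration; only the roughly 6,000-entry and 700,000-word totals come from our actual plan:

```python
# Illustrative reconstruction of the length-budgeting exercise.
# The tier sizes and words-per-entry are invented for this example;
# only the ~6,000-entry and ~700,000-word totals come from the post.

tiers = [
    # (entries, words per entry)
    (100,  800),   # the most historically important figures get long bios
    (400,  400),
    (1500, 150),
    (4000,  60),   # the marginal players get a short paragraph
]

total_entries = sum(n for n, _ in tiers)
total_words = sum(n * w for n, w in tiers)
print(f"{total_entries:,} entries, {total_words:,} words")
# -> 6,000 entries, 705,000 words: close enough to plan a book around.
```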

Thanks largely to Jim’s contacts and sales skills, we sold the book in a mini-auction to Arbor House, an imprint of William Morrow, for $165,000, a pretty huge sum at that time. And then we got to work, paying a small per-entry fee to a long list of writers we recruited, mostly through SABR. We then recruited two recent college graduates, Shep Long and Steve Holtje, to help us coordinate and manage the project. Holtje stayed until the end and became Managing Editor.

But by this time, Jim had figured out that I really was serious about him doing all the work and, for half the money, that wasn’t a very good deal. So we had to renegotiate. I cut the pie and then offered him his choice of slices. We’d split the take 85-15. The one who got the 85 would do all the work and have to pay all the expenses. Jim decided to take the 15. So this project became my baby.

We completed the manuscript on time (with the help of an extended schedule) in the Fall of 1989. John Thorn, a veteran baseball book creator with an extraordinary list of credits, was commissioned to provide the photographs, which he did with his colleague Mark Rucker. And in Spring 1990, The Ballplayers, a 7-pound, 1330-page tome, hit the bookstores with a retail price of $35.

By this time, Morrow had shuttered the Arbor House imprint. The Ballplayers may have been the last book released with that colophon. That meant nobody in the shop had a stake in the book. So they printed 35,000 (probably the number required for the house to break even; that was an even more common practice in those days.) They advanced about 20,000 and ended up selling about 22,000. And by the end of 1993, the book was ready to be remaindered and for the rights to revert to me. 

The work had achieved a little bit of fame: a kind review in Sports Illustrated and an appearance by me to promote it on Good Morning America were the highlights (thanks to an independent publicist I hired; we got almost no PR from our publisher.) It was still the only reference book of its kind. In the meantime, Jim Charlton had created The Baseball Chronology, which was a day-by-day account of baseball history. That was really the only competition to The Ballplayers. It was published by Macmillan, which should have given it a better chance. But a couple of years later, it joined The Ballplayers on the remainder table and went out of print.

I was aware that we had made a major publishing mistake with The Ballplayers by including the 500 photographs and setting it in a pretty loose design and a big trim size. These things made the book bigger and heavier than it needed to be. A straight-text rendition with smaller type would have been more portable and could have been priced at $20. But that was water over the dam.

On the other hand, I now owned several hundred thousand words of baseball biography text and the internet era was just beginning. That would create a new opportunity, which is where we’ll take up this subject again in a subsequent post.


London Book Fair 2009; pretty personal observations


I love the London Book Fair. It is my favorite of the three book fairs I visit every year (BEA and Frankfurt being the other two) and I have even more fun there than at Tools of Change. Book Fairs, for me, are about seeing publishing people from all over the world, catching up with their thoughts, and, most important from a selfish point of view, having them vet mine.

Getting some work done for clients and finding new ones is what justifies the expense, of course, and there was plenty of both of those.

I spoke at two events at this year’s LBF. On the Sunday before the Monday morning that LBF actually opened, there was an all-afternoon session which was a “report from America” on digital change, put together by Michael Healy of BISG (and, as was revealed publicly at that very event, the likely new head of the Book Rights Registry if the proposed settlement of the Google lawsuit is accepted next month). It was a beautiful Sunday in London — bright and sunny — and apparently good weather has been in short supply (although you couldn’t prove it by me: I was there from Saturday to Thursday and it was lovely every single day.)

Despite the attractions of the weather, about 100 people came to the Sunday session. It was supposed to run from 1:30 to 6. It started a bit early and ended a bit late (last speaker off the stage at about 6:20) and the crowd at the end swarmed the speakers afterwards with individual questions.

On Wednesday, I spoke at the Supply Chain meeting (my presentation being about this year’s BISG effort for Making Information Pay: “Shifting Sales Channels.”) That session had been moved from its customary spot on the last afternoon (Wednesday) to the morning. Michael Holdsworth, one of the organizers, expressed just a bit of concern about whether the change would affect attendance. It didn’t. The room was packed.

I tried to go to one other session. Our StartWithXML effort has a London partner, the Publishers Licensing Society, which will stage a full-day Forum on September 1. So when their Executive Director, Dr. Alicia Wise, asked me to attend their session on ebooks for the visually impaired, I said I’d do it. Despite being a two-senses handicapped guy myself (glasses and hearing aids), this was not something high on my interest list and I figured it wouldn’t be on other people’s either. Wrong! I got to the session 10 minutes late and couldn’t get in because the room was jammed. But I didn’t feel too bad, because I found my longtime colleague, Mark Bide, also waiting outside. He couldn’t get in, and he was on the organizing committee for the event!

So even though there were fewer people in the hall than in prior years — I don’t know the official count, but I do know what I saw and what everybody else saw and said — there was a real appetite for future-oriented programming. There was a session featuring four UK CEOs, which I read about in The Bookseller show daily. That one was also well attended and attempted to be future-oriented, although from the account it would appear not particularly successfully.

What I kept thinking about as I walked around the Fair was “who won’t be here next year?” My top nominee would be Publishers Weekly; it is hard to understand how they manage to keep the doors open except by burning through the parent company’s money. Right behind them would be BookExpo America, another longstanding operation which exemplifies what happens to the horizontal publishing infrastructure as we build an increasingly vertical world. Although it is a popular pastime to “blame” PW editors or management for their predicament, I wouldn’t be inclined to do that. I don’t have a formula to suggest that would have saved them, nor do I for BEA. (Although if BEA goes down, an idea Michael Cader came up with that I joined him to put forward a few years ago called “Frankfurt in New York” might be something old that will suddenly become new again!)


Some ebook observations


Just had a very busy day at the London Book Fair. It is hard to post from here; I don’t have my normal 12 or more hours a day at the keyboard of my laptop. But what Book Fairs are all about is the compressed opportunity to encounter smart and knowledgeable people and I had the chance to check out and validate some thoughts I’ve been having about ebooks.

1. The proliferation of formats, devices, screen sizes, and delivery channels means that the idea of “output one epub file and let the intermediaries take it from there” is an unworkable strategy. Here are two simple reasons for that (I’m sure there are many others):

*Epub can “reflow” text, making adjustments for screen size. But there is no way to do that for illustrations or many charts or graphs without human intervention (for a long while, at least.) Even if you could program so that art would automatically resize for the screen size, you wouldn’t know whether the art would look any good or be legible in the different size. A human would have to look and be sure.

*The link between text and footnotes, and the easy ability to jump back, is a huge variable among ebooks in different formats. There is apparently some sort of manual work and quality control here that isn’t necessarily done by a downstream converter.

Publishers will find that they must do a QC check on every version of their ebooks which is offered, and a “version” can occur every time a component of the supply chain changes.

2. The branding of ebooks is a mess. The publisher brand is being obliterated. You are buying a Kindle ebook or a Stanza ebook or an Iceberg ebook or an eReader ebook and not Random House, HarperCollins, or Hachette. Publishers are apparently just allowing this to happen. This is pretty ironic because most of the same publishers are mistakenly trying to imbue their brands with consumer significance. For the general trade publisher, that’s not actually possible (since they are not distinguished by their content or their audiences). But if it were possible, the quality of their ebooks should be a big part of it going forward and they’re relinquishing the role of “owning” that voluntarily.

In some ways, they’re also relinquishing their primary responsibility as a publisher, which is to control the quality of the product they deliver for their authors to the authors’ readers.

3. The evolving discount structure for ebooks can’t possibly be sustained. Retailers always use margin to gain share. If publishers sell ebooks to eretailers for 50% off, consumers will soon be buying them at 40% off.  On the one hand, we are ten years into a paradigm of imitating brick-and-mortar pricing and terms and it is difficult to change it. On the other hand, ebooks are still only 1% or so of most publishers’ sales, so any change made now will be “early” in the overall scheme of the ebook business.
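Spelled out with illustrative numbers of my own choosing, the margin-buys-share arithmetic looks like this:

```python
# Why margin handed to eretailers turns into consumer discounts.
# All numbers here are illustrative assumptions, not anyone's real terms.

list_price = 25.00
publisher_discount = 0.50                    # ebook sold to the retailer at 50% off

cost_to_retailer = list_price * (1 - publisher_discount)   # $12.50
price_to_consumer = list_price * (1 - 0.40)                # retailer gives 40% off

margin = price_to_consumer - cost_to_retailer
print(f"pays ${cost_to_retailer:.2f}, sells at ${price_to_consumer:.2f}, "
      f"keeps ${margin:.2f} per copy")
# A retailer handed 50 points of margin can give 40 of them back to the
# consumer to buy share and still profit on every download.
```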

Somebody’s got to start building a glide path to a sensible structure. This will be complicated, because publishers in the long run will be much more likely to sell digital downloads direct to consumers than physical books. That means that just going to net pricing wouldn’t be much of a solution. With the publisher selling the books online, any intermediary would be able to calculate what percentage of the retail-to-consumer they were being asked to pay.

The conversation about the prices of ebooks has centered on the costs that publishers don’t incur: printing, binding, cash tied up in inventory, warehousing, returns. But publishers say that the manufacturing cost of a book is only about 10% of the retail price, that they still have to maintain the operation to do all the printed-book stuff, and that they are still investing to build the infrastructure to do the e-stuff.

Everybody’s right, but we’re ignoring the retailer side of it.

Retailers also avoid a lot of cost: rent, clerks, cash tied up in stock, shelving, returns. They also have front-end investment to make to build an infrastructure to process a digital download business.

I think if I were a big publisher, I would make it clear that the era of 40, 50, 55, 60% off retail for digital downloads is one that must come to an end. I’d lean to a phased reduction and, in the short run, all kinds of support (including additional margin) to help “retailers” (Stanza, B&N Fictionwise, Apple’s and RIM’s App Stores, and every store served by Ingram and Content Reserve) build their offering and their capability. 

The big publishers will have extraordinary leverage to recreate the paradigm. When there’s an ebook market of a size that matters (getting close), people will search Google for their favorite title if the search at their favorite ebook retailer doesn’t deliver the title. There will definitely be retailers that will take the business at lower margins, as can the publisher itself. Boycotting high profile books will be a very dangerous strategy for a retailer.


The Google settlement, answering some of the questions about the windfall


The post from Thursday about the Google “windfall” prompted a lot of informed sources to help me understand the settlement, large parts of which I clearly did not understand. We’ll go over the answers I got (as I understand them; my understanding seems to be a moving target…) to the questions Michael Cairns and I posed, and then I have a few more thoughts about where this leaves us.

1. There are no “rules” about what BRR can keep for its own operations. The notion is that since the Board is composed of representatives of the net recipients, who will benefit from tight cost control, there is incentive for the Board to manage expenses well.

2. A share of the money from the Google licensing activity that goes unclaimed because it is attributable to orphan works is first available to pay “inclusion fees” of $200 a title for the works of the opters-in. (This is not to be confused with the $60 per title scanned before this May 5, which is paid as part of the settlement.) Beyond that, the licensing money is divided among the opters-in on some to-be-determined formula based on usage.

3. The allocation of money to c/r holders is a little bit by volume of material (the up-to-$200 per book mentioned above) and thereafter by a to-be-determined measurement of use.

4. The “costs” Google incurs, including sales costs, are all included in the 37% deduction. That 37% is actually arrived at because the split is 70-30 after a 10% expense allocation. There is a provision in the settlement to enable rightsholders to claw back their share of that 10% out of unclaimed revenues. So Google keeps 37% of the total it processes, but declared rightsholders could still get 70% of the revenue attributable to their books with the difference coming out of orphan revenues (before additional payments that could occur out of additional unclaimed revenue.)
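For clarity, here is the split in point 4 expressed as arithmetic, per my understanding of the terms:

```python
# The 63/37 split as arithmetic, per my understanding of the terms above.

revenue = 100.00                    # gross revenue Google processes
after_expenses = revenue * 0.90     # the 10% expense allocation comes off the top
to_rightsholders = after_expenses * 0.70   # then a 70-30 split

print(f"{to_rightsholders:.0f}")              # 63, the rightsholders' 63%
print(f"{revenue - to_rightsholders:.0f}")    # 37, Google's 37%

# The clawback described above would restore a declared rightsholder
# to a full 70%, with the 7-point difference paid out of orphan revenues:
print(f"{revenue * 0.70 - to_rightsholders:.0f}")   # 7
```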

The answers to the rest of our questions would be purely speculative. Whether new models are clearly contemplated (like print-on-demand or downloadable ebooks) or not (like licensing orphans for press runs), deals for orphan books can only occur by mutual agreement of Google and the BRR.

And therein lies one big rub. We must assume that each of the three entities with decision-making power (Google, the Authors Guild, and the Association of American Publishers) will act in the best interests of its principal stakeholders. For Google that would be its shareholders; for the others it would mean their author and publisher constituents.

For those (like me) whose primary interest in the settlement is the liberation of all this stranded (orphan) IP, this is discouraging. I don’t believe Google would have reason to object to seeing old books published again, although, for those few that would be, their search “exclusive” could conceivably be compromised. In the overall scheme of things, that would be small beer.

And publishers would also have an interest in allowing those books to be relicensed and published again because, after all, publishers will be the ones relicensing.

Authors, on the other hand, would have no interest in seeing thousands of books come back to active promotional life. If you’re working on a new biography of Franklin Roosevelt, do you really want to see 25 of them published over the last 70 years and long since buried suddenly come back to compete with yours? I see no upside for today’s author in liberating the orphans and I would expect that to be an important consideration for Authors Guild representatives on the BRR board.

What that means is that this settlement does not eliminate the need for legislation to further break up the logjam blocking complete access to the orphans. It makes it important that Google be sincere in its statements that it still supports orphan legislation. My understanding is that it is representatives of non-book IP that have a lot to do with blocking such legislation. Publishers would have reason to favor it. Would authors?

And will approval of this settlement or its rejection make new and constructive orphan legislation more likely? It’s only a guess, and I know more about politics than I do about orphan works legislation, but I’d imagine that the game-changer of this settlement would be a spur to action and rejection of it would leave the matter in the courts.

In the conversations I had yesterday bringing me up to whatever speed I’ve been able to attain, it was pointed out to me that the number of orphans may be large, but the usage will be greater for those whose copyright is claimed. The books in the database that will get the most use will be those academic publications which don’t even have reversion clauses, so the copyright owner is not obscure. Think: you’re doing a book or paper on paranoia and you need to read what people thought in 1937 and 1951 and 1986. It will be situations like that which drive the page views.

Trying to estimate how many titles are involved (the $60 fees will be paid only on those that have been scanned by May 5, though many more will be scanned thereafter) is almost impossible. But even if 25-to-50 percent are claimed, and they amount to a somewhat higher percentage of the usage, the “windfall” is likely to be in the low eight figures annually. A big number.

As to the big question we posed (what happens with that windfall?), the answer is that, primarily, it goes to the opters-in, and primarily based on usage. How many of them will there be? A million? Several million? Cairns and I have to go back to our economic model in light of what we’ve learned, but we know those books will be sharing windfall revenues of some tens of millions of dollars annually. Assuming some sort of Pareto distribution of the revenues, that could result in some significant found money for select publishers — likely ones that are academic, sci-tech, or professional and have a very lengthy backlist. We’re likely to be talking about a handful of multi-million dollar windfalls.
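To show why a Pareto-style distribution yields a handful of big winners, here is a toy allocation; every input is an assumption, since nobody knows the real numbers yet:

```python
# A toy Pareto-style allocation of a hypothetical orphan windfall.
# Every input is an assumption for illustration; nobody knows the
# real numbers yet.

windfall = 30_000_000    # "low eight figures" of annual orphan revenue
publishers = 1_000       # assumed number of claimed-rightsholder publishers

# Zipf-like weights: the k-th ranked publisher's usage ~ 1/k.
weights = [1 / k for k in range(1, publishers + 1)]
total = sum(weights)
shares = [windfall * w / total for w in weights]

print(f"Top publisher:    ${shares[0]:,.0f}")          # ~ $4 million
print(f"Top 10 combined:  ${sum(shares[:10]):,.0f}")   # ~ $11.7 million
print(f"Median publisher: ${shares[publishers // 2]:,.0f}")   # ~ $8,000
# With any distribution this skewed, a few houses with long academic
# and professional backlists collect multi-million-dollar checks while
# most claimants see very little.
```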

The brand new position of the BRR will be as a licensor of the opt-in titles for whatever uses it can persuade the copyright owners to allow. BRR will be trying to demonstrate value here, both to copyright owners (so they keep putting new books into the system) and to potential licensees, based on BRR’s position as an aggregator of a massive number of copyrights (imagine if it is a million or more: that’s a lot in one place!)

From what we see here, the two most important questions (about which reasonable people can certainly disagree, and which nobody can really answer yet) are these:

Can BRR, given its assets and revenue by fiat and its restrictions by structure, serve a significant function beyond maintaining the database and adjudicating book rights disputes? (Or will it simply serve a one-time clearance function and then process checks?)

Would acceptance of this settlement be a spur to get more far-reaching stranded IP liberation through legislation? Or would stopping it make it more likely that the politicians would act?


The Google settlement and unanswered questions, particularly about the windfall


Michael Cairns and I have both been frustrated with most of the conversation surrounding the Google Book Search settlement. The principal concerns of most of the participants in the dialogue seem to be: 

1. Has Google unfairly captured a monopoly on some content?

2. Has the “class” of “orphan authors” been dealt with fairly, since they aren’t really “represented” in the negotiations?

3. If this case doesn’t adjudicate questions of “fair use”, does that ipso facto mean that a settlement is a bad idea?

4. Can any settlement of broad public-interest questions about copyright and use be legitimately resolved in any way other than through legislation, since, after all, copyright rules are created through legislation?

We believe it is unfortunate that the attention has been focused there because there are some very real commercial questions that we think need answers to fully appreciate the practical implications of the settlement. We’ve been doing our best to build a model of what revenue will be and where it will go. Trying to do that makes it very clear how much important detail has been omitted from the debate we’ve heard so far (and we’ve both heard a lot of it.) Here’s a starter list of questions that need answers to forecast this business, which we hope people more familiar with the terms of the settlement than we are might be able to answer for us.

By far the most significant questions we have concern how the revenues are divided, and these are significant questions because the preliminary financial projections we have done indicate that this database of content will produce hundreds of millions of dollars for Google and BRR.

1. We understand that revenue flows from the books in the database to Google and then 63% of that to the BRR.  Are there any rules set yet about what BRR can keep of these revenues for its own operations before it passes on the remainder to rightsholders? We might logically assume that BRR would require a diminishing percentage as revenues rise, but we wonder how those controls will be established.

2. We understand that future orphan claims can be compensated going back five years from the time of the claim. That suggests that the BRR has to hold the orphans’ money in escrow going back five years. The key question we have not heard discussed is: what happens with the money older than five years? We’ll expand on that below.

3. How is the allocation of revenue determined for the copyright owners in the database? Are they paid by the amount of content in the database? Or by the number of pages viewed of their work in the databases licensed? Or on some other basis? Or is that something still to be determined by the BRR?

4. We believe that any sales costs Google incurs, such as hiring another organization to help them sell licenses, would come out of Google’s 37%. Is that correct, or can Google deduct sales costs before dividing the money?

And we have a bunch of questions to which the answer might be that the Book Rights Registry (BRR: the entity with a Board of eight — four from the AAP and four from the Authors Guild — that can therefore deadlock) just decides. We want to know if there are any barriers or constraints on any of the following within the terms of the settlement.

5. We know that the database will have greater value and greater use if it is curated and merchandised. Is there a plan for this? Is there even a concept for how a third party could be compensated for doing this curation and merchandising? 

6. We see opportunities for services & solutions providers such as SharedBook to add value by providing the ability for customization, personalization, and annotation of the IP, and then perhaps to have the end product sold both as a book and as an ebook. Is this a deal that BRR would just be free to make on whatever terms they deemed appropriate?

7. Does BRR get to retain a larger percentage of revenues for ‘home-grown’ product initiatives such as the ones we are describing?  This revenue doesn’t come from Google like the institutional licensing and ebook sales money does, so does Google still get its full 37%?

8. To leverage non-database (non-Google) revenue opportunities we see three primary functions that need building: a storefront, an assembly technology (which could be much simpler than SharedBook: what if you wanted to put five Dickens novels together and print them?), and actual printing and delivery. Do we assume that BRR is free to put these capabilities together however it likes? Could it grant this as a sublicensed monopoly to Amazon or Ingram or Barnes & Noble? 

9. We puzzle over the pricing of POD. May we assume that BRR would be free to pursue any model? We can see two immediately: one is that BRR gets a percentage of the book’s retail (or wholesale) price and the other is that BRR charges a flat rate for the book content and the packager-reseller then charges whatever they want for the resulting book. Is BRR free to make these deals as it likes?

At the core of the important discussion about the settlement that has not occurred is the question “what happens to the money the orphan books earn?” If it is divided among all the opters-in, which seems at least as reasonable as letting BRR just keep it, then there is a huge potential windfall to the copyright holders who stay in this database. That has not been mentioned by anybody (as far as we know). By consensus, 5 million of the 7 million books that are going to earn many tens, if not hundreds, of millions of dollars annually are orphans so, by definition, they have no copyright owner to pay! Either BRR keeps the money or they give it to the contributors to the database.

Not to have discussed this strikes us as a startling omission. Somebody gets a windfall much larger than the one going to Google. Who is it?

This post is an intellectual joint effort with Michael Cairns, who did a very helpful editing job on the first draft as well.


A serious issue for big publishers


The Google settlement brings into bold relief what has been a quiet issue for book publishers, particularly the biggest ones.

They are largely in the dark about what rights they own.

It is not really hard to understand why they’re in this position and it isn’t really anybody’s “fault”, but it sure is a mess. The “rights database” or “contracts database” for most publishers consists largely of paper contracts in file drawers. That’s because all the big publishers gain a substantial portion of their income from backlist that was acquired years, decades, or even many decades ago, long before electronic rights databases were even conceived of.

There have been big improvements in the possibilities for storing this data in recent years. We’ve written extensively about StartWithXML processes and the idea that the rights information could “travel along” with the content in an XML document. That’s new stuff. So is the Klopotek system that actually builds the publisher-author contract from a rights database; the workflow had always worked the other way around.

But these solutions, even for publishers farsighted enough to employ them, don’t solve the problem of thousands of legacy contracts in file cabinets. The Google-related issues primarily revolve around whether the rights to an inactive book (or, in the settlement lingo, what they would call “not commercially available”) have reverted to the author or are still held by the publisher.

Publishers also have problems with books on which they unambiguously have the rights to print and sell copies. What they don’t know, without looking at the original contract, is whether the language in it gives them a shot at an ebook, a print-on-demand edition, or allows them to include some of the material in that book in an electronic database. Even looking at the book contract might not tell them if they have the rights to use artwork that is in the book in any other edition. 
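To make the gap concrete, here is a minimal sketch of the kind of per-title record a rights database would need. The field names are hypothetical; the point is how many of the cells start out empty for a legacy title:

```python
# A minimal sketch of a per-title rights record. Field names are
# hypothetical; the point is how many of them are unknowable without
# pulling the paper contract from the file drawer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RightsRecord:
    title: str
    print_rights: bool                       # usually unambiguous
    rights_reverted: Optional[bool] = None   # the key Google-settlement question
    ebook_rights: Optional[bool] = None
    pod_rights: Optional[bool] = None
    database_inclusion: Optional[bool] = None
    artwork_reuse: Optional[bool] = None     # may not even be in the book contract

# For a decades-old backlist title, everything but print is an empty cell:
legacy = RightsRecord(title="A 1974 backlist title", print_rights=True)
print(legacy)
```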

We are working on a future post on “business development”, which we figure is a big opportunity for publishers who have digitized a large amount of legacy content, which many (if not all) of the big publishers have. But any hopes of business development are stopped in their tracks if a publisher doesn’t know what rights are controlled.

The challenge of building a comprehensive rights database for the many tens of thousands of titles the big publishers control is probably cost-prohibitive. And even if money were no object, there would be a lot of empty cells in that database where information should be because contracts would be lost or incomplete. Figuring out how to attack that problem cost-effectively is one of the most important puzzles facing the senior executives of the major houses.


Times Book Review on advances, and related thoughts


The NY Times Book Review published a piece on advances online today to which I was first pointed by Twitter early this morning. I couldn’t tell whether author Michael Meyer was “for ‘em or agin’ ‘em”. On the one hand, he seemed to suggest that publishers are inclined to overpay, and he cites PublicAffairs head Peter Osnos very forcefully saying that it just isn’t necessary for publishers to get sucked into a high advance by market pressures. On the other hand, Meyer demonstrates through author testimony how little even a $100,000 advance is in relation to the time and effort required to write a book.

Advances against royalties paid by publishers to authors, like returns (one of last week’s topics), are often misunderstood and subject to flawed analysis. Here are a few general thoughts about them.

1. It is critical to understand that an “unearned advance” (that is: a book on which the advance paid by the publisher exceeds the royalties earned by the author) is not equivalent to an “unprofitable book.” Author royalties of 15% of retail (the top “standard” hardcover royalty for a book of narrative writing) amount to about 27-32% of the publisher’s receipts after trade discounts. Since unit manufacturing cost is about 15-20% of receipts, and the publisher has other direct costs that aren’t based on units sold (design and the 21st century equivalent of “typesetting”, book jacket creation, marketing expenses, and returns and overstock), it is roughly true that the author shares profits with the publisher 50-50. So if the author’s advance ends up delivering a royalty of 17% or 20% or even 25% of receipts, which is the net effect of an unearned advance, the publisher might well still have made money. A worked example follows.
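Here is that worked example; the retail price, discount, and cost percentages are illustrative assumptions within the ranges cited above:

```python
# Worked example of the royalty arithmetic, with illustrative numbers
# chosen from within the ranges cited in the text.

retail_price = 26.00
trade_discount = 0.50                    # assumed average discount to retailers
receipts = retail_price * (1 - trade_discount)       # $13.00 to the publisher

royalty = retail_price * 0.15            # 15% of retail, $3.90
print(f"royalty = {royalty / receipts:.0%} of receipts")    # 30%

manufacturing = receipts * 0.18          # within the 15-20% range cited
other_direct = receipts * 0.22           # design, jacket, marketing, returns (assumed)
profit = receipts - royalty - manufacturing - other_direct

print(f"author ${royalty:.2f} vs. publisher ${profit:.2f}")   # $3.90 vs. $3.90
# With these assumptions the author and publisher split the margin left
# after direct costs almost exactly 50-50, which is why a modestly
# unearned advance does not necessarily mean an unprofitable book.
```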

2. What publishers really care about (or, at least, really should care about) is how fast their cash turns over. That portion of an advance paid “on publication” might actually only be floated for a very short time. In the case of a book where a publisher has foreign rights to sell, it is even possible for the publisher to make deals that recapture the advance before it is paid. Those situations aren’t common, but they do occur. Shifting the advance payments so that they occur later makes advances much easier for publishers to bear. I was involved in one deal where the advance was in quarters and the last quarter was paid on paperback publication, which occurred over a year after the hardcover publication. Some “advances” aren’t paid in “advance.”

3. The publisher quoted as being skeptical of the need to be sucked into paying outsized advances, Peter Osnos, runs a small house that is owned by and distributed through a larger network. PublicAffairs doesn’t have to “feed the beast” — provide sufficient volume to cover the high fixed costs of publishing operations: warehouse, infrastructure, and the biggest part of overheads. The CEOs of the major houses have to be sure that enough volume will go through their operations each year to sustain them. That means that “guaranteed” volume is of premium value and agents, knowing that, can command a premium price. The sales coming from mega-books from mega-authors (on which mega-advances are paid) keep the big house’s doors open for everybody else. In other words, a house that pays fixed costs for its operations has a different strategic stake in big books than a house that is distributed on a fee-for-volume basis. Osnos’s advice is very sound for the many thousands of publishers who are smaller than the giants, but it would be suicide for any of the Big Six.

4. Peter Mayer gets the history right about how big money came into the game; it was led by the large advances paid by paperback houses in the late 1960s and early 1970s. That also led to the combining of what were, for more than a quarter century after World War II, two different and separate businesses: trade publishing and mass-market publishing. It isn’t mentioned in this piece, but Mayer (and his marketing director at that time, Bill Shinker) were responsible for moving full-sized books into mass market channels when they sold gazillions of copies of a trade paperback through the rack jobbers (memory unsupported by research says it was “The People’s Pharmacy”.) Bantam then sold the hardcover “Iacocca” the same way and, in another decade, there was no longer a distinction between “trade” and “mass.”

In 1979, Crown sold the paperback rights to “Princess Daisy” to Bantam for $3.1 million. That remains, today, the highest price ever paid by a paperback house for the rights to an original hardcover; it was the high water mark. So the account of the genesis of large advances is accurate, but trade houses have been on their own on this for three full decades. I see great irony in the history Peter Mayer reminds us of. It was the sub rights departments of hardcover houses that turned this into a big money business, and the agents followed. I know that at the same time, standard practice for agents was just changing from submitting a manuscript to one house at a time, consecutively, to the multiple submissions which are a prerequisite to competitive bidding and auctions.

So if Peter’s history is right, corporate greed drove entrepreneurial greed, not the other way around. I wonder whether there were editors at publishing houses complaining to agents about this dastardly new practice of multiple submissions at the same time that the sub rights department down the hall was setting up an auction for the next big book? (No bloggers at the time to call them on it if they did!)

At the other end of the continuum, I invented a technique on the very first book I published, in 1974, which I am a bit surprised I have never seen since (which doesn’t mean nobody else has done it!). The book was “Amnesty: The Unsettled Question of Vietnam” and it was a 3-author debate (“Now”, “Never”, and “If…”) including Senator Mark Hatfield. The authors each did their part for no “advance”, but instead got a $500 guaranteed first royalty payment, giving us time to get the money from sales to pay them. As things turned out, they would have earned only about $350 each on the first payment, but I had taken in enough from receipts to pay them the $500, and ultimately they all earned it out. Paperback rights were never sold.

I saw notice of the TBR piece on Twitter this morning, read it, and wrote this piece. Figured it would be Monday’s post…But then an hour later I went back to Twitter and saw that my friend Evan Schnittman, who just started a new blog called Black Plastic Glasses, had already published his rant on the Times piece and it wasn’t even 2:30 on Saturday afternoon!

We have different takes. His is publisher-centric. Hey! He’s a publisher! Enjoy it.

Oh, and this is Monday’s post. It might even have to hold the prime position until Wednesday.


A technology that could unlock a door to the future


Michael Cairns blogged yesterday about a deal SharedBook has just made with ourenergypolicy.org to use an annotation technology SharedBook has. SharedBook is a client and I spent some time this morning getting updated by CEO Caroline Vanderlip about this new technology.

This is wikipedia-type capability with a spin that publishers and authors will really like. With wikipedia, the edits and annotations from “the crowd” (or from whomever is allowed to mess with the wiki) actually change and revise the content itself. With SharedBook’s annotation technology, the original published content remains locked, and the changes are appended as footnotes! The footnotes can be associated with a chunk, a paragraph, a word, a symbol, a diagram, a picture. Whatever you like. And using the capability to manipulate content into a one-off book that SharedBook is known for, a reader can order up a printed book with whichever of the footnotes the reader wants in their own copy of the book. They’re then numbered consecutively and gathered at the back of the book.
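For those who think in data structures, here is a minimal sketch of the concept; this is my illustration, not SharedBook’s actual implementation. The source text stays immutable, annotations anchor to spans of it, and the reader’s chosen subset renders as consecutively numbered notes at the back:

```python
# Minimal sketch of locked-content-plus-appended-footnotes annotation.
# My illustration of the concept, not SharedBook's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    anchor: str      # the chunk/paragraph/word the note attaches to
    author: str
    note: str

def render(text: str, annotations: list[Annotation], selected: list[int]) -> str:
    """Render the locked text with the reader's chosen annotations as
    consecutively numbered endnotes; the source text itself never changes."""
    out, endnotes = text, []
    for n, idx in enumerate(selected, start=1):
        a = annotations[idx]
        out = out.replace(a.anchor, f"{a.anchor}[{n}]", 1)  # mark first occurrence
        endnotes.append(f"[{n}] {a.author}: {a.note}")
    return out + "\n\nNotes\n" + "\n".join(endnotes)

chapter = "Energy policy requires tradeoffs between cost and emissions."
expert_notes = [
    Annotation("tradeoffs", "Expert A", "See the 2008 cost curves."),
    Annotation("emissions", "Expert B", "Lifecycle, not just smokestack."),
]
# Each reader picks their own subset; the chapter text is never rewritten.
print(render(chapter, expert_notes, selected=[1, 0]))
```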

The possibilities here are endless. A professor could pepper a textbook with his or her own annotations. Or the class could use the technology to add their own annotations. A professional organization can (as ourenergypolicy.org will) restrict annotations to approved experts; then the “reader” can select which of those to include in their own unique version of the book. A mystery writer or sci fi writer could use this technology to capture thoughts from other writers or fans. 

The “platform book” concept described by Joe Esposito might be handled differently using this technology, but perhaps that would be for the better. Certainly any author who believes in his or her own rendition, but also believes in the value of crowd-sourced input, would be more comfortable with SharedBook’s annotation technology than with a wiki.

Caroline told me today that it was this annotation technology that first attracted her to the company five years ago. At the time, she found it impossible to explain the benefits to any publisher. Thanks to wikipedia, that’s no longer a problem.

It is easy to imagine this annotation technology becoming an important tool in moving us toward a much more dynamic concept of what a book can be.
