March, 2009

Getting to vertical: two disparate examples


Two examples of the shift from horizontal media to vertical have caught my attention in the last week, although both of them have been around for a while.

Monday’s “Online Media Daily” has a story about AOL hiring laid-off journalists for its new(ish) cluster of vertical channels. There are 70 such vertical channels already launched, with 30 more being planned. They’re going for the biggest verticals (duh…) and some, like TMZ, have already become major success stories.

This is an example of the kind of shift horizontal book publishers will have to make. AOL as it originally existed became a dinosaur. They built a huge audience by making Internet 1.0 simple and accessible to most people. When Internet 2.0 came along, the big opportunity would have been to become Facebook, but they missed it. However, they still had a huge audience, a legacy audience. They’ve been able to use the human bandwidth they have from that legacy to build these verticals.

This is analogous to what big trade publishers have to do. They are still placing millions of hard copy books each year in people’s hands. They can drive people to URLs, just like AOL could. Their power to do that will wane, just like AOL’s has. How many of them will have a TMZ when their main franchise is no longer powerful?

The second example is from an agency called Verso Advertising. Verso recognized that what is true of web content — that it works best organized by market niche — is also true for advertising. So they invested the effort to build vertical “channels”: aggregations of web sites that serve a particular interest.

Verso reports that they have 13 channels “built” with more to come. Pop Culture, Teen, and Parenting are the three biggest. Pop Culture includes 650 web sites and touches 21.4 million unique visitors a month. The Teen channel of 300 sites can be customized into sub-niches by gender, subject matter, and age range. The Parenting channel includes 120 sites.

Verso started thinking about these niche aggregations two years ago. They saw the haphazard way Internet advertising was being purchased and the big opportunities in targeting. They worked out a partnership with ad aggregator Burst Media (they have as clients the sites that receive the ads; we use them — among others — for our BaseballLibrary.com site) and planned to start the service in early 2009.

Talking to their clients, though, pushed them to start sooner. Farrar Straus used them for a new Thomas Friedman book last September. And they’ve had notable successes already for Vanguard Press (“Bad Dogs Have More Fun” in the “pop music” channel) and Berkley Books (using the Military History and the Science Fiction and Fantasy channels to put their author Jim Butcher on the hardcover bestseller list for the first time). Altogether, Verso reports having conducted 40 campaigns for 20 publishers, delivering 55 million highly targeted impressions in the process.

One aspect of Verso’s targeting is that it gets more refined as each campaign progresses. The response loop from Internet advertising allows Verso to shift spending within the vertical collection of sites for each book they’re promoting as they go. So, presumably, the last quarter of the money is spent more efficiently than the first quarter. That kind of refinement, of course, is impossible with print space advertising.
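To make that mechanic concrete, here is a minimal sketch of in-flight reallocation. It assumes (my assumption, not Verso’s or Burst Media’s disclosed method) that the remaining budget is simply shifted toward the sites in a channel with the best click-through rates; all the site names and numbers are invented.

```python
# Hypothetical illustration of shifting a campaign's remaining budget toward
# the sites in a vertical channel that are responding best. Not Verso's or
# Burst Media's actual algorithm; site names and numbers are invented.

def reallocate(remaining_budget, site_stats):
    """Split the remaining budget across sites in proportion to click-through rate."""
    ctrs = {site: (clicks / impressions if impressions else 0.0)
            for site, (impressions, clicks) in site_stats.items()}
    total = sum(ctrs.values())
    if total == 0:
        # No response data yet: spend evenly.
        return {site: remaining_budget / len(site_stats) for site in site_stats}
    return {site: remaining_budget * ctr / total for site, ctr in ctrs.items()}

# (impressions, clicks) observed so far in the campaign -- made-up numbers
stats = {
    "site_a.example.com": (120_000, 240),   # 0.20% CTR
    "site_b.example.com": (80_000, 400),    # 0.50% CTR
    "site_c.example.com": (50_000, 50),     # 0.10% CTR
}

for site, spend in reallocate(10_000, stats).items():
    print(f"{site}: ${spend:,.2f}")
```

In practice an agency would presumably also weigh conversions, frequency caps, and minimum spends per site, but even a crude loop like this shows why the last dollars of a campaign spend better than the first.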

We see the Verso niche site aggregation as a smart strategy, but using it for advertising is only the start. Many of the sites on which they’re placing ads are also potential hosts for content (which should be linked to a “buy” button, of course), and they are home to blogs that accept comments, which opens up all kinds of other possibilities. Verso is putting publishers on the right track, but using the same strategy for PR might yield even bigger results than it does with advertising.

It is worth making a distinction here. Neither the AOL nor the Verso example is about “community”. They are about “vertical”. Community right now, oddly enough, is still mostly a horizontal exercise (Facebook, Twitter, YouTube). But that’s temporary: communities require network effects and tools, and the two have not yet been in place in verticals. The right vertical strategy today will lead to the right community strategy tomorrow, and both AOL and Verso are putting themselves into a good position for the next turn of the Internet wheel.

2 Comments »

Third old publishing story: tracking POS, and the explosion of backlist sales in the 1970s


In an earlier post, I told the story of Ingram’s introduction of the microfiche reader in the 1970s and what it did for backlist sales. There was another new technology introduced at the same time by the B. Dalton bookstore chain, based in Minneapolis and owned by the Dayton-Hudson Company.

At this time, B. Dalton and Waldenbooks were pioneering the concept of many-outlet national bookstore chains. There were some national and regional chains already: Brentano’s and Doubleday had a dozen or two dozen stores nationwide; Kroch’s & Brentano’s had a cluster of stores in Chicago, Lauriat’s in New England, Books Inc. in California. But Dalton and Walden built their businesses by becoming tenants in shopping malls during the era of the malls’ great expansion. They were each on their way to hundreds of stores nationwide; this was an entirely new opportunity for publishers.

Dalton was particularly innovative. They assigned each book an SKU number. When the purchase order for a book was issued, which would be for many or all of the hundreds of stores, the requisite number of stickers with the correct SKU would go to the stores. When the books arrived, the stickers were applied, and the SKU could be read at checkout (by the cashier, not by the cash register). The punched-in numbers, usually correct, created a record. For the first time, buyers in a far-flung book operation knew exactly what was selling. (Or almost exactly; there were more than a few holes in the system.)

If memory serves, when there were about 300 Dalton stores, the sale of 6 copies a week chainwide constituted a “hot list” book and 6 copies a month was “warm list.” That works out to roughly one copy per store per year for a “hot” title. This was my first lesson in how few books sell enough to create statistical significance in any one store. That’s a critical thing to understand.

Soon, Dalton had established the concept of “model stock”: books that were automatically reordered based on sales. Smart sales reps quickly learned that, for the sales health of most books, getting into the model stock was more important than getting a big quantity buy.

Meanwhile, Walden had no such system. They relied on the diligence of their store managers, with limited coordination by the central office in Stamford, CT, to do the reordering. And in pretty short order, Dalton was eating their lunch. Particularly on backlist.

But then technology took another twist. In the late 1970s, “machine-readable fonts” were perfected. Harry Hoffman, the Ingram CEO who had introduced the microfiche, was by now the CEO of Walden. And he put out the edict that Walden would only stock books that had the ISBN and price printed in one of the OCR fonts. Compliance was pretty rapid.

Now Walden could do what Dalton could do — capture the POS data — but they didn’t need a clumsy sticker system to do it.  The playing field between the two chains was essentially leveled.

There is an ironic coda to this story. The two mall store chains still exist, but in a very diminished state. They are essentially being liquidated by today’s book retailing behemoths: Barnes & Noble, which owns B. Dalton, and Borders, which owns Walden. The irony is that B&N has the modern supply chain and, of course, needs no stickers on the books. But Borders does have a requirement to sticker books, and therefore has to do another “touch” at a warehouse before books can hit the stores, a costly enterprise in both time and money.

In some ways, the Dalton model stock system can be seen as the first growth of what became The Long Tail.

1 Comment »

More on the Google settlement


OK, so what I thought I had figured out earlier isn’t so simple.

In a prior post, I “discovered” (for everybody) that it is likely that the biggest revenue opportunity in the pile of books being scanned by Google would be the republishing possibilities among the orphans. I posited that with all those books about subjects still of interest from Babe Ruth to Eisenhower, there must be (to paraphrase an old joke Ronald Reagan loved to tell) “a pony in there somewhere.” MANY ponies, actually.

But, here’s the problem. It is not clear that anybody can publish those books without substantial risk.

There is, as I understand it, a potential liability under the copyright act (perhaps unlikely to be assessed, but possible) for publishing a book one doesn’t have rights to, even if one is diligent about looking for the copyright owner. Apparently, holding a normal royalty payment “in escrow” doesn’t eliminate that liability. 

The big win for Google in this settlement is that all parties are agreeing that Google is excused from any liability for the uses specified in the agreement. Those uses explicitly include streamed ebooks, but not “downloaded” ebooks.

There are also, in section 4.7 of the agreement, contemplated “new revenue models” that the Book Rights Registry (BRR) and Google can agree to, which include: 1) print-on-demand; 2) custom publishing (“helpfully” defined as “per page pricing for the educational and professional markets”); 3) downloadable PDFs; 4) consumer subscriptions, which are defined as individual sales of the databases intended to be sold to institutions; and 5) summaries, abstracts, or compilations.

Apparently nothing else is “protected.” The liberation of the stranded — orphan — IP is not accomplished for any uses not contemplated here.  My colleague Michael Cairns suggests that the Registry itself could create a presumption of diligent search for a copyright holder and mitigate the chances that a court would find statutory damages applied, but it might take legal cases playing out to determine that.

Let’s say there are 5 million orphan works and 1/2 of 1% of them are worthy of a press run of 5,000 or more. With a few bigger winners in there, let’s say that’s an average press run of 6,000 across the 25,000 estimated titles. That’s 150 million units. An average retail price of $15, an average discount of 50%, and a conservative royalty of 5% of retail calculates to $1.125 billion in revenue to publishers and $112.5 million in royalties.
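For anyone who wants to check that arithmetic, here it is as a short script; the inputs are just the assumptions stated above.

```python
# The back-of-envelope math above, using the post's own assumptions.
orphan_works  = 5_000_000
share_viable  = 0.005      # 1/2 of 1% worth a press run of 5,000+
avg_press_run = 6_000      # average run across those titles
retail_price  = 15.00
discount      = 0.50       # publisher receives 50% of retail
royalty_rate  = 0.05       # 5% of retail

titles  = orphan_works * share_viable              # 25,000 titles
units   = titles * avg_press_run                   # 150,000,000 units
revenue = units * retail_price * (1 - discount)    # $1.125 billion to publishers
royalty = units * retail_price * royalty_rate      # $112.5 million to authors

print(f"{titles:,.0f} titles, {units:,.0f} units")
print(f"Publisher revenue: ${revenue:,.0f}; royalties: ${royalty:,.0f}")
print(f"At 1/10 the scale (Cairns): ${revenue/10:,.0f} and ${royalty/10:,.0f}")
```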

Cairns says that maybe these numbers are too high by a factor of ten. If he’s right, we’re still talking about $112.5 million in revenues to publishers and $11.25 million in royalties to authors. I have to believe those numbers are still larger than licensing revenues will be, although Cairns and I have not explored that more complicated question seriously yet. And the truth of the press run potential probably lies north of Cairns’s number (although perhaps south of mine.)

Why was that element left out of the settlement? Did the negotiating parties even contemplate it? And exactly how useful is the “orphan” relief if this huge portion of the potential revenue (and public value) is omitted? Were the parties so fixated on electronic exploitation that they just didn’t notice this? 

It looks like the need for Congress to act is about as urgent as it was before. The Copyright Office has long noted the need for Orphan Works legislation in a host of contexts and has been unable to goad our legislators into taking the necessary steps. It had been my hope that the Google settlement had cut the Gordian knot, but it would appear that the problem of true public access is a long way from being solved.

1 Comment »

The University of Michigan Press announcement


Day before yesterday (Tuesday), the University of Michigan Press announced that it was no longer doing press runs of scholarly monographs. Henceforth, says the announcement, 50 of the 60 monographs published annually will be done “only as digital editions.”

What a retro way to position a progressive decision!

Publishing with no offset press run (or with one short offset press run) is a totally sensible way to deliver niche books, which scholarly monographs certainly are. But why make a big deal out of the fact that none are being printed in advance of orders?

The reason Michigan is going to this strategy is that so few of these books get sold. So why say you’re stopping anything? There should be no change in Michigan’s publishing and launch strategy (except possibly to go to a no-returns policy on monographs, or on certain parts of their list including monographs.) Why make an announcement that makes some people believe that the “book” they want might not be available to them anymore? Or that it might be available, but in something called a “print on demand” edition which, although they wouldn’t be able to tell the difference, suggests the possibility that it somehow isn’t as good as what they would have gotten before?

The end user doesn’t need to know how many copies were printed and bound along with the one s/he bought. There is no reason to confuse the consumer, or the supply chain, with irrelevant information. Do you tell them what size press you printed on? What size roll of paper?

Smart digitally-based publishing, where most sales are of an all-digital product and marginal add-on sales are of a printed (on demand) version, is going to be the most common model in a very short time. Nobody suffers. Everybody still gets exactly what they want. An announcement positioning this as some kind of a “cutback” is totally unnecessary and probably counterproductive.

2 Comments »

Riffing on Tamblyn’s “6 Things”, Part 1


Michael Tamblyn, the smart and dynamic leader of Booknet Canada who has performed minor miracles with the Canadian supply chain, gave a talk at his company’s tech forum a fortnight ago that has gotten a lot of deserved attention. It’s 30 minutes long, but it flies by and the presentation is great fun: very much worth watching.

I want to remark briefly on Michael’s six ideas. I’ll devote a later post to greater detail on one of them.

Michael begins by making the point that previous periods of financial difficulty have been nurturing times for new technology and new companies and new ideas. In troubled times, resources are cheaper and competitors are preoccupied, opening the door for new successes to be launched and nurtured. So, he asks, “what do you want your revolution to be?”

His first idea is that bibliographic data should be collected in the cloud and made available very cheaply or free to new or non-commercial users. Booknet has now started to acquire that data from publishers and will be crunching and distributing it. Although BNC’s activity is, for now at least, exclusively in Canada, this must be a very threatening notion to companies that make data collection a business (Bowker) or have data as a competitive advantage in a larger business (Ingram or Baker & Taylor.) The first requirement for a data aggregation service is that the sources of data must be in regular touch with it. That has been a handicap for Bowker in relation to Ingram or Amazon: publishers will more surely report a price change to an account than they will to a data aggregator. Booknet already has established very regular data exchanges with the entire Canadian publishing industry. They can pull this off and they are innovative enough that we would expect, when they have all the data, they will do more with it, and enable users to do more with it, than competitors do. This is potentially a game-changer for a lot of people. Michael hid it — the most disruptive idea he had to present — in plain sight by putting it first.

The second idea is that publishers need a StartWithXML workflow that doesn’t “kill people.” Michael lays out the problem very well, including showing that O’Reilly and Wiley, who have addressed it, have solutions most publishers can’t follow. (I have the “Wiley-O’Reilly Rule” for publishing, which is that those two companies always do things in the smartest way, but, for many reasons, it is usually impossible for other companies to imitate them.) From our work on the StartWithXML project I’m quite aware that this problem has been seen by others. Jouve North America and Value-Chain International, to name two, are working hard at making XML user-friendly for authors and editors now working in Word and for designers now working in Quark or InDesign. That’s essentially what Michael is suggesting. So this is a good idea, but not an original one, and smart people are working on it; until the problem is solved, we could certainly use more of them.

The third idea is that Michael wants to see a “DRM-free” ebook reader, but, by this he means “Date Repulsion Mode” rather than “Digital Rights Management.” This is a plea for an ereader that doesn’t itself look geeky and makes its user look sexy. This one isn’t up to the standards of the rest of the talk but it does provide some nice comic relief.

The fourth idea is the one we will explore in more detail (in a subsequent post): that publishers need a better tool than the present print catalog to help their reps help buyers reach the right frontlist buy decision.

The fifth idea is that the presentation of books online needs to improve. As Michael put it very well, we “search online” but “browse in stores.” He shows a number of interesting alternative presentations to the online bookstore standard pioneered by Amazon, but makes a crucial point, I think, when he talks about “curation” as the key. He wants online bookselling to move on from “we have all the books” to “we’ve distilled your interest down to this manageable number of choices”. As Michael said, “maybe it’s about presenting less.” There is great food for thought here (but no specific idea.)

The sixth idea is that publishers should integrate generalists who know tech into their business more, so that technology is not isolated from business practice. He lampoons the way tech is usually done in publishing companies, where a complete set of specs and an ROI are often needed before tech requirements for a new idea can be developed. This leads to the great advice that publishers need to place many little bets, learn from the ones that fail, and “double down” on the ones that appear to succeed. This is the culture-of-innovation approach that is absolutely essential. Whether publishers can actually do it, of course, is another question.

Everything Tamblyn said in this address is thought-provoking. The initiative on data could be an industry game-changer. We’ll have some more thoughts on the frontlist buying component of his address soon.

Note to my readers: The first two weeks I did this blog I posted from Monday to Saturday. Last week and the week before I cut Saturday out. Now I’ve decided quality and sanity require me to go to 4 days a week, which will routinely be Monday to Thursday (when I think people are most likely to be paying attention). Of course, inspiration or breaking news warranting commentary are always possible motivations for a post out of schedule. There already was one day early on when I did two.

1 Comment »

A slightly different take on the Google settlement


I have read and listened to a lot of dialogue about the Google settlement. I’m not a lawyer, I’m not a librarian or archivist, and I’m not a scholar who would be interested in those “non-consumptive uses” I didn’t know about before this all happened. To the extent that I had a horse in the race, it was about liberating orphan books. I worked with a current executive who, 10 years ago, was inside a different big company. We were analyzing the whole world of out-of-print books and the opportunities therein. We figured out pretty quickly that a lot of the good stuff we’d find would present devilish problems in trying to locate somebody to pay royalties to, and determining definitively that something post-1923 was not under copyright was not an airtight proposition either.

A few years ago, at the Frankfurt Book Fair before the Google Library project was announced, Michael Holdsworth, then at Cambridge University Press, related an observation from somebody he’d talked to who said “when we come back 30 or 50 years from now, most of the IP from the 20th century will have vanished. We’ll reach a point where if Google doesn’t report it, it doesn’t exist. Everything from before 1923 will have been scanned by somebody and everything post 2000 was born digital. Just about everything in between will be missing.”

That was very fresh in my mind when Google began to scan all those orphan works, breaking a logjam (one way or another; it now appears by this settlement) that the Congress had not resolved. In fact, legislation since the 1970s extending terms of copyright had actually made the problem worse. Under the laws I grew up under, I believe anything older than 56 years would have been in the public domain. That law today would liberate anything born before 1953. I would personally be out of copyright.

As a responsible member of the community, and a consultant who wants to help clients think through the implications of change, I find that the Google settlement becomes a tennis tournament where I have to attend every match.

The part that interests me most is the potential revenue beyond the settlement. Where is the revenue for this going to come from? Who will buy what from the material Google has digitized and what will the revenue opportunities really be for those who “opt in”? And what will Google really have to sell?

I went to Michael Cairns, former CEO of Bowker, with this question, and he and I are starting to think it through.

All the focus on revenues in the conversations I’ve heard, including a very stimulating seminar at Columbia ten days ago, has been about digital revenue. And that’s what Cairns and I were thinking about too. What, besides the pre-1923 PD stuff, do they have in the databases they can license to libraries? So how much can they charge? We saw Google’s pricing idea for ebooks. What will copyright owners do about pricing? And will copyright owners give Google books under this program, or under the Google Partnership Program? These are complicated questions.

Distracting, even.

Because that’s not where the money is. (This next part is purely a hunch; we haven’t done any numbers yet.)

Let’s remember that 99% of the consumer book business is still in print.

Think about how many orphan books would be worth a printing of 5000 copies or more. Start with this as a list from which to find probable candidates:

Any book that was made into a significant Hollywood movie.

Any book about FDR, Babe Ruth, Dwight Eisenhower, John Kennedy, Winston Churchill, etc.

Books about movie stars, TV stars, TV shows, pop musicians.

The number 1 fiction or non-fiction bestseller of any year (this could be a set used as birthday presents for special birthdays: 60, 65, 70, etc.)

My hunch is that the biggest revenue generator across the entire load of copyrights that the settlement will liberate for at least the next ten years will be books printed in press-run quantities. Who ever thought that the biggest beneficiaries of the Google settlement in the medium term could be agents and packagers? If somebody has previously mentioned the possibility, I hadn’t noticed. It only occurred to me day before yesterday.

Cairns reminds me that our friend (and fellow Michael) Cader thinks that the chances of any real “gems” being found in this orphan pile are remote. Of course, things that are remote possibilities happen from time to time over enough occurrences, and there will be a lot of books liberated. Surely there are many, in the categories mentioned above and others, that will warrant a first printing of  3,000 or 5,000 or 10,000, or with the right packaging and promotion, even more than that. Even in these troubled times, there might be some additions to staff at packagers or publishers to sift through these opportunities. Assuming these deals are to be made by the Book Rights Registry, let’s hope they have an agent on the staff along with the database sales manager.

4 Comments »

This ebook thing is just going to get more complicated


Adam Hodgkin at the Exact Editions blog posted a piece that explains the ebook strategies of Apple, Amazon, and Google in simple terms. Hodgkin’s piece really helps think things through, but I think his analysis is a bit oversimplified (which is part of why it helps think things through.)

Hodgkin sees brilliance in Apple’s move not to enter the proprietary ebook wars, but simply to be a facilitator of sales to iPhone users (iPhones being, at least currently, the most widely-distributed handheld device deemed suitable for ebook reading). He takes special note of Apple’s 30% “market maker” fee, which he posits might help drive down the accepted price for middle services in the ebook supply chain.

And, as Hodgkin sees it, Google and Amazon are pursuing directly opposite strategies to bring the ebook business to themselves. Google is betting that the future is licensing whole libraries in the cloud; Amazon is betting that it is buying ebooks one at a time to download to your device.

Hodgkin also notes that Apple’s 30% fee makes the 37% share Google will take before paying Book Rights Registry and the 55-65% discounts Amazon takes on Kindle ebooks (I actually doubt the discounts are quite that high on the vast majority of the Kindle books sold and Amazon discounting practices sharply reduce the percentage they are taking of actual selling price, which is, presumably, what Apple’s 30% would be based on) look very aggressive.  By this move, he says,  “Apple will thus appear to most publishers and authors as a reasonable partner, a less monopolistic partner, than either of the other West coast web giants.”

Hodgkin concludes the piece by seeing ebooks as a 3-company race (these three) and says he is “tempted to call it for Apple” although “there are quite a few laps to go.”

That last sentence is the absolute truth.

This piece took no note of Sony, Stanza, or the potential impact of broadly-distributed epub files. Perhaps Sony is considered part of the Google strategy, except that the 500,000 public domain books Google has made available for the Sony reader are free (aren’t they? I am happy to be corrected if I have that wrong) and they are downloaded, not left in the cloud (unlike the PD books that can be read directly on the iPhone, with the toggling between the OCRd version and the original print, which Google announced two weeks ago, and which do remain in the cloud.)

It also took no note of Barnes & Noble’s recent purchase of Fictionwise or the fact that Waterstone’s has teamed with Sony Reader for distribution in the UK.

And if Apple’s strategy is to capture 30% of the ebook revenue for everything that goes to an iPhone, they have a big hole in it already. One buys Kindle ebooks from the Amazon store, not the App Store. They download directly into the iPhone from the net (no intermediating PC necessary). I don’t see how Apple gets any of that revenue. (I am not sure about the “why” of this from Apple’s POV, except that some smarter people have told me that it will be much harder for Kindle to repeat this trick on other phones, so it could be a competitive move by Apple against Nokia and RIM.)

But I think, most of all, this analysis omits full consideration of the discrete functions served by the retailer in the supply chain. 

The online book retailer needs to do these things: 1) secure a customer’s attention; 2) aggregate titles to choose from; 3) merchandise, which is enabling discovery through “shopping”; 4) provide search, which is enabling discovery through “asking”; 5) transact, which includes delivering the file and accepting the money; and 6) provide customer service.

If a publisher or retailer or ebook platform provider sets up to sell through the App Store, Apple gives them a head start on number 1, nothing on number 2, nothing on number 3, nothing on number 4, presumably all of number 5, and probably nothing on number 6.

Amazon provides it all. I am still trying to understand what Google provides; I don’t think we have all the answers on that yet, except that we know they’re providing a ton of free econtent that will make selling other ebooks at substantial retail prices that much more difficult for everybody. This should not surprise anybody and it is not a knock on Google. They are primarily in the free content business. They are not in the “merchandising” business. And they don’t have the most saleable titles to sell; they actually, title for title, have the least saleable titles. The value of what Google has is in the aggregate and was always intended to be. 
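To keep the comparison straight, here is my reading of the two paragraphs above laid out as a simple checklist in code. The scoring is only a summary of what is said here, with Google marked as unknown; it is not anyone’s published feature list.

```python
# The six retailer functions from above, and who provides what, as read from
# the two preceding paragraphs. "?" marks what we don't yet know about Google.

FUNCTIONS = ["attention", "aggregation", "merchandising",
             "search", "transaction", "customer service"]

PROVIDERS = {
    "Apple (App Store)": ["partial", "no", "no", "no", "yes", "no"],
    "Amazon (Kindle)":   ["yes"] * 6,
    "Google":            ["?"] * 6,
}

print(f"{'':20}" + "".join(f"{f:>18}" for f in FUNCTIONS))
for provider, scores in PROVIDERS.items():
    print(f"{provider:20}" + "".join(f"{s:>18}" for s in scores))
```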

It is also critical to keep in mind that the ebook market for consumers has not happened yet! Publishers are seeing ebook sales of about 1% of their revenue. I am a bit abashed about how over-optimistic I have been about ebooks for the past ten years (a by-product of having personally read more books on devices than on paper, by a factor of about 4 to 1, in the 21st century, and about 40 to 1 since I got my Kindle). I can see ebooks getting to 7-10% of the units sold for consumer books in the next 3-to-5 years, and I’m the optimist.

And with 85% or more of even that incipient market still to come, most of which will be read on devices that haven’t been delivered yet (including future versions of the Kindle, Sony Reader, iPhone, etc.), and with whole business models (subscriptions, book-of-the-month plans, bundling of titles together, offers by publishers to give ebooks away with print or audio books) that have hardly surfaced yet, we can only imagine what more changes we might see between now and then.

When there is a real ebook market, there will have to be real ebook merchandising. That means complete metadata on the titles, including reader reviews and information about the printed book publication. (Amazon, because they have it for their regular store, has it for Kindle books. Nobody else comes close, although one presumes Fictionwise will get that printed book metadata once they’re integrated with Barnes & Noble.)

Michael Tamblyn pointed out in his widely-circulated “6 things” address that book merchandising on the web hasn’t really made much progress since Amazon invented it in the mid-1990s. What Kindle has got, what Stanza has built for the iPhone, and even what Fictionwise has, which might be the best presentation of ebooks even before being enhanced by B&N (and even without the printed-book information mentioned above), are not really well suited for presentation on the smaller screen of a device.

Apple is not providing the full suite of retail services. Assume that somebody has to be the bookseller here: pulling the titles together, curating them, grouping them, putting the right stuff out “in the window” or on the virtual “front table” on a daily basis (or, on the web, on a more sophisticated basis than “daily” suggests), and handholding the customer through any further questions (I’ve gotten great customer service attention for ebook problems in the past from both Powell’s and Diesel Ebooks). If so, there will be a lot of costs to pile on top of Apple’s 30% take for providing the venue and ringing up the sale. Apple is providing the real-world equivalents of “rent” and “shipping”. Looked at that way, 30% doesn’t seem so cheap, even if it is a very high-traffic location.

This is going to get a lot more complicated before it gets simpler. I didn’t mention Scrollmotion, another ebook format that can handle illustrated material better than any of the others so far. I didn’t mention publishers selling direct, which they are definitely going to be doing more and more. I didn’t mention that every phone manufacturer and cell phone network is going to go all out to compete with Apple and AT&T and their devices will handle ebooks too and they’ll have app stores too. I didn’t mention that directing you to your choice of format — any ebook or a printed book which could be in different formats — is (one of) the real end game(s) here. Neither of us mentioned Adobe Reader format, which is still the market leader in ebook units sold.

It isn’t just too early to predict a winner; it is too early to declare the finalists.

13 Comments »

Music stories: a bit about The Drongos


My wife, Martha Moran, and I managed a rock and roll band 25 years ago. They were called The Drongos. They were four intrepid young New Zealanders who had come to America with an itinerant theatrical troupe and stayed when the itinerants moved on. They made pretty close to a living playing on New York street corners through little Mouse amps and passing the hat. They’d been a band for a couple of years when we started to help them in 1981.

There were four Drongos. Stanley John Mitchell, the drummer and principal songwriter, now lives in Brooklyn with his wife Alice Barrett, a film, TV, and commercial actress. Richard Kennedy, the lead guitar player and a lead vocalist, has stubbornly made his living as a solo performing guitarist and singer, based in Frome, England. Tony McMaster, the bass player, and Jean McAllister, keyboards/guitar/vocals, are the married parents of four children in Auckland, New Zealand, and still very much involved in music there. 

The Drongos became established performers on a circuit through upstate New York (Woodstock, Albany, Ithaca, Rochester, Binghamton) over the 4 years or so we worked with them. We never made the match for a record deal with a major label; there was a lot of conversation but it never quite jelled. So we put out our own records.

Fortunately, but quite coincidentally, I was consulting at the time for a UK-based company called Proteus Books, which had bought into my idea for a niche strategy. We published books, mostly bios, on pop music and film. Only. The idea was that we’d do books in an assembly-line way that could sell in all English speaking markets and through bookstores, music stores, and record stores. That allowed us to have an integrated, rather than a book-by-book, marketing campaign. It also gave me a passable front for our self-produced, self-delivered records (and they were, primarily, vinyl records at that time.)

There are two reasons I’m telling you about The Drongos.

One is that I am proud of the promotional flyer I slipped onto the back cover of every copy of their first record. At the top it says, “If you like this record be sure to call your local radio station. It helps.”

And below it says, “The very best sound quality of The Drongos Album is available only on Proteus Records or Tapes. No home taped version may lawfully be offered for sale. However, home taping to spread the word about this album is encouraged. Please buy your blank tape in a store carrying this album.”

That was my doing. How many of us have such a well-documented record of seeing through the folly of self-defeating copyright protection before there was digital distribution? (And this is documented. Our old friend and major Drongos fan Ira Nonkin has a reproduction of the flyer on his Facebook page. I’m not hip enough on Facebook to know what deal you have to make with Ira to see it, but it’s there!)

Here’s the other reason.

Richard Kennedy is an amazing guitarist. He’s a lefty who plays a normally-strung guitar upside down. You have to see it to fully appreciate it. I just discovered this YouTube video of him playing and singing Don’t Touch Me, which was probably the Drongos’ most popular song. It was a rocker back then; it isn’t in this version, but it sure is amazing. I hope you’ll enjoy it. (If you do, here’s a bit more.)

34 Comments »

Talking to the agents, and introducing Filedby


I was flattered to be asked to speak to the AAR last night as part of a very distinguished group. My fellow panelists were John Sargent, CEO of Macmillan; Morgan Entrekin, the CEO of Atlantic Monthly Press; the agent Larry Kirshbaum, who was CEO of TimeWarner’s book division (now Hachette Book Group);  and Susan Katz, the CEO of Harper’s juvenile division. The topic was the “future of publishing.” We each got ten minutes to introduce our thoughts about “the future of publishing”. I went feeling the need to make three points:

1. The shift from horizontal to vertical is inexorable, unstoppable. People need to understand what that means and, as uncomfortable as it is for many leaders of today’s trade, they need to start adjusting their business to meet that shift. I wasn’t expecting any agreement, or even any recognition of this fact, from my fellow panelists. It’s still sort of my own private little point in trade publishing circles (but I’ll keep making it).

2. The impression I was getting from our BISG research for “Shifting Sales Channels” is that a) big publishers are feeling the pain more than smaller ones, b) people are seeing backlist erosion they hadn’t seen before (although that was contradicted at a lunch I had yesterday with a publisher who follows BookScan numbers closely and said backlist was holding pretty firm), and c) the pain was much worse in Q408 than in Q109. Publishers are feeling excess pain at the moment, of course, because they’re taking returns from the Fall against smaller frontlist buys. But, in any case, books are down a lot less than a lot of other discretionary purchases.

Short conclusion: books may not be recession-proof, but they might be recession-resistant.

3. Trade books live in an ecosystem. The publishers and agents in the room last night were mostly in the business of fiction and narrative non-fiction and juveniles. But if sales of travel books, craft books, and cookbooks go down, it hurts the stores. And if a store loses 10% or 15% of its business, it could close. Whatever growth publishers are seeing in online sales, they should never forget that retailers give priceless exposure to their books, and only full-line bookstores give that exposure to just about all their books. The agents and writers and publishers can be just as smart as they’ve ever been, but if bookstore shelf space shrinks, and it is doing that, the results will not be the same as they’ve always been.

All of my fellow panelists had useful contributions to make but I took most note of John Sargent’s points. He made it clear that big publishers are in troubled times. He pointed out that all big publishers work with borrowed money and want to be working with less of it. So they’ll be “de-leveraging.” That means smaller advances to authors, smaller printings, and tighter financial controls all around. He also reported that Macmillan had invested many more millions in ebook infrastructure last year than they had realized in ebook sales (in response to suggestions from some, including publisher-turned-agent Kirshbaum, that perhaps ebook royalties should rise.)

I made one point at the end that, to my surprise, seemed new to just about everybody. Very few had taken on board that the difficulties in the trade book business are partly due to the Long Tail: the fact that Amazon’s retailing and Ingram’s Lightning Print (particularly) are making it easy for people to buy books that would have been dead a decade or two ago just increases the competition for every book that is newly published tomorrow. (And, of course, throw used books in there too, part of the Long Tail and largely enabled by Amazon.)

This is the same phenomenon that has made it harder for new bands to break out for years: a kid today can still “discover” the Beatles or Bob Dylan and have dozens of songs to listen to and learn without any regard to what is “new”, because the Beatles and Dylan are new to them! We haven’t (yet) had the situation where a multi-book novelist from the 1880s or the 1930s becomes a new addiction, but we’re bound to eventually. And in the meantime, all those Long Tail units are just making the slope to success a little steeper for every new book.

I also told the agents (and, because I did, I want to tell you) about a brand new business I’m involved in called Filedby which, I’m happy to say, is addressing the Long Tail question from another direction. Filedby is now live with a web page for 1.8 million authors — every single one with a live ISBN in the US or Canada. The pages, already mounted, are “claimable” by the authors, giving them a big head start on a personalized web page that Filedby has built largely through automation. We see an enormous opportunity in helping authors help themselves. There are a lot of them not getting much help from their publishers. Frankly, except for Morgan Entrekin — who explicitly spoke about working the internet to find the audiences for books that would sell between 6,000 and 25,000 copies — nobody was offering much hope that the publishers would be doing more for the authors in the days to come. Everybody seems to be looking to authors to do more for themselves. I think my co-founder Peter Clifton and I picked a very good time to be starting this business.

9 Comments »

A few thoughts, some near heretical, about DRM


I got a call today from Laura Sydell of NPR in San Francisco to have a conversation about DRM. I found myself telling the story this way.

From the beginning, there were multiple ebook formats, the leading ones for a time being Adobe, Palm, and Microsoft’s .lit, with Mobi originally intended to be the format that bridged the gap (at that time) between devices. Then Amazon smartly took Mobi out of play, blocking anybody else from peddling a device-agnostic solution. And now we have e-readers…

From the beginning, there has been a reluctance of people to read BOOKS (goodness knows they read many other things) on screens, or at least on the screens that were presented to them for the purpose. This distinctly separates the book business from the music business, which I know I wrote about last week, but which also applies here. Your ears don’t care whether the speakers or headphones got the sound from a download or a record. It all works the same to you. But, as we all know, reading on a screen is, for most people, a sufficiently different experience from reading on paper that they’re likely to have an opinion about it (often whether they’ve actually tried it or not).

From the beginning, some people in the book business (mostly, I suspect, agents for very big authors and their publishers, who have the most at stake) have been concerned that there would be a spread of unauthorized digital copies if they didn’t “protect” them. They were apparently learning a lesson from the music business. But the music business was “stuck”: the format it sold music in, the CD, was a “gold master.” They were distributing digital copies.

From the beginning, there has been a romantic notion called “interoperability”, which says it is a wonderful thing if the same file can work on lots of different devices. So you should be able to read the book on your PC, or on your Sony- or Kindle-like device, and on your iPhone and/or Blackberry and your Sony PlayStation, for that matter. Believe it or not, not only do quite a few of the publishing digerati think this is very important, many actually blame the slow growth of the ebook market on the fact that the industry hasn’t accomplished the ability to deliver it. (Seems preposterous to me.)

The multitude of formats presented costs and hassles to the publishers. They had to do more work to put each book in shape for each format, and they had to do pretty meticulous quality control because a lot could go wrong. With ebooks not selling much at all, spending, say, $250 to convert to one format (starting with a PDF print file), and then $50 or $100 more for each additional format, created a whole decision-making cascade. This choked books off from the ebook stream, in one format or another or entirely, as publishers needed to “decide” to publish each book in one or more formats.
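As a rough illustration of that cascade, here is a sketch of the break-even arithmetic. The $250 first-format and $100 additional-format costs come from the paragraph above; the $5-per-unit net to the publisher is a hypothetical number of mine, there only to show how thin expected ebook sales made each added format hard to justify.

```python
# A minimal sketch of the conversion-cost decision described above. The $250
# and $100 conversion figures are from the post; the $5 net per unit is a
# hypothetical assumption used only for illustration.

FIRST_FORMAT_COST = 250.00   # convert from the PDF print file (per the post)
EXTRA_FORMAT_COST = 100.00   # each additional format (upper figure in the post)
NET_PER_UNIT = 5.00          # assumed net revenue per ebook sold (hypothetical)

def breakeven_units(format_number: int) -> float:
    """Incremental ebook sales needed for the nth format to pay for itself."""
    cost = FIRST_FORMAT_COST if format_number == 1 else EXTRA_FORMAT_COST
    return cost / NET_PER_UNIT

for n in range(1, 4):
    print(f"Format {n}: needs {breakeven_units(n):.0f} incremental sales to break even")
# Format 1: needs 50 incremental sales to break even
# Format 2: needs 20 incremental sales to break even
# Format 3: needs 20 incremental sales to break even
```

When a title might sell a few dozen ebooks in total, even those small thresholds were enough to keep it out of one format, or all of them.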

The multiple systems also prevented interoperability and restrained piracy. The DRM was actually a bit of window dressing; even unprotected files wouldn’t have traveled very far.

But then the industry, through the IDPF (International Digital Publishing Forum), developed the epub standard, a format that can be read by many different systems and/or converted inexpensively to other systems. So the publishers could provide just one file, the epub file, and the distribution channels could do the conversion to different formats. A giant step toward interoperability (and efficiency).

So now DRM is the one remaining barrier to interoperability, and the drumbeat to get rid of it gets louder and louder.

Also from the beginning, people have noticed that, in most cases, the more of a book you give away digitally, the more you sell. This would almost certainly not be the right strategy with high-value scientific reference, or a directory, but it is the experience of many people over a long period of time. Tim O’Reilly has famously pointed out that obscurity is a much more prevalent problem for books and authors than theft through piracy. Cory Doctorow is certainly the most vociferous, and among the most eloquent, in expressing contempt for the whole idea of DRM, the insult it constitutes to the audience of book readers, and its self-defeating nature. He has given away huge amounts of digital content and he credits doing so with growing his sales as a novelist.

My officemate and colleague Brian O’Leary of Magellan Media has been doing an ongoing study of the effects of free distribution with O’Reilly Media and Random House. They are documenting both the fact that there is no significant piracy of ebooks and that free distribution, even the limited piracy, seems to have a stimulative effect on sales.

We are at a moment where publishers are noticing this and taking it on board. O’Reilly and Thomas Nelson are the first I’ve noticed to start offering ebooks in multiple formats, with Nelson doing so for any buyer of a print book who registers on their site for it. (A nice way to capture names, too.) Others, notably Hachette’s unit Orbit and Random House, have started giving away ebooks (for free or, in Orbit’s case, for a buck or near-free) to promote books and authors. The ROI on these is close to infinity if it sells even one more book!

I hope that this is an accurate summary of events so far, except that I left out the Kindle (on purpose). Now I’d like to offer some forward-thinking and observe an enormous irony.

1. Forward-thinking. This notion of giving away ebooks has a tragedy of the commons built into it. It’s free and it works. So everybody’s going to do it. The choice of ebooks you can legitimately download for free or under a buck will grow by leaps and bounds (it already has.) At just the moment that the ebook market is growing, and lots of new people are coming into it, many people will be able to form the habit of choosing from what is free or near-free. Ultimately, this will have two negative effects. One is that it will depress the pricing across all titles. And the other is that the giveaways will lose their stimulative effect.

I would not suggest that anybody voluntarily try to save the commons. It would not be in their own best interests to do that and they would not succeed. 

2. Because there is going to be a culture of free or almost-free, piracy might well become an issue for the most popular ebooks as takeup of ebooks grows. It clearly has never been a problem, but that doesn’t mean it never will be. Things change. (See number 1.)

3. The Kindle. Amazon not only steered clear of the epub collaboration, they are aggressively blocking people from selling content that would be compatible with the Kindle. Everything about what they do is closed. The problem is that they’re defying history so far: growing faster with a closed system than all their competitors for ebook eyeballs combined.

That’s ironic.

But it’s not what’s most ironic.

I personally never got the thing about interoperability until now, when I am reading the great new biography of Abraham Lincoln by Ronald White on both my Kindle and my iPhone. Whenever I switch over from one to the other, it knows my place and asks me if I want to advance to it. This is great! I love interoperability. I have no use for it between any other two devices, but between my Kindle and my iPhone? Terrific!
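As an aside, the place-keeping trick is conceptually simple. The sketch below shows one way a “furthest position read” sync could work, with a shared store that every device reports to; it is a guess at the general idea, not Amazon’s actual protocol, and all the names are invented.

```python
# A guess at how cross-device place-keeping could work: each device reports
# the furthest position it has reached, and on opening a book a device is
# offered the furthest position reported anywhere. Not Amazon's actual
# protocol; every name here is invented.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Position:
    location: int      # e.g., a character offset or "location" in the book
    device: str

class SyncStore:
    """Shared, provider-controlled store of furthest-read positions."""

    def __init__(self) -> None:
        self._furthest: Dict[str, Position] = {}   # book_id -> Position

    def report(self, book_id: str, pos: Position) -> None:
        current = self._furthest.get(book_id)
        if current is None or pos.location > current.location:
            self._furthest[book_id] = pos

    def furthest(self, book_id: str) -> Optional[Position]:
        return self._furthest.get(book_id)

store = SyncStore()
store.report("lincoln-biography", Position(1520, "Kindle"))   # read on the Kindle
store.report("lincoln-biography", Position(2210, "iPhone"))   # read further on the iPhone

# Re-opening the book on the Kindle: offer to jump ahead to the iPhone's position.
local = Position(1520, "Kindle")
remote = store.furthest("lincoln-biography")
if remote and remote.location > local.location:
    print(f"Go to furthest location read ({remote.location}, from {remote.device})?")
```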

Of course, Amazon is probably able to deliver this functionality so seamlessly partly because they have a closed system and more control.

That’s really ironic.

18 Comments »