There is a core point that Pete McCarthy made clear to us when we first started working with him on digital marketing challenges a year or so ago. Critical though it is, it seems extremely difficult for publishers to take on board.
For all our careers, descriptive copy — catalog copy, title information sheets, press releases — about any book was written by somebody who really knew the book. That normally meant it was drafted by a junior editor or marketer who had read every word of the manuscript, and perhaps even worked on developing it.
But in today’s world, where the most important job of descriptive copy is to make the book “discoverable” through search to the person likely to buy it, it must be written with knowledge of the potential audiences, and that knowledge can only be gathered through research.
The reason for this change is not hard for anybody to understand. Almost all publisher-generated copy until the past ten years was intended for B2B intermediaries: buyers at accounts, book reviewers and editors, or librarians. It was their job to translate an accurate description of what was in the book for their audiences. Most consumers never saw publisher-generated copy except when they were browsing a shelf and chose to pick up a book and read its flap or cover copy, which usually differs only slightly from the B2B copy.
And whether or not consumers today see publisher-generated copy on a product page, search engines do, and consumers are increasingly driven by what search engines tell them. Writing copy without knowledge of the potential audiences, the language they use, the frequency with which specific search terms arise, what those terms reveal about consumer intent, and the other people, places, and things (let alone books!) competing for those terms is not going to achieve the desired results for discovery, no matter how accurately and eloquently the book’s content is described.
Even if the logic is fully absorbed and appreciated, the challenge for most publishers to change their process for creating descriptive copy is substantial. We’ve now replaced “knowledge of the book”, which was routinely gained through work that takes place before the copy is needed, with “research into the audience”, a separate task that can take a couple of hours or more and requires a dedicated effort.
(A parenthetical point here: if that audience research were done before the book was completely written, it could inform what content would sell best, not just what descriptive copy would be most readily discovered. That’s where publishers have to go in the long run, which suggests that editorial staff needs to learn the audience research techniques as urgently as marketers do. And we will add the massive understatement that knowing what this research would tell you can be extremely helpful in gauging the true potential audience for a book or author, which would influence what you’d calculate to be a sensible advance.)
The research exercise we’re suggesting is a prerequisite doesn’t just take time: it takes knowledge and skill, as does applying what is learned to the copy. Even if the knowledge were there and distributed across all the people who write descriptive copy today — and there is no publisher on the planet in which it is — the time required for the research would tax the resources of any house.
And that’s before we get to the distractions that can make publishers forget the core point, and they are plentiful.
The most recent one we’re aware of surfaced twice: two weeks ago in a piece by Porter Anderson, and again last week in an article in Publishing Perspectives featuring the new tech-driven book deconstruction and analytical capabilities developed by an ebook distributor called Trajectory. Trajectory acquired the assets of another auto-analysis engine, described in the piece as a “book discovery site”, called Small Demons.
What Small Demons and now Trajectory do — somewhat like BookLamp, which was acquired by Apple — is use natural language processing and semantic indexing to identify characteristics of the book that can be discerned by examining the writing. Small Demons seemed to focus on proper nouns, so it could find all the books that had action taking place in Paris. Trajectory and BookLamp focused as well on writing style, sentiment analysis, and story construction.
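To make the idea concrete, here is a toy sketch in Python, purely illustrative and not how Small Demons, BookLamp, or Trajectory actually worked, of indexing books by the proper-noun candidates their text mentions, so that a query like “Paris” returns every book whose action touches that place. The sample titles and snippets are invented for the example:

```python
import re
from collections import defaultdict

def extract_proper_nouns(text):
    """Crude proper-noun candidates: capitalized words that are not
    sentence-initial. Real systems use trained named-entity recognition;
    this heuristic is only a toy illustration."""
    candidates = set()
    for sentence in re.split(r'[.!?]\s+', text):
        words = sentence.split()
        for word in words[1:]:  # skip the sentence-initial word
            stripped = word.strip('.,!?;:"\'')
            if stripped and stripped[0].isupper():
                candidates.add(stripped)
    return candidates

def build_index(books):
    """Map each proper-noun candidate to the set of titles mentioning it."""
    index = defaultdict(set)
    for title, text in books.items():
        for noun in extract_proper_nouns(text):
            index[noun].add(title)
    return index

# Invented sample texts for illustration only.
books = {
    "A Moveable Feast": "We walked along the Seine. Later in Paris it rained.",
    "The Sun Also Rises": "They drove from Paris down to Pamplona for the fiesta.",
    "Winesburg, Ohio": "The town was quiet. George walked home alone.",
}

index = build_index(books)
print(sorted(index["Paris"]))  # prints ['A Moveable Feast', 'The Sun Also Rises']
```

A production system would replace the capitalization heuristic with trained named-entity recognition, but the shape of the pipeline is the same: deconstruct the text into features, then index books by those features.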
The logic is that if you like books set in wealthy suburbs with handsome 34-year-old male protagonists who break four hearts before falling hopelessly in love, who speak eloquently with the frequent use of five-dollar words, and who then get chased by bad guys until the heroine comes to the rescue in the last chapter, we can find them for you.
Even before I met Pete McCarthy, it seemed dubious to me that the kinds of similarities these analyses could document really predicted what a person would want to read based on what they’d read before. This logic would only make sense if the objective were to recommend a “next book” to a reader, assuming they liked what they were reading and wanted their next book to provide a similar experience. (There clearly are readers like this and they are very visible in fiction genres, but I’m quite skeptical that most readers are like this.)
But if the point of the analysis is to create copy, off-page keywords, or even “comps” that will promote “discovery”, and you buy Pete McCarthy’s premise that delivering solid SEO (search engine optimization) depends primarily on “understanding audiences”, then it is clear that calling this kind of analysis a tool to aid “discovery” is a massive misnomer that mostly leads to a wild goose chase.
In fact, it is doubling down on the very thing the industry needs to rethink. It is not nearly as important to develop a deeper tech-assisted understanding of “the book” as it is to do research into the audience. And analyzing a book’s text doesn’t deliver that understanding.
The promise of BookLamp, Small Demons, and, presumably, Trajectory, is that they can deliver an analysis that requires little or no staff time because they use sophisticated technology. And the main barrier to wider adoption of Pete McCarthy’s SEO techniques is that they require research that, even using the best tools, will take 2-to-4 hours of human investigation before the first word of copy can be written.
If you’re looking for books that are similar in style and content, the tech can help you and you should use it. But if what you want is to make your book pop in the searches of likely readers, you can’t dodge the work. And finding a book that is similar in writing style, pacing, and story construction really won’t help you at all.
“Discovery” is often discussed by publishers as though it were a problem consumers consciously have. I don’t think they do. My own unproven paradigm is that there are people who are always reading a book and people who are not. The former group knows well how to find books, and search is part of many of their arsenals. For the latter group, books tend to find them rather than the other way around, but today the best way for a book to find them would be when they’re searching for something else and a book would be a relevant result for the query. In either case, publishers have a vested interest in showing up for the right searches for the right people at the right times.
The Logical Marketing Agency we’ve built around Pete’s knowledge of digital marketing offers a variety of ways to help publishers with this challenge, including both having us do the audience analysis for particular books and delivering training seminars that can teach a publisher’s staff what it needs to know.