As we near the end of 1995, a few things about new technologies and New Media, as they pertain to the book publishing business, are considerably clearer than they were a year ago.
Last year at this time, reasonable people among us nurtured the belief that high production-value CDROMs for the consumer market would shortly become, if not a thriving business, certainly a profitable one.
Last year at this time, very few of us had heard of the World Wide Web.
Last year at this time, email, for those in publishing who used it at all, was likely to be an intra-company communication device.
But, oh, what a year 1995 has been.
The emergence of new opportunities for publishers to make use of online seems to have been as sudden as the collapse of hope that electronic products that cost hundreds of thousands of dollars to develop can be economically viable.
This shift in our understanding needs to be matched by a shift in strategy. A year ago it would have seemed reasonable for a big consumer publisher to be allocating its New Media learning dollars more heavily to developing bombshell hard copy products than to developing online skills. It doesn’t make sense now.
Evidence of this change arises regularly. The October 23rd issue of ABA’s Bookselling This Week, reporting on October’s Northern California regional trade show, said, and I quote, “last year’s technology talk focused on stocking multimedia, this year it was all Internet.”
This morning I want to discuss what I think are the three most important, useful, and actionable truths about new technology and New Media that have been clearly revealed in the past twelve months as far as book publishers are concerned. All were, at best, obscure matters of conjecture 12 months ago.
ONE: The six-figures-to-create content-based New Media product, which usually means a high production value CDROM, is not a sensible business for book publishers, if it is a sensible business at all.
TWO: Whatever you do in book publishing, if you’re not online today, you’re behind. This is not primarily because we can sell books over the Internet; we probably can’t, in any important numbers, for quite some time. It is because online capabilities can enhance the work of every editor, marketer, sales person, designer, production person, numbers tracker, and corporate communicator. Online will change the way we do everything in our business; it provides efficiencies, economies, and an essential and otherwise inaccessible parallel world.
THREE: A New Media Division or department is exactly the WRONG way to prepare a 1990s book publishing company for the technology of the 21st century. That approach will fail to develop successful New Media products; it won’t best help the company employ online communications, and it undermines the basic strengths of any prospering publishing enterprise.
Getting a Handle on Change: Looking Back at What Has Worked
The pace of change is now so quick that it helps to gain context by examining events over a broader time horizon. One important clue from the past two decades provides a conceptual framework for contemplating large-scale change.
By our count, there have been four great consumer new technology adoptions in the past 20 years. In the order that they occurred, they were:
The videotape recorder, the audio CD player, the fax machine, and the personal computer.
If you want to think about why they succeeded, you need to consider how they were used when they were first made available.
The videotape recorder was first used to time-shift existing television programming, and shortly thereafter to view feature films. Even the very first people who bought them knew exactly what they would use them for.
The audio CD player was simply, but obviously, a better way for every consumer to archive and access recorded music; their purchasers knew what content they would play on them before they bought the machines.
The fax machine offered a better way to send hard copy communication and, because it worked on ordinary phone lines everybody owned, it was a new technology everybody was almost instantly comfortable with.
And the personal computer succeeded by making two essentials of any enterprise, word processing and spreadsheet management, easier than they were with typewriters and calculators.
There is an evident common denominator to what made these technologies gain rapid acceptance: they provided improved utility for current behavior. No change in goals or functions was required to make use of them. They all provided a better way to do what was already being done.
The CONTENT didn’t change!
It has been a couple of years since I first made this observation, and it could be argued that the cellular telephone legitimately adds a fifth new technology to the list. Another dynamic of new technology adoption is certainly demonstrated by them, as some third world countries seem to be bypassing the wired phone system entirely to go straight to cellular. Whether or not its ubiquity is yet sufficient to put the cellular phone in the same class as these others is debatable; the fact that it fits the utility and constant-content patterns can hardly be disputed.
The failure to grasp this basic lesson, that new technology adoptions must require very little change on the part of the consumer and may require absolutely no change in the content, is the biggest component of the answer to the question, “What Went Wrong with CDROM?”
Groping for a Hard Copy Product Model
In some areas of professional publishing, there is already hard-copy product in the marketplace that does reflect that concept of practical utility, does make sense, and does make money. It is clearly easier for an attorney to find and apply appropriate citations with electronic retrieval than to manipulate the citations in print. Professional directories of almost any kind instantly become more functional in electronic form. Markets where it is obvious how to make it easier to do what you were doing already are inevitably the first to be served. Some of the groups that require more sophisticated graphics, like architects, get served a little later than those that can make use of large amounts of text.
Lengthy references – multi-volume encyclopedias – also have succeeded commercially in CDROM, or at least have done mortal damage to their print competitors. This success is at least partially driven by institutional decisions. Libraries have invested in hardware, so their governing authorities, quite sensibly, insisted they follow by investing in software. Library book budgets have shrunk in favor of expanded budgets for electronic product.
Bernie Rath, the executive director of the American Booksellers Association, likes to point out that Microsoft Encarta, the most widely-distributed encyclopedia in the world, gets its content from the Funk & Wagnalls encyclopedia which was sold for 99 cents a volume in America’s supermarkets. Meanwhile, the proud Britannica, for which Encarta is no scholarly match, is threatened with oblivion because it was too slow to move to electronic forms, or at least to develop workable economic models to permit a transition. Unless Microsoft or somebody else both takes over Britannica and elects to maintain the quality of its content, the intellectual capital of civilization will be reduced.
Of course, neither the Encarta nor Britannica model seems likely to prevail as an encyclopedia in the end, given the growing intrusion of online as a place to “look it up”. One easily summons up a concept of an encyclopedia that is dynamic, interactive, and perpetually being updated, by organizing modern online tools to solve the age-old need. It is a challenge to measure how much value exists in either the Encarta or Britannica franchise in a world where each of their entries competes against the entire world of digitized information.
If poor quality were the key to market success, there would be grounds to defend it. It hasn’t been: the consumer new media market has been an equal-opportunity disaster for good products and bad. Many smaller development firms have already failed; some relatively big names in this new game are rumored to be on the verge.
Grasping the reasons for this marketplace failure is important for all publishers, even those who may not see themselves in the consumer market. For one thing, digital information, driven by online distribution, will increasingly blur the distinctions between consumer and professional channels, even between consumer and educational channels. But, as we will see, consumer product standards can also drive professional product development.
There are many reasons for the disaster in today’s consumer new media market, which is characterized by the continuing supply of products that lose big money. It certainly does not help that much of the product is, literally, use-less. Too many consumer CDROMs are products made to explore, rather than to teach or to use.
So the product has failed on two levels: it has generally failed to create something engaging, fresh, and useful, and it has failed to establish a commercially-viable model. Both of these failings are largely the result of an obsession with high production value. Most CDROMs make no sense in the way publishers make book products make sense, with a balance of projected sales revenues to projected creation costs that can result in a profit. How did this happen?
We lost touch with our roots.
Publishing: The Business of Content and Markets
Whatever else it is and whatever delivery form it takes, at its base all publishing is the business of content-and-markets. That’s why we have subject specialist editors; that’s what houses mean when something is “not for our list” or “not something we do well.” In all publishing houses, subject sensitivity prevails beyond the editors to the marketers. And it goes beyond the publishing houses to the store sections and subject specialists among the booksellers and in the libraries.
In professional and academic publishing, the importance of subject specialization is even more pronounced. Mathematics editors spend more time at conventions and meetings of mathematicians than they do at conventions and meetings of publishing people, or writers. Which they should.
What publishing product creators, whether we mean editors or authors, know about the page design or typesetting process, or how the book is printed or bound, varies widely. What they must know is their subject matter.
The application of specialized knowledge is what makes our business go around. It is amazing to see that coping with the impact of technology threw us so far off our normal thought processes that many publishing companies abandoned publishing’s first principles for new media.
Who has made the new media product decisions since normal book editors have not? Whether inside the company or outside, the reliance has been on “new media developers” who are, in essence, technologists. They certainly aren’t subject information specialists.
They have been guided by CEOs and high-level executives, not by subject-oriented editors and marketers from lower down in the ranks.
They have created product without the restraints of measuring each investment decision against an expected return, partly because so much of the product development publishers have done so far has been seen as research and development; partly because the potential market has been so breathtakingly overestimated; partly because the publishing organization’s honed instincts to keep making those calculations were disengaged from the process.
The technologist’s background is often in the software industry. The superficial similarity between the businesses creates a highly misleading perspective. In both businesses, products are developed and launched; some succeed and most fail. Reviewers are key to the process and influence retailers, who are not expected to carry or promote nearly everything they are offered.
But there are big differences that were too little appreciated. A lot more books are published than software products issued. For that and other reasons, each book title gets a much smaller total sale than one expects from an individual software product. In 1992, we did some research to estimate that the average book did $250,000 at retail, the average software product in excess of $2 million, about 8 times as much.
This means a software developer anticipates far greater revenue for each product, leaving him comfortable with far larger product development expense than would be the instinct of a book publisher to accept.
The reason there are so many more books than software applications is that software provides a shell, or a tool, not content. Applications software programs are blank books. How many titles do you need?
And software veterans grew up in a highly technology-sensitive world. Their output was always judged by the review community and their peers for its modernity, its features.
Publishing’s normal content-and-market contexts have been suspended for too much of the new media development. And without the right governing perspective to guide their thinking, New Media developers accepted two widely-held and persistent misjudgments that contributed further to inflating the true potential of the content-based CDROM.
First: the installed base of CDROM drives has consistently been a vast overstatement of their accessibility and, consequently, of the market for CDROM products. Estimates today claim a CDROM installed base as high as 50% or more of computer users, which leads to inflated expectations for a number of reasons.
For one thing, the fact that CDROM drives have been built into almost all commercial desktop PCs sold for the past year or more does not speak to how many people are actually able to use them. We know that initially only a small proportion of the pre-installed modems were activated, although that has changed. It seems likely that there was a corresponding lack of use of the pre-installed CDROM drives.
Another way the installed base overstates the accessibility is that people who use notebooks aren’t usually attached to a CDROM drive. Even if one has another computer with a CDROM drive, even if the laptop sometimes comes to rest in a docking station that has one, there are many hours of modern computer use where a CDROM drive is not accessible.
This concept of the installed base has fueled a casual and mistaken acceptance of CDROMs as a data delivery conveyance as useful as diskettes.
A dirty secret about Windows-environment CDROMs, pre-Windows95, is that many of them have presented real headaches to the user. Sometimes they are merely difficult to install. Worse, they can corrupt data on the hard drive of the innocent computer user. These problems will apparently disappear with the new plug-and-play features of Windows95, but we know now that alone won’t instantly change the marketplace. The required hardware upgrade to use Windows95 assures that a full transition will take some time. How fast Windows95 will reverse a jaundiced consumer perspective toward CDROM remains to be seen. Many new computer owners suffered frustrations with the CDROMs provided with their new machines and are discouraged about trying again.
Parenthetically, these are problems that have not existed with CDROMs for the Apple Macintosh. But Windows, of course, has the lion’s share of the market.
So whether we have 20 million or 50 million Windows-based CDROM drives out there is hardly defining; we have never had an equivalent number of potential customers.
The second great misjudgment goes back to the “practical utility” rule for new technology adoption, evidently ignored in most CDROM development. The new information CDROM products do not address old habits or desires in their attempt to succeed. It often seems that sound and motion and interactivity have been added to high production value information products almost exclusively to employ the technology, not because utility was enhanced by their addition. This is a real case of the tail wagging the dog: the expensive assets are the low-utility ones.
The first time I personally encountered this phenomenon was two years ago when a major global publisher put a scholarly encyclopedia of mammals, 12 volumes in print, onto a single CDROM. This high level product had very pronounced utility in a digital form: searchability, cross-referencing of Latin and English names, and so forth. And there was so much data involved that CDROM was necessary to handle all the information. In a professional and library market, for which this product was intended, the CDROM accessibility barriers are sharply reduced as well.
I was stunned at the time to learn that more than 65% of the cost of that product was in the sound and video, features which its professional market would almost certainly ignore. But the developers of the product felt that reviewers would hoot them out of the market if they peddled an expensive CDROM without sound and motion. Maybe they were right, since they have to jump the hurdle of a technology-oriented review community that thinks this way.
A recent check with the same company revealed that they would feel less pressure today to deliver video or sound for a scientific market. That represents progress in the past couple of years, and an indication of where all markets, including the consumer market, are heading.
A Consumer Market That Hasn’t Happened, and May Never
So there has been a persistent misconception about the size of the market for CDROM product in general. This was compounded by the companion error of assuming that an entirely new kind of product, where information is presented in a way that maximizes the ability to integrate text, images, sound, moving images, and software, could gain rapid consumer acceptance even though there had been no consumer knowledge of such a product before.
And the consequences of those two misconceptions were multiplied by suspending the rigorous examination of cost-value relationships in product development, so product that either has to sell unrealistic quantities or sell for unrealistic prices, or both, became, for a short time now near an end, the product standard.
Virtually every publisher involved in consumer CDROM products has experienced the disappointment. The lucky ones are seeing their licensees struggle as huge product development costs, which include rights advances and fees paid to publishers, are nowhere near being recouped by sales. Most of the more adventurous publishers who developed their own products instead of licensing are now writing down the value of their own investments.
Optimism for consumer CDROMs costing hundreds of thousands to develop is very difficult to justify today. A year ago, the lack of sales wasn’t too disturbing. Christmas ’94, following a year of an apparently exploding CDROM drive base, was when sales were supposed to start becoming significant. They didn’t.
The Distribution Channels: They Add Up to “Not Enough”
The software and OEM channels, which were expected to provide the bulk of the distribution opportunities for this new generation of products, are glutted. They clearly cannot provide the necessary artery to the public for information CDROMs while applications software and games, which constitute their core product base, continue to expand their offerings.
OEM (which stands for “original equipment manufacturer” and is what bundling software with hardware is called) revenues for CDROM have dried up dramatically. Through the middle of 1994, 80% or more of the information CDROM units distributed to the consumer went through this channel. It was a rate that couldn’t possibly keep pace with an expanding product base, and it hasn’t.
Bookstores are actually gravitating to the new product fairly quickly, by historical standards. After all, they were slow to accept audiobooks. Going back a generation or so, conventional bookshops were slow to accept paperbacks. But still, recent PW surveys suggest that only half the stores in the US will carry new media product by the end of 1995. What must be more discouraging to CDROM developers is that the sales profile of what bookstores sell looks like books, with each outlet stocking and selling product in quantities of 1’s and 2’s. Enough sales data are now in to make it clear that reasonably-anticipable sales don’t add up to anywhere near what is needed to recoup routine six-figure product development investments.
Given how wide of the consumer mark the product development has been, the bookstores’ caution may simply reflect the wisdom of merchants who insist they understand the value of products they sell. In fairness, there are some categories that can find a measurable consumer market: reference, as we’ve mentioned, and juveniles to some degree. They are also the most competitive. Juveniles benefit in the consumer market from a force similar to the driver for library sales. Like libraries, parents own the hardware, and like library administrations, they want to see it used. Whether kids will ultimately prefer a big stack of CDROMs to a big stack of books is another question and, frankly, to me, seems highly unlikely.
All this paints a very scary picture for the many publishers now developing some kind of information new media product for the consumer market. It is very uncomfortable to be midway through an 18-month, half-million dollar product development exercise as the evidence that it will never pay off comes pouring in.
That leaves publishers with a knotty problem. If high production value CDROMs can’t be viable in today’s marketplace, what kind of electronic books can publishers make to address what we all believe is an emerging market?
Of course, if I had “the answer” to this question, I’d offer to sell it to the highest bidder in the room. I don’t, but I do have some suggestions about how to look for it, which we’ll get to in a little while. First, let’s spend some time discussing the world of online.
What We All Want to Know: How Many, How Soon?
We must begin by acknowledging that the size of the online audience is very difficult to measure, although the fact of very rapid growth cannot be disputed.
You all saw reports last week about the new Nielsen survey, which says that 16.6% of US and Canadian adults have access to the Internet, and that 10.8% have used it in the last three months. That’s about 37 million people with access, 24 million users in the last three months. And that’s the latest in a series of recent studies, some more bullish and one very much more bearish than what Nielsen just said.
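As a rough consistency check, by the way, the percentages and the absolute numbers do hang together; the adult base of roughly 223 million is inferred from the survey’s own figures, not something Nielsen reported to us:

$$\frac{37 \text{ million}}{0.166} \approx 223 \text{ million adults}, \qquad 0.108 \times 223 \text{ million} \approx 24 \text{ million recent users}$$

That internal consistency is some comfort at a moment when every survey seems to produce a different count.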
When I prepared my presentation for the Vista Conference in London in July, I wanted to hazard what seemed then the too-shocking prediction that America would reach virtually “universal connectivity”, where everyone above the poverty line was online, by the year 2000. Cooler heads prevailed on me to scale back the prediction a little bit, so what I said was that universal connectivity would come sometime between the year 2000 and 2005.
By the September Vista Conference in London, new information made it look like I had given myself the right margin of error, but in the wrong direction and universal connectivity by 1997 seemed possible. Then the much more restrained estimate of today’s online population reflected in the recent O’Reilly survey suggested that the original projection of the year 2000 is the reasonable one, after all.
Note that the restrained survey justified the original projection of universal connectivity by the year 2000; it just didn’t support advancing the date to 1997. I think the Nielsen numbers make 2000 look excessively conservative again.
Going Online and the American Advantage
Today’s cost for going online is about $800 worth of new computer and $20-a-month. The cost is dropping and it will continue to. The United States today is far ahead of the rest of the world in people connected online because of the Internet and the impact of having had three competing commercial services for nearly a decade. But despite America’s head start, no part of the developed world will be terribly far behind as connectivity develops its inevitable momentum.
As the numbers of people online increase, more and more resources are devoted to developing sophisticated capabilities to make connectivity more valuable. Online is already seen as a better way by many people to get some information that previously came bundled in newspapers or television newscasts, like weather, sports scores and summaries, and stock market prices, even local movie listings in big cities like New York. Ultimately, the flexibility of online delivery will assert itself. For example, it permits local news from anywhere, so a husband and wife can each get news from their own hometown, however geographically separated was their youth.
Of course, virtually everybody who can connect online has an email box. As the numbers who are connected grow, email will increasingly substitute for much of what now travels by first class mail: bills and payments and routine correspondence.
The biggest restraints on those who have email are that too few other people have email addresses, and that the mechanisms to catalog and report addresses are not as developed as they are for streets, phone, and fax. But that is also changing.
It is worth a moment to contemplate how easy it is to reply to an email message. The recipient just hits the reply button, keys in what he has to say, hits the return key, and the response is in its cyber-envelope, with a cyber-stamp on it, ready to be posted immediately or automatically the next time the computer goes online. It is easy to add the feature of repeating the text that triggered the reply, so the context of the reply is always fully appreciated.
It seems obvious that people will sometimes choose to answer a letter by sending email. Given how the response mechanism works for email, it hardly seems likely that an email communication will draw a letter in return!
We are indeed fortunate that our part of publishing is books. Newspapers, and what the Internet community calls snail mail will become vestigial when connectivity becomes universal, even though they will continue to survive for many years thereafter. Their survival will be due to the inertia of their existence, but their utility will decline rapidly. Magazines will also have drastic adjustments to make, although their raison d’etre is not as thoroughly compromised. But most books, and particularly the majority of consumer books sold that are meant to be read, not consulted, are not immediately threatened, even by universal connectivity enhanced by very sophisticated tools.
Online Now: Explosive Growth
That the online world today is much further advanced in the United States than anywhere else in the world is due in some part to America’s inherent commercial advantage: 250 million still largely affluent people using one currency with business transacted in one language. But even more than that, it is primarily due to the fact that the Internet was invented and nurtured by the Pentagon and American universities.
I would take a moment here to make a political and social observation that is prompted by this fact. There have been several cases in American history where a substantial government initiative directly enabled great wealth-building in the private sector. The first one was the Louisiana Purchase. Others of great significance were the building of the railroads in the late 19th century, rural electrification under Franklin Roosevelt and, relatively recently, the building of a large interstate highway system beginning in the 1950s.
It is ironic that at this moment in history, when the Internet, thank you Pentagon, is about to join their illustrious number as perhaps the greatest private and public wealth-builder of all, the total fiction that government creates no wealth is such a widely-accepted economic principle among American politicians. No Pentagon, no government investment, no Internet. Without the Internet, one wouldn’t suggest that the world would never have gone online, but it is very hard to imagine how far behind where we are we would be.
An important dynamic operating here is that more people online beget more people online. It is clear that the relatively large numbers in the United States have a great deal to do with the smaller, but much larger than they would have been, numbers in the rest of the world. By the same token, the millions of people placed online by the Internet, quite aside from the commercial online services, also help to spur the accelerated growth.
There are over 10 million people enrolled today in the three biggest commercial online services: CompuServe, Prodigy, and America Online. Their number was already growing by about half-a-million a month before the advent of the Microsoft Network with the release of Windows95. There were early estimates that the Microsoft Network alone could add nine million additional online connections in a year, although recent developments, including Microsoft’s self-imposed limitation of 500,000 initial subscribers, make that seem inflated. America Online claims to have tripled its subscriber base to 3 million in the 12 months ending last July, after having tripled from just over 300,000 the year before. The most recent AOL numbers push 4 million.
In June at the ABA we projected 10 million Web browsers by year end; 20 million by a year from June. That still seems reasonable, probably too conservative. Since the telephone infrastructure enabling online is already in place, the speed of growth depends only on consumer motivation. Apparently the American consumer is extremely well motivated.
The USA Today report of August 28, 1995
I said earlier that shortly before the September Vista Conference in London the rush to online seemed to be coming even faster. On August 28, USA Today reported in a front page story that Yankelovich Partners, a well-established US polling company, had found one in seven US adults online in May! The same survey indicated that the number had doubled since October 1994.
No connectivity numbers yet published should be taken too literally, and this report was suspect since the study seemed more focused on defining cybercitizens than counting them. And the report did not make clear how many of these connections were email only, and how many were capable of being content explorers, equipped with a Web browser or at least a full Internet connection.
But what I find more reliable, and more startling, is the trend line. It shows a doubling in online penetration in just seven months. If that growth trend is accurate and it continues, the US could well have more people online than voting in the Presidential election in November of 1996, even if one takes the view that the online population was half or less than the 14% they thought it was in May.
Using the probably more reliable Nielsen estimate of about 11% Internet users and 16% with access as of about three months ago, and applying the Yankelovich growth rate, would get us at or near universality by early 1997.
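To make the arithmetic explicit, here is a sketch, assuming the Nielsen access figure of about 16% as of roughly August 1995 (the speech’s “about 3 months ago”) and the Yankelovich doubling time of seven months, both of which are soft numbers:

$$a(t) = 0.16 \times 2^{t/7} = 1 \;\Rightarrow\; t = 7\log_2\!\left(\frac{1}{0.16}\right) \approx 18.5 \text{ months}$$

Eighteen and a half months from August 1995 lands in February 1997, which is how one arrives at “at or near universality” by early 1997.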
What is so Special about the Web?
Why do we seem to be accelerating toward this nearly-universal online population? What is, relatively suddenly, motivating large numbers of people to go online today when they lived quite happily without being wired yesterday?
A big part of the answer is the World Wide Web.
The Internet was created as a communications device, not a content storage medium. Its content was created, over time, in the form of postings to the bulletin boards of usenet newsgroups. And, in the beginning, access and navigation of the Internet were incredibly cumbersome, requiring the command of arcane computer codes.
The commercial online services rose to prominence by eliminating a lot of the problems from the user’s end. They specialized in making connecting and navigating easy, which they accomplished by building closed systems where they could control the graphical user interface when the Internet was an all-text enterprise. Prodigy rose to challenge CompuServe precisely on the basis of providing a more intuitive and easy-to-use interface when hope first arose that computers would move from the office to the home about ten years ago. America Online raised the ante by being even more graphical and more user-friendly than Prodigy.
The brand new Microsoft Network hoped to trump the other online services with the ease of connection from Windows95. There was a great deal of concern expressed in the months before Windows95 was released that the one-button connection to MSN would push the other major services to the wall. Whether it was driven by technical or business considerations or as a political tactic, Microsoft decided, for an unknown time period, to limit to 500,000 the number of connections it will allow to MSN.
Content Migration to the Web
The technical and business structures of these closed systems, which were created to provide a more user-friendly interface than the Internet, have always posed problems for the content providers. To begin with, providing content requires some kind of a “deal” between the content owner and the online service, often working through a bureaucracy at the online service that can frustrate publishers. And the computer protocols unique to each service slow the process of content generation and require that the commercial online service itself be involved in every change or addition to what is offered.
Not only that, the big services have historically negotiated for exclusivity on content they mount, meaning that content providers could address only about a third of the commercial online market with their product. For example, the three major weekly newsmagazines in the US have for years divided themselves among the services, with Time on America Online, Newsweek on Prodigy, and US News & World Report on CompuServe. This is not the kind of head-to-head competition the magazines themselves prefer.
Suddenly, with the Web, many of the problems for the content creator inherent in the online service went away. Web pages permit graphics and file downloading, and they work in easy-to-use Hyper Text Markup Language (HTML) which puts control back in the content creator’s shop. Web pages are technically easy to mount and change, and maintaining control of the environment is very attractive at a time when so much of what content holders want to do is experiment.
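To make concrete how little machinery is involved, here is a sketch of a complete, workable Web page. The publisher, title, and file names are all hypothetical, but the handful of tags shown is genuinely about all a simple page requires:

```html
<!-- dirtroad.html: a hypothetical page for a single title -->
<html>
<head>
<title>Example Press: The Cyber Dirt Road</title>
</head>
<body>
<h1>The Cyber Dirt Road</h1>
<p>A hypothetical new title from Example Press, in bookstores this fall.</p>
<!-- an ordinary hyperlink to a sample chapter mounted on the same server -->
<p><a href="chapter1.html">Read the first chapter</a></p>
<!-- the digitized jacket art, delivered with a single image tag -->
<p><img src="jacket.gif" alt="Jacket art for The Cyber Dirt Road"></p>
</body>
</html>
```

Because the publisher’s own shop writes and mounts a file like this, changing a blurb or adding a review excerpt is a five-minute edit, not a negotiation with an online service’s bureaucracy.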
The online services do offer an advantage to the content holder which the Web has just begun to emulate and will never be able to match. They bill their customers for time and, by metering their entire system, are able to pass payments on to the content holders in relation to the time spent by users accessing their content. Web metering capabilities are in their infancy, and even when the technology to enable them becomes commonly available, it will still be orders of magnitude more difficult for one content holder to establish a charge for access on his own than it is for a commercial service to conduct that relationship with its customers across a range of content offerings.
But with the number of Web browsers growing by leaps and bounds and all the major online services offering Web access, it is also clear that a larger audience is available now through the Web than through any one of the services. The Web market today is probably larger than all three of the big online services combined. There is no doubt that the content available on the Web already dwarfs what is offered on a proprietary basis by all the online services combined.
It is the ease and flexibility for the content owners that generates such a fast-growing wealth of content on the Web. For example, the prior versions of this speech were there, as this one will be, because it is easy for us to put it there, much easier than it would be to make hard copy available to anybody who might want it.
In September our office figured we had something more than 4 million browsers, although USA Today had suggested in a front page graphic in July that there may then have been as many as 15 million. There were estimated to be 50,000 active web servers, or computers holding web pages for distribution. On those servers were well over 100,000 domains, or individual address networks. There were 250,000, if not two or three times that many, home pages, or individual groups of content. And there were well over 8 million web pages, or individually accessible collections of information.
That’s 2 Web pages for every browser, or 2 publications for every reader!
Activity at this level is obviously not all generated by big companies like Time Warner and MCI. They are joined, literally, by the butcher, the baker, the candlestick maker, and the local real estate agent. It is the fact that the small merchant on the corner has access relatively equal to the biggest multinationals that is the magic of the Web and the ultimate key to its success.
Simple web pages can be very easy and inexpensive to make. Prodigy became the first commercial online service to offer free pages on a mass level. Any subscriber can have one or more, though they have to work within the templates Prodigy has created. America Online announced that it was following suit, although they have been slower than promised at implementation. Many internet access providers had already been providing a similar service.
The Web browsers, or the software that permits navigation of the World Wide Web, are continually being improved. This is an explosive part of the business, as the eye-catching run-up of the share price of the Netscape IPO demonstrated. Browsers have achieved a level where they are now relatively simple and intuitive, although they have a long way to go.
Once a person learns to handle a browser for any purpose, local or global, they will find it easy to locate any web address they encounter, whether on a business card or in a newspaper story. Technology civilians, just plain people, will initially find their way to the web for any variety of reasons. When they arrive, they will find plenty more to do that satisfies their current information and amusement desires, whatever their interests.
Meanwhile, corporations are further expanding the ranks of the Web-comfortable with increasing use of the Web as an internal communications device. One indication of where we’re headed is the projected growth in the Web server market, which Forrester Research in Cambridge, Massachusetts estimates will go from $5 million this year to $664 million by the year 2000.
An increase of about 133 times in the annual output of computers making Web pages available suggests hundreds of millions of additional Web pages annually within the next few years.
For Publishing People, Even More Motivation
There are many informed people, progressive in their approach to technology, who are skeptical of all these numbers. Universal connectivity might not come until well past the year 2000. Maybe people in general will be far slower to adapt to its use than we enthusiasts for online would like to believe.
If this view is correct, does it relieve publishers of the need to get online now? I don’t think so.
However quickly the whole world goes online, the world of words, including authors, reviewers, and other publications, will be online sooner, if you can’t call it online already. And whatever possibilities online offers to change the way doctors, lawyers, or dry goods merchants work, the possibilities are much greater for those of us who develop words and pictures into designed pages and then sell the results.
The situation right now can be summed up simply: If you’re not online, you’re behind. And you’re also being left out of some conversations you want to be a part of.
Working online isn’t difficult, but tricks and techniques do have to be developed and learned. Publishers need to be ahead of the public in their online facility. They need at least to be even with their authors and reviewers. That time is past if you’re not now online.
Book Marketing: An Online “Natural”
Almost all book marketing is about reaching clusters of people identified by their interest in a subject. So we promote to senior citizens or mutual fund investors or lovers of the theater through the magazines, broadcast media, or organizations that are likely to reach them. Since all of the online world is organized by interest group, it is natural to promote books online. No small benefits are that printing is unnecessary and that postage is free. Publishers are beginning to discover that online is a very cost-effective way to reach those pockets of interest that they are challenged to find book after book after book.
Interest group marketing online is mightily labor-intensive and relationship-driven. It takes time to find all the right usenet newsgroups and bulletin boards, post the right messages, and constructively service the responses.
It is less labor intensive to send a general bulletin using ubiquitous listserv technology to a mailing list of email boxes. Doing that obviously requires first identifying the email addresses of the right people, but it also requires the permission of the box owners. Junk email often provokes a very nasty and counterproductive response.
It is an underappreciated fact of online marketing that collecting permissions to put items in people’s email boxes is a coin of the realm that can only be built up over time. There will be whole new skill sets needed and developed to create and maintain emailing lists as the online world grows.
Publishers and Booksellers on the Web
In addition to whatever efforts are being made by online marketers making forays into clusters of the interested, several hundred American publishers, large and small, do have Web sites. And so do perhaps as many bookstores, by the way.
Counting Web sites is another way to measure the US lead in connectivity. In mid-June, when we found 300 American sites, we also found 38 British, 31 Canadian, and a handful of Australian.
Publishers found it relatively easy to get started creating web sites because they own prodigious amounts of digitized content, although the approach used by these early sites is inappropriate to the medium in ways we will discuss. Generating content is the biggest stumbling block for everybody else. Indeed, the glorified catalog approach, which is so unsatisfying when it is done by a non-publisher, can have utility when a publisher organizes his offerings into a searchable database. The web sites are also being used to offer free samples, tables of contents, and extracts, albeit in a somewhat unfocused and haphazard way. And they occasionally provide a way to link marketing efforts with authors or brand name licensees with web sites of their own.
There is a temptation presented to publishers to think they don’t need booksellers anymore with the advent of cyberdistribution. This thinking would constitute dangerous folly if it were not doomed to be contradicted quickly by disappointing sales. For one thing, it is going to be a long time, much longer than the relatively few years it will take for most newspapers and regular mail to become endangered by online information, before books aren’t an important information medium and bookstores an important delivery mechanism for them. Books will matter much more than cyberspace for as far as the imagination can see. So poaching your intermediaries’ sales in cyberspace may encourage retaliation on the ground, which would not be good for business.
But a more progressive reason to be restrained comes from considering the value that bookstores will offer in the online revolution. As the online community grows and techniques for marketing to it improve, people will be more and more careful about whom they permit to enter their mailboxes. For the reader of any book category, the bookseller will be correctly perceived as a more objective and full-service information provider than the publisher.
Technology doesn’t immediately change our missions, it changes our techniques. The principal function of the bookseller will remain the same: to organize the offerings of many publishers into subject groupings that make sense to the reading public. To the knowledge of books, the bookseller adds the knowledge of the individual customer to make meaningful suggestions. Customers want those suggestions and will increasingly want them online.
What Should A Publisher Be Doing Now?
The overall approach by publishers to online, including the Web, has been flawed in the same way the hard copy quest has been flawed, with similar disappointing results. Just how wrongheaded the initial approach was became obvious six months ago. A few sparks flew when some major publishers threatened to go after direct-to-consumer sales from their Web sites. The flash of anger from bookstores and the ABA quelled that concept before disappointing sales got the chance. Gradually, publishers and booksellers are both finding it hard to grab direct sales off the Internet, and it is clear that they will have to compete with specialists, like the very impressive Amazon.com web site, for whatever business could be there.
Let’s remember that Web sites are very easy and inexpensive to create. HTML programming language can be mastered by a computer-savvy designer in days. Putting together even complex Web pages from digitized assets will take hours, sometimes minutes.
It costs more, usually much more, to create the jacket art and catalog layout for a book than it would cost to create a dedicated Web page. When you absorb that fact, it seems counterintuitive to muster a centralized corporate effort, with all the additional costs and complications that come with an oversight bureaucracy, to create and administer a Web site.
As the skills, understandings, and resources for using online become more broadly distributed to the content-and-market specialists, home pages will spring up for individual books, or groups of books. Many, if not most, Hollywood movies already have one.
Those individual book Web pages will become useful for almost every aspect of a book’s life: research, exposing drafts or concepts to review, building pre-pub word-of-mouth including getting endorsers, getting galleys into the hands of influentials, and, of course, marketing the book when it is available. Currently-available technology permits locks-and-keys to control access to anything and everything placed on the site.
Now we’re talking about a useful Web site. Which takes us back to where the publishing sites, and perhaps many of the bookseller sites as well, constructed so far have failed to grasp their content-and-market equation.
Re-Thinking the Publisher’s Web Site
Although it is often hard for publishers to accept, there are only a few constituencies that think particularly about them, and they seldom include any part of the reading public.
For most publishers the group would be limited to: shareholders and potential shareholders; employees; bookstore, wholesaler, and library customers; agents and authors; frequent purchasers of their rights, particularly among foreign publishers; and printers and other production suppliers. These are the people that think of Random House as somehow different from Simon & Schuster or Putnam. They, to greater or lesser degrees, are the only ones who would seek information that was publisher-specific.
So a well-designed publisher home page would permit these stakeholders to sort themselves on entry to find information aimed specifically at them. Information could be managed so that only agents with a password could gain access to a list of editors and what they had recently bought, or so only booksellers could get the latest changes to the discount schedule.
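A minimal sketch of such a self-sorting home page follows. The house name and file names are hypothetical, and the password check itself would be handled by the server rather than by the page:

```html
<!-- home.html: a hypothetical stakeholder-sorted publisher home page -->
<html>
<head>
<title>Example House: Please Sort Yourself</title>
</head>
<body>
<h1>Welcome to Example House</h1>
<p>Choose the door that describes you:</p>
<ul>
<li><a href="booksellers.html">Booksellers and wholesalers</a>
(current discount schedule, co-op programs)</li>
<li><a href="agents.html">Agents and authors</a>
(editor list and recent acquisitions; password required)</li>
<li><a href="rights.html">Rights buyers</a> (rights guide by territory)</li>
<li><a href="investors.html">Shareholders and press</a></li>
</ul>
</body>
</html>
```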
If a publisher wants a Web site to appeal to the public, he needs to decide, as with every book, what public? And the specific design, shape, titling, and promotion should, as with each book, be left to the content-and-market specialists who know that public.
When you consider how frequently you hear about how hard you have to work to make people return to your Web site, alongside how cheap sites can be to create, you wonder whether striving for repeated hits is the only logical approach. There are all sorts of opportunities to promote a new Web site just because it is new. For those of us in the word-of-mouth business trying to promote information products that normally permit a frustratingly narrow time frame to launch, a dedicated, deliberately-short-term Web site can be a better and cheaper tool than many we already use. And it can also extend all our other efforts: catalog copy, brochures, sales pitches, and special offers can all be integrated into Web marketing, if the sites are flexible enough.
So this is how the publisher must approach the Web: the home page should be dedicated to stakeholders; the extensive activity is an array of enterprises constructed and used in the course of business by a cyber-savvy publishing company.
The Flaw in the Web
While the Web will always be what got the world to connect, it is not the last word in cyberspace. It will have to be improved upon eventually.
The problem with the web is, in a word: speed. And in a second word: reliability. Or the lack of it. Under the best of circumstances, which today means a 28.8 thousand-bits-per-second modem, some Web pages can take a frustratingly long time to come to the browser’s computer screen. Exacerbating the frustration is that the Web browser has no way to know whether a balky page will come up in ten seconds or ten minutes or never. The source of the difficulty could be an inadequate server: the page could sit on a computer too weak to process the traffic it is getting. Or it could have a “thin straw”: not enough telephone capability to handle the traffic. Or, as Europeans who try to reach a web page located on an American server can tell you, the Web browser can be stalled by too much trans-Atlantic traffic.
Having higher-speed capabilities in a fast modem or ISDN phone lines at the browser’s end doesn’t appreciably speed up a lot of what one might do on the Internet. Each Web server you dial up requires you to travel a course from computer to computer, and each of them presents a potential bottleneck of modem speed or processing power or bandwidth between you and your objective.
As long as the information superhighway is running on phone lines, which in large part it will be for a very long time, it might more accurately be called a cyber dirt road. The passageway, referred to as bandwidth, simply does not allow for speedy transmission of high volumes of computer data. And your fancy computer or phone line builds you a good driveway; it doesn’t compensate for the poor public access roads.
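Some rough arithmetic shows why the dirt road metaphor is apt. Assume a 28.8 thousand-bits-per-second modem and, generously, no protocol overhead; the file sizes are illustrative, but the orders of magnitude are not:

$$\text{100-kilobyte page: } \frac{100 \times 1024 \times 8 \text{ bits}}{28{,}800 \text{ bits/sec}} \approx 28 \text{ seconds}$$

$$\text{650-megabyte CDROM: } \frac{650 \times 10^6 \times 8 \text{ bits}}{28{,}800 \text{ bits/sec}} \approx 180{,}000 \text{ seconds, about 50 hours}$$

A page with a modest amount of graphics tries the reader’s patience; a high production-value CDROM’s worth of data is simply out of the question over today’s wires.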
Web browsers already permit users to avoid loading graphics by clicking a button. Turning off the graphics speeds up the process, however it might annoy the web site creator who is proud of the pictures and images the site can deliver. If the problem of speed is exacerbated by the growth of traffic, straight text might become the motorcycle of the dirt road, with graphics and video more like a big road-hogging truck.
This matters particularly if hard copy product creation needs to connect in any way to the capabilities of the Internet.
Returning to the Hard-Copy Product Conundrum
Now, let’s go back to the New Media railroad tracks we left the publishers tied to a few minutes ago. If the recent generation of high production-value CDROMs are not what book publishers should be preparing for the consumer market, what should they do?
Remembering two fundamentals we have already discussed is a good way to start: the first rule is to create products that provide practical utility; the second is to keep the content-and-market specialists involved. There are two additional fundamentals it will be helpful to absorb into our thinking to frame a useful strategy for book publishers trying to solve the New Media product puzzle.
The first is what we call the Hierarchy of Utility for computer delivery systems. In other words, how do we rank the various delivery systems for convenience and ease of use?
In today’s world, for most computer users, the easiest place from which to access information is the hard drive, second is from a diskette, third is from an online source, and fourth is from a CDROM. Obviously, this is not true for somebody who has a computer with a CDROM drive and no modem or, even a CDROM drive instead of a built-in diskette drive. Such setups do exist. And the generalization may be changed some day by Windows95. Some day.
But right now the Hierarchy of Utility suggests to any seller of information that the smallest market potential is with CDROM delivery, a larger market is available through either diskette or online delivery, and the biggest potential market of all is for data sufficiently compact and useful to earn a place, even temporarily, in the user’s hard drive.
Good publishing content-and-market specialists will learn to identify the market subsets for which this hierarchy does not hold true.
The second fundamental may change how publishers seem to perceive the potential number of new electronic products in relation to their book output. Largely because the first product concept many publishers pursued was so expensive, only a small percentage of any publisher’s books were deemed candidates for electronic delivery.
But let’s apply some normal book publishing thinking to electronic product. Perhaps the two most critical economic determinants for a book are its plant cost, or the cost of bringing it to the press, and the minimum run, the number a publisher would have to print at one time to bring in a unit cost that makes the project viable.
Although piggy-backing on an existing product, particularly with modern technology, can bring the plant cost way down, almost any book will have a minimum plant cost in the thousands of dollars. And for the consumer marketplace, press runs must number in the thousands, if not the tens of thousands, to produce a satisfactory unit price.
Electronic products which add significant value beyond the book on which they are based are often producible for plant costs of a fraction of what the book cost, although, admittedly not if one wants a Spielberg-type production. And the minimum run of a diskette product, packaging costs aside, is one unit.
What this argues is that, in what John Locke might have called a “state of nature”, there are many more electronic book products than printed book products. In other words, we can look for this business to mature to one where there are X electronic products for every book, not one for every X books.
Let’s work through what this could mean in real life.
A publisher that organized book production right and did a good job of product design could produce simple diskette products which present the text and pictures from virtually any book, with a variety of enhancement features as well, for an incremental origination cost of $1,000 per title or less. Standard enhancements could include a dictionary, hyperlinks, author notes and citations not included in the dead-trees version, and any variety of reviews or commentaries on the book or its subject that might provide extra information or insight.
If you figure that a book’s origination costs anywhere from $10,000 on up, if you include both the advance against royalties and the plant cost, you’re talking about an incremental investment of no more than 10% to create this new product. Often the incremental investment would be 5% or less.
Such a product, even if it were issued at half the retail price of the trade book, would produce incremental profits at sales of a few hundred units per title. A large publisher could launch a program of literally hundreds of such titles for the investment now required for one high production-value CDROM.
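To put hypothetical numbers on it (the retail price and the publisher’s share of retail are illustrative assumptions; the $1,000 incremental origination figure is from above): if the book retails at $25, the diskette at half that is $12.50, and the publisher nets roughly half of retail,

$$\text{net per unit} \approx 0.5 \times \$12.50 = \$6.25, \qquad \frac{\$1{,}000}{\$6.25} = 160 \text{ units to recoup}$$

Even allowing for royalties and packaging, “a few hundred units” comfortably clears the incremental investment.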
The products we are contemplating here are technologically simpler, but they are editorially more challenging. The authors and editors will need to learn to use the flexibility that these products can give them to add supporting material. The potential additional “work” raises bugaboos of morale and cost, but it doesn’t have to be a problem. There are many authors and editors who will revel in the opportunity to add source notes, cross-references, and the text to support citations to create a new, in some ways more interesting, product that supports and is supported by the book itself.
Where do we find the market for these editorially-enhanced, technologically-simple electronic books? Of course, the numbers they will sell, at least at first, will be a fraction of what the source book will sell, even though the diskette contains enhancements the book doesn’t have. After all, with today’s technology, very few people want to read a book on a screen. Or even print it out to read it, which when you add up the hassle of creating the printed copy, the inconvenience of reading the cumbersome version you end up with, and the actual expense to create it, is probably not particularly cost-effective versus buying the bound book anyway, even if the diskette were free.
There are several answers to where the market for the simpler product might be, and, like everything in our content-and-markets business, they depend on the title.
We’d expect only a small audience to want the diskette in order to “read” the book on the screen. I wouldn’t, but then I wouldn’t listen to a book on tape either, and that doesn’t stop books on tape from selling 10% or so of the numbers the same titles sell in print. And books on tape, unlike books on diskette, require a much more complex production process to come into existence. At least for now.
Remember the key to new technology adoption: it has to make it easier to do what you were doing anyway. Where I personally might find myself in the market for a book on a diskette is if I needed to analyze the text, or if I wanted to archive the material in the book in some way. That could cover much of what I own for professional purposes, including the many books I buy in my life as a baseball historian. It would also include almost every book a student buys to write a paper. College freshmen and high-level professionals are similar in this way.
The point is that almost every book has some market for an enhanced diskette version that could be created at a small incremental cost. And, like the book-on-tape, and much more than an elaborate CDROM loosely based on a book of the same title, the diskette product would, by its distribution and sale, contribute to the word-of-mouth momentum for the book.
Tying This to Cyberspace Reveals the Rule for Publishers to Live By: “Use Your Words”
These views of where the product should go and where cyberspace is going dovetail nicely. Remembering that the Information Superhighway is really a dirt road helps us see another advantage of simpler content-oriented product: you can move it efficiently through the wires. Unlike a high production-value CDROM, which would take hours to download and then overflow most hard drives when it got to a PC, simpler products can move in reasonable amounts of time, fit easily in the space most people have available on their hard drives, and be easily stored off the hard drive on a diskette.
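How much of a dirt road is it? Here is a minimal sketch of the transfer arithmetic, assuming the 14.4 kbps modem most consumers dial in with today; the file sizes are likewise illustrative.

```python
# Rough transfer times over a consumer modem line. The 14.4 kbps speed
# and the file sizes are assumptions for illustration.

def download_hours(megabytes, kbps=14.4):
    """Transfer time in hours: megabytes to bits, divided by the line speed."""
    bits = megabytes * 1_000_000 * 8
    return bits / (kbps * 1_000) / 3600

for label, size_mb in [("enhanced diskette book (fills a 1.44 MB floppy)", 1.44),
                       ("high production-value CDROM (650 MB)", 650)]:
    print(f"{label}: about {download_hours(size_mb):.1f} hours")
```

The diskette-sized title arrives in about a quarter of an hour; the CDROM would tie up the phone line for roughly a hundred hours, which is another way of saying it doesn’t arrive at all.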
There is no doubt that virus protection will need to improve before many consumers will feel secure with downloads and before publishers can be entirely comfortable that they are risking no liabilities sending these files. But raising the possibility of selling downloads brings the New Media product model into conformity with what publishers are used to in another respect: subsidiary rights. Simpler product might begin to expand the small but intuitively promising market for product sold as online downloads. How much potential revenue is there? We don’t know yet. But with the online customer base growing at the rate that it is, this delivery mechanism reaches a market that is increasingly perilous to ignore.
Another subsidiary right that could be common in the electronic world is an adaptation. If I were to publish this speech on a diskette, for example, Mark Bide might have annotations and commentary to add that would make a version even more saleable to important markets. Frankly, I would have enjoyed authoring an annotated version of either Newt Gingrich’s or Colin Powell’s book. I’d have lined up for an autographed diskette if they’d sold one, though I own neither book.
Making such a product is technically simple, and the minimum run is small. So Mark could license rights to annotate my diskette. And I could license Newt’s or Colin’s. This could lead to the original author licensing back for yet another follow-up. This is one way the numbers of products might proliferate.
I would sum up what all of this means for book publishers by recalling the admonition parents often give little children just learning to speak: use your words.
Words, and some still images, are what publishers own and are the world’s experts at buying and developing. Going beyond them to create a product pushes a publisher beyond his expertise. Creating moving images and sound is a skill far better developed in media other than ours.
Relationships, Type and Number, Make Publishers Different
Of course, the fact that publishers are unfamiliar with something, or have previously not required a particular set of skills, can’t be the sole determinant of what they do or don’t do with New Media. Going online and making simple diskette-delivered new media products will both require learning new skills, new technologies, new ways of relating to the author, and new opportunities to deliver value in the product. But these skills flow logically from what the publisher already knows and already does. Developing them further will also enhance the products, the books, that publishers are already creating, and the relationships, with creators and customers, that they are already nurturing.
The nature of their relationships is something that separates all book publishers from everybody else in any medium: movies, TV, software, music, newspapers. Our observation is that, per dollar of revenue, publishers must maintain far more differentiated and sensitive relationships, with authors, publicity outlets, accounts, libraries, direct customers, and all the communities into which they publish, than anybody else does.
The relationships publishers maintain are another aspect of the content-and-markets paradigm that governs our business.
What This All Says about Strategy
When we began, I said three things had become clear in the past twelve months:
- that expensive electronic products weren’t our business;
- that it was urgent that publishers integrate online into their lives;
- and that a New Media Division or department was not the way to get there.
The mistakes that have been made, creating products that were wrong for the market and conducting online activities that couldn’t lead anywhere, were nearly universal. The hype for CDROM from the technology community, backed by the evident good faith of its own investments in the recent generation of products, was almost irresistibly convincing. And the onset of the Web has been so sudden and so compelling that it demanded a virtually immediate response, whether or not there was time to think the strategy through.
We’ve discussed many ways that centralized New Media thinking led to mistakes, by disengaging from the content-and-market understandings or by seeking a big centralized solution when the technology enables, and demands, ever more flexibility. Going forward it should be increasingly obvious that publishing success depends on using new technologies to extend existing capabilities and relationships.
Authors will recognize electronic product possibilities; their editors will have to be responsive to them.
Agents will learn how quick, easy, and cheap it is to float an idea or submit a proposal online, and editors without email won’t be among the first to know.
Book reviewers and talk shows and particularly fast-moving news media will demand online communication; publicists and marketers who don’t have it will be looked down upon and won’t even know what opportunities they’re missing.
Copy-editing and cover art drafting and ad submissions will all use online capabilities to save time and money and to add control.
Sales departments will be working with more and more cybersmart stores that have their own Web sites and that will want sample chapters ready for email. And they’ll come back from their customers with more and more suggestions for customized electronic products that can be produced profitably in small editions.
Subsidiary rights and publicity departments will be dealing increasingly with magazines whose cyberpresence the publishers need to monitor and interact with.
And with a little time, you can grow my list to include functions I haven’t even mentioned but which are highly sensitive to online technology and New Media, like customer service.
Quite aside from the fact that a New Media Division doesn’t know the content-and-market equations for the products it makes, it is hard to see how it can respond to all of that.
Being housed within a separate group divides the New Media product generation and marketing from every check, balance, and sobriety test built into a healthy publishing organization.
In a time when all electronic product development processes, as well as the products themselves, must still be considered experimental, a central group will tend to attempt a smaller number of large experiments rather than a larger number of smaller initiatives. It might never try the thing that works.
But, most of all, a separate group will neither nurture nor harvest the core intellectual assets of a publishing organization: what the people in the company know about knitting a sweater and the modern view of economics, and what the content-and-market equations are in each of the clusters and subsets of these markets.
The people who know these things know because of whom they know. Which gets us back to the notion of all those relationships publishers maintain. Publishing companies, and booksellers, for that matter, may be defined in the future by their relationships: who generates the content coming in and who gives it credibility when it comes out. If that’s where we’re going, the destination will be comfortable and familiar to those among us who don’t forget what we know now along the way.