
Supply Chain Data and Canada’s Opportunity

June 11, 2004 by Mike Shatzkin

I don’t have to tell a roomful of people from the Canadian
publishing industry that Canada is the most difficult market for
publishers in the English-speaking world. You know it, although it may add
to my credibility with you to assure you that I know it too.

From an outside perspective, what makes your market so difficult are
these factors:

1. You get deluged with editions from all over the English-speaking
world, particularly the US and the UK, and can readily have more than
one edition of a title available in your market.

2. Your trade can easily source directly from the US, no matter what
Bill C32 says about parallel importation.

3. Although publishers in the US and UK complain about consolidation
of the retail sector, they have a virtual “free market” compared to
Canada, where one chain controls a share of the market that would be
spread across two or more chains in any other major English-speaking
territory.

4. Your core market is so small that it obliges a Canadian publisher
or distributor to have an exceedingly long list of titles to survive,
compounding the universal problems of managing copious amounts of detail
that bedevil publishers everywhere.

5. Your market is so dispersed and “dumbbell-shaped” — weighted
heavily at the ends with not much in the middle — that shipping and
fulfillment costs and lead times are inevitably high on a per-unit
basis.

Accurate title metadata and comprehensive inventory and sales-tracking data
can ameliorate some of these issues. By our observation, the benefits of
both grow in proportion to the size of the list a publisher has to manage.

Bottom Line First: Cutting to the Chase

It is because your market is both so small and so challenging that it
has required a government initiative to get BookNet Canada started, rather
than relying on the profit motive to create the investment by private
companies, as happened in the US and the UK. The fact that this initiative
begins by thinking about what is best for the market constitutes an
opportunity that did not exist in the larger markets. Keeping the
following considerations in mind as you move forward will help you to a
more efficient and productive book trade.

  • Carefully follow and implement the metadata schemes, specifically
    ONIX, which have been developed elsewhere.
    BookNet Canada has
    apparently taken the initiative to stay in touch with BISG in the US and
    BIC in the UK on these developments; it is critical that there be a
    rigorous industry-wide effort to know and implement those standards. It
    is worth noting that the major accounts in the States, particularly
    Barnes & Noble, are important drivers to encourage compliance with
    the evolving standards. In your market, Indigo would have the coercive
    power necessary to create widespread compliance. In a world where
    computers increasingly talk to computers in order to get books from a
    publisher’s warehouse to a consumer’s hands, it is critical that
    standards that describe the book be universal.
  • Remember that POS data, particularly aggregated nationwide POS
    data, has real value with limitations.
    It definitely helps identify
    large market trends. It definitely helps an editor gauge the true sales
    success of an author’s last book (although only when it has been in
    place for a while and has compiled a historical record). But it is not
    really enough on its own to enable publishers to maximize the two most
    important benefits available from data: 1) getting the reprints right
    and 2) matching the inventory in each part of the supply chain to the
    real sales expectations for a book.
  • Do not confuse the understandable reluctance of booksellers to
    broadcast their overall inventory positions with their general
    willingness to share inventory information on a particular publisher’s
    books with that publisher.
    Editors may need to know about the performance of other
    publishers’ books to make a signing decision, but sales
    management only needs to know about the performance of its own books to
    help accounts manage inventory. It is actually a little bit nuts that
    the primary industry-wide vehicle for collecting POS data in the US and
    UK does not routinely employ those same “feed” relationships to enable
    publishers to see unsold inventory and performance as measured by stock
    turn.
  • Remember that daily or weekly or monthly feeds constitute a
    “snapshot”, not a balanced picture of reality.
    In order to gain real
    value from sales and inventory information, it has to be viewed over
    time. As a practical matter in the US and the UK, that now means that
    publishers must capture and archive data in order to organize it for
    real analysis. Because publishers often want to integrate some of their
    own information into the analysis, creating publisher-specific data
    warehouses is inevitable, and it is perhaps just as well that each
    publisher does it for itself.
  • When analyzing inventory in chains, both granular store-level and
    aggregated chain-level data is of value.
    Obviously, books don’t sell
    uniformly in different stores, even different stores in the same chain.
    But chains generally try to differentiate their offerings, not just to
    accommodate geographical differences, but also store-size differences.
    That means that in every chain there are books stocked and selling in
    some places and not available in other places where they could sell. It
    is the aggregated picture that points to those opportunities.

I have personally been involved with the analysis of bookstore
inventory and sales data for more than three decades; my father literally
invented this field of study in the book business more than 50 years ago.
We have been applying that knowledge for the past 2-1/2 years to help
publishers interpret their Barnes & Noble data. So far, we’ve worked
with more than a dozen publishers. Some of the bigger ones have taken what
we do and incorporated it into their own data management efforts, but we
do the data warehousing and report delivery for a growing group of
publishers.

Of course, our experience only matters to you to the extent that you
can apply it in your own market. Sifting through chain data to get
inventories in synch with actual sales appeal is certainly an idea that
travels; our techniques should work well for Canadian publishers with
Indigo, as well as for smaller chains like McNally-Robinson and Book City,
if you get the right data and apply it with a sensitive understanding of
each company’s own supply chain realities. And if you can “see” inventory
nationwide, you will also have the tool you need to make the right reprint
decisions quickly.

Data in the US and UK Markets

Of course, Bookscan is the primary data source in both the US and UK.
In the US, Bookscan is the “distributor” for data offerings from Barnes
& Noble and Borders. Both chains offer only aggregate, chain-level
information to publishers about their books.

As we observed earlier, Bookscan itself only tallies sales, not
inventory. In the US, publishers gather inventory information from the
feeds of the two major chains and from the data offerings of the country’s
two biggest wholesalers, Ingram and Baker & Taylor. There are six
principal general trade accounts in the UK market — four retailers and
two wholesalers. One of our clients is now attempting to get feeds from
them. There is apparently no standard distribution of information from
accounts to publishers, such as there is in the US.

The biggest publishers in the US have created “supply chain
departments” whose primary purpose is to guide reprint decisions. To
accomplish this task, the concentration of business in a few accounts is
actually an advantage. Publishers long ago learned to “check stock” at
their customers’ major warehouses before any big reprint; automation
promises to enable reprint decisions informed by knowledge of unsold stock
“out there” for all books before long. We intend to offer a “Supply Chain
Department” capability for all publishers by the second quarter of 2005.

Where we have placed most of our efforts over the past 2-1/2 years is
in analysis of the sales and inventory data at Barnes & Noble. They
are not only the biggest bookstore chain in the world, they also have a
comprehensive and integrated supply chain and a data feed that really
enables a publisher to pinpoint problems and opportunities.

Getting started: warehousing the feeds

As we have observed, publishers must create their own data warehouse,
collecting and archiving the feeds from several places both across time
and across sources. No external feed, whether from a single account or
from an aggregated supplier, comes “complete”. All of them require
enhancement from information available only in the publisher’s own
computer system. And all of the feeds are weekly snapshots; it is
imperative to view data over a longer period than a week to analyze
inventory efficiency. This is more true for books than it is for other
items retailers stock. Most books do not sell in most stores in most
weeks. Or even most months! Unless one views performance over a longer
period of time, books that are actually performing quite acceptably can
appear not to be selling at all.
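
To make that capture-and-archive discipline concrete, here is a minimal
sketch of the weekly archiving step in Python. It is an illustration only,
not our actual system (which, as we note later, is built on Access and
Excel), and the file layout and column names are assumptions.

import sqlite3
from datetime import date

import pandas as pd

def archive_weekly_feed(csv_path, week_ending, db_path="feed_archive.db"):
    # Read one week's account feed; keep ISBNs as strings so leading
    # zeros are not lost.
    feed = pd.read_csv(csv_path, dtype={"isbn": str})
    # Tag every row with the snapshot date so the archive can be viewed
    # over time rather than as isolated snapshots.
    feed["week_ending"] = week_ending.isoformat()
    with sqlite3.connect(db_path) as conn:
        feed.to_sql("account_feed", conn, if_exists="append", index=False)

# Example: archive_weekly_feed("bn_feed.csv", date(2004, 6, 5))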

The key metrics

Of the many metrics and groups of metrics we analyze at Barnes &
Noble, there are five headings I want to detail here today. Some are
“universal” and you will see immediately that they would apply in much the
same way to any account, including Indigo or the smaller chains you sell
to; some must be interpreted within the B&N context and would require
a different interpretation in another chain; and some apply only within
B&N and would not be used in other accounts.

For “short-term” inventory performance, we look at percentage of
on-hand sold in the superstores in a week. We call that our “Flash
metric”. Using that percentage is a very useful tool to find books that
are “flying below the radar” — performing well in relation to the
inventory B&N has, even if that quantity, and therefore the absolute
sales number, is low.
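
To make that concrete, here is a rough illustration of the Flash
calculation in Python. The column names are assumptions rather than the
actual feed headers, and the 10% threshold is the rule of thumb we describe
later in the report definitions.

import pandas as pd

def flash_metric(df, threshold=0.10):
    out = df.copy()
    # Percentage of superstore on-hand sold this week; mask zero on-hand
    # rows so we never divide by zero.
    out["pct_oh_sold"] = out["ss_sales_wk"] / out["ss_oh_wk"].where(out["ss_oh_wk"] > 0)
    # Titles selling more than the threshold of their on-hand are the ones
    # "flying below the radar", even when absolute sales are small.
    out["fast_mover"] = out["pct_oh_sold"] > threshold
    # Sort by performance rather than by raw sales.
    return out.sort_values("pct_oh_sold", ascending=False)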

For any period longer than a month, we gauge performance by using stock
turn, which we present as an annualized number dividing sales by average
inventory. We look at stock turn for three different periods: a rolling
4-weeks, 13-weeks, and 52-weeks. Looking at the stock turn performance
this way allows us to see both trends and seasonality.
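
In formula terms, the annualized turn over an n-week window is the sales for
the window divided by the average on-hand for the window, scaled by 52/n. A
hedged sketch, assuming one row per title per week and illustrative column
names:

import pandas as pd

def rolling_stock_turn(weekly, window_weeks):
    # Expected columns: isbn, week_ending, sales, on_hand (names assumed).
    weekly = weekly.sort_values(["isbn", "week_ending"])
    grouped = weekly.groupby("isbn")
    period_sales = grouped["sales"].transform(
        lambda s: s.rolling(window_weeks).sum())
    avg_on_hand = grouped["on_hand"].transform(
        lambda s: s.rolling(window_weeks).mean())
    # Annualize so 4-, 13-, and 52-week turns are directly comparable.
    return (period_sales / avg_on_hand) * (52 / window_weeks)

# weekly["turn_4wk"] = rolling_stock_turn(weekly, 4)
# weekly["turn_13wk"] = rolling_stock_turn(weekly, 13)
# weekly["turn_52wk"] = rolling_stock_turn(weekly, 52)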

We also do very specific aggregations of data, looking at groups of
books together. The most obviously helpful such grouping is by “store
category”, which at Barnes & Noble usually equates to a specific
buyer. Looking at books by imprint, by series, and — for those publishers
who distribute others — by originating publisher, also yields useful,
actionable, analysis.

All of these metrics are universal. You can use them for any store or
chain. But aggregating up to the buyer level is an account-specific
exercise. Some accounts assign buyers to particular publishers while
others buy by subject. And we have found that the subject categorization
provided by accounts is not always as useful as it is at B&N. B&N
devotes an uncommon effort to getting the metadata right; other accounts
are less meticulous, and one large one we know puts a “general” label on
far too many books that require a subject category. This makes the
data harder to analyze, but it is also a problem that costs sales to
ignore.

As we said, some metrics are very specific to B&N’s own supply
chain. These include models, indexing to dot com and mall sales, deciding
what books need inventory increases or decreases, and the stock on-hand
and on-order at the DCs (distribution centers).

Models are a B&N device by which buyers set a stock level for a
title in an outlet that is maintained by automated computer reordering.
Adjusting models up or down is one of the most important components of
having the right stock levels on backlist books.

When viewing B&N’s chain-level data, remembering that there are
approximately 650 superstores is critical. Knowing that identifies books
that can’t possibly be fully distributed or modeled in the chain because
the on-hand position or total chainwide model is lower than 650. This
magic number is different at Borders and would be different at Indigo or
any other chain.
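
A trivial illustration of that check; the store count and column names are
assumptions for the example.

SUPERSTORE_COUNT = 650  # approximate; the "magic number" differs by chain

def cannot_be_fully_distributed(row):
    # If either the chainwide on-hand or the total model is below the
    # number of stores, the title cannot be in stock everywhere.
    return row["ss_oh"] < SUPERSTORE_COUNT or row["ss_model"] < SUPERSTORE_COUNT

# under_distributed = df[df.apply(cannot_be_fully_distributed, axis=1)]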

The relative performance of books through B&N’s dot com and mall
store channels, as compared to their superstores, can also point to
opportunities to adjust inventory profitably.

And although the ordering for B&N’s distribution centers is highly
automated and very effective, it is not perfect and systematically
watching the on-hand and on-order levels can also locate opportunities to
improve sales. More often than not, problems with DC inventories trace
back either to metadata errors or to a publisher’s own stock shortage. In
either case, knowing about it can either correct a problem or avoid
misinterpreting an apparently poor sales performance that has nothing to
do with a book’s appeal.

The “ACE” Reports concept

Now I’d like to show you what these concepts mean in practice,
analyzing things from a publisher’s perspective at Barnes & Noble. We
call the reports we create “ACE Reports” because the acronym explains what
makes them distinctive.

A is for “archiving the data”, putting
those weekly feeds into a database so that one can view the impact of
sales and inventory over time.

C is for “calculating meaningful metrics”,
such as those we have discussed earlier: percentage sellthru, stock
turn, and others.

E is for “editing the reports”, so that
what is seen is what is needed for analysis, either by selecting out
particular groups of titles, or sorting by various metrics, or moving
unhelpful data off the screen.

Demonstrating how they work

We have three standard report formats to analyze Barnes & Noble
inventory for publishers.

The Flash report looks primarily at sellthru in the superstores for the past two weeks. Here is an example of a Flash report from a major publisher from last November 26 – December 6. Explanations of the headers are as follows:

  • CATEGORY: Category as shown in BookScan.
  • DEPT CODE: 1-Trade paperback; 2-Trade hardcover; 3-Music; 4-Newsstand; 5-Gift; 6-Juvenile; 7-Bargain Books; 8-Mass market; 9-Non-Book; 10-DVDs; 11-Used books; 12-Retail Food; 13-Video; 14-Calendars; 15-Maps/Misc.
  • ISBN: The ISBN for the title.
  • TITLE: Title of book as shown in BookScan.
  • AUTHOR: Author of book as shown in BookScan.
  • SHIP DATE: The date the book first shipped, provided by the publisher.
  • SS SALES: Superstore sales for one week.
  • YTD SS SALES: Total Year-to-Date sales in Superstores.
  • SS MODEL: Superstore model for this title. Numbers lower than 10 often are new store orders rather than a model.
  • SS OH: Total Superstore inventory on hand for one week.
  • SS OO: Total Superstore units on order. We don’t know if the “on order” is aimed at the publisher or at B&N’s own DC.
  • PY SS SALES: Total Superstore sales for the prior calendar year.
  • DOT COM YTD: Year-to-Date Dot Com Sales.
  • DC OH: Total Distribution Center inventory on hand. The DC wants at least 2 copies of every book on hand.
  • DC OO: Total number of units on order for the Distribution Center.
  • % SS OH SOLD THIS WEEK: The percentage of Superstore on hand inventory sold in one week. Any unpromoted and undiscounted title selling more than 10% is considered a fast moving title.
  • % SS OH SOLD PRV WEEK: The percentage of Superstore on hand inventory sold in the previous week. Any unpromoted and undiscounted title selling more than 10% is considered a fast moving title.
  • % SS OH/SS MODEL, END OF WK: Percentage of Superstore model that is in stock at the end of the week. The model indicates buyers’ intent of a stock level. If this number falls below 85% consistently and there has been no stock shortage at the publisher’s warehouse or the DC, it strongly suggests there should be a model increase.
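
That last column definition lends itself to a simple automated screen. Here
is a hedged sketch of it; the column names and the four-week persistence
window are assumptions, and the caveat about publisher or DC stock shortages
still has to be checked separately.

import pandas as pd

def model_increase_candidates(weekly, fill_floor=0.85, run_weeks=4):
    # Expect one row per title per week with columns: isbn, week_ending,
    # ss_oh (on hand) and ss_model (names assumed for illustration).
    weekly = weekly.sort_values(["isbn", "week_ending"]).copy()
    weekly["below_floor"] = (
        weekly["ss_oh"] / weekly["ss_model"] < fill_floor).astype(int)
    # A title qualifies only if every one of the last run_weeks weeks was
    # below the floor, i.e. the shortfall is persistent, not a blip.
    persistent = weekly.groupby("isbn")["below_floor"].transform(
        lambda s: s.rolling(run_weeks).min())
    return weekly.loc[persistent == 1, "isbn"].unique()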

Of course all identifying data in this particular report has been anonymized. To do so, we sorted the report by that week’s sales (the column headed “SS Sales, Wk of 12/06/03”) and labeled the top-selling title Title 1, the second-highest Title 2, and so on. It is interesting to note that an alternate sort – sorting instead by inventory (Column “SS OH, Wk of 12/06/03”) – yields a nearly-identical result, showing that high sales most often indicate a lot of inventory.

A more useful sort would be by what we term the Flash metric – the percent of superstore inventory sold in the last week. This way, the books are sorted according to performance – not simply sales, which we just saw was directly related to inventory – and it is easier to spot titles with smaller on-hands that are still doing well. We call these titles ‘flying below the radar.’ You’ll see that, with this sort, the top ‘action title’ here ranks only 89th in sales among the TOP PERFORMERS, and it would rank far lower still if we were looking at all of their titles!

The Category Summary “rolls up” groups of titles for examination, usually by store category. Here’s an example of such a report, for a different publisher in a different period, ending last September 22nd. The column headings for this report use abbreviations similar to those in the Flash report. In this case, we have sorted the report in descending order of on-hand quantity, to see them in order of importance. As we’ve discussed, this organization of information is also useful to look at books by imprint, by distributed line, or by series.

The Stock Turn report calculates the key metric of “stock turn” for every single title for three periods – 4-week, 13-week, and 52-week – that we generally employ. Here’s an example of such a report for the same publisher we just looked at, and for the same period. (Note that the numbering here is by on-hand quantity.)

With the Stock Turn report, first we sort by 52-week stock turn in descending order. This way, we can see books which are performing well, and see if increased inventory would likely result in increased sales. We can also locate problem titles by sorting in the reverse order – 52-week stock turn in ascending order. It is then easy to note the books that have high inventories, but which are not turning over at an acceptable rate. Lastly, we sort by Model in descending order. This way, we can see titles which might be potential candidates for model increases or reductions.

Using our Stock Turn Reports, we believe a sales rep can do a very
thorough job of reviewing all the titles in any section in less than an
hour and be confident that no opportunities to change inventory to improve
profits will be missed.

Making use of the analysis

One thing our analysis accomplishes — we believe for the first time in
the history of publishing — is that we standardize a publisher’s view of
inventory data by the metric that most affects their customers’ profitability,
namely the velocity of sellthru, or stock turn. Publishers’ customers are
acutely aware of this metric. Buyers are often bonused on the basis of it.
Accounts doing annual reviews of publisher performance often calculate and
refer to it. But until we developed ACE Reports, no publisher that we are
aware of ever systematically calculated it. We believe, at the very least,
that it is helpful to a sales-and-service relationship to look at the
profitability of inventory the way your customer does and we’re proud of
starting that process for the industry.

Precisely how these reports are used varies with the publisher and the
B&N buyer. We envisioned the Flash Report to be the tool to catch
things in a “right now” fashion — books that are spiking or suddenly
tanking — and the Stock Turn Report as the tool for periodic backlist
reviews and “section balancing.” However, we are aware of at least two
mega-publishers who are using Flash Reports in monthly backlist reviews.
One of our clients has commenced regular backlist reviews since we
started delivering these reports, with a very positive impact on their
sales at B&N.

Our intention is to deliver analysis which is actionable, but also
“reportable.” In a perfect world, we’d hope that the same Excel row which
told a rep that a title should have its inventory expanded or its model
reduced, for example, could just be sent to the buyer and provide the
information the buyer needs to consider the suggestion.

Making use of store-level data

We have been made aware that, at least until the new SAP installation,
Indigo has made sales and inventory data at store-level available at least
to some publishers. There are some obvious additional opportunities for
analysis enabled here but also some serious data management challenges.
Multiplying the number of data points by a factor of 100 or more to see
each store separately vastly complicates the warehousing and report
generation process. But it also greatly expands the opportunities.

Among the things I would do if I had access to store-level sales and
inventory data:

1. Calculate the stock turn by section by store for your
titles.
This will tell you, and Indigo, where your odds are best for
expanding your offerings for any category of your titles.

2. Calculate the stock turn by title by store, but only over a
period of 3 months or longer and only for books that are stocked pretty
much continuously.
This is actually a tricky exercise because the
“average inventory” component of the stock turn calculation is hard to
figure for books stocked in 1s and 2s and selling pretty quickly. It is
hard with a weekly feed to correctly calculate how many days of “zero”
inventory there were. (A sketch of this calculation appears after this
list.)

3. For all books that sell more than 15% of on-hand two weeks in a
row, check to be sure you have full distribution in the chain.

Sometimes it won’t be warranted, of course, because books often do have
some kind of regional appeal, but this is something we attempt with
chain-level data that is much easier to do with store-level data.

4. If possible, track turns on promotions. This can be hard to
do, because you have to flag the weeks when promotions applied to
particular titles. But it can tell you whether promotional dollars are
well spent.
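
The sketch promised in point 2 follows. It covers the same calculation that
points 1 through 3 describe, at store rather than chain grain; the column
names are assumptions, and the handling of the zero-inventory problem is one
crude choice (restrict the calculation to titles stocked nearly
continuously), not a definitive method.

import pandas as pd

def store_level_stock_turn(weekly, weeks=13):
    # Expected columns: store_id, isbn, week_ending, sales, on_hand (assumed).
    recent = weekly.sort_values("week_ending").groupby(
        ["store_id", "isbn"]).tail(weeks)
    grouped = recent.groupby(["store_id", "isbn"])
    summary = grouped.agg(
        sales=("sales", "sum"),
        avg_on_hand=("on_hand", "mean"),
        weeks_in_stock=("on_hand", lambda s: int((s > 0).sum())))
    # Point 2's caveat: only trust the turn for titles stocked pretty much
    # continuously, because weekly snapshots hide days of zero inventory.
    summary = summary[summary["weeks_in_stock"] >= weeks - 1]
    summary["annualized_turn"] = (
        summary["sales"] / summary["avg_on_hand"]) * (52 / weeks)
    return summary.sort_values("annualized_turn", ascending=False)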

Applying aggregated national data

Since there is a good chance that Canadian publishers will have the
opportunity to apply aggregated national data on sales in the foreseeable
future, I want to offer a few thoughts on how best to use it.

First of all, remember that rankings by sales are almost always closely
correlated to rankings by inventory. That is, if you see a competitor’s
book is outselling a similar one of yours, don’t leap to the conclusion
that their book has more appeal to the consumer. It is almost certainly
more broadly distributed if it is selling significantly better and that
may reflect on the sales or marketing execution, not just on the book’s
appeal.

Second, try hard to understand the relationship between what you know
you’ve shipped to the POS world (those stores or kinds of stores that are
like what is captured in your POS reporting) and what is reported as sold.
Even the most perfect POS-reporting system will not capture every sale,
and there is a decent chance that the percentage it captures will vary
somewhat by the book. You’re going to try to subtract POS sales reported
from copies shipped into the marketplace to understand your supply chain
exposure. It is obvious that this exercise will be more accurate if you
are able to back out the shipments and POS reports from a few key
accounts.
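
As a back-of-the-envelope illustration of that subtraction, with every
number and the capture rate invented for the example:

def supply_chain_exposure(shipped_to_pos_world, pos_reported_sales,
                          estimated_capture_rate=0.85):
    # Gross up reported POS sales for the share of sales the reporting
    # system is believed to miss, then subtract from shipments to estimate
    # unsold copies sitting in the reporting channel.
    estimated_sales = pos_reported_sales / estimated_capture_rate
    return shipped_to_pos_world - estimated_sales

# e.g. supply_chain_exposure(12000, 7500) -> roughly 3176 copies exposed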

The best use of aggregated national POS data is for marketing
decisions: watching the sales of the competition and seeing the impact of
publicity and promotion. Both of these exercises are about relative
performance rather than absolute numbers, and the data can tell you a lot.

How hard is all this?

Although we offer a lot of “value-adds” in knowledge, consulting, and a
constant improvement of our reports, there is inherently nothing about the
ACE Reports concept that can’t be done by any publisher with strong Excel
and Access skills in-house. If you want to “try this yourself at home”, we
would offer only a few words of caution.

First, the set-up is not trivial. Going from the feeds you get to the
spreadsheets you need requires building an Access database and painstaking
creation of queries. This is not something even a skilled database manager
will accomplish quickly or easily. It will certainly take days, perhaps
weeks.

Second, this process requires ironclad discipline. You must grab and
archive each relevant feed every single week without fail.

Third, we strongly discourage having a rep do this work him- or
herself. This is a process which takes time and care. Reps have very
important jobs to do with the care and feeding of their customers. It is
too easy for really important priorities to shove the data work aside. So
even if a rep is fully capable with the data programs (and, frankly, most
aren’t), it would be wiser to provide the reports as a service to the rep,
not ask them to maintain the data warehouse.

And fourth, even with the best of intentions and processes, we have
learned that databases sometimes “break” and have to be rebuilt. This is a
pain when it happens, and in the week it does it can turn work that normally
takes minutes into a task requiring hours.

Conclusion

The new era of ubiquitous data offers enormous opportunities for
publishers to cut costs out of their supply chain. The publishers that do
this can improve their unit costs the easy way, by not manufacturing books
they’ll never sell. We have also demonstrated that books do “fly under the
radar”; overburdened reps and buyers require so much time to manage new
title acquisition that such books have too often been allowed to die out,
even though customers bought them when they were available.

So with reduced costs and increased sales as the incentives, there is
opportunity for every publisher, every wholesaler, and every retailer to
benefit from sharing inventory information and analyzing it by the
measures of profitability.
