Supply chain simulation made easy

Elephants in the clouds – an affirming but cautionary tale

Not long after publishing my last post on cloning Dell’s finished goods supply chain, I came across a May 2010 slide show presentation on measuring supply chain performance. Prepared by Joseph Francis, the executive director at the Supply Chain Council, it included a chart on slide four that caught my eye: Total Supply Chain Management Costs Expressed As % of Revenue.

The chart caught my eye because of data (credited to PRTM, now part of PwC) that looked familiar – the 4.2% for best-in-class and the 10% for the median in the computer industry. I went back to the numbers reported in my last post. I was right – total supply chain costs expressed as a percentage of revenue were 4.4% for the 2011 Air Scenario and 9.28% for the 2011 Sea Scenario.

Real world validation of our Dell case study?

With the Dell case study, we showed we could distill results quickly by:

  • using limited input from the Dell 2011 Annual Report (hundreds of KB);
  • using our tooling to construct representative DNA (tens of MB); and then
  • cloning the DNA (hundreds of GB) using OperationalCloning.

The data in Francis’s 2010 chart suggested that OperationalCloning’s case study results corresponded to actual industry benchmarks. I admit that I was excited. Could I be looking at real-world validation of our case study results?

Cautionary clouds

I pulled myself up by remembering the words of caution in Nassim Nicholas Taleb’s excellent book, Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Taleb argues that we have a propensity to overestimate causality, “seeing elephants in the clouds instead of understanding that they are in fact randomly shaped clouds that appear to our eyes as elephants”.

The results of cloning the two Dell scenarios (Sea and Air) for the years 2011 and 2016 hinted that an increase in total supply chain costs expressed as a percentage of revenue was inevitable. If that’s right, historical numbers for total supply chain costs as a percentage of revenue should show a similar upward pattern, because similar conditions have existed for the last decade.

I decided to do some of my own sleuthing.

Testing the Dell case study’s predictions against real world data

Google was my starting point to track down earlier PRTM benchmarking studies similar to the one cited in Francis’s 2010 presentation. It wasn’t as straightforward as I had expected. I found:

  • A presentation dated August 2009 with the same PRTM data that was in Francis’s 2010 presentation, suggesting that the PRTM data was from 2007/8 or earlier.
  • A set of similar PRTM benchmarking data from 1998.
  • A set of similar PRTM benchmarking data in a February 2010 UPS presentation that I guessed to be from 2008/9.

I took a stab at comparing the three PRTM data sets, combining the values for chemicals and pharmaceuticals to do so (a sketch of that combination step follows the table).

Industry | Estimated Year | Best-in-Class (BIC) % | Median %
Chemicals & Pharmaceuticals | 1998? | 4 | 9.8
Chemicals & Pharmaceuticals | 2007? | 4.45 | 9.9
Chemicals & Pharmaceuticals | 2008? | 4.5 | 9.65
Computers | 1998? | 4 | 9.1
Computers | 2007? | 4.2 | 10
Computers | 2008? | 3.7 | 8.3
Consumer Goods | 1998? | 5.3 | 11.2
Consumer Goods | 2007? | 4.8 | 10.7
Consumer Goods | 2008? | 3.4 | 8.5
Telecom Equipment | 1998? | 3.3 | 8.5
Telecom Equipment | 2007? | 3.6 | 7.4
Telecom Equipment | 2008? | 2.8 | 10.4
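The PRTM studies report chemicals and pharmaceuticals as separate industries, so combining them involved a judgment call on my part. Here is a minimal sketch of that step, assuming a simple unweighted average; the input figures are placeholders, not the separately reported values.

```python
# Sketch: merging separately reported "Chemicals" and "Pharmaceuticals"
# benchmark rows into one, assuming a simple unweighted average.
def combine(chemicals_pct: float, pharma_pct: float) -> float:
    """Unweighted average of two industry benchmark percentages."""
    return round((chemicals_pct + pharma_pct) / 2, 2)

# Placeholder inputs: if the 1998 BIC figures had been 3.8% and 4.2%,
# the combined row would read 4.0%.
print(combine(3.8, 4.2))  # -> 4.0
```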

Nothing in the data suggested the degree of change I expected. The 1998 and 2007 numbers for BIC and median show a minor increase across three of the four industries but are otherwise remarkably similar, and 2007 to 2008 shows only a minor drop.

I kept digging with the oracle’s help and found an October 2006 presentation by PRTM’s Mark Hermans called Supply Chain Benchmarking. It included the same set of data from Francis’s May 2010 presentation – the one that had kickstarted this whole caper – which I now guessed to be as old as 2005. My guess was confirmed by Hermans’ next slide, titled “US supply chain costs are rising”, with a graph showing total supply chain management costs expressed as a percentage of revenue across industries for the years 1997 to 2005.

I tried to match the cross-industry averages of the data sets I had guessed were from 1998 and 2007 against that graph, and found they best matched 2001 and 2002. Although the graph shows the steady upward drift that OperationalCloning’s Dell cloning exercise predicted, I had to admit that I had, perhaps, just been seeing elephants in the clouds.

How I cloned Dell’s finished goods supply chain in under 48 hours

Inspired by the young Steve Jobs, I recently put OperationalCloning to the test by modelling Dell Inc’s large global network and examining the impact of change on it. I did it in two days.

Back in 1980, an interviewer asked Jobs why personal computers were so successful. He said that in addition to becoming more mobile and affordable, PCs amplify an individual’s inherent ability, giving them the power to do much more with what they already know. I realized that this is exactly what we set out to achieve with OperationalCloning: putting large, complex supply chain and logistics change projects within reach of individuals and smaller working groups by giving them the power to know the network and to design change in a way they couldn’t before.

Dell as a natural choice for this test simulation

Dell is famous for its configure-to-order approach that relies on air freight to maintain competitive service levels. Configure-to-order is highly sensitive to the volatility of fuel prices and product prices – something that Dell has seen plenty of in recent years.[i] I decided to quickly model Dell’s global network and look at the sensitivity of its supply chain to these changes using OperationalCloning.

What the results told me straight away about Dell’s finished goods supply chain

  1. Dell’s supply chain costs (expressed as a percentage of revenue) will increase – “Sweet spots” in the average value density of a product range relative to external cost factors do exist, but only for a window in time (or it is possible to get lucky). Put another way, Dell’s total supply chain cost expressed as a percentage of revenue is likely to increase regardless of the preventative steps it takes.
  2. The tipping point that forces a complete shift from configure-to-order is years away – The point at which external factors mandate a significant shift away from configure-to-order is potentially years away, and will most likely apply only to lower-cost product families and to servicing non-domestic markets (assuming that US domestic manufacturing remains as is).
  3. Inventory carry cost will remain dominant – When considering supply chain design changes, the dominant factors are still inventory carry cost and how the US domestic market is serviced. Inventory carry cost is also very sensitive to the average planning and purchasing frequency that results from the planning strategy implemented (see the sketch after this list).
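Observation 3 rests on a standard cycle-stock relationship: under a periodic-review planning policy, average cycle stock is roughly half the demand that accrues over one review period, so carry cost grows roughly linearly with the planning cycle. A minimal sketch with illustrative numbers (the 18% carry rate matches the assumption used later in this post):

```python
# Sketch of why carry cost tracks planning frequency: under periodic
# review, average cycle stock is roughly half of one review period's
# demand, so a longer planning cycle means proportionally more
# inventory to carry. All numbers are illustrative only.

def annual_carry_cost(daily_demand_value: float,
                      review_period_days: float,
                      safety_stock_value: float,
                      carry_rate: float = 0.18) -> float:
    """Approximate annual carry cost for a periodic-review item ($)."""
    avg_cycle_stock = daily_demand_value * review_period_days / 2
    return (avg_cycle_stock + safety_stock_value) * carry_rate

# Moving from a 7-day to a 28-day planning cycle quadruples the
# cycle-stock component of the carry cost:
print(annual_carry_cost(100_000, 7, 500_000))   # -> 153000.0
print(annual_carry_cost(100_000, 28, 500_000))  # -> 342000.0
```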

You can explore the simulation results on Google Public Data Explorer. In the meantime, read on to learn how the OperationalCloning test simulation was done.

Diary of an OperationalCloning simulation: the Dell test case

Day 1, 9.00 am

The first step is to create an approximation of Dell’s DNA. I gather all the information I need from Dell’s 2011 Annual Report and a brief tour of the Dell Inc website. Using the OC DNA generation function from our consulting toolkit, I directly input the Dell information. See The Dell Inc Summary Details Used to Compile the DNA, at the end of this post.

Day 1, 11.00 am

I enter the summary details and start the baseline DNA generation process running. It will take 5 hours and 35 minutes, so while it runs, I start thinking about the change scenarios. I know that the average annual downward trend in Dell’s product prices is 5%. To find the annual trend for oil prices, I take the average inflation-adjusted crude oil prices for the last ten years – an average annual increase of about 5%. I decide to use the baseline DNA (in other words, all air freight) and extrapolate five years out to 2016. I compare that with a scenario using mostly sea freight with make-to-stock and an average planning cycle of 28 days.
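For the curious, the compounding behind the 2016 scenarios is simple; a minimal sketch (how OC applies these trends internally is not shown here):

```python
# Compound a 5% annual decline in product prices and a ~5% annual rise
# in inflation-adjusted crude oil prices over five years (2011 -> 2016).
years = 5
price_trend = -0.05  # average annual change in Dell product prices
fuel_trend = 0.05    # average annual change in crude oil prices

price_factor = (1 + price_trend) ** years
fuel_factor = (1 + fuel_trend) ** years

print(f"2016 product price factor: {price_factor:.3f}")  # 0.774
print(f"2016 fuel price factor:    {fuel_factor:.3f}")   # 1.276
```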

In the baseline scenario, the DNA generation process tries to find distribution centres as close as possible to the geographic demand it serves.

In the sea freight scenario, the DNA process prefers port locations. Dell is closing the Lodz facility, so I remove it as a point of supply (for the purpose of this test case I keep revenue constant). I vary the summary DNA data according to the change scenarios identified, and start their generation process. When it finishes, I check the SKU-related DNA data and notice that I forgot to change the purchasing/planning cycle to 28 days for the make-to-stock scenario. I change the summary data and regenerate the DNA for the alternate scenarios.

Day 1, 5.00 pm

With the DNA versions correctly generated, I upload the DNA files, which takes 35 minutes.

Day 1, 5.40 pm

As I am only interested in approximate network metrics for this test case, I start Prevues.

Day 2, 9.00 am

I log in to check results and use a bubble chart to plot inbound transport, outbound transport and inventory value for each clone (see below). This gives me a sense of how the scenarios compare. The totals for the four clones stand out in the top right corner and visually align with my intuition.
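OC renders the bubble chart in the app; for readers who want to reproduce something similar from exported figures, here is a minimal matplotlib sketch using the numbers from the first results table below (the scaling factor for bubble area is cosmetic):

```python
# Bubble chart: inbound transport on x, outbound transport on y,
# bubble area scaled by inventory value, one bubble per clone.
import matplotlib.pyplot as plt

clones = ["2011 Air (Baseline)", "2016 Air", "2011 Sea", "2016 Sea"]
inbound = [61.3, 111.4, 55.2, 71.2]    # $M / month
outbound = [65.2, 99.6, 74.3, 84.8]    # $M / month
inventory = [83.9, 67.2, 1967, 1769]   # $M

fig, ax = plt.subplots()
ax.scatter(inbound, outbound, s=[v / 2 for v in inventory], alpha=0.5)
for name, x, y in zip(clones, inbound, outbound):
    ax.annotate(name, (x, y))
ax.set_xlabel("Inbound transport ($M/month)")
ax.set_ylabel("Outbound transport ($M/month)")
ax.set_title("Clone comparison (bubble area = inventory value)")
plt.show()
```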

Next I download the results into Excel and copy the totals for the chosen KPIs. To compare one scenario with another, I add the transportation costs and the estimated inventory carry costs (using 18% of inventory value), and divide the sum by the total revenue for the scenario.
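In code, the comparison metric looks like this; a minimal sketch that reproduces the percentages in the tables (all inputs are monthly figures in $ millions):

```python
# (Transport + inventory carry) / revenue, expressed as a percentage,
# with inventory carry estimated at 18% of inventory value.
def supply_chain_cost_pct(revenue: float, inbound: float,
                          outbound: float, inventory_value: float) -> float:
    """Total supply chain cost as a percentage of revenue."""
    carry = 0.18 * inventory_value
    return 100 * (inbound + outbound + carry) / revenue

# Reproduces the baseline row of the first results table:
print(round(supply_chain_cost_pct(3790, 61.3, 65.2, 83.9), 2))  # -> 3.74
```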

DNA Scenario | Revenue (monthly, $M) | Inbound Transport (monthly, $M) | Outbound Transport (monthly, $M) | Inventory Value (monthly, $M) | (Transport + Inventory Carry) / Revenue
2011 Air (Baseline) | 3790 | 61.3 | 65.2 | 83.9 | 3.74%
2016 Air | 3184 | 111.4 | 99.6 | 67.2 | 7.01%
2011 Sea | 3755 | 55.2 | 74.3 | 1967 | 12.88%
2016 Sea | 3341 | 71.2 | 84.8 | 1769 | 14.20%

I examine the summary results, mindful that:

  • the global transportation rates and services used in this simulation are hypothetical;
  • the actual result values should only be interpreted in terms of comparative ratios and trends across scenarios; and
  • while an “all or nothing” approach to choosing an inbound mode of transport and inventory planning strategy is a simplification, it helps with understanding the trade-offs in the approaches and their impact on designing the network.

Looking through the results, I spot a mistake – the purchasing/planning cycle frequencies again. I forgot to re-align the purchasing frequencies for the network nodes to the replenishment lead-times after generating the DNA for the different scenarios. I fix the DNA for each scenario and re-do the uploads.

Day 2, 11.00 am

Time to kick off the new Prevues.

Day 2, 9.00 pm

OC reports that the clones have been rolled up. I log in, look at the bubble chart, and summarise the results as before.

DNA Scenario | Revenue (monthly, $M) | Inbound Transport (monthly, $M) | Outbound Transport (monthly, $M) | Inventory Value (monthly, $M) | (Transport + Inventory Carry) / Revenue
2011 Air (Baseline) | 3776 | 61 | 65.4 | 220.8 | 4.40%
2016 Air | 3186 | 111 | 99.5 | 187.1 | 7.66%
2011 Sea | 3189 | 55.2 | 73.8 | 927.2 | 9.28%
2016 Sea | 3338 | 71.2 | 84.3 | 823.2 | 9.10%

Day 2, 10.00 pm

I pour a cup of coffee and begin reviewing results and creating the preliminary observations about Dell’s finished goods supply chain set out at the beginning of this post.

[i] The downward trend of PC prices has continued since Steve Jobs turned them into a mainstream commodity, with prices declining an average of 5% per year. In recent years, this downward trend has been more volatile. In the last quarter of 2008, selling prices dropped about 14%. In November 2010, the average US selling price increased by 6% from the previous year.

The Dell Inc summary details used to compile the DNA

Revenue: Headquartered in Round Rock, Texas, Dell Inc’s 2011 revenue was $50,002 million. To simplify things I ignored software and peripherals, reported at $10,261 million, reducing the 2011 revenue to $39,741 million. 52% of the revenue is domestic.

Distribution: Dell distributes to 180 countries, which I reduced to 160 for this test simulation. I guessed that Dell distributes from about 45 countries.

Product families: I collapsed the Dell product families to: (i) servers and networking; (ii) storage; (iii) mobility; and (iv) desktop PCs. I estimated the total number of SKUs in these product families at about 650.

SKU parameters: I assumed:

  • an average purchase price across product families of $915, with a minimum to maximum range of $250 to $60,000;
  • an average unit weight across product families of 4 kg, with a range of 1.8 to 40 kg;
  • an average unit volume across product families of 6,000 cm³, with a range of 2,000 to 70,000 cm³; and
  • an average gross margin across product families of 0.14, with a range of 0.06 to 0.30.

(A hypothetical sketch of generating SKU-level records from these parameters follows this table.)

Average order value: I assumed an average order value across the network of $1,400.

Revenue groupings and order profile: Dell reports segment revenue by grouping large enterprise, public, small and medium business, and consumer. For this test simulation I grouped large enterprise and public as order class A, small and medium business as order class B, and consumer as order class C. Given this grouping, the order class demand proportions for A, B and C are 56, 24 and 20 percent. Order class A subclass demand proportions are 51 and 49 percent, with 6 and 15 average lines ordered per subclass. Order class B subclass demand proportions are 50, 35 and 15 percent, with 14, 10 and 5 average lines ordered per subclass. Order class C has a single subclass (100 percent) with an average of 2 lines per order.

Primary points of supply: Given the scope of this test simulation, Dell has 6 primary points of supply, located in North America, South East Asia, East Asia, South America, South Asia and Eastern Europe.
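OC’s actual DNA generation is not public, so the following is a purely hypothetical sketch of how SKU-level records might be drawn from summary parameters like those above; the log-normal distribution, the sigma value and the clipping approach are all my assumptions.

```python
# Hypothetical sketch: expanding summary SKU parameters into SKU-level
# records. The log-normal choice (common for right-skewed quantities
# such as price, weight and volume) and sigma are assumptions; clipping
# to the reported min/max range shifts the realised mean slightly.
import math
import random

NUM_SKUS = 650  # estimated SKU count from the product families entry

def draw(mean: float, lo: float, hi: float, sigma: float = 0.8) -> float:
    """Draw a right-skewed value with roughly the given mean, clipped to [lo, hi]."""
    mu = math.log(mean) - sigma ** 2 / 2  # log-normal mean = exp(mu + sigma^2/2)
    return min(max(random.lognormvariate(mu, sigma), lo), hi)

skus = [{
    "price_usd": draw(915, 250, 60_000),
    "weight_kg": draw(4, 1.8, 40),
    "volume_cm3": draw(6_000, 2_000, 70_000),
    "gross_margin": min(max(random.gauss(0.14, 0.05), 0.06), 0.30),
} for _ in range(NUM_SKUS)]

print(skus[0])
```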

Forget about Big Data. Start by knowing your DNA.

I had my first brush with the buzz of Big Data while enjoying a margarita and some amazing Mexican food in San Francisco’s Mission District. The Wall Street Journal at my elbow described HANA, SAP’s Big Data initiative. It occurred to me that Big Data’s promise of end-users directly analysing transactional data in real time perversely confirmed the need I was seeing and experiencing on the street. Mature corporations struggled to pull basic summary data about their supply chain together quickly.

Supply chain DNA is born

During OperationalCloning’s development phase, we sought opinions from many people in the know. “Sounds great,” they all said, “but what about the data?” They had a point. Anticipating this, we designed OC to reduce the data required to describe a supply chain network and its distribution facilities. The result? A format we call supply chain DNA.

Why is gathering profiling data slow?

Talking to global and domestic companies since OperationalCloning’s launch has confirmed that margarita-fuelled insight many times over. They seemed to fall into one of two camps:

  • in a months-long data-gathering exercise for an existing simulation or optimisation project; or
  • aware that it would take them several months to gather their data if they chose to start a simulation or optimisation project.

Why were mature corporations not able to pull basic summary data about their supply chain together quickly? After all, Operational Cloning’s DNA format requires nothing exotic. It contains summary information about things like the network structure, suppliers, SKUs, order profile, transport rates, and facilities. A Google search on “why is it hard to gather profiling data for a simulation?” yielded no insights.
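To make that concrete, here is an illustrative sketch of the kind of summary fields such a DNA format might hold, populated with the Dell figures from the case study above; the structure and the transport rates are my assumptions, not OC’s actual format.

```python
# Illustrative only: the real OC DNA format is not public. This sketch
# shows the kind of summary fields the post describes -- network
# structure, SKUs, order profile, transport rates and facilities.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SkuProfile:
    count: int             # ~650 SKUs across four product families
    avg_price_usd: float
    avg_weight_kg: float
    avg_volume_cm3: float

@dataclass
class SupplyChainDNA:
    points_of_supply: List[str]                   # regions supplying the network
    destination_countries: int                    # breadth of distribution
    annual_revenue_musd: float
    sku_profile: SkuProfile
    order_class_mix: Dict[str, float]             # demand share per order class
    transport_rate_usd_per_kg: Dict[str, float]   # mode -> rate (made up)

dell_dna = SupplyChainDNA(
    points_of_supply=["North America", "South East Asia", "East Asia",
                      "South America", "South Asia", "Eastern Europe"],
    destination_countries=160,
    annual_revenue_musd=39_741,
    sku_profile=SkuProfile(650, 915, 4, 6_000),
    order_class_mix={"A": 0.56, "B": 0.24, "C": 0.20},
    transport_rate_usd_per_kg={"air": 4.0, "sea": 0.3},  # hypothetical rates
)
```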

Know your DNA first

Big Data technologies promise high-speed analysis of structured, unstructured and transactional data (read: no data warehouse required). What is the benefit of another technology-based distraction if a company struggles to pull together the basics that describe its supply chain and logistics network due to dispersed or poorly maintained data sources?

Profiling is always the first step towards understanding what you have and how to improve your operations, and any profiling format – including our DNA format – is a straightforward composite report.

Before considering Big Data technologies, try to compile your DNA. I’ll bet you a margarita that it will yield unexpected insights into where your important data lives and how to improve your operation.
