Expert recommendations, how to filter them?

If you are reading this, it is because you have already found interesting material on this portal, because someone recommended it, or because you are evaluating the content for the first time. In any case, it is sometimes difficult to know whether we are dealing with an expert or, at least, with good recommendations.

My recommendation: contrast what you read with your own experience, and do the exercise of validating the logic of what is being proposed. One way is to verify "effect-cause-effect", as Dr. Goldratt used to say: if we are told that an effect has a certain cause, we can predict that, if that cause existed, another effect should also exist. In the Theory of Constraints this is known as the "predicted effect reservation". Let's look at a recent example.

shopify.com portal - report recommendations

I don't think I am presenting an unknown portal here; it is a source of articles and very polished reports, such as "The Future of Commerce Trend Report 2022" (download report), which was recommended to me recently. It is a 140-page report, and on page 105 I find the second recommendation for the supply chain: "Redistribute inventory closer to your customers". The detail reads:

Redistributing your inventory to locations closer to your customers reduces costs by keeping merchandise within shipping zones, allowing you to offer same-day or two-day shipments without incurring additional costs, and increases resilience to regional supply chain issues.

If you have a retail business, consider converting stores into mini fulfillment centers. Converting a single retail store into a mini-fulfillment center has the potential to reduce last mile costs by more than 60%. These hybrid retail spaces are also an opportunity to offer curbside pickup and click-and-collect options.

Let's take it one step at a time.

First statement: shipping (incurring a freight cost) to a location close to your customers would allow you to ship again (another freight cost) within one or two days, yet supposedly without incurring additional costs.

Here there are assumptions that would have to be validated for this statement to be correct in both aspects: that it reduces cost and that it reduces time. The assumptions are:

  • It takes more than two days to ship an item to the store (the mini-DC). If this were true, it should also be true that there are not enough other stores in the area to justify a regional warehouse. But then it does not seem like a good idea to maintain those few outlets either. In other words, the recommendation assumes there are few customers in that area, or that it is far from the main distribution center (DC). And if the travel time to the store is less than two days, shipping to any customer served from that store should take about the same. That is, in terms of time, shipping from the DC seems equivalent to shipping from the store. And if it is for in-store pickup... that is what stores are for in the first place: to shop there!

  • And beyond the above, why would it cost less to ship to the store and then to a location near the store? If the argument is that freight to the store can be consolidated, home delivery still has to be paid for. Unless both together cost less than a single home delivery from the DC, two freights add up to more than one, so the store-as-mini-DC idea does not look cost-efficient either.

Second statement: the store as a mini-DC (...) increases resilience to regional supply chain problems. I assume this means that using the store as a stocking point for short-lead-time sales (1-2 days) helps mitigate the problems associated with frequent stockouts and overstocks (are there other "regional problems"?). But if the store has high availability of all SKUs, I do not see why a customer would wait a day or two.

In other words: since we have trouble maintaining availability at a reasonable cost in the stores, let's convince customers to order and wait for a shipment from the DC to the store, to be picked up or shipped from there. That way we avoid the out-of-stock-in-store problem. (But this was precisely meant to solve the lack of in-store availability, and it is not solved!)

When I shop online, I am willing to wait more than two days; but if I go to a store, I expect immediate delivery. I recently looked for some special batteries at a department store and found them on its website. I then went to the physical store to buy them, only to be surprised that they are sold online only (I suppose you have to read everything on the website). I made the purchase online, but from another supplier. So using the store as a mini-DC did them no good in that case. Besides, that makes it a "mini-hub", that is, a pass-through store, not a stock store.

And the third claim is that last-mile cost could be reduced by over 60% by using stores as mini-DCs. It seems to me we established that, for home delivery, it is fewer trips that reduce cost, not the reverse. And if it is home delivery from the store, that is already possible as long as the store maintains good availability in the first place.

No. The solution to poor availability and surpluses is not to use the stores as mini-DCs (aren't they already that today, without the complicated names?). I have already described the solution in other articles, and it is simple. For example, I invite you to review Cross Docking: Don't try it at home!.

Conclusion

After analyzing these recommendations, the rest of the report loses much of its force. Saying things that sound novel but are the same as always in other words, or that are simply wrong, led me to write this alert, and also to be more demanding with myself when making recommendations.


Service level: is it just a KPI?

What is your company in business for: to make money? I know that was the answer in the book The Goal; however, I have seen many mission statements, and none of them says that. Money is a result of fulfilling the company's mission. In the sequel to The Goal, Dr. Goldratt tells us that a good strategy has three necessary conditions. In the last chapters of It's Not Luck, Alex's discussion with the directors leads him to conclude that the three pillars of a good strategy are: to generate a good environment for employees, now and in the future; to provide excellent service to customers, now and in the future; and to generate good returns for shareholders, now and in the future.

Any mission should be consistent with these necessary conditions.

Where does it all start?

It is easy to realize that without sales there is no business. And that, with competition, we must build a competitive advantage that makes us stand out and makes sales flow based on value exchange.

All companies that are selling something have managed to build an offer composed of a product that has an attractive price/quality ratio. Otherwise they would not sell.

Designing or finding those products, then producing or buying them, making good packaging, producing attractive marketing materials, investing in advertising, deploying the sales force; all this is a big effort in time and money.

And the promise made to the market consists of the product, the price and a delivery condition. Delivery may be by promising a date, or it may be by promising availability at the point of sale.

Everything starts by making an attractive promise and customers accept it, generating sales.

What happens when the promise is broken?

Sales do not stop immediately. There is customer frustration that results in customers looking for alternatives. But let's look at which part of the promise is most frequently broken.

The product is rarely degraded, and the price is rarely altered; companies take great care of these two aspects.

It is common for delivery to be missed, either by failing to meet the promised date or simply by generating a stock out at the point of sale.

Competitors are not much better at delivering, but this only generates more frustration.

Perhaps the most damaging effect is within companies. Failure to deliver immediately generates complaints from the market, which translates into emergencies, rescheduling, cost overruns, and a lot of stress.

How do you feel when you miss a delivery deadline, or when more than 10% of your products are out of stock in the stores? These facts generate a ripple of pressure throughout the organization. At least I'm sure they are not a source of satisfaction for anyone.

What if a competitor had a much higher level of service? Sales would likely drop and margins would erode. No one is happy now.

The cause of failure to comply

The consequences of not fulfilling the promise are many and negative, as we already sensed.

And it is enough to have a little experience to know that failures to deliver are very frequent.

Why is it that, knowing how bad it is for everyone when they fail to deliver, companies keep making promises that they break?

One explanation could be that some managers do not mind lying and promise things just to get more sales. But we already know that breaking promises has too many negative consequences, so this cannot be the main explanation. Many companies renege on their delivery promises, yet there must be very few, if any, that base their sales strategy on lying.

Therefore, since the facts show massive non-compliance, the explanation must be that deadlines or stock levels are promised without any certainty of compliance, even though the intention is not to fail, as shown by all the actions taken to resolve emergencies and the frustration felt by those responsible.

That is, the cause of the breach is simply that the knowledge being applied produces promises that are highly likely to be breached.

Is there any solution?

If the majority of companies are non-compliant, it seems that there is no good solution, because if there were, everyone would be using it!

I am not sure of the formal name of that argumentative fallacy, but it is clearly a fallacy: "if something exists and is very good, everyone should already be using it" (I believe it is an ad populum fallacy).

I remembered a book by Jeff Cox (co-author of The Goal), Selling the Wheel. It starts with the invention of the wheel, which is offered to pyramid builders to increase productivity, and they answer something like: "Who else is using this? If it were so good, many would already be using it, right?"

The knowledge to calculate a reasonable delivery date that is highly likely to be met exists: it is the Load Control of the Theory of Constraints.

The knowledge to calculate and maintain adequate inventories throughout the supply chain also exists: it is the Dynamic Buffer Management of the Theory of Constraints.

These two methods are effective (I have never seen a case of failure) and simple to implement. But they have a "catch": to implement them, you must abandon several of the beliefs that we take for granted without question, beliefs that are the basis of the methodologies used today to promise dates or to calculate inventories. And today's improvement efforts do not question those basic beliefs, so the prevailing results are still the ones we already know.

Conclusion

Service level, especially on-time delivery (OTIF) and availability (fill rate), is not just a set of KPIs to measure management. In companies with physical products, it is a necessary condition of a good strategy. Without that level of excellent performance, customers' lives are not so good, and the internal experience in companies is much worse, often a cause of "burnout syndrome" (see https://blog.goldfish.cl/consultoria/mundo-vuca-empresa-vuca/). And, as a result, profitability is limited.

It is possible, and should be considered indispensable, to achieve excellent service in order to build a company that is worth working for, and worth buying from.


AI to improve demand forecasting?

"Do you want to improve your demand forecasting capabilities?

I have developed a new way to use machine learning to support supply chains. Here's how it works:

I have developed a metamodel capable of dealing with virtually any supply chain demand data set. The metamodel will work with your data to create the best possible ML model."

This is the beginning of a post on Linkedin published on October 12, 2021. The promise is basically "I will improve your forecast with artificial intelligence (AI)".

And I ask: what do we really gain if this were possible? I am not even questioning whether a machine-learning algorithm can improve forecast accuracy. My question is more fundamental: suppose it does improve the forecast, what have we gained?

Determinants of inventory size

Inventory is necessary only if customers do not have the patience to wait for the product from the moment they express their need, so the objective of having inventory is to satisfy immediate sales. And we also know that the demand for a particular product at a point of sale has a high variability.

If the objective is to satisfy sales and demand has wide fluctuations, then the inventory we require must be sufficient to satisfy the maximum demand before the next replenishment, which means that most of the time we must hold inventories in excess of the actual demand at the time.

The next replenishment will depend on the time we decide to allow between replenishments, and also on how long it takes for the product to arrive after we order it. Both times together make up the replenishment time.

The longer the replenishment time, the bigger the inventory needed.
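The proportionality above can be sketched in a few lines of Python. The multiplicative form (daily average times replenishment time times a safety factor) and all the parameter values are illustrative assumptions, not figures from the article:

```python
def target_inventory(avg_daily_demand, replenishment_time_days, safety_factor=1.5):
    """Inventory needed to cover the maximum expected demand before
    the next replenishment arrives. The safety factor inflates the
    average to cover demand peaks (1.5 is an invented value)."""
    return avg_daily_demand * replenishment_time_days * safety_factor

# Same product, same demand, two replenishment times (invented numbers):
monthly = target_inventory(10, replenishment_time_days=30)  # 450 units
daily = target_inventory(10, replenishment_time_days=3)     # 45 units
```

Note that nothing in the sketch depends on forecast accuracy: cutting the replenishment time from 30 days to 3 divides the required inventory by ten, which is the lever the rest of the article is about.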

Consequences of a large inventory

The larger the inventory, the more space is required to store it, and the more tied-up capital we have. Since space and capital are both limited resources, the larger the inventory, the less variety we can offer customers, which reduces sales.

In addition, if the replenishment time is long, we also face higher risks associated with the inventory: risk of scrap and risk of obsolescence, as well as a lower overall ROI for the operation.

Impact of the forecast on the consequences of the inventory

If our forecast is way off, we will have lost sales due to stock out, and we will have excess inventory accumulation.

If we improve the forecast, we will reduce the stock out, and also the accumulation of excess. However, the main factor that determines the size of the inventory is the replenishment time, and a lower forecast error does not reduce this factor at all, so the required inventory remains high.

I will stop here for a minute, because I can already hear the counter-argument: "if the forecast is accurate, I need exactly what is fair and necessary". I agree. But remember that demand is highly variable: some SKUs will sell a lot in one period and others less, and they alternate. For any particular period, the required inventory is the combination of high and low demands multiplied by a long time.

Why did I deduce that the time is long? Easy: how many selling days must be forecast, and how often is a product line forecast? Once a week, once a month, every other month? It is certainly not every day for all SKUs; I think that is safe to assume.

Therefore, if I am using forecasts, it is very likely that the replenishment time is long. Even worse, if on top of forecasts I use MIN/MAX, the time to replenishment is also variable, so I should also forecast how long it will be until the next replenishment. And I am still granting that the forecast can be improved.

From this reasoning, I deduce that a more accurate forecast, with nothing else changed, does not free up much space or tied-up capital. Suppose the improved forecast eliminates stockouts and sales increase. The capital tied up will not be reduced very significantly: before, there were stockouts, which meant less inventory of those products. Now there is more inventory of those products and less of the others, but in total, inventory remains proportional to the replenishment time, so it cannot drop very much.

What determines the profitability of a company?

Maybe I should have started here. A company is a system, and its profitability depends on how much margin we can generate with its scarcest resources. Another way to look at it: how much money can be generated for every penny spent on operating.

In TOC - Theory of Constraints, Dr. Goldratt defined only three ways to measure money in a company: throughput, inventory, operating expense.

Throughput is the speed of generating money through sales. Inventory is the amount of money trapped in the system that can be converted into throughput. Operating expense is the money spent to convert inventory into throughput.

I know that inventory defined in this way can be confusing. Let's use investment instead, and leave the term inventory for the units of products stored.

In mathematical optimization (linear and nonlinear programming), an objective function of the system is defined; in the case of a company, it is the profit. That function would grow without bound were it not for the fact that the company's resources are limited, so we say that the optimum is determined by the active constraints.

In systems we already know that most of the resources must have slack (see refutation to line balancing) so there are only a few active constraints.

Thus, a company's profitability is determined by what its active constraints are and how they are used.

Space and capital are two constraints that are used more or less depending on the size of the required inventory. If the inventory needed is larger, these two constraints are being used more, even to the point of exhaustion. In that case we are forced to accept a level of stock out because we cannot increase inventory.

Relationship of forecast and profitability

As we have already seen, using forecasts is associated with a long replenishment time, so the aforementioned constraints, space and capital, will be used almost to the maximum. Now it is time to answer, how does better forecasting improve profitability?

A better forecast will make us use those space and capital constraints better, but it will not make us use them less. That is, if they are active constraints, we will be able to "move the needle a little" with more sales by reducing stock outs, but not much more, because the replenishment time has not changed and we will continue to use a lot of those constraints.

What if we reduce the replenishment time?

By reducing replenishment time, we immediately reduce the inventory needed to meet maximum demand. In other words, we can hold fewer units while that inventory provides proportionally more coverage than before. This reduces stockouts and reduces the use of the space and capital constraints simultaneously.

By reducing the use of constraints, we can now exploit those constraints in a better way by expanding variety, for example, achieving much higher sales.

In a typical retail environment, outlets can restock every day from their distribution center, and I don't think I am wrong in saying that outlets already receive merchandise every day. The fundamental change, then, is to replenish all SKUs every day.

How does a better forecast make a difference now that the space and capital constraints are no longer binding? I don't think it makes any. And for such a short horizon, the best forecast is to repeat the immediate past: replenish today what was consumed yesterday.

But demand for each SKU may change over time, and inventory levels may stop being adequate. For that, we need to detect the direction in which demand is moving, but we do not require an exact number of units to replenish. In TOC we have a simple mechanism called Dynamic Buffer Management, which can be automated and adjusts the investment according to actual demand. This is the origin of what has been called "Demand Driven".
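As a rough illustration of the kind of rule Dynamic Buffer Management uses, here is a simplified sketch in Python. The zone split into thirds and the "adjust by a third" rule are a commonly cited simplification; actual implementations differ in thresholds, timing, and the treatment of open orders, and the window of 7 days is an invented value:

```python
def zone(on_hand, buffer):
    """Classify current on-hand stock against the buffer:
    bottom third is red, middle third yellow, top third green."""
    if on_hand <= buffer / 3:
        return "red"
    if on_hand <= 2 * buffer / 3:
        return "yellow"
    return "green"

def adjust_buffer(buffer, zone_history, window=7):
    """One DBM-style adjustment rule (simplified sketch).
    If stock sat in the red zone most of the window, the buffer
    grows by a third; if it sat in the green zone, it shrinks
    by a third; otherwise it is left alone."""
    recent = zone_history[-window:]
    red_days = sum(1 for z in recent if z == "red")
    green_days = sum(1 for z in recent if z == "green")
    if red_days > window * 2 / 3:
        return round(buffer * 4 / 3)   # too long in red: increase
    if green_days > window * 2 / 3:
        return round(buffer * 2 / 3)   # too long in green: decrease
    return buffer
```

The appeal of a rule like this is exactly what the text describes: it reacts to the direction demand is moving, without needing an exact per-unit forecast.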

One feature of this system is that the only effort required is collecting the daily data, which is collected anyway. And no time is spent processing it, because a computer does it (although permanent human supervision is good practice).

When is it appropriate to forecast demand?

The capacity decision is a strategic decision. Normally capacity cannot easily be varied by significant amounts. Doubling or halving are moves that cannot be made frequently and require requirements planning. It is for this type of decision that S&OP (Sales and Operations Planning) is required.

At the capacity level, there is a lot of statistical aggregation. That is easy to deduce: if a company makes 3,000 different SKUs, it will hardly have hundreds of production lines. Even a very large factory has fewer than ten lines, so the demand for each line is highly aggregated. From this, we can also deduce that the aggregate demand forecast for each line has a much smaller error than the sales forecast for each SKU.
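The aggregation effect described above is easy to quantify. Assuming n independent SKUs with identical mean and standard deviation (a simplifying assumption for illustration, with invented numbers), the relative variability (coefficient of variation) of the aggregate shrinks by a factor of the square root of n:

```python
import math

# Per-SKU demand: mean 100 units, standard deviation 40 (invented values).
mean_sku, sd_sku, n = 100.0, 40.0, 300   # 300 SKUs feeding one line

# Coefficient of variation of one SKU's demand.
cv_sku = sd_sku / mean_sku                            # 0.40

# For the sum of n independent SKUs: mean scales by n,
# standard deviation by sqrt(n), so CV shrinks by sqrt(n).
cv_line = (sd_sku * math.sqrt(n)) / (mean_sku * n)    # = cv_sku / sqrt(n)
```

With these numbers, cv_line comes out near 0.023 versus 0.40 per SKU, which is why a line-level forecast for capacity planning can be far more reliable than SKU-level forecasts.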

In such circumstances, it is advisable to make demand forecasts to plan capacity expansions. The difficulty in this topic is that people do not understand the exponential function, but that is a topic for another article.

On the other hand, the complexity of today's supply chains also lends itself to AI applications that study capacity utilization at different nodes: factories, means of transport, ports, containers, etc. In that field, what Throughput Inc. has achieved with its ELI application (named, I believe, in honor of Eli Goldratt) is very impressive.

Conclusion

If I am offered a system to improve demand forecasting for replenishment at the points of sale, I already know that it is a system that operates with long replenishment times for each SKU, so I cannot expect a great improvement in profitability. Yes, there will be an improvement, but not a big one.

On the other hand, with no demand forecasting system at all, but with a dynamic buffer adjustment system and short replenishment times, the improvement in profitability will be equal to or better than the other system's, with less effort and less capital, plus the release of constraints to generate even more margin.

What do managers believe in?

"A fill rate higher than 94%? Impossible! It is not possible to sustain it with an acceptable cost".

It is not the first time, and I believe it will not be the last, that I hear this type of statement when talking to general managers. They are intelligent and experienced people, and that experience has led them to convictions that allow them to make decisions quickly, without spending their scarce time on fantasies.

But what happened in the past that convinced them that it is not possible to have a fill rate close to 100% and at the same time be profitable?

What is fill rate?

This is a term commonly used in logistics to measure the degree of order fulfillment. In its simplest and strictest form, it is the percentage of order lines that were fulfilled completely.

Of course, if the order has only two lines, say 900 units of the first item and 100 of the second, and we deliver 800 and 100 units respectively, under this definition we would have a fill rate of 50%. We can then refine the definition by considering the quantities as well. One way is to calculate the fill rate as units delivered divided by total units ordered: in this case, 90%.

But if the 900 units represent 50% of the order in money, we could make yet another calculation, which gives us 94.4%.
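For concreteness, the three calculations on the example order (two lines: 900 and 100 units ordered, 800 and 100 delivered; the 50% value split is the assumption stated above) can be written out as:

```python
# The two-line order from the example.
ordered = [900, 100]
delivered = [800, 100]

# 1) Line fill rate: share of order lines delivered complete.
line_fill = sum(d == o for d, o in zip(delivered, ordered)) / len(ordered)  # 0.50

# 2) Unit fill rate: units delivered over units ordered.
unit_fill = sum(delivered) / sum(ordered)  # 0.90

# 3) Value-weighted fill rate, assuming the 900-unit line
#    represents 50% of the order's money value.
value_share = [0.5, 0.5]
value_fill = sum(s * d / o for s, d, o in zip(value_share, delivered, ordered))  # ~0.944
```

The point of the three variants is the same as in the text: the same delivery can score 50%, 90%, or 94.4% depending on which definition of fill rate is used.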

You see, fill rate is a KPI that can mean different things, but even so, that manager considered it impossible to sustain it above 94%.

How are decisions shaped?

The case I am relating is that of a consumer goods manufacturer that sells its production to a supply chain, where there are wholesalers, distributors and retailers.

In that company, as in many others, the managers are highly educated and have certainly learned the most well-known cost optimization techniques for the management of consumer goods companies. Among others, balancing production lines, using MIN/MAX and EOQ, and unit costing with the latest ABC (activity-based costing) techniques.

When one uses these techniques, the inevitable result is that capacity is barely sufficient to meet demand and a lot of inventory accumulates. That inventory consumes two fundamental resources: warehouse space and working capital. When there is too much inventory, both resources are at their limit, so any suggestion to increase inventory immediately increases the cost of the operation.

What does that have to do with the fill rate?, you ask.

Let's see: a lot of accumulated inventory necessarily means more days of sales. In other words, the production schedule must consider a sales horizon farther into the future, so it becomes increasingly dependent on forecast accuracy. The only thing we know for sure about a forecast is that it is wrong, so those production plans will end up with some items out of stock, resulting in a lower fill rate.

But it gets worse: every time an order is missing an item, production is rescheduled, which wastes capacity, and now a higher cost must be paid to complete the production plan.

And don't managers realize this vicious circle?

It is easier to ask than to answer. How could they know that this is a vicious cycle? Or better: how could they know they are not optimizing the operation? After all, they are following "best practices" and applying basic principles that are taught to this day in very prestigious universities.

And they are concepts practiced by many others in the industry.

After several years of optimization, this company settled on 94% as a realistic and sustainable maximum. Every time they tried to improve it while maintaining the production optimizations, inventories rose so much, and so much shrinkage appeared, that the logical conclusion was that trying to improve the fill rate is not profitable, and that it is not realistic to suggest it after so much experience proving the opposite.

Is there a way out?

This is the question of a non-conformist: someone who does not accept the trade-off between fill rate and cost. Dr. Goldratt taught me not to accept contradictions; a scientist must think until he eliminates them. Genrich Altshuller thought the same way, making the conviction that an invention arises from eliminating a technical contradiction the basis of TRIZ.

I refer you to two previous articles that invalidate some of the basic concepts managers continue to use. See Refutation of line balancing and MIN/MAX and EOQ fallacy to understand why these concepts are wrong.

In general, the major problem in business management today is a lack of awareness of the systemic nature of organizations. These examples presented here are just a sample.

The way out of the suboptimal fill rate problem is to question the concepts that give rise to day-to-day factory and supply chain decisions. By abandoning these "beliefs", another set of policies must be adopted. Fortunately, we have already been down that road, and we know what the new concepts and new policies are. And we have seen hundreds of companies (perhaps thousands) that, over the last 30 years, have achieved a fill rate close to 100% while reducing costs and inventories.

Why is the adoption of systems thinking slow?

Russell Ackoff answered this question several years ago in a short article. And he gave two reasons, one general and one specific.

The general reason has to do with the prevailing education, where mistakes are punished, from school through college and into the workplace. And the safest way to minimize the number of mistakes is to minimize the number of opportunities to make them; at least that is one strategy. Thus, the survival instinct, plus the lack of urgency to do something new, leads most people to avoid profound change. And adopting systems thinking, also in Dr. Ackoff's words, is a change of era: the paradigms to be changed are so profound and numerous that it is equivalent to changing the shared beliefs of a large group of people; it is a change in their worldview.

Why "take a chance" on something that contradicts the mainstream? To some extent this position is defensible.

The specific reason is related to systems thinking itself, where experts gather at conferences to present their research and cases in a jargon that is almost hermetic to the rest.

I agree more with the former reason than the latter; it is true that technical jargon can be intimidating at times, but it cannot be the main reason.

Blocking fears

Before his passing, Dr. Goldratt wrote a preface for the book he was never able to write on the science of management. In that preface, he describes three fears that drive behaviors in many managers; the degree to which each one affects a given manager varies.

The first is the fear of complexity. The consequence is that the manager divides the system into parts thinking that it is simpler to manage each one separately.

The second is the fear of uncertainty, so the manager seeks to have control at a higher level of detail, thinking that he can better deal with variability.

The third is the fear of conflict, where the manager seeks an amicable solution to the numerous conflicts that arise in the company, which in practice translates into compromise.

Conclusion

With a work experience steeped in complexity, never having experienced what it means to eliminate conflicts and manage a complex system in a simple way, and facing uncertainty that only grows and adds to that complexity, the manager clings to the few certainties he has, those acquired in his studies, as if they were dogmas.

I invite all readers to review their own beliefs, at least in business management, and trust more in their reasoning ability. You will be pleasantly surprised.

MIN/MAX and EOQ fallacy

Why do we have inventories?

All consumer goods, such as soap or food, even household appliances, you know what I mean, all these products are in inventories. We are used to thinking about store inventories, but there are also inventories in distribution centers and even factory warehouses for finished products.

This inventory has a unique feature: it was purchased, shipped, or manufactured before any consumer ordered it. Inventory is necessary only when the customer's tolerance for waiting is shorter than the time it takes to make the product available within the customer's reach. If customers are willing to wait as long as, or longer than, it takes for the product to arrive, no inventory is needed.

Therefore, all inventory must be generated prior to sale, so we require some method to help us anticipate the appropriate amount of inventory.

Everything that we will examine about inventory applies to each individual product, to each SKU (stock keeping unit).

But how much inventory is adequate?

Knowing that inventory costs money, the answer begins with the words "the minimum possible". But knowing that inventory generates the sales, our answer must also contain the objective: to satisfy the "maximum expected demand".

Demand fluctuates, and if our inventory matches average demand, shortages will very often occur and we will lose sales. The shortages (also called "breaks" or stock outs) are precisely what we want to avoid with inventory.

Lastly, inventory is required to satisfy sales before another replenishment arrives.

So, our "formula" to calculate the adequate, or optimal, inventory is:

The minimum inventory is the one required to meet the maximum expected demand before the next replenishment arrives.

When does the next replenishment occur?

We already see that the time between one replenishment and another is a fundamental element in our formula. We can express the "maximum expected demand" as the daily average multiplied by the replenishment time and multiplied by a safety factor.

If time grows, inventory grows. And vice versa. We will see that this fact is an important part of a new way of managing inventories, but first let's understand how most supply chains work.
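That relationship can be sketched in a few lines. This is only an illustration of the formula in the text; the function name and the default safety factor of 1.5 are assumptions for the example, not values the article prescribes.

```python
# Illustrative sketch of the text's formula: maximum expected demand =
# daily average x replenishment time x safety factor. The function name
# and the 1.5 default safety factor are assumptions for this example.

def target_inventory(avg_daily_demand: float,
                     replenishment_days: float,
                     safety_factor: float = 1.5) -> float:
    """Inventory needed to cover the maximum expected demand
    before the next replenishment arrives."""
    return avg_daily_demand * replenishment_days * safety_factor

# If time grows, inventory grows (and vice versa):
weekly = target_inventory(avg_daily_demand=10, replenishment_days=7)  # 105.0
daily = target_inventory(avg_daily_demand=10, replenishment_days=1)   # 15.0
```

Shortening the replenishment time from a week to a day divides the required inventory by seven, which is the lever the rest of the article builds on.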

A replenishment order can be: a production order, or a purchase order, or a dispatch order. In all cases, an “order” is a decision, which is very good news, because we can do something different if we want to.

Before we go on with our deductions, give some thought to the fact that inventory is the result of this decision. If your company is unhappy because it has excess inventory and shortages at the same time, remember that this is the result of replenishment decisions taken days ago.

When does the next replenishment take place? The answer is now obvious: when we decide.

How do you decide today when to replenish?

A quick internet search turns up articles such as https://blog.nubox.com/empresas/reposicion-de-inventario, https://biddown.com/no-sabes-cuando-pedir-mas-stock-calcula-reorder-point-rop/, https://www.mheducation.es/bcv/guide/capitulo/8448199316.pdf, and several others, which have some things in common:

  • All emphasize the importance of good inventory management for good profitability.
  • All of them mention some kind of reorder point, some are explicit with the MIN/MAX method and also with the economic replenishment batch.

As an anecdote, NIKE publishes this page https://www.nike.com/cl/help/a/disponibilidad-del-producto-gs to say that the product you are looking for and didn't find will be available when it is in stock... it didn't help me much, to be honest.

And looking at books and syllabi, we see that MIN/MAX and EOQ (economic order quantity) are recurring as methods for deciding when and how much to replenish each SKU. Let's take a look at these concepts.

The MIN/MAX method and EOQ

In agreement with everything said above in this article, the objective of the method is to have availability at the lowest possible cost.

The method consists of determining a minimum number of units in inventory that must satisfy sales while our next order arrives. This is why this minimum quantity is sometimes called ROP (reorder point).

And the quantity ordered brings the inventory up to a maximum (MAX) number of units. In general, this quantity has been calculated with a cost-optimizing formula that yields an economic replenishment batch.

Let's look at each of these things in more detail and what effects using the method has.

This figure represents in theory what the method says, but note two unrealistic elements in the graph: 1) each replenishment appears to arrive the same day it was ordered; 2) demand is perfectly regular.

The reality is much closer to this other graph:

Between placing the order and receiving the replenishment there is a supply time; it is not instantaneous. And consumption, or demand, is variable. Together, these two facts explain why inventory can run out. And again: if replenishment were instantaneous, we would not need inventory.

But I want to dwell for a moment on the variability of demand. As you can see in the graph, when replenishment is done by setting the ROP or MIN at a fixed quantity, and demand is variable, what you see here occurs: the time between one replenishment order and another is variable.

Let's revisit what we already know: the inventory needed to generate sales depends on the replenishment time. Therefore, if the replenishment time changes over time, but the inventory does not, then the inventory held is almost always wrong, with a bias towards excess.

That is, the MIN/MAX method, so popular in academic programs, is a method that leads to always having wrong inventories (except when demand has little variability).

One of the elements of a solution to the chronic inventory problem, i.e. the problem of having excess and shortages simultaneously, is to set the frequency of replenishment.

If we fix the frequency, the MIN is no longer relevant. The MAX becomes the quantity we have to maintain, but if few units were sold since the last order, the quantity to order will be far below the EOQ (economic order quantity).

The EOQ quantity is calculated with a formula involving cost of shortage and cost of storage plus cost of generating an order. The concept is that if the quantity is large, the cost of storage is higher, but the cost of generating orders is lower (fewer orders per year).

First, the shortage cost is very difficult to estimate and is likely to be much higher than estimated. Two aspects are underestimated. The first is that shortages damage reputation, which reduces future demand. The second is that a shortage affects sales differently depending on how long it lasts.

In general, it can be said that the Pareto principle, the 80/20 principle, also applies to sales. This principle says that 20% of the factors are responsible for 80% of the result. The numbers 80 and 20 are references to indicate asymmetry.

In one case I knew well, the 5% shortage was generating 30% lost sales. I know this because when that 5% was eliminated, sales increased 40%. (Note that out of a total of 100, 70 were being sold; by increasing 40%, 70 x 1.40, this gives 98).
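The arithmetic of that case can be checked in a few lines, using only the numbers quoted above:

```python
# Numbers from the case above: out of a potential 100, only 70 were being
# sold (a 30% loss attributed to the 5% shortage). Eliminating the
# shortage lifted sales by 40%.
potential = 100
sold_with_shortage = 70
sold_after_fix = sold_with_shortage * 1.40        # close to the full potential

lost_share = 1 - sold_with_shortage / potential   # 30% of sales were being lost
```

So a shortage touching only 5% of SKUs was suppressing roughly 30% of potential sales, which is the asymmetry the Pareto principle predicts.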

Therefore, the EOQ formula greatly underestimates the shortage cost.

But in addition, warehousing and order-generation costs are usually sunk costs, or fixed costs, however you want to look at them. The first is the cost of warehouse space, which becomes variable only if inventory grows beyond a certain level. And the cost of generating orders consists mostly of salaries, which do not change whether you place more or fewer orders. For practical purposes, the marginal costs of these two components are close to zero.

When the formula is applied with these near-zero marginal costs, the resulting EOQ is very small, so the batch becomes irrelevant.

What is said for the cost of generating orders is valid for transportation and for production, where setups rarely have a real cost.

Therefore, the MIN/MAX method and the EOQ batch are fallacies, which lead to poor inventory replenishment decisions.

The TOC alternative

TOC stands for Theory Of Constraints, created by Dr. Goldratt, and its principles also apply to inventories.

Taking the definition of the beginning, our objective will be to have the minimum inventory to satisfy the maximum expected demand before the next replenishment.

I will first explain the generic solution and then distinguish some cases.

As I mentioned, the first thing is to SET THE FREQUENCY. This is a decision, not a result. So this decision reduces the variability of the replenishment time drastically.

The second is to ignore the optimal batches and bring the frequency to the maximum reasonable (we will see what reasonable means when distinguishing cases), so that the time between orders is reduced to the minimum possible. As the inventory is proportional to the replenishment time, the resulting inventory is smaller, occupying less space and trapping less money.

Now, with less money invested, we have inventory for more than 98% of the demand cases, raising our fill rate to almost 100%.

The method consists of replenishing with the set frequency only what is missing from our target inventory, which in TOC jargon is called Buffer.

How do we know that the buffer is the right one?

The first buffer for each SKU must be estimated. There are various ways to do this, found in the TOC literature, but a very accurate calculation is not relevant for this initial state, so I recommend a simple formula. I personally prefer a moving sum over the last X days, scanned over about 3 to 6 months of history, where X is the number of days corresponding to the replenishment time. The replenishment time should include everything: the days between one order and the next, plus all the supply time (production and transport). The buffer is the maximum of these sums.
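A minimal sketch of this moving-sum estimate, assuming the demand history is a simple list of daily totals (the function name and the sample data are illustrative):

```python
# Initial buffer estimate as described: the maximum moving sum of daily
# demand over windows of X days, where X is the full replenishment time.

def initial_buffer(daily_demand, replenishment_days):
    if len(daily_demand) < replenishment_days:
        raise ValueError("history shorter than the replenishment time")
    windows = range(len(daily_demand) - replenishment_days + 1)
    return max(sum(daily_demand[i:i + replenishment_days]) for i in windows)

# 12 days of (hypothetical) history for one SKU, 3-day replenishment time:
history = [4, 0, 7, 2, 5, 1, 0, 9, 3, 2, 6, 4]
buffer_size = initial_buffer(history, replenishment_days=3)  # worst 3-day run
```

The buffer covers the worst observed run of demand within one replenishment time, which is exactly "the minimum inventory to meet the maximum expected demand before the next replenishment".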

But the demand for a SKU can change, so the buffer must also change. Dynamic Buffer Management is the TOC technique for automating this, so that the individual buffer of each SKU follows actual demand. It is color-based, follows certain rules, and consists of increasing the buffer by one third when it detects that inventory is being consumed faster than it is being replenished, and reducing it by one third when it detects that consumption has slowed down.
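A hedged sketch of that rule. The one-third steps and the faster/slower-consumption triggers come from the text; the zone thresholds (thirds of the buffer) and the five-day trigger window are my assumptions for illustration, not the full set of TOC rules.

```python
# Sketch of Dynamic Buffer Management: raise the buffer by one third when
# on-hand inventory is stuck in the bottom third (the "red" zone, i.e.
# consuming faster than replenishing); lower it by one third when on-hand
# is stuck in the top third (the "green" zone, i.e. consumption slowed).
# The 5-day trigger window is an assumption for this example.

def adjust_buffer(buffer, on_hand_history, days_to_trigger=5):
    recent = on_hand_history[-days_to_trigger:]
    if all(x < buffer / 3 for x in recent):      # too long in the red zone
        return buffer * 4 / 3                    # increase by one third
    if all(x > 2 * buffer / 3 for x in recent):  # too long in the green zone
        return buffer * 2 / 3                    # decrease by one third
    return buffer

raised = adjust_buffer(90, [20, 25, 28, 22, 19])   # deep in red -> 120.0
lowered = adjust_buffer(90, [70, 75, 80, 72, 85])  # deep in green -> 60.0
steady = adjust_buffer(90, [40, 50, 45, 55, 48])   # healthy -> unchanged
```

The point of the mechanism is that the buffer chases actual consumption with no forecast involved.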

Distinguishable generic cases

There are three cases that are worth distinguishing in general:

  1. Stocking points or points of sale of the same chain
  2. Central warehouse or locally sourced distribution center
  3. Central warehouse or distribution center supplied with imports

The first case corresponds to nodes that belong to us, so we have total control over their operation. In general, these points can be replenished daily, which leads to a significant reduction in inventories, and at the same time it is rare to maintain a shortage for more than one day. The criterion is to reduce the time to a minimum; if not one day, then two or at most three.

If, for example, we have several points of sale in a city far from the distribution center, where roughly one truckload is sold every three days, it is possible to make a trip every three days to that city, delivering to each point of sale. When these points grow in number, it may be better to have a regional warehouse serving that city and other nearby ones, following the same principle.

When the nodes belong to us, there is no reason replenishment cannot be done with high frequency. In fact, trucks already travel very frequently today; they are just not used to replenish the SKUs that were sold yesterday.

The second case is a warehouse that sources from its own production plant or from local suppliers. In both cases (for different reasons), placing daily orders is an exercise in futility.

In the case of production, it will be normal for the schedule not to accept orders to produce the same SKU several days in a row, because that would lead to wasted capacity in the constraint (see article https://blog.goldfish.cl/produccion/refutacion-al-balanceo-de-lineas/).

And if local suppliers receive purchase orders for the same SKU every day, they will most likely consolidate all those orders to be shipped once a week.

For these reasons, my recommendation for this second case is to set the frequency to one weekly order per SKU. This leads to dividing the SKUs into five groups (as an example), so we have Monday SKUs, Tuesday SKUs, and so on. Each day, the replenisher completes only the buffers of that day's group.
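As a sketch of that scheme, here is one possible way to partition SKUs into five weekday groups. The round-robin assignment and the SKU codes are my own illustrative choices; any stable, balanced split works.

```python
# Illustrative partition of SKUs into five weekday groups so that each
# SKU receives exactly one replenishment order per week. Round-robin
# assignment is an arbitrary choice; any stable, balanced split works.
WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def weekday_groups(skus):
    groups = {day: [] for day in WEEKDAYS}
    for i, sku in enumerate(skus):
        groups[WEEKDAYS[i % len(WEEKDAYS)]].append(sku)
    return groups

# hypothetical SKU codes for the example
groups = weekday_groups([f"SKU-{n:03d}" for n in range(12)])
# each day, the replenisher completes only that day's buffers
```

Because the assignment is fixed, every SKU sees the same time between orders week after week, which is the "set the frequency" decision in code form.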

The third case is the one that has given me the most food for thought. I haven't said it explicitly so far, but you may have noticed that this method disregards forecasts: replenish only what was consumed and adjust the buffers dynamically.

The forecast contains errors, sometimes underestimating demand and sometimes overestimating it. The smaller the population served by a node, the greater the relative error. That is, a store serving 5,000 people requires more inventory per capita than a warehouse serving 50,000 people. This method of frequent replenishment reduces inventories at the nodes with the highest error.

This phenomenon of reducing the relative error as the population grows is called statistical aggregation, and is very well studied mathematically. Statistical aggregation also occurs as time lengthens. The problem with this, as we know, is that the inventory grows proportionally.

The third case, where the distribution center is supplied by imports, is one where the replenishment time is naturally long. First, the transit time cannot be shortened without raising costs (switching from sea to air freight, for example). In addition, filling containers may require accumulating one or two weeks' worth of sales. These two factors mean that replenishment time cannot be reduced by anyone, i.e., competitors face the same conditions.

As we can see, since the long replenishment time provides statistical aggregation over time, and the central warehouse aggregates over the maximum population, the demand forecast error in this particular case is much lower in relative terms.

Still, setting the frequency per SKU will bring the same benefits described above. And the buffer-adjustment method can be modified to incorporate forecasting techniques, making it even more robust.

Conclusions

Both industry "best practices" and academic program content are outdated in many places, and the proof is in plain sight. Go to a supermarket or store with a shopping list of 10 items: how often do you find the entire list? And yet the store is full of inventory. Do another test: check the production date of some non-perishable, locally produced item, and you will find that several weeks passed between its production and the moment you picked it up. That speaks of excess inventory.

There you have the result.

On the other hand, supply chains that have adopted TOC to transform themselves have reduced inventories and raised their service levels to close to 100%.

You can always improve a lot more; but for that you need to acquire more knowledge. I hope that has happened to you by reading this article.

Cross Docking: Don't try this at home!

The practice of cross docking is another error that follows logically from a mistaken assumption. I have already shown why MIN/MAX and EOQ are wrong. The basic assumption is that "reducing logistics costs increases profits", and the problem is precisely how costs are calculated. I won't go into that now; another time I will show how cost accounting gives the wrong information for decision making. And decisions are what determine profits, or so we hope (otherwise we would have to recognize the irrelevance of any method and, worse, the irrelevance of managers).

But let's focus on this mistake today. I will demonstrate why cross docking always reduces profits.

Why do we have inventories?

I am going to repeat myself a little, because it is good to go to the basis of everything in order to understand what to do and, certainly, what not to do.

Inventories are necessary because the customer has no tolerance for waiting once he has expressed his need. It is obvious that it is not possible to maintain inventory of custom-made products, and in that case customers do expect a lead time. But products that are consumed on a regular basis, by a large number of customers, and that do not change their specifications frequently, represent very low risk if they are manufactured in advance. Therefore, if you don't have it in stock, you are very likely to lose the sale.

The answer to the question was obvious: we have inventory to make sales (which cannot be made without inventory).

How much inventory do we require at each node of the chain?

I hope not to bore you with these platitudes.

The minimum inventory is required to meet the maximum expected demand before the next replenishment.

This means that the inventory level of a SKU in a store will depend on the maximum expected sales within one replenishment time, the replenishment time being the number of days between one order and the next plus the delivery (transit) time. In the article about MIN/MAX, I already showed that if we allow that time to vary, the inventory is always wrong.

Let's assume now that you have listened to me and in the stores you have daily replenishment frequency and it takes one day to transit. In other words, you need inventory to satisfy the maximum level of sales that can occur in a two-day period.

What is the fundamental difference between a distribution center and a store?

Of course, the formula also applies to the distribution center (DC). But the sales level of the distribution center is the sum of the sales of all the stores. In other words, the DC does not have independent sales; its sales level is a combination of what happens in the stores.

Having established that, let's see what "maximum sales level" means at each node, in a store and in the DC.

When we look at the actual sales of any SKU in a store over the last 30 days, we see a lot of variability: several days of zero units, and days of 5, 1, or 10 units. When we look at the sales of that same SKU in another store on the same days, we see that they also vary a lot, but where the first store sold zero, the second sold 5; and so we see that the combination of the two stores has sales with less variability overall.

If we combine the sales of many stores, the variability of their total sales is much smaller than the variability of each individual store. This is known as statistical aggregation (in general, the variability is reduced by the square root of the number of aggregation points).
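The square-root effect is easy to verify with a small simulation. The demand distribution, the 25-store count, and the random seed are arbitrary illustrative choices, not data from any real chain.

```python
# Simulating statistical aggregation: the coefficient of variation
# (std / mean) of the summed demand of 25 stores is roughly the single
# store's CV divided by sqrt(25) = 5. All parameters are illustrative.
import random
import statistics

random.seed(42)
DAYS, STORES = 1000, 25

# independent daily demand per store: mean 5 units, std 3
demand = [[random.gauss(5, 3) for _ in range(DAYS)] for _ in range(STORES)]

def cv(series):
    return statistics.stdev(series) / statistics.mean(series)

cv_single = cv(demand[0])                                   # about 0.6
totals = [sum(store[d] for store in demand) for d in range(DAYS)]
cv_total = cv(totals)                                       # about 0.6 / 5
```

Per unit sold, the aggregated node needs far less protective inventory than each store would need on its own, which is the whole case for a DC.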

As a result, the "maximum sales level" of the distribution center will be much lower than the sum of the "maximum sales levels" of each store added together. We have just discovered why it is convenient to have a distribution center! The DC inventory that is sufficient for daily sales is much less than if we had the inventories in the stores.

Great, good theory, and what does it have to do with cross docking?

It should also be taken into account that suppliers are not very fond of making daily deliveries to the chain. Therefore, orders to external suppliers have a different replenishment time, several times longer than one day. Being very conservative, let's assume that we place weekly orders with external suppliers.

If instead of dispatching to DC we were to ask them to dispatch to each store, the total inventory of all the stores added together would be very large. So large that there is not enough space and, quite possibly, not enough working capital. That leads to reduced quantities ordered and we start to cause stock-outs or out-of-stocks. But this is exactly what causes us to lose sales, and we want inventory to make sales.

That is why if we ask for weekly dispatch to DC, it is because we store the product there and react daily to fluctuating demand, reducing inventory and also eliminating out-of-stocks in the stores, which is where sales are made.

Wait a minute: if I receive orders on a weekly basis, but I make daily dispatches, the receiving operation and the dispatching operation are, by system construction, decoupled.

If I force the coupling of the two, I must ship to the stores what I receive weekly, but then I can no longer take advantage of statistical aggregation, the main reason for having a DC!

Cross docking is precisely the coupling of receiving and shipping. Note how, in this article about cross docking, the system is described as one that reduces storage at the DC to less than 24 hours.

In other words, cross docking is a practice that destroys the value of aggregation and generates excess inventories and out-of-stocks at the points of sale.

Let's run some numbers

In a retail chain, the gross margin on each product can be 30% or more. The higher it is, the greater the effect.

The effect of out-of-stocks on sales is asymmetric, as we know from the Pareto principle. The 80/20, remember? That is, if we go from zero out-of-stocks to 5% out-of-stocks (which is extremely conservative), the sales we will lose are 15% or more. I have already told the real experience of a manufacturer that by reducing 5% out-of-stocks increased sales by 40%. Let's calculate with 15%.

If our chain, with no out-of-stocks, sells 100, the total margin will be 30. If it has a 10% profit on sales, that gives 10, so we know that our total operating expense is 20.

By reducing sales by 15%, we will have a total margin of 85 x 30% = 25.5, i.e., profits reduced to 5.5. To compensate for this loss of 4.5, the total expense would have to be reduced by more than 4.5, which represents ~ 23%.

Unless the savings from cross docking exceed 23% of the total expense (which includes all salaries, leases, energy, etc.), this is very bad business.
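The arithmetic above, spelled out with the article's own numbers:

```python
# The article's example: sales of 100, 30% gross margin, operating
# expense of 20 (so profit is 10, i.e. 10% on sales).
sales, margin_rate, expense = 100, 0.30, 20

profit_full = sales * margin_rate - expense              # about 10
profit_cut = sales * (1 - 0.15) * margin_rate - expense  # about 5.5 after losing 15% of sales
loss = profit_full - profit_cut                          # about 4.5
savings_needed = loss / expense                          # about 0.225, i.e. ~23% of expense
```

Note how the leverage works: a 15% drop in sales cuts profit by 45%, because the margin shrinks while the operating expense stays fixed.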

Cross docking goes against the primary objective of the system, which is to facilitate flow.

Why has no one noticed this, and why do companies keep doing cross docking?

The description I gave of the system, with weekly supply to the DC and daily distribution to the stores, is the practice proposed by Goldratt. But this is rarely done either, and most companies do not know how to take advantage of their DC beyond saving transportation costs and avoiding the chaos of receiving many different trucks at each store.

As the prevailing practice is wrong, cross docking effectively improves the current operation. Remember Drucker: "Doing the right things wrong is much better than doing the wrong things right."

In these circumstances, where the prevailing practice is to misuse DC, one can indeed say that cross docking has the benefits listed in the article already cited.

But the article also says that implementing cross docking requires investment and commitment from the teams. In other words, if you have already implemented it, it may be more difficult to get out of this permanent situation of out-of-stocks in stores and excess inventory.

I'm going to speculate as to why someone came up with cross docking. I guess it was mimicking the hub-and-spoke system of passenger flights, where it is much more efficient for the airline to make a stopover than to fly direct between all of its destinations. In effect, if I have multiple destinations that can all also be origins, but every day a different number of passengers wants to go from one place to another, the most efficient thing is to bring the passengers to a hub, where all the passengers headed to one destination from various origins are consolidated onto a few flights. This is, again, statistical aggregation.

That flight system would be even more efficient if passengers agreed to wait a week in a hotel at the hub, but I'm afraid none would. That is why the hub is not a DC that accumulates inventory. But products do not complain, and in their case we can apply what I explained in the previous paragraphs.

Conclusions

Again, a reality check helps you understand: go into a store with a shopping list (including products from that store, of course) and see how many times you find everything. If cross docking served any purpose, you would find what you need more times and not see so much inventory piling up, to the point where several of those products expire or become obsolete.

Many of the industry's "best practices", like much academic program content, are wrong. Going back to basics allows us to better understand our business. Cross docking promises to reduce costs, but let me ask you a question: what is your company in business for, to make money or to save money? The good news is that doing the right thing is simpler and much more profitable.

Refutation of line balancing

On June 16, 2019, an industrial engineering portal published Balanceo de Líneas (Line Balancing), quoting Dr. Eliyahu Goldratt: "An hour lost in the bottleneck is an hour lost in the whole system". But the article fails to analyze lines as systems.

The conclusion of that article is that balancing manufacturing or assembly lines reduces unit costs, and it further states: "Line balancing is one of the most important tools for production management, since a balanced manufacturing line depends on the optimization of certain variables that affect the productivity of a process, (…)"

In this article I will start the exposition precisely from the phrase of Dr. Goldratt, who did various experiments to show that balancing the capacities in a line reduces the productivity of the system, increasing the cost of production.

Manufacturing or assembly lines as systems

A system is a set of interdependent elements with a purpose. A manufacturing or assembly line conforms to this definition: each workstation is dependent on another and together they have the purpose of creating a product from raw material.

One of the main characteristics of a system is that it requires the synchronization of all the parts for the result to be produced. In this sense, the production of a product is an emergent result of the system as a whole. None of the parts is capable of producing it by itself, not even a subset of them. This is easy to demonstrate: if a subset could, that subset would be our system and the rest would be superfluous.

In this sense, we need all parts to generate the product. This was obvious; what is not so obvious is understanding how we achieve maximum productivity from a system.

In reality there is variability

To make the demonstration required to refute the aforementioned article, I will begin by establishing a fact of reality. The processing time of a unit on a workstation is a time within a range, it is not a specific number of minutes.

For example, when in a station we say that a product takes 2 minutes, we know that that is an average, but that it could be 1 minute or 5 minutes.

Regarding the process times, we know that they have a marked asymmetry to the right. See the following graphic:

Measuring process times at a workstation, for identical parts, over a large number of cases, we obtain a table of results with a large dispersion.

It could never be processed in 0 seconds or less, which was obvious. A few cases finished in 50-70 seconds, many cases fell between 70 and 120 seconds, but quite a few took between 120 and 250 seconds. In fact, half of all cases fall in that last range.

In my experience of more than fifteen years, this graph represents the reality of the vast majority of processes in all types of factories.

Although I know there is a difference between the median and the mean (or average), I will use the average for simplicity. We can then say that a process has a 50% probability of running at its average speed or faster. I will use this in the demonstration that follows.

Effect of process dependency

Variability affects all resources. We are going to distinguish the variability due to common causes from that which has special causes. Special causes are all those that are easily identifiable, for example, a power outage.

Common causes are many and varied, and for all practical purposes, the causes that stop one process do not necessarily affect other processes. Therefore, the productivity of one process in one instant may be above its average while that of another is below it.

Let us now consider a generic line (manufacturing or assembly):

We have a flow direction and we know that a resource cannot process anything if it has not received material from the previous one.

Let's design our process to produce 10 units per hour. After a while, the process is up and running and all resources are processing what they can.

Let's see what happens if we balance the line, that is, all resources have an average capacity of 10 u / h (or an average time of 6 minutes per unit).

We already know that the probability of producing 10 u / h or more is 50%. Let's look at what happens in the first two resources in the first three hours:

Periods      | Resource 1 | Resource 2 | Total production
First hour   | 7 u/h      | 15 u/h     | 7 u/h
Second hour  | 14 u/h     | 6 u/h      | 6 u/h
Third hour   | 9 u/h      | 9 u/h      | 9 u/h

Even though each resource averages 10 u/h, when we chain them together, because their capacities are not synchronized, what Dr. Goldratt said holds: the system moves at the rhythm of the slowest.
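The little table above can be reproduced with a few lines: resource 2 processes at most its own hourly capacity, and never more than resource 1 has delivered so far.

```python
# Reproducing the example: hourly outputs of two chained resources.
r1_output = [7, 14, 9]     # what resource 1 produces each hour
r2_capacity = [15, 6, 9]   # what resource 2 could process each hour

wip = 0                    # material waiting between the two resources
finished = []
for produced, capacity in zip(r1_output, r2_capacity):
    wip += produced                # resource 1 hands over its output
    shipped = min(capacity, wip)   # resource 2 is limited by both
    wip -= shipped
    finished.append(shipped)

# finished == [7, 6, 9]: 22 units, not the 30 the averages promise
```

Both resources average close to 10 u/h in isolation, yet the chain delivers 22 units in three hours instead of 30: the dependency, not the averages, sets the pace.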

Didn't we already know this? Sure you do, but capacity balancing, which is one of the techniques taught in many college courses, ignores the systemic effect of the combination.

By extending this effect to the rest of the resources, we can easily see that the probability that a balanced line will produce at the average design speed is approximately 0.5^n, where n is the number of resources chained in the line. In this case, with 7 resources, the probability of achieving 10 u/h of finished product is ~0.8%; that is, in a year of 300 working days, only about 2 days would reach the line's design productivity.

The better the balance, the worse the performance.

What happens to the cost when balancing the lines?

From the conclusion above, we now know that we will get ~20% fewer finished products than the original plan (or worse), so all the production cost of operating the line (net of raw material) will be divided among fewer products, raising the real unit cost by 25% (or more).

So, to reduce the total unit cost (the only one that is relevant) it is necessary to ensure that the system maximizes its productivity as a whole, and not the productivity of each of the resources.

What if it is an assembly line?

Normally one sees factories where resources are isolated from each other and material (WIP, for work in progress) must be moved from one center to another. But with the idea of speeding up the process, and following the model attributed to Ford, some lines are arranged so that there is no room to accumulate WIP and the entire line advances at the same time.

Now that you know what happens when balancing a line, take a look at what happens with an assembly line, even if it is not balanced!

Unable to accumulate WIP between resources, the entire line advances at the slowest pace. But which one is the slowest? Let's look at the graph again:

Towards the right side we have "the tail" of the distribution, and we have already seen that it is not at all improbable that a resource is in that productive cycle.

Unlike the general case, where the little WIP that can be accumulated does allow some resources to cushion somewhat the effect of dependency, in the case of the assembly line this is not possible. In this case, the entire line moves at the rate of the resource that is operating in its tail.

If one has 7 coupled resources, we already know that the probability that at least one is operating in the tail is about 99% (1 - 0.5^7). If the line has a few dozen stations (as assembly lines for bulky products such as automobiles do), it is virtually certain to be operating well below its averages.

On an assembly line it becomes extremely important to reduce variability, leading the company into a flood of improvement projects that cost a lot of time and money. And it is not possible to eliminate the tails of the distributions either. Improving productivity this way seems a Sisyphean task.

Even Elon Musk has regretted so much automation on Tesla's line, although I am not sure whether he noticed the effect I have just described or has other reasons; in any case, he sees results below what was planned.

The solution is to find the resource that is the constraint of the entire system (has the smallest average capacity) and isolate it, allowing WIP to accumulate before and after. This will raise the overall productivity of the system quite a bit. And yes, I realize the investment that is required to modify the layout, but with an increase of only 10% in total productivity, I am sure that this project is profitable.

"If we don't balance the line, there is a lot of waste"

In our example, suppose that the third resource is our constraint, the one with an average capacity of 10 u / h, and the rest have 20 u / h or more.

First, let me clarify that double is not an exaggeration. The line's ability to recover from losses, in other words to absorb variability, depends on this extra capacity. If the excess over the constraint is small, we still have a problem with variability. In my experience, this extra capacity, called protective capacity in Theory of Constraints (TOC) jargon, must exceed 30-50%, and sometimes more.

So we see that if we feed the line with all the material the first resource is capable of processing, in a short time we have an intolerable accumulation of WIP in the corridors of the plant, because the constraint is not capable of draining it. In fact, one gets the sensation that the bottleneck is moving around inside the plant. That wandering bottleneck is a symptom of the opposite: that there is excess capacity. And when there is excess WIP, there are several effects by which capacity is wasted, even at the constraint. Here the phrase applies: "an hour lost at the bottleneck is an hour lost for the whole system".

We must control the amount of WIP to ensure that the constraint always has work but that it is not so much that it wastes capacity. In another article I will delve into how capacity is wasted with excess WIP.

This WIP control mechanism must release material to process at the rate dictated by the constraint, so all other resources will have idle times. But these idle times are not real wastes of capacity; they are actually waiting times for the system to synchronize to the rhythm of the constraint. In TOC jargon this is a buffer, which is the mechanism for achieving maximum productivity.
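The difference between pushing material in at the first resource's rate and releasing it at the constraint's rate can be sketched like this. The numbers (a 20 u/day feeder, a 10 u/day constraint fluctuating plus or minus 30%) are illustrative only, not a model of any specific plant:

```python
import random

random.seed(42)  # reproducible runs

def simulate(days, release_rate, constraint_avg=10):
    """Release `release_rate` units per day into the queue in front of a
    constraint that drains an average of `constraint_avg` units per day,
    fluctuating +/-30%. Returns the WIP left in the queue at the end."""
    wip = 0
    for _ in range(days):
        wip += release_rate
        wip -= min(wip, round(constraint_avg * random.uniform(0.7, 1.3)))
    return wip

# Push: release at the first resource's rate (20 u/day) -> WIP piles up.
print("push, WIP after 100 days:", simulate(100, 20))
# Rope: release at the constraint's rate (10 u/day) -> WIP stays bounded.
print("rope, WIP after 100 days:", simulate(100, 10))
```

The "rope" release keeps the queue in front of the constraint small but rarely empty, which is exactly the role of the buffer described above.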

That is why I have written that, many times, LEAN implementations, understood as waste reduction, are the enemy of productivity.

In addition, an operator receives the same salary whether he operates a machine of greater or lesser capacity, so the payroll expense does not change if one has higher-capacity machines. Look at machine prices and you will see that doubling the capacity does not cost double the investment.

All the idle times generated this way are not waste, and they are excellent opportunities to practice 5S or to do preventive maintenance.

Now may be a good time to reformulate the productivity measurement: if production orders cover only what is needed and no more, then an increase in "idle" time is a sign that productivity has increased.

"I don't know, something doesn't add up ..."

To demonstrate the effect of line balancing, I suggest an experiment that you can do at home or with your work team.

Get 100 tokens and 7 dice, and build a production line with 7 stations. Each station is assigned a die, which will be our simulation of variability. Note that the die is not asymmetric, since it is uniform between 1 and 6, although it may exaggerate the spread; even so, it is a good simulator of variable capacity.

To simulate one workday, each station rolls its die and produces at most the number it rolled. If you roll a 5 but have only three tokens, you can pass only those 3 tokens to the next station. Tokens received in the same turn cannot be used until the next turn. The first resource "produces" whatever its die rolled, because it has an unlimited supply of tokens.

What is the average capacity of a die? It is the average of all its numbers. The sum 1 + 2 + 3 + ... + 6 = 21, and that divided by six gives 3.5. So each station has an average capacity of 3.5 units/day. In 20 days the line should be able to make 70 units.

To start the experiment in steady state, distribute 4 tokens to each station, and then run 20 days of production.

Compare what you got to the expected 70 units.
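If you cannot gather dice and tokens, the same experiment can be sketched in a few lines of Python. The rules are exactly the ones described above; averaging over many simulated runs shows the systematic shortfall:

```python
import random

def simulate_line(stations=7, days=20, initial_buffer=4, seed=None):
    """Dice-game line: station 1 draws from an unlimited supply; every
    other station moves at most min(die roll, tokens available at the
    start of the day). Tokens received during a day cannot be moved
    until the next day. Returns total finished units."""
    rng = random.Random(seed)
    # buffers[i] = tokens waiting in front of stations 2..N
    buffers = [initial_buffer] * (stations - 1)
    finished = 0
    for _ in range(days):
        rolls = [rng.randint(1, 6) for _ in range(stations)]
        incoming = rolls[0]  # station 1: unlimited raw material
        for i in range(stations - 1):
            outgoing = min(rolls[i + 1], buffers[i])  # start-of-day stock only
            buffers[i] += incoming - outgoing
            incoming = outgoing
        finished += incoming  # output of the last station
    return finished

# Average over many runs: consistently below the naive 20 * 3.5 = 70.
runs = [simulate_line(seed=s) for s in range(1000)]
print("average output over 20 days:", sum(runs) / len(runs))
```

The average lands well under 70 units, because each station's output is capped by both its roll and the WIP in front of it: dependency plus variability.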

This experiment saves hours of discussion and mathematical proofs, and it is much more fun. Variations can then be made to demonstrate other things, such as that moving people from one place to another only increases variability without adding capacity.

Conclusion

Balancing the production line only reduces the line's total capacity. Worse still, the actual capacity turns out to be considerably lower than planned, so the plant will deliver a large proportion of its orders late, in addition to the obvious effect on invoiced sales.

Delivering late isn't just bad service - it's selling less


When you promise a delivery time, you have to take production capacity into account. That means you have planned to deliver, invoice, and collect a number of orders within the promised timeframes.

When we deliver an order late, it means that the time we had planned to spend on that order was spent on another one, so at the end of the month, with several late orders, we have delivered fewer orders than planned. This also delays billing and collection.

This in itself is already a reduction in budgeted sales, so achieving 100% OTIF (On Time in Full), that is, managing to deliver on time and always complete, is the same as meeting the sales plan.

Lost time is never made up again. This is the effect on present sales.

I will also remember here the consequences on future sales. Late deliveries create so many problems for our customers that sooner or later our reputation suffers.

I end this brief insight by repeating a quote I saw today:

Thankful to Jorge Arias Galeas for sharing it on LinkedIn.

Yet another reason to achieve 100% OTIF.

Each company needs its adaptation


A couple of weeks ago we started talking with a furniture factory. I will not give more details yet because we do not have permission to release more information.

However, with a single conversation we knew immediately that this factory is suffering from late delivery of its orders, and from there we made a series of guesses.

The most frequent case is that these factories do not use the time buffer concept to control work in process (WIP), and that is why their capacity fluctuates, making it impossible to estimate delivery dates with sufficient precision. (More details about this concept in https://otif100.com/the-3-keys).

However, in a second conversation, now with the production manager, we learned very valuable things. It was no surprise that the constraint of the process, which consists of four manufacturing steps, was not identified. And from all that conversation, our team deduced where the constraint should be, which in this case turned out to be the second resource.

But that was the first step. The analysis also told us that the first resource could become a bottleneck, but it is very expensive to increase its capacity, because it requires another cutting machine.

Recalling the previous article, about buffers as synchronizers, we designed the operating model for this factory. It turned out to be a combination of a time buffer and a stock buffer that together secure a capacity buffer.

The inventory buffer I am referring to is not just traditional raw material inventory. In this case it is an inventory of pre-cut pieces, for a set of parts that repeat across several models. The supplier charges very little extra for cutting to size, so this inventory is the buffer that creates the capacity buffer at the cutting step.

The system will look like this: customer orders consume parts already cut from inventory (and replenished with the TOC method), and custom parts (by color or size or shape) that need to be cut on site. The second resource processes both types of pieces, and the last two resources, with extra capacity, finish the furniture, maintaining a low WIP at all times.

The concepts are always the same, but they must be adapted to each company, as a tailor does with a made-to-measure suit.

This week we will start with this new design. I hope to report the good results in another article shortly. For now it's just theory ... but it's a good theory!

Buffers: Flow Synchronizers


I heard from Peter Senge that before he was 20, he was almost obsessed with interdependence. And when investigating more about systems thinking, all the authors agree on the holistic vision, where the whole is more than the sum of its parts.

Systems are not just groups of elements; they are also the interactions between those elements. What a system produces is generated as a flow through these interactions, as an emergent property of the system as a whole, and it cannot be obtained in any other way, even if we have all the parts available but not interacting.

In his book The Fifth Discipline, Senge says that the fifth discipline is systems thinking. This is very relevant because companies are systems; therefore, managers should make accelerating flow their first consideration: the flow of materials or services to customers, the bi-directional flow of information, and the flow of money from customers.

Knowing this, the next step is to ask yourself how to achieve better flow. In my book Synchronization, I say that the main thing is to increase the synchronization between the different flows that occur within the company. In that book I explain that systemic contradictions reduce synchronization and show how they can be methodically eliminated. But there is another aspect of synchronization that I want to talk about here.

Let's define the word first. Synchronizing is making two or more things happen at the same time.

In complex systems such as companies, several flows occur at once, and we must synchronize them to achieve production. What if one flow has a different rhythm than another? If one flow has cycles of days while the other has cycles of hours, how do we ensure that what one needs is delivered by the other at the right time?

Thinking about this issue of different rhythms, I thought of the case of a field that needs irrigation. There are rainy days, when the field receives rain and needs no more water. On the days when it does not rain, we could use the water from the stream that forms when it rains. But on rainy days we do not need the stream, and it is precisely on those days that the stream flows; when it does not rain, the stream we need does not exist.

How could we synchronize the stream's water with the non-rainy days? In other words, how can we synchronize irrigation when the supply of rain is intermittent but the need is continuous? The solution is well known: a reservoir that collects water on rainy days so it can be used for irrigation on the days without rain.

The reservoir decoupled the two flows and allowed us to control the timing between two flows of different rhythms.
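The reservoir logic can be sketched in code. All the figures here (rain frequency, daily need, starting level, reservoir size) are invented for illustration:

```python
import random

def simulate_irrigation(days=365, capacity=float("inf"), start=100, seed=7):
    """Count the days the field goes short of water. `capacity` is the
    reservoir size; capacity=0 means no reservoir at all."""
    rng = random.Random(seed)
    level = min(start, capacity)
    need, shortfall_days = 10, 0
    for _ in range(days):
        rain = rng.choice([0, 0, 0, 40])  # rains about 1 day in 4
        if rain:
            level = min(level + rain, capacity)  # collect the stream
        else:
            drawn = min(need, level)             # irrigate from the reservoir
            level -= drawn
            if drawn < need:
                shortfall_days += 1
    return shortfall_days

# Without a reservoir, every dry day is a shortfall; with one, far fewer.
print("no reservoir:", simulate_irrigation(capacity=0))
print("reservoir   :", simulate_irrigation(capacity=500))
```

The reservoir does nothing to the average supply or the average need; it only decouples their rhythms, which is exactly what a buffer does between two flows in a company.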

In companies we have demand, production, supply and cash flows. And each of those flows has different rhythms. The way to achieve synchronization is with reservoirs. In Theory of Constraints (TOC) we call them buffers.

Buffers can be of physical inventory, they can be of time or they can be of installed capacity, or a combination of those possibilities.

For example, if you manufacture mass consumer products, the way to synchronize demand with production is with inventory for immediate delivery. Another example; If one manufactures to order, and receives orders that use several days of capacity, the way to synchronize production with sales is by giving delivery times.

Buffers interrupt flow, so it is desirable to have the minimum number necessary to ensure synchronization. It is not good to have many buffers; the guideline is to place them only between flows with very different rhythms.

TOC applications for managing flows are based on buffers and have proven to be the most effective over decades of experience. And the methods include how to manage the size of the buffers based on changes in rhythms.

Do you have symptoms of a lack of synchronization in your company? Perhaps the remedy is to build and manage a few buffers within your processes.
