DESIGNING A NEW MATHEMATICAL MODEL BASED ON ABC ANALYSIS FOR INVENTORY CONTROL PROBLEM: A REAL CASE STUDY

In modern business, organizations that hold large numbers of inventory items do not find it economical to set policies for the management of individual inventory items. Managers thus need to classify these items according to their importance and assign each item to an appropriate class. The grouping and inventory control method available in traditional ABC analysis has several disadvantages. These shortcomings have led to the development of an optimization model in the present study to improve grouping and inventory control decisions in ABC analysis. The model simultaneously optimizes the existing business relationships among revenue, investment in inventory and customer satisfaction (through service levels) as well as a company's budget for inventory costs. In this paper, a mathematical model is presented to classify inventory items, taking into account significant profit and cost reduction indices. The model has an objective function that maximizes the net profit of the items in stock. Constraints such as a limited budget and inventory shortages are also taken into account. The mathematical model is solved by the Benders decomposition and Lagrange relaxation algorithms, and the results of the two solution methods are compared. The TOPSIS technique and statistical tests are used to evaluate the proposed solution methods against one another and to choose the best one. Subsequently, several sensitivity analyses are performed on the model, which help inventory control managers determine the effect of inventory management costs on optimal decision making and item grouping. Finally, based on the results of evaluating the efficiency of the proposed model and solution method, a real-world case study is conducted in the ceramic tile industry. From the proposed approach, several managerial insights are gained on optimal inventory grouping and item control strategies. Received April 20, 2021. Accepted July 11, 2021.


Introduction
In today's industrialized world, given the intense industrial competition, it is crucial to pay proper attention to inventory control in all types of organizations, especially manufacturing ones. In recent years, a large percentage of the total capital of organizations has been tied up in their inventories. In both developed and developing countries, the capital held in inventories at any given time is very high. In the United States, for example, this amount is $50 billion (World military spending tops $1T, 2004) for defense projects and more than $95 billion for private companies [45]. Accordingly, it can be said that inventory planning, holding and control are nowadays significant issues for many organizations in every sector of industry and the economy. The problem is not specific to third-world countries; even advanced industrialized countries have to deal with inventory-related cost problems. Lack of proper inventory control can cost an organization dearly. On the one hand, the lack of one type of item may halt the production process; in this case, failure to deliver the product on time results in a loss of customers and, consequently, a loss of markets, which can impose enormous costs on the organization. On the other hand, excessive inventory holding increases holding costs, which has a negative impact on the company's profitability. Inventories therefore play a crucial role in any production or service system; if properly controlled, they can help balance the flow of operations in organizations [4,41].
In the industrial world today, the number of inventory items is increasing significantly as customer demand for various products grows. Creating an appropriate inventory control system should therefore be a priority for all organizations in order to maintain their competitive advantage and respond rapidly and effectively to varied demands. A careful and cautious look at the inventory issue enables organizations to make the best use of their capabilities and attain their goals efficiently. So far, different models and methods have been proposed to classify inventories. Among them, ABC analysis is an extensively used approach for inventory planning and control [11]. It enables organizations to classify their inventories into meaningful categories. In general, this approach is based on the Pareto principle, known as the 80-20 rule: roughly 20% of the inventory items account for 80% of the total annual cost of the inventory system, while the remaining 80% of items account for only 20% of the costs. The result of such an analysis is that class A contains the few high-value items requiring the tightest control, class B requires moderate control, and class C, containing the many low-value items, requires the least control [35].
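As a minimal illustration of the 80-20 rule described above, the following Python sketch ranks items by annual dollar usage and assigns ABC classes by cumulative share of total usage. The SKU names, usage figures and cut-off points (80% and 95%) are illustrative assumptions, not data from this study.

```python
# A minimal sketch of Pareto-based ABC classification.
# Item names, usage values and cut-offs are illustrative only.

def abc_classify(usage, a_cut=0.8, b_cut=0.95):
    """Rank items by annual dollar usage and assign A/B/C classes
    by cumulative share of the total usage."""
    total = sum(usage.values())
    classes = {}
    cumulative = 0.0
    # iterate from highest to lowest dollar usage
    for item, value in sorted(usage.items(), key=lambda kv: -kv[1]):
        cumulative += value / total
        if cumulative <= a_cut:
            classes[item] = "A"
        elif cumulative <= b_cut:
            classes[item] = "B"
        else:
            classes[item] = "C"
    return classes

usage = {"SKU1": 50000, "SKU2": 30000, "SKU3": 9000,
         "SKU4": 6000, "SKU5": 3000, "SKU6": 2000}
print(abc_classify(usage))
```

With these numbers, the two highest-usage items jointly reach the 80% cut-off and form class A, mirroring the "few items, most of the cost" pattern the Pareto principle describes.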
In contrast to traditional methods, ABC analysis provides a variety of techniques and criteria for inventory classification, such as classification by the annual dollar usage of goods, in line with the first study in this field by Flores and Whybark [18]. Other methods have been presented in recent years for multi-criteria ABC inventory classification. In this regard, one may refer to the analytic hierarchy process (AHP), artificial intelligence techniques, statistical analysis, data envelopment analysis (DEA) [12], the weighted Euclidean distance, the standard criterion matrix model, the cluster analysis model, meta-heuristic algorithms, optimization algorithms, the ABC-fuzzy classification approach, and multiple criteria decision aiding (MCDA). As it turns out, none of the decomposition algorithms, such as Lagrange relaxation, Benders and Dantzig-Wolfe, has been applied to this problem. In the present study, however, they have been used and even compared.
The grouping and inventory control method available in traditional ABC analysis has several disadvantages. First, there are no specific guidelines in the literature for specifying the service level of each group [35]. Second, since grouping decisions are made independently of, and prior to, service level decisions, the interactions between the two are not captured, and neither decision can be optimal. Third, since the available budget is not taken into consideration until the last stage, there is no guarantee that the grouping or service level decisions made in the first two stages will remain feasible. As a result, the process often has to be repeated until decisions can be made that assign different service levels to the various groups of items. This process can be tedious when there are many items in stock, and it may even lead to infeasible solutions. These shortcomings have led to the development of an optimization model in the present study to improve grouping and inventory control decisions in ABC analysis. This model also helps inventory and operations managers optimize several items simultaneously, including (a) the number of inventory groups, (b) their service levels, and (c) the allocation of each item to each group under a limited budget. This paper is structured as follows: The second section reviews the existing literature in three streams. Section three describes the proposed model and explains the mathematical formulation, including its sets, parameters and decision variables. Section four presents methods for solving the mathematical model and provides some explanations. Section five compares the proposed solution methods, introduces the best one and offers a sensitivity analysis of the mathematical model. Section six provides an overview of the case study and the implementation of the mathematical model in this industry, including the results of the implementation.
Next, some managerial insights are presented in section seven. Finally, section eight recounts the conclusions of the study and provides some suggestions for future research.

Literature review
Based on the topic discussed in the present study, the literature is reviewed in three streams.

ABC analysis for the inventory control of stock
In general, the inventory of goods is an essential subject that involves both theoretical and practical considerations. It has been discussed for many years; there is still no complete agreement on certain inventory issues, and new views are put forward every day. Among the different definitions of inventory, the most inclusive seems to be the one given by the American Institute of Certified Public Accountants (AICPA). It postulates that an inventory of goods comprises tangible assets that are (a) held for sale in the ordinary course of business, (b) currently in the process of production for such sale, or (c) to be consumed in the production of goods or services to be made available for sale. According to this definition, an inventory of assets includes raw materials, semi-finished products, manufactured goods, and supplies. Because the inventory of goods is of great importance as a current asset of a business, the warehouse keeper must invariably identify the inventory items, classify them and enumerate them in the warehouse. It is necessary to balance the need for a product against its inventory. Given the variety of products in a company or store, and considering the number of times a product is delivered over one or more days, it is practically very hard to control the goods through numerous cards and records. So, it is necessary to strike a balance between the required goods and the available goods. The most important materials and goods stored in a warehouse, including the products constantly provided to meet the needs of customers, must be clearly accounted for in a precise inventory list. This is why some stores have to invest in inventory and incur costs for holding their goods. There are, however, certain factors that may disturb the regularity of inventories, such as the untimely supply of goods, discounts on bulk orders, precautionary storage, and reductions in the number of purchasing operations.
Consequently, in order to create a perfect inventory control system, different inventory items should be classified into meaningful categories based on appropriate indicators and criteria. So far, different models and methods have been presented in this regard. Among them, ABC analysis is the most widely used to plan and control inventories [11].
For the first time, in 1915, Ford Harris of Westinghouse developed a simple formula for the economic order quantity [16]. Since then, this simple formula has been extended by many researchers. It is also known as the Wilson formula because it was R.H. Wilson who promoted it as an integral part of an inventory control scheme. An article by Arrow et al. [7] provided an in-depth analysis of a simple inventory model, which was later followed by mathematicians such as Massart [33]. In recent years, extensive research has been done on inventory classification based on the multi-attribute ABC approach to improve inventory control; the study by López-Soto et al. [31] is a recent attempt in this direction. Chakravarty [10] proposed a dynamic model for the classification of inventories, in which inventories were categorized according to increases in unit demand and cost per unit. A main strength of this model was the minimization of the overall cost, but its significant weakness was the long runtime. Flores and Whybark [18] developed a two-character matrix approach to classify inventories and provide comprehensive control over them. Their approach classified the inventory items according to the standard ABC classification based on individual indexes, and two single-index groups were combined through a joint matrix. A major disadvantage of this approach was that its several indices made it too complicated and impractical. Cohen and Ernst [13] presented a model to classify inventories, using only the annual dollar usage index for this purpose. Guvenir and Erel [19] proposed a multi-characteristic genetic algorithm to classify inventories. In essence, this model determined the weights of the indexes when classifying inventories and showed better results than the analytic hierarchy process (AHP) method. Van Eijs et al. [46] researched the classification of inventory items.
They referred to two types of demand-side classification strategies: direct and indirect grouping. The indirect grouping strategy assumed a base order interval, with the order intervals of the items being multiples of it. Partovi and Burton [38] used the AHP to classify inventories; their classification took into account both qualitative and quantitative criteria.
In 2008, Tsai and Yeh [44] proposed a flexible particle swarm optimization (PSO) algorithm to classify inventories. They presented an optimization approach to inventory classification problems in which inventory items are categorized for multiple objectives, such as maximizing inventory turnover rates and minimizing costs. Their model addressed a limitation of other inventory classification models, namely that the number of groups must be identified before the inventories are classified. The advantage of their algorithm over other models is that it can determine the optimal number of classification groups while completing the inventory classification. Hadi-Vencheh [20] presented a generalized form of the NG model for the classification of inventories based on several indicators; this was a nonlinear programming model that determined a common set of weights for all inventory items. Yu [50] conducted a study comparing classification techniques based on artificial intelligence (AI) with traditional classification techniques. The results showed that AI-based techniques achieve greater accuracy in the classification of inventories. Statistical analysis also suggested that the SVM technique was more accurate for classification than the other two approaches (i.e., BPN and K-NN).
Inventory managers often classify inventories to control them better. The well-known ABC approach classifies inventory items into classes A, B, and C based on their usage and sales. Millstein et al. [35] presented an optimization formulation to improve the quality of inventory grouping under a limited budget. In the research by Kaabi et al. [25], TOPSIS was used to calculate the score of each inventory item, and a continuous variable neighborhood search (CVNS) algorithm was used to generate the criteria weights. Finally, the inventory was classified according to the weighted item scores.
In the literature, most existing classification models treat ABC inventory classification as a ranking problem: a set of inventory items is scored by performance and listed in descending order of scores. Douissa and Jabeur [14], in contrast, examined ABC inventory classification as an assignment problem, in which an inventory item is placed in the group of items with the most similar properties. Multiple criteria ABC analysis has also been used extensively in inventory control to help organizations allocate inventory items to several classes according to different evaluation criteria. Many solution methods have been suggested in the literature to address this problem. However, most of them are quite compensatory in their multi-criteria aggregation, meaning that an item with a bad score on one or more main criteria can still receive a good overall score because the other criteria compensate for it. It is, thus, necessary to consider non-compensatory multiple criteria ABC analysis. Liu et al. [30] proposed a new efficient classification method based on an outranking model to tackle this problem; they combined the simulated annealing algorithm with clustering analysis to search for an optimal classification. Recently, Mehdizadeh [34] introduced a new criterion for the ABC analysis of car spare parts classification. Rough set theory is used to extract robust knowledge from the ABC analysis: with it, the researcher derived rules and patterns from the stochastic information already obtained by the ABC analysis. These rules are extracted to predict future retailer demand, and the order quantity is then determined based on a periodic review approach. Moreover, Jesujoba and Adenike [24] investigated the ABC analysis inventory management practice and product quality of manufacturing companies at De United Foods Industries Limited. A population of 385 people, comprising all the staff of inventory-related departments, was considered, and a sample size of 196 was selected. A quantitative method using a questionnaire to gather data from the respondents was applied, and the research data were examined by means of regression analysis. The results showed that a strong, positive and significant relationship exists between ABC analysis and the product quality of De United Foods Industries Limited. In the paper by Abdolazimi et al. [5], a bi-objective mathematical model was suggested to enhance inventory grouping based on ABC analysis. The proposed model concurrently optimized the service level, the number of inventory groups, and the number of assigned items. To solve the model in small and large dimensions, two exact methods (LP-Metric and ε-constraint) and two meta-heuristic methods (NSGA-II and MOPSO) were used.

Lagrange and Benders algorithms in comparison
Optimization problems have different matrix structures based on the arrangement of the matrix blocks and the relationships among them [1]. Methods that exploit this particular matrix structure are usually more efficient and find a good solution to the problem in a reasonable time. In general, the structure of such optimization problems includes complicating constraints or complicating variables, which usually reflect the shared use of the problem blocks by one or more scarce resources. The Lagrange relaxation algorithm, Benders decomposition, Dantzig-Wolfe decomposition, and some other schemes can be used to solve such problems. In fact, to solve a problem using decomposition methods, it is first necessary to identify the problem structure; the decomposition methods are then applied to the blocks separately to achieve an overall solution to the problem. Many researchers have used these decomposition algorithms, some of whose works are presented below.
Multipurpose assembly lines usually exist in industries producing large-scale products (e.g., the car industry), where multiple workers are assigned to the same station to perform several tasks simultaneously on the same product. The existing mathematical formulations are only able to solve a few small instances, while larger ones are solved by heuristic or meta-heuristic methods that do not guarantee optimality. The article by Wang et al. [47] presents a new mixed-integer LP formulation with strong symmetry-breaking constraints and decomposes the original problem within a new Benders decomposition algorithm to solve large instances optimally. Li et al. [29] introduced cluster supply chains to mitigate potential operational risk through inter-chain collaboration. Due to the large amounts of data from actual operations and the complexity of the cluster supply chain structure, a parallel Lagrangian heuristic was presented to solve the resulting mixed-integer nonlinear program (MINLP). The Benders algorithm was also used for performance evaluation through comparisons.
In a study by Kovački et al. [27], the Lagrange relaxation approach was used to propose a new method for the dynamic reconfiguration of a distribution network (DRDN). The purpose of the DRDN was to determine the optimal topology (configuration) of a distribution network over a specified period. First, the "switch-to-switch path" method was used to model distribution networks and to formulate DRDN as a mixed-integer linear programming (MILP) problem. Then, the problem was solved using a two-step Lagrange method. In the first step, the Lagrangian dual problem, created by relaxing the switching performance constraints so that the problem became separable, was solved; this problem was much easier to solve than the original one. In the second step, a heuristic search was performed on the solution of the Lagrangian dual problem, yielding a suboptimal solution that was nevertheless feasible for the main problem. Finally, the proposed DRDN model was extended to a multi-objective formulation accounting for the impacts of network reliability and switching costs on the DRDN process. In a study by Rohaninejad et al. [40], an accelerated Benders decomposition algorithm was suggested to address the multi-echelon reliable capacitated facility location problem (ME-RCFLP). The goal was to offer a trade-off between total investment and system reliability on the one hand and operating costs on the other, while making facilities more reliable at a higher cost (increasing their capacity and reducing the likelihood of complete or partial failure).
In the real world, regulated electricity market operations, electricity price forecasting, profit-based unit commitment (PBUC) and optimal bidding strategies are significant issues. Among them, PBUC is a hybrid optimization problem. Sudhakar et al. [42] proposed a hybrid approach of Lagrange relaxation (LR) and differential evolution (DE) to solve the PBUC problem. In this approach, LR was used to solve the single-unit commitment problem, and the DE algorithm was applied to update the Lagrange multipliers. Recently, Mohebifard and Hajbabaie [36] formulated the problem of optimizing network traffic signal timing as a mixed-integer nonlinear program (MINLP) and devised a customized method to solve the problem with a tight optimality gap. In another recent study, Mardan et al. [32] presented a comprehensive formulation for multi-product, multi-period, multi-modal, two-objective green closed-loop supply chains. Minimizing the total cost and greenhouse gas emissions is the purpose of this model, which is met by deciding on the location of facilities, the amount of transportation and the inventory balance. The results show that the proposed solution approach can reduce the total cost by more than 13%.
In the research by Zetina et al. [51], two exact Benders decomposition-based algorithms are presented to solve a multi-commodity uncapacitated fixed-charge network design problem. In addition, Li and Jia [28] have studied the issue of order fulfillment in an e-commerce environment, where the e-tailer assigns orders to its delivery centers and sets routes for shipments from delivery centers to delivery stations. In their research, a mixed-integer program is proposed for the problem, its computational complexity is analyzed, and a Benders decomposition algorithm is developed to solve it. The computational performance of the proposed algorithm is also evaluated on problem instances created from the JD.com logistics network in Shanghai. To avoid the severe risks of contaminants entering a water distribution network, it is necessary to equip the network with monitoring sensors. Hooshmand et al. [21] have investigated the problem of sensor placement with identification criteria, assuming that a limited budget is available for sensor placement. The aim of the study is to minimize the number of vulnerable nodes with the same alarm pattern. First, the problem is expressed as a MIP formulation, assuming that the objective functions are ordered according to a specific prioritization. Then, using the underlying problem structure, an exact logic-based Benders decomposition algorithm is developed. The challenge is to provide a solution to multi-objective problems with nonlinear constraints on large networks.
Networks are made up of many nodes carrying millions of flows of a dynamic nature. Jaglarz et al. [23] have addressed this issue: a novel energy-aware, multi-objective, mixed-integer linear programming formulation is modeled and solved using the Lagrangian decomposition algorithm. The algorithm is improved with new ergodic sequences to enhance the quality of the primal solutions recovered from the dual problems. Finally, in the paper by Abdolazimi et al. [2], a bi-objective mixed-integer linear programming model was developed to minimize the overall cost and maximize the use of eco-friendly materials and clean technology. The paper evaluates exact (LP-metric, modified ε-constraint, and TH), heuristic (Lagrange relaxation algorithm), and meta-heuristic (MOPSO, NSGA-II, SPEA-II, and MOEA/D) approaches to solving the proposed model in both small and large sizes. The study provides an all-inclusive view of the importance of selecting an appropriate solution methodology based on the problem dimension, so as to obtain an optimal and accurate solution within a reasonable processing time.

Research gap
In the industrial world today, inventories serve as a vital element for all organizations. In recent decades, organizations have come to hold thousands of different types of inventory items, and inventory management has been a constant subject of discussion. Proper inventory control systems have become a significant challenge for all organizations, which creates a need for research in this area. The lack of an appropriate inventory control system creates many problems for organizations. First of all, they face inventory-related costs, such as holding, ordering and shortage costs. In some organizations, the shortage of an inventory item may hinder the production process, resulting in products not being delivered to customers on time and thereby increasing the cost of scarcity. In other cases, the organization may face excessive inventory growth, which increases inventory holding costs. Secondly, because the number of inventory items is increasing rapidly along with the growing customer demand for various products, organizations must maintain a rapid and effective response to their customers' demand; this is essential if they are to survive and preserve their competitive advantage. Finally, a large percentage of organizations' total capital is nowadays made up of their inventories. In developed and developing countries, the capital held in inventories at any given time is enormous. Therefore, research on appropriate inventory control systems is necessary for all organizations.
Given the above, the necessity of research on inventory control systems is undeniable. In addition to the problems mentioned, another problem with most organizations is that their inventory control system uses only the same control policy for all inventories, which is not economically viable. Also, the resources of all organizations are limited, and the application of a common control policy to all inventories cannot ensure the useful management of resources, which results in additional costs for the organization. Various studies have shown that, to overcome this problem, organizations must classify their inventories into meaningful categories. A commonly used approach in this area is ABC analysis. In most organizations, inventory classification is based on only one index; therefore, this classification is not able to meet all the needs of inventory control systems, and research is essential to identify different indicators that can be of use in inventory classification.
For the present research, several articles were reviewed in the field of grouping different inventories, especially warehouses. Unlike other inventory grouping models, this study focuses on optimizing the relationships among income, inventory stock, service level, and management costs to maximize profit. Table 1 presents the literature review in three sections. In the first one, the reviewed articles are labeled for their critical formulating features including performance criteria (single or multi-criteria), objective function type (single or multi-objective) and budget constraints. In the second section, the studies that have used decomposition algorithms (Lagrange and Benders) are reviewed to solve their models and compare them with each other, and even with other methods or algorithms. The last section deals with reviewing articles that report case studies. The model in this paper contributes to the existing literature by presenting a combination of new formulating features such as simultaneous identification of inventory groups, their service levels, allocation of items to groups, maximizing the profits of a company or industry and optimal allocation of limited funds to inventory items. It also enables purchasing and inventory managers to cope with inventory grouping and service levels with a limited budget. Moreover, it simultaneously optimizes the existing business relationships among revenue, investment in inventory and customer satisfaction (through service levels) as well as a company's budget for inventory costs. Considering the products in different packages, the model proposed in this study takes into account inventory control policies such as inventory shortages and holding costs. It is, thus, brought closer to reality.

Description of the mathematical model
Consider N items. Each item has an average monthly demand (d_it) and a standard deviation (σ_it), and the demand for each stock keeping unit (SKU) follows the normal distribution N(d_it, σ_it). The delivery time of each item is h_it, and π_it is the gross income of each item of goods, i.e., its selling price minus its purchase cost. To reduce overhead costs and simplify the inventory management process, the inventory manager has the task of classifying the goods into j groups and then setting a service level for each group. Thus, each group j in the stock has a service level β_j. The holding cost of each SKU is e_it. Clearly, a 99.99% service level for all SKUs would yield the highest revenue but is practically infeasible, as it also involves a high cost, because the model has a fixed and limited total budget D for storing products in stock. The inventory decision maker should allocate D to the SKUs optimally so as to maximize the total net profit. When demand is normally distributed, the inventory level each SKU needs in order to reach β_j (in group j) can be calculated by the standard method through equation (3.1) as follows:

V_it = d_it h_it + Z_j σ_it √(h_it),    (3.1)

where Z_j is the value of Z corresponding to β_j in the standard normal distribution. In general, the inventory level in equation (3.1) can be negative in the case of (a) a negative Z_j (i.e., β_j less than 50%), (b) a large standard deviation (high demand variability) and (c) a long delivery time. From a cost management point of view, θ_j is the cost of maintaining and managing each inventory group; this cost may also include purchase and executive costs for each group. This paper assumes that θ_j is variable and a function of the number of inventory groups. It is also assumed that the warehouse faces shortages, so there is a shortage cost (CL_ijt).
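As a concrete illustration of equation (3.1), the following Python sketch computes the target inventory level of one SKU from its demand parameters and service level; the numbers are illustrative assumptions, not data from the case study. The Z-value is obtained from the inverse CDF of the standard normal distribution.

```python
from statistics import NormalDist

def inventory_level(d, sigma, lead_time, beta):
    """Inventory level per equation (3.1): cycle stock d*h plus
    safety stock Z*sigma*sqrt(h) for service level beta."""
    z = NormalDist().inv_cdf(beta)  # Z-value corresponding to beta
    return d * lead_time + z * sigma * lead_time ** 0.5

# Illustrative SKU: monthly demand 100, sigma 20, one-month lead
# time, 95% service level.
print(round(inventory_level(100, 20, 1.0, 0.95), 1))
```

Consistent with the remark above, a service level below 50% gives a negative Z-value, and with a large σ_it or long lead time the computed level can become negative.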
In the inventory optimization problem of this paper, the manager simultaneously decides on two issues: (a) selecting the number of inventory groups (with the corresponding service levels) and (b) assigning each SKU to a suitable group. These decisions should be made in such a way that the total holding and overhead costs of inventory management do not exceed the limited budget.
According to the previous description, the presented mathematical formulation is a kind of MILP. The sets, parameters and variables of this mathematical formulation are as follows:

β_j   Service level associated with inventory group j
Z_j   Z-value associated with the service level β_j of inventory group j
O_it   Fixed order cost for SKU i from the central stock to supplier s in period t
CL_ijt   Fixed cost of shortage for SKU i at inventory group j in period t

Decision variables
- V_it: inventory level of SKU i in the central stock in period t
- La_ijt: amount of shortage for SKU i at inventory group j in period t
- X_ijt: 1 if SKU i is assigned to group j in period t, and 0 otherwise
- Y_jt: 1 if inventory group j is selected in period t, and 0 otherwise

The model maximizes objective function (3.2) subject to constraints (3.3)-(3.8). Objective function (3.2) maximizes the total net profit after costs are deducted. Herein, the service level acts as a fixed fill rate per unit of time for the demand: if item i falls into inventory group j with service level β_j, then d_it β_j is the average warehouse demand that can be fulfilled for that SKU. The first term of objective function (3.2) is the total gross profit of all the SKUs. The second term is the variable overhead cost of managing the inventory groups in the warehouse, and the third term is the cost of inventory shortage at the end of each period if the warehouse faces a lack of inventory. Constraint (3.3) assigns each SKU to at most one group. It is not possible to assign any item to any group because, for each SKU i and group j, if δ_ijt > 0, item i cannot be assigned to group j, i.e., X_ijt = 0 [35]. Constraint (3.4) states that groups must be selected before SKUs are assigned to them; i.e., no SKU can be assigned to group j if group j is not selected. Constraints (3.5) calculate the amount of inventory based on equation (3.1), to which the shortage is added. Constraint (3.6) guarantees that the cost of holding the inventory in the warehouse does not exceed the fixed total budget. Constraint (3.7) indicates that the continuous variables are nonnegative, and constraint (3.8) specifies the binary variables of the model.
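To make the structure of the model concrete, the following sketch solves a deliberately tiny instance of the grouping problem by brute force. All parameter values (pi_, d, sigma, h, e, beta, theta, D) are hypothetical, and the group overhead is simplified to a fixed cost per opened group; this illustrates the objective and the assignment and budget constraints described above, not the paper's CPLEX model:

```python
from itertools import product
from statistics import NormalDist

# Tiny illustrative instance: 3 SKUs, 2 candidate groups, one period.
pi_   = [12.0, 8.0, 5.0]   # gross profit per unit of each SKU
d     = [100, 80, 60]      # mean demand
sigma = [20, 25, 15]       # demand standard deviation
h     = [0.5, 0.5, 0.5]    # lead times
e     = [0.4, 0.3, 0.2]    # holding cost per unit held
beta  = [0.95, 0.80]       # service level of each candidate group
theta = [30.0, 20.0]       # overhead cost of opening each group
D     = 120.0              # budget for holding + overhead costs

z = [NormalDist().inv_cdf(b) for b in beta]

def stock(i, j):           # inventory level, as in equation (3.1)
    return d[i] * h[i] + z[j] * sigma[i] * h[i] ** 0.5

best = (float("-inf"), None)
# Each SKU is assigned to group 0, group 1, or left unassigned (None),
# mirroring constraint (3.3): at most one group per SKU.
for assign in product([None, 0, 1], repeat=3):
    opened = {j for j in assign if j is not None}
    hold = sum(e[i] * stock(i, j) for i, j in enumerate(assign) if j is not None)
    cost = hold + sum(theta[j] for j in opened)
    if cost > D:           # budget constraint, as in (3.6)
        continue
    revenue = sum(pi_[i] * d[i] * beta[j]
                  for i, j in enumerate(assign) if j is not None)
    if revenue - cost > best[0]:
        best = (revenue - cost, assign)
```

On this instance, opening only the 95% group and assigning all three SKUs to it turns out to be optimal, since a second group's overhead outweighs its holding-cost savings.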

Proposed decomposition algorithms
In many mathematical models, the computational complexity grows exponentially with the problem size, so that exact solutions cannot be obtained in a reasonable time [2,3]. Accordingly, researchers have proposed various methods that seek approximate, near-optimal solutions. These methods are generally divided into two categories: heuristic and metaheuristic algorithms. Decomposition algorithms are among the heuristic approaches that simplify complex mathematical models in order to reach an approximate answer in a reasonable time. Their numerous applications have led to their utilization in various optimization problems. Studies by Yolmeh and Saif [49], Wang et al. [48], Naderi et al. [37], and Aydin and Taşkin [8] are examples of the application of these algorithms to supply chain problems.
This section presents the solutions to the proposed model. In this regard, references are made to Lagrange and Benders decomposition algorithms. Then, they are compared so as to select the best solution to the model. As mentioned before, decomposition solutions are intended to simplify the mathematical model proposed in this research. These solutions are introduced in the following subsections.

Lagrange relaxation algorithm
The Lagrange relaxation algorithm is a well-known technique that uses the Lagrange theorem to simplify complex mathematical models in order to obtain an approximate solution within a reasonable time. The method has been used in various optimization problems; studies by Diabat et al. [15], Kang and Kim [26] and Ahmadi-Javid and Hoseinpour [6] are examples of its application. The mathematical approach of the method is described below. Consider the mathematical model presented in equation (4.1):

min c^T x
subject to
Ax ≤ b, x ∈ X. (4.1)

The general approach of the Lagrange relaxation algorithm is to relax the complicating constraints of the mathematical model and add them to the objective function using Lagrangian multipliers. First, the problem constraints are relaxed; then, to account for their effect on the problem, penalty terms for violating the relaxed constraints are added to the objective. Thus, the relaxed form of the model is as follows:

min c^T x + µ^T (Ax − b), x ∈ X, (4.2)

where µ^T is the vector of Lagrangian multipliers. The Lagrangian function is L(µ) = c^T x + µ^T (Ax − b). In most cases, the solution of the relaxed model is infeasible for the original model. This infeasible solution provides a lower bound (in minimization problems) or an upper bound (in maximization problems) for the original model. The Lagrangian relaxation algorithm then repairs the solution into a feasible one by heuristic procedures. A feasible solution, in turn, provides an upper bound (in minimization problems) or a lower bound (in maximization problems) for the original model. Since the optimal solution of the original model lies between these bounds, the algorithm seeks to close this gap and thereby reach a near-optimal solution.
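The bounding behaviour described above can be seen on a toy problem. In this hedged sketch (all data invented), the complicating constraint Ax ≤ b is dualised and L(µ) is evaluated for several multipliers; each value is a lower bound on the true optimum, as weak duality guarantees:

```python
# Toy instance of (4.1)-(4.2): minimise c^T x over x in X = {0,1}^3
# subject to the complicating constraint A x <= b, dualised with mu.
from itertools import product

c = [-4, -3, -5]
A = [2, 3, 1]          # single row of A
b = 3
X = list(product([0, 1], repeat=3))

def lagrangian(mu):
    # L(mu) = min over x in X of c^T x + mu * (A x - b)
    return min(sum(ci * xi for ci, xi in zip(c, x))
               + mu * (sum(ai * xi for ai, xi in zip(A, x)) - b)
               for x in X)

# True optimum, enumerating only the feasible points
opt = min(sum(ci * xi for ci, xi in zip(c, x))
          for x in X if sum(ai * xi for ai, xi in zip(A, x)) <= b)

# Every L(mu) with mu >= 0 lower-bounds the optimum (weak duality)
bounds = [lagrangian(mu) for mu in (0.0, 0.5, 1.0, 2.0)]
```

Here the optimum is −9, L(0) = −12 is a loose bound, and µ = 1 already closes the gap, i.e., L(1) = −9.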
Correct selection of the complicating constraints to relax is one of the crucial parts of the Lagrange algorithm and has a direct effect on its performance. In this paper, constraint (3.5) is chosen for relaxation due to the complexity it creates in the mathematical model. By relaxing this constraint, the proposed model is transformed as follows, with constraints (3.3)-(3.8), except constraint (3.5), retained in the formulation.
In the formulation above, u is the Lagrange multiplier and is free in sign. The Lagrange relaxation algorithm starts with a constant value of this multiplier, but the value must then be updated in each iteration of the algorithm. Various methods have been proposed to update the Lagrangian multipliers, the most famous of which is the sub-gradient method; it is used in this paper as a well-known and popular way of solving Lagrange relaxation problems [17]. Based on the sub-gradient approach, the multiplier in iteration c + 1 is calculated by equation (4.4) from the multiplier in iteration c and the sub-gradient g_c (the violation of the relaxed constraint), scaled by the step size π_c of equation (4.5):

u_{c+1} = u_c + π_c g_c, (4.4)
π_c = v_c (BUB − LB_c) / ||g_c||^2, (4.5)

where BUB is the best upper bound calculated up to iteration c and LB_c is the lower bound of the problem in iteration c. The coefficient v_c is usually between 0 and 2. Furthermore, LB is initialized from a feasible solution of the problem, which can be obtained with other solution methods such as CPLEX, LP-metric, ε-constraint, and the like. In this study, LB has been obtained from the CPLEX solver in the GAMS software.
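A compact sub-gradient loop in the spirit of equations (4.4)-(4.5) can be run on a toy relaxation (all data hypothetical). Note one deliberate difference: here the multiplier is projected to be nonnegative because the toy constraint is an inequality, whereas the paper's relaxed constraint (3.5) leaves u free in sign:

```python
from itertools import product

c, A, b = [-4, -3, -5], [2, 3, 1], 3   # toy data: min c^T x, A x <= b, x binary
X = list(product([0, 1], repeat=3))

def solve_relaxed(mu):
    # argmin over x in X of c^T x + mu * (A x - b)
    return min(X, key=lambda x: sum((ci + mu * ai) * xi
                                    for ci, ai, xi in zip(c, A, x)))

BUB = 0.0                 # objective of a known feasible point (x = 0)
mu, v, best_LB = 0.0, 1.0, float("-inf")
for _ in range(100):
    x = solve_relaxed(mu)
    Ax = sum(ai * xi for ai, xi in zip(A, x))
    LB = sum(ci * xi for ci, xi in zip(c, x)) + mu * (Ax - b)  # lower bound
    if LB > best_LB:
        best_LB = LB
    else:
        v *= 0.5          # shrink v_c when the bound stops improving
    g = Ax - b            # sub-gradient of the dual function at mu
    if g == 0:
        break
    step = v * (BUB - LB) / (g * g)    # step size, cf. equation (4.5)
    mu = max(0.0, mu + step * g)       # multiplier update, cf. equation (4.4)
```

Halving v_c when the bound stagnates is the usual practical safeguard against the oscillation that a constant step coefficient can cause.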

Benders decomposition algorithm
The Benders decomposition algorithm was proposed by Benders [9] to solve complex mixed-integer problems. It is an exact method based on problem decomposition and operates on two main parts: a master problem (MP) and a sub-problem (SP). It exploits the relationship between the sub-problem and its dual. As optimality and feasibility cutting planes are added, the upper and lower bounds converge until an optimal solution is reached. In each iteration of the algorithm, an extreme point is obtained from the sub-problem, and the solution space is cut at that point. The optimal solution is obtained when the upper and lower bounds coincide or become sufficiently close. The Benders flowchart is shown in Figure 1.
The first step is to standardize the problem, so the original problem is rewritten in the following standard form. As can be seen, to conduct the standardization, all the integer variables are moved to the right-hand side, leaving only continuous variables on the left.
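The master/sub-problem interplay can be illustrated on a one-dimensional toy problem (all data invented). The master is solved by enumeration and the LP sub-problem's dual multiplier is known analytically, so this is a sketch of the mechanics rather than the paper's model; the toy needs only optimality cuts, so no feasibility cuts appear:

```python
# Toy Benders loop: min 2y + 3x  s.t.  x >= 5 - y,  x >= 0,  y in {0,...,5}.
def subproblem(y):
    x = max(0.0, 5 - y)             # primal sub-problem solution
    u = 3.0 if 5 - y > 0 else 0.0   # optimal dual multiplier of x >= 5 - y
    return 3 * x, u

cuts, UB, LB = [], float("inf"), float("-inf")
for _ in range(20):
    # Master problem: enumerate y, minimising 2y + eta, where eta must
    # satisfy every optimality cut  eta >= u * (5 - y)  added so far.
    LB, y = min((2 * y + max([0.0] + [u * (5 - y) for u in cuts]), y)
                for y in range(6))
    sub_cost, u = subproblem(y)
    UB = min(UB, 2 * y + sub_cost)  # feasible objective -> upper bound
    if UB - LB < 1e-9:              # bounds have converged
        break
    cuts.append(u)                  # add an optimality cut to the master
```

On this toy, the first master solution y = 0 gives bounds LB = 0 and UB = 15; one optimality cut then drives the master to y = 5, where both bounds meet at 10.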

The master problem (MP) and the lower bound of the algorithm
In the proposed model, Y_jt is a complicating variable because it is binary. To apply the Benders algorithm, therefore, Y_jt is relaxed in the master problem (redefined as a nonnegative variable in the domain of real numbers), and the corresponding relaxation constraint is added to the MP, defined as equation (4.11): Min Z Subject to: (4.11)

Sub-problem (SP) and upper bound of the algorithm
After the variable Y_jt is relaxed and the MP model is solved, the relaxed variable becomes binary again in the sub-problem (SP), and the relaxation constraint (i.e., constraint (4.8)) is removed. Considering V1_it, V2_jt, V3_it and V4_t as the dual variables of constraints (4.7), (4.8), (4.9), and (4.10), respectively, the dual of this linear sub-problem can be solved; it generates the upper bound of the algorithm in each iteration and is written as follows:

Cutting planes of the algorithm
As indicated in Figure 1, in each iteration of the Benders algorithm, the upper and lower bounds of the problem converge as the two main types of cutting planes, feasibility cuts (constraint (4.17)) and optimality cuts (constraint (4.18)), are added to the MP. The former cut off regions of the solution space in which the sub-problem is infeasible, while the latter tighten the bound on the objective function. Given the dual solution of the linear sub-problem, the master problem that provides a lower bound in each iteration of the algorithm is obtained as follows: Min Z_y (4.16) subject to the generated cuts, where optcut is the dynamic set of optimality cuts and unbcut is the dynamic set of feasibility cuts. These sets are initially empty, and the corresponding cuts are added to them in the iterations in which they are generated.

Comparison of decomposition algorithms
The purpose of this section is to validate the proposed methods for solving the mathematical model presented to determine the optimal number of inventory groups in stock. Two decomposition algorithms, namely Lagrangean and Benders, are presented; the model is then solved with each, and the results are compared. To exercise the two proposed methods and compare them with each other, the mathematical model is implemented for each method on 10 different numerical examples. Then, using statistical hypothesis testing (t-tests), the results of the two methods are compared across all the numerical examples, and the TOPSIS technique is used to select the best method. It is worth noting that all the numerical examples are solved in GAMS version 24.1.3 with the CPLEX solver, on a system with a Core i7 6700HQ CPU and 16 GB of DDR4 RAM. For this part of the research and the comparisons, the data are generated experimentally.

Numerical examples
In this section, in order to evaluate the proposed mathematical formulation and the proposed solution methods, two evaluation indices are first defined: the objective function value obtained by running the model with each proposed method, and the CPU time spent by that method. Then, several numerical instances are generated and the proposed methods are compared on them. The parameter values used in these numerical instances are drawn from uniform distributions. In the case of θ_j, which is variable and a function of the number of groups, it is drawn from U(120, 200) if the number of groups is greater than 10, and from U(50, 100) otherwise.
The results of the implementation of Lagrange as well as Benders are compiled on the numerical examples produced in Table 2 and Figure 2.
As shown in Figure 2 and the last row of Table 2, in terms of the objective function, the Benders algorithm provides a better response on average. On the other hand, in terms of solution time, the Lagrangean algorithm clearly has the best average.

Statistical analysis of the results
Several t-tests are used to analyze the results of the two proposed solution methods and to compare them with each other. Given a 95% confidence level, a statistical comparison of the means of the results of the two suggested methods is performed for each of the defined evaluation indices. In each comparison, the null hypothesis (H0) is that the means of the results of the two proposed methods are equal, and the alternative hypothesis (H1) is that they differ. This hypothesis test is performed for both specified indices, i.e., the objective function value and the model run time. The results of this test, obtained with MINITAB version 19, are presented in Tables 3 and 4. Based on these results, since the P-value for the objective function index is higher than the significance level, the null hypothesis for the first index is accepted: at a 95% confidence level, there is no meaningful difference between the two proposed solutions in terms of the objective function value. Likewise, the null hypothesis for the CPU time index is accepted because its P-value is higher than 0.05, meaning there is no significant difference between the two proposed solution methods in terms of CPU time.
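As a hedged illustration of the paired comparison (the objective values below are invented, not the paper's Table 2 data), the t statistic can be computed directly:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical objective values of the two algorithms on 10 examples
benders  = [105.2, 98.7, 110.4, 99.1, 102.6, 97.8, 108.3, 101.5, 99.9, 104.0]
lagrange = [104.8, 99.0, 109.9, 98.6, 103.1, 97.5, 108.8, 101.0, 100.2, 103.6]

diffs = [b - l for b, l in zip(benders, lagrange)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired t statistic, df = n - 1

# 2.262 is the 5% two-sided critical value for 9 degrees of freedom;
# |t| below it means H0 "equal means" cannot be rejected at 95% confidence.
reject_h0 = abs(t) > 2.262
```

With these made-up numbers the statistic is about 0.72, so, as in the paper's tables, the null hypothesis of equal means is not rejected.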

Determine the best algorithm using the TOPSIS technique
Based on the results of the numerical examples as well as the statistical comparisons, it is not possible to determine a solution method which is superior to the others in terms of both criteria. Therefore, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) technique is used to select the best method.
TOPSIS stands for preference based on similarity to the ideal solution. The method, developed by Hwang and Yoon [22], is an effective way to rank alternatives. It evaluates m alternatives against n criteria by defining positive and negative ideal solutions; the alternative that has the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution is identified as the superior option. Applying the TOPSIS technique to the results of the numerical examples, the Lagrangean method is selected as the better of the two proposed solution methods.
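A minimal TOPSIS sketch for this two-alternative, two-criterion setting follows; the scores are invented (not the paper's results), the objective value is treated as a benefit criterion and CPU time as a cost criterion, and equal weights are assumed:

```python
from math import sqrt

alts = ["Benders", "Lagrangean"]
# columns: mean objective value (benefit), mean CPU time in seconds (cost)
X = [[10450.0, 420.0],
     [10390.0,  95.0]]
weights = [0.5, 0.5]
benefit = [True, False]

# 1. Vector-normalise each column and apply the weights
norms = [sqrt(sum(row[j] ** 2 for row in X)) for j in range(2)]
V = [[weights[j] * row[j] / norms[j] for j in range(2)] for row in X]

# 2. Positive and negative ideal solutions per criterion
pis = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
nis = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]

# 3. Distances to the ideals and the closeness coefficient
def dist(row, ref):
    return sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

scores = [dist(row, nis) / (dist(row, pis) + dist(row, nis)) for row in V]
best = alts[scores.index(max(scores))]
```

With these illustrative numbers the faster method wins on closeness, mirroring the paper's selection of the Lagrangean algorithm.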

Case study
In this section, the proposed formulation is applied for the strategic analysis of a ceramic tile factory. Mehseram ceramic tile factory was established in 1997 in a land area of 171 000 sqm located in Yazd, Iran. It currently produces all kinds of granite tiles (e.g., salt pepper, cellubel salt and vin) and porcelain products (e.g., matte glaze, bright glaze and polish glaze).
The ceramic tile factory studied in this research has several warehouses for various types of granite tiles and porcelain products, semi-finished products, chemicals, spare parts, and more. Since the ceramic tile warehouse of this factory is very important, the research is carried out there. For inventory control and tile ordering, the logistics department of the factory often uses inventory planning models built on the expertise and experience of its experts. At present, the warehouses use a traditional ABC analysis with different categories, taking into account cost and annual consumption metrics. The main advantages of the traditional ABC analysis are its simplicity, its applicability in difficult and large-scale situations, and the practical benefits observed in inventory management. However, the method also has the disadvantages mentioned in the previous sections. Using the information obtained, the proposed ABC analysis model is implemented with regard to the price and annual consumption criteria for the factory warehouse items. Table 5 lists the tiles in the ceramic tile warehouse of the factory.
According to the data, there are 140 ceramic tile products in the warehouse, produced on different production lines. It should be noted that the factory also has separate warehouses for semi-finished goods, raw materials, etc. In this research, the largest end-product warehouse is selected for the case study. Next, the ABC analysis is run by the factory's specialists for these products. Table 6 presents the annual turnover rate calculated on the basis of equation (6.1): Annual Turnover Rate = Annual Consumption × Unit Cost. (6.1) Based on the calculated annual turnover, the warehouse items are then ranked so that the higher an item's annual turnover is, the higher its rank will be. Table 6 presents the values in descending order and in the A, B, and C classes.
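The ranking step behind Table 6 can be sketched as follows; the item data are hypothetical, and the 70%/95% cumulative-value cut-offs are common conventions rather than necessarily the factory's:

```python
items = {                     # name: (annual consumption, unit cost)
    "tile-01": (12000, 4.2), "tile-02": (300, 9.0),  "tile-03": (4500, 3.1),
    "tile-04": (150, 2.5),   "tile-05": (8000, 5.0), "tile-06": (700, 1.2),
}
turnover = {k: c * u for k, (c, u) in items.items()}   # equation (6.1)
ranked = sorted(turnover, key=turnover.get, reverse=True)

total = sum(turnover.values())
classes, cum = {}, 0.0
for name in ranked:
    cum += turnover[name] / total
    # common cut-offs: top ~70% of cumulative value -> A, next ~25% -> B, rest -> C
    classes[name] = "A" if cum <= 0.70 else ("B" if cum <= 0.95 else "C")
```

The result is the familiar pattern the section describes: a few high-turnover items in class A and the long tail in class C.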
As can be seen in the table, there are 12 items in class A, 27 in class B and 103 in class C. In Table 7, classes A, B, and C are compared by the percentage of items and their total values. Table 6 shows that the 103 items in class C, i.e., 72.55% of the total factory warehouse items, account for 9.85% of the value, while only 12 items in class A, i.e., 8.45% of all the items, account for 65.82%, or $102 038 493 853.37. Moreover, 19% of all the items (27 items) are in class B, with a value of 24.33% or $37 722 210 667. Table 7 summarizes the corresponding analyses.
The company's annual sales are approximately $12 million, and there are 2650 items in its stock. It is to be noted that only a ceramic tile warehouse with a stock of 140 items is taken into consideration in the present study. The plant has a planned budget of $3.5 million set by the inventory manager annually. Based on the number of the groups, each inventory group is estimated at $500-$700 for the groups of 7 and more than $750-$2000 for the management or overhead costs. The sales and inventory data for 12 months are provided for this article.
As mentioned earlier, the plant currently implements the traditional ABC approach to classify SKUs according to sales volume. After the identification of the groups (A, B, and C) and the SKU memberships, an iterative method is used to adjust the service levels for the inventory groups, as shown in Figure 3 (the iterative ABC procedure for inventory grouping [35]). This method starts with the desired service level for each group according to the experience of the decision-makers, e.g., 95%, 85% and 80% for classes A, B, and C, respectively. Since this decision is made without reference to the inventory budget, the initial service levels may result in a budget deficit [35]. Therefore, the decision-maker has to revise the service levels repeatedly before reaching a reasonable and feasible solution. According to Figure 3, in addition to being tedious and time-consuming, the inventory management policies proposed by this approach are often undesirable [43]. A solution obtained from the ABC method in Figure 3 is presented in Table 8.
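The loop in Figure 3 can be sketched as follows. The SKU data and the 0.05 decrement are invented; the point is the repeated lower-the-levels-until-the-budget-fits revision the text describes, not the factory's actual procedure:

```python
from statistics import NormalDist

skus = [  # (class, mean demand, sigma, lead time, unit holding cost)
    ("A", 900, 150, 0.5, 0.6), ("A", 700, 120, 0.5, 0.6),
    ("B", 400, 100, 0.5, 0.4), ("B", 350,  90, 0.5, 0.4),
    ("C", 150,  60, 0.5, 0.2), ("C", 100,  50, 0.5, 0.2),
]
levels = {"A": 0.95, "B": 0.85, "C": 0.80}   # initial desired service levels
budget = 800.0

def holding_cost(levels):
    cost = 0.0
    for cls, d, s, h, e in skus:
        z = NormalDist().inv_cdf(levels[cls])
        cost += e * (d * h + z * s * h ** 0.5)   # equation (3.1) times e
    return cost

# Repeatedly lower the highest service level until the budget is respected
while holding_cost(levels) > budget and min(levels.values()) > 0.55:
    worst = max(levels, key=levels.get)
    levels[worst] = round(levels[worst] - 0.05, 2)
```

Even on this tiny instance, the revision loop ends with class A at a lower service level than classes B and C, illustrating how crude and potentially undesirable the manual procedure can be.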
Based on the calculations made to classify the items in stock and the results obtained, the inventory manager seeks to answer the following questions:
- Is the current A-B-C plan optimal, or does the plant have to manage more than three groups, with different service levels, to maximize profit?
- Are annual sales a reliable basis for assigning each SKU to a group?
- How should the inventory budget be allocated optimally to each SKU?
- Should the factory remove some of the SKUs from its inventory? If so, which SKUs?
To answer these questions, the model proposed in the present study is implemented for the factory warehouse in question. In the implementation, a total of 108 candidate groups are included: 99 groups with service levels from 1% to 99% (in steps of 1%) and nine groups with service levels from 99.1% to 99.9% (in steps of 0.1%). The purpose of the nine additional groups is to refine the continuous service-level space, as higher service levels are typically allocated to more important SKUs. The proposed formulation has been solved using the CPLEX solver in the GAMS software and the suggested Lagrangean relaxation algorithm, previously identified as the superior algorithm. The optimal solution of the formulation recommends 12 inventory groups instead of three and generates 8.53% more profit than the traditional ABC classification. Table 8 shows the group size, service level, inventory costs, and the other costs of the 12 groups. The return on investment (ROI), reported in the last column, quantifies the profit of holding the corresponding inventory group.
The optimal inventory grouping and its associated service levels differ significantly from those of the traditional ABC method. In this classification, the highest service level is 99%; the corresponding group accounts for only 14.82% of the 140 items but has the highest gross profit. However, its profits are far less than 78% of those of the programs offered by the traditional ABC. In addition, while the ABC solution places 72.55% of the SKUs in class C, the optimal solution of the present study places approximately 37.07% of the items in medium service-level groups, ranging from 60% to 90%. The optimal solution almost always assigns items with a higher ROI to a higher service level, which seems reasonable. This suggests that, when a firm has a limited budget, ROI may be a better categorization criterion than sales alone, as used in the traditional ABC. In addition, the MILP model of this research decides whether each SKU is worth stocking at all, identifying 20 items with a zero service level and no inventory.

Sensitivity analysis
In this section, a sensitivity analysis is performed to evaluate the effect of changing the main parameters of the formulation on the results. In particular, the management cost of each group, the service levels β_j and the limited budget (D) are examined for their impact on the optimal inventory grouping solutions. The goal is to provide managerial insights that help practitioners design service-level and inventory-grouping strategies. Initially, the management cost of each group is varied in the interval (500, 2500) in increments of 200, and the budget D in the range (1 500 000, 3 500 000) in increments of 200 000.
As represented in Figure 4, when the management cost (overhead cost) for each group grows, the number of inventory groups to categorize decreases. Such a relationship seems nonlinear, so it can be concluded that, with an increase in the costs of inventory management for each group, the optimal number of inventory groups is significantly reduced. The non-linear relationship between the management costs for each group and the optimal number of groups justifies the need for a decision support optimization formulation; it would be difficult to determine the optimal number of inventory groups without the solution presented through the mathematical model proposed in this paper.
Regarding the effect of the inventory budget, Figure 5 illustrates the relationship between the inventory budget and the optimal number of groups, with the management cost of each group held constant at $500, $1500 and $2000. As can be seen in the figure, the larger the inventory budget, the fewer the inventory groups; when the inventory budget is low, the optimal number of inventory groups increases. Notably, when there is ample capital to invest in inventory, the optimal number of groups is three (as in the traditional ABC method). Even in this case, however, the ABC approach may be undesirable because it does not allocate SKUs to inventory groups optimally. Figure 5 also shows that the relationship between the inventory budget and the optimal number of groups is similar across the management-cost levels but differs in sensitivity. When management costs are low, the optimal number of groups is more sensitive to the inventory budget, so a slight decrease in the budget may cause a substantial growth in the optimal number of inventory groups. It can also be concluded that selecting more inventory groups is desirable when the attainable budget is low, whereas, when a large inventory budget is available, fewer inventory groups are needed to categorize items.
The relationship between the inventory budget and the optimal number of inventory groups is very important, given the low and high management costs for each group. The sensitivity of the number of inventory groups to the inventory budget proves the value of proposed optimization formulation when the budget is low. This is because, in such cases, there is a greater motivation to allocate budget better by varying their service level.
Here is an account of how the results of the proposed MILP model and the ABC method change as the inventory budget increases; the effects are presented in Figure 6. As the budget grows from $1.5 to $6.5 million, the net profit initially rises rapidly, but the rate of change then decreases, showing a diminishing return on the investment. Since the net profit curve of the proposed model always lies above that of the ABC method, the proposed mathematical model always outperforms the ABC method in solution quality. In particular, when the inventory budget is lower, the optimization formulation performs considerably better than the ABC method; its advantage is greatest when little budget is available. According to the previous results reported in Figure 8, when the inventory budget is sufficient, the optimal number of groups reaches three, i.e., the quality of the ABC method is close to that of the optimal MILP solution. Conversely, when the inventory budget is low, more than three groups are needed, so the optimal MILP solution becomes more valuable than the ABC method.
Admittedly, the proposed mathematical model is not free of shortages. To examine this issue, consider product demand in the range [10 000, 30 000]. Figures 7 and 8 show the shortage rate per period and per item, respectively.
As can be seen in Figure 7, the shortage follows an ascending periodic trend, which is expected: the shortage of each period is carried over and added to that of the next period, so the trend goes up. Figure 8 shows the shortage trend for each item in the inventory. The chart is highly volatile because the shortage of some items is much greater than that of others.
Next, the cost of shortage and its effect on the net profit (objective function) are examined. To this end, consider the shortage cost in the range [150, 650]. The results obtained after running the model are provided in Figure 9.
According to Figure 9, as the shortage cost rises to $250, the net inventory profit increases significantly. As the cost increases further, however, the net profit stops changing and the profit curve flattens. It can reasonably be concluded that a high inventory shortage cost can increase profit, because management has to plan to offset these costs and the potential financial losses, and offsetting them requires earning more profit. A low shortage cost, on the other hand, has no significant impact on the net profit.

Managerial insights
Through the implementation of the proposed model in a real case study, several managerial implications were obtained regarding optimal inventory grouping and item control strategies. The insights suggest that (a) when the overhead cost per group is reduced, it is advisable to maintain more inventory groups; (b) the proposed model reveals the decreasing ROI of the net profit and helps organizations justify the benefits of increased inventory budgets; (c) when the budget is limited, it pays to increase the number of inventory groups, whereas, when a large budget is available, items can be classified into fewer groups, as in the traditional ABC approach; and (d) when the inventory shortage cost is high, it makes sense to raise profits to offset it, while a low shortage cost has no significant impact on the net profit.

Conclusion and suggestions for future research
Nowadays, inventory management and control structures are among the pressing issues addressed by growing organizations. In this research, an optimization model was developed to identify inventory groups, determine their service levels and assign items to those groups simultaneously. This approach improves inventory grouping based on the ABC analysis by providing an integrated, automated, and optimized solution. The model proposed in this study differs from other optimization models in two respects. Firstly, instead of minimizing inventory costs, it maximizes the company's profit. Secondly, it optimizes the trade-off between inventory costs and revenues and allocates the inventory budget optimally to inventory items. The objective of the mathematical model was to maximize the net profit of the items in stock, taking into account limitations such as the budget and inventory shortages. Another distinguishing feature of this research is the use of decomposition algorithms and their comparative examination. To evaluate the proposed solution methods, two indices were used: the objective function value and the CPU time. The mathematical model was run on 10 different numerical examples, and the results of the two suggested solutions were statistically compared through t-tests; the solutions were very close in both quality and response time. To choose between them, the TOPSIS technique was therefore applied, and the Lagrangean method was chosen as the superior one. Finally, to demonstrate the proposed model and solution method, a real-world case study was conducted in the ceramic tile industry, which distributes thousands of products to customers annually. Compared to the traditional ABC method used in the factory, the model presented in this paper improved the factory's net profit by 8.53%.
The proposed model also helped to manage the inventory better, allocate service levels to each SKU optimally, and even determine which SKUs should reasonably be removed from the warehouse. Subsequently, several sensitivity analyses were performed on the model, helping inventory managers assess the impact of inventory costs on optimal decision making and item grouping. Like other studies, this paper had limitations. The most important was collecting the data needed for the case study in the ceramic tile industry: because some of the parameters were considered confidential, the company declined to provide them, and they were therefore estimated from previous years' data. Also, due to management's reluctance, only a small part of the selected factory was available for implementing the proposed model, which is another significant limitation of this article.
Additionally, several suggestions for future work are as follows: (I) consider some of the model parameters, such as demand and costs, as uncertain, and use methods such as Soyster's approach, robust programming, and the Ben-Tal and Nemirovski approach to handle that uncertainty, which can bring the model closer to reality; (II) use metaheuristic algorithms to solve the model in larger dimensions and evaluate their performance; (III) since categorizing products differently can in some cases increase the holding cost while decreasing the ordering cost, inventory managers can address this trade-off through more detailed sensitivity and scenario analyses based on different needs; (IV) pay particular attention to what percentage of the products is consumed and how long they take to perish (especially for perishable goods), considering the country's economic conditions and the quantities purchased from suppliers; and (V) although the model was implemented in a case study of the ceramic tile industry, future research can apply it to any stock-based industry; for example, small and medium-sized enterprises (SMEs) with small-scale warehouses can be investigated as case studies with this model.