Shell is deploying a data science tool globally to help it manage and optimize the billion dollar spare parts inventory it has in the event of an asset failure.
The project was just an idea 18 months ago, but after successful development and a proof of concept on two deepwater platforms in the Gulf of Mexico, it is now being rolled out to the company’s inventory analysts worldwide.
“We have a large number of assets around the world, and most of those assets contain a number of spares,” said Daniel Jeavons, general manager of advanced analytics at Shell, at the recent Spark Summit Europe.
“This is a safety stock – pieces of equipment that we store in warehouses so that if there is a problem with any of these assets, we can quickly provide a replacement part to put it back into service.
“Shell has over $1 billion in parts inventory at all times, and if you can leverage it, the business benefits are quite dramatic.”
The company has traditionally stocked spare parts because operations are often remote, making it difficult to ship parts.
“We operate fairly distributed and complex supply chains where it is not always easy to get a spare part to site quickly. Furthermore, some of these spare parts are highly specialized equipment, so they are actually quite difficult to obtain and have very long delivery times,” Jeavons said.
Depending on the importance of the part to the operation of the equipment, having a spare on hand means that downtime can be limited.
“There’s a bit of risk aversion on the part of the company in terms of trying to make sure they always have the right level of inventory, and therefore stockpiling some of these things, particularly in high oil price environments where the cost of downtime was extremely high,” Jeavons said.
“We had the reverse of this when the price of oil fell and there was actually a need to free up working capital.
“Reducing spare parts inventory is one of the ways to do that, but are we cutting back on the right spare parts? That was still the question.”
Shell already had an inventory optimization and analysis discipline, but it was applied inconsistently on a global scale.
This led to a request from part of the company 18 months ago to see if data science could help.
“What we did was build an agile work team [tasked with] developing a prototype tool that would work specifically for the Gulf of Mexico, initially for two specific assets called Ursa and Brutus, and an algorithm that would provide optimal levels of spares for these two assets, based on the considerations I have mentioned.
“We wanted to do this in a web-based tool that would allow an inventory analyst to interact with the recommendations from the algorithms we were going to create.
“We also wanted to allow the inventory analyst to not only review the recommendations, but also ask ‘why’ – [looking at] the consumption history of this spare part, the system turnaround time, the equipment to which it relates and … the criticality of this equipment.”
Creating a PoC
The first release of the tool allowed Shell to extract historical problem data from its SAP HANA ERP system and run the data through algorithms on a basic laptop.
The team then upgraded the hardware to “a high performance 48-core offline PC,” but it still took 48 hours to simulate all possible stock levels to meet the various service availability thresholds.
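Shell has not published the algorithm itself, but the kind of brute-force search described here (simulating every candidate stock level against a set of service availability thresholds) can be sketched roughly as follows. The function names and the bootstrap sampling are illustrative assumptions, not Shell's implementation:

```python
import numpy as np

def simulated_availability(demand_history, stock_level, horizon_days=30,
                           n_runs=10_000, rng=None):
    """Estimate how often `stock_level` covers demand over the horizon by
    resampling daily consumption from the part's history (a simple bootstrap)."""
    rng = rng or np.random.default_rng(0)
    draws = rng.choice(demand_history, size=(n_runs, horizon_days), replace=True)
    return float((draws.sum(axis=1) <= stock_level).mean())

def smallest_stock_meeting(demand_history, thresholds=(0.80, 0.95, 0.99),
                           max_stock=50):
    """For each service availability threshold, find the smallest stock level
    whose simulated availability meets it."""
    options = {}
    for threshold in thresholds:
        for level in range(max_stock + 1):
            if simulated_availability(demand_history, level) >= threshold:
                options[threshold] = level
                break
    return options

# Example: a slow-moving part consumed on only a few days of its history.
history = [0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 1, 0]
print(smallest_stock_meeting(history))  # {threshold: smallest qualifying stock level}
```

Run per material across millions of stock items, this kind of exhaustive simulation is what made the 48-hour run times on a single machine unsurprising.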
Because the tool is intended to aid decision making, but leaves the actual decisions to inventory analysts, it provides a range of options and a recommendation of the optimal amount of spare parts needed for a particular part.
It bases this recommendation in part on a “criticality score” attached to the part: how necessary a spare is if the part in service fails.
“We calculate [need] based on the criticality, which we translated into a level of service: how many items would we need to have in stock at any one time?” said Jeavons.
The algorithms also calculate “replenishment points for inventory levels,” said Wayne Jones, Shell’s chief data scientist.
“The main thing we’re trying to do is estimate what the most appropriate resupply point is. How much do we need in stock to last, say, 30 days?” he said.
“What we want to do there is also estimate the different levels of service, because if there’s a really critical part, you want to make sure it’s available 95% or 99% of the time, whereas for a less critical part you can live with 80% [availability].
“This is commonly referred to as safety stock analysis.”
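What Jones describes is a standard safety stock calculation: criticality is translated into a target service level, and the reorder point covers expected demand over the replenishment lead time plus a safety buffer. A minimal textbook sketch of that idea (the criticality bands and all figures are illustrative, not Shell's actual values):

```python
import math
from statistics import NormalDist

# Illustrative mapping from criticality to a target service level,
# mirroring the 99% / 95% / 80% bands quoted above.
SERVICE_LEVEL_BY_CRITICALITY = {"high": 0.99, "medium": 0.95, "low": 0.80}

def reorder_point(mean_daily_demand, std_daily_demand, lead_time_days, service_level):
    """Textbook reorder point: expected demand over the lead time plus a
    safety stock of z * sigma_daily * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)  # e.g. 0.95 -> ~1.64
    safety_stock = z * std_daily_demand * math.sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + safety_stock

# Example: a highly critical, slow-moving spare with a 30-day lead time.
rop = reorder_point(mean_daily_demand=0.2, std_daily_demand=0.5,
                    lead_time_days=30,
                    service_level=SERVICE_LEVEL_BY_CRITICALITY["high"])
print(f"Suggested reorder point: {math.ceil(rop)} units")
```

In practice the tool presents such figures as recommendations alongside the underlying history, leaving the final decision to the analyst.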
Inventory analysts access options and recommendations through an HTML5 interface.
The team tested the tool on its two deepwater platforms in the Gulf of Mexico. It was a great success.
“We saw right away when we deployed this [that if] we could combine data science with clear explanations of what we were doing and why it was right, as well as providing [the business] with business intelligence information in a single web portal, we were able to generate millions of dollars in business benefits in a very short period of time,” Jeavons said.
“In fact, this proof of concept paid for itself in about four weeks.
“In addition, we only looked at inventory reduction. We didn’t look at the impact of actually having the right stock on hand, which is just as important, as it has a much bigger business impact due to the avoidance of downtime.”
The success of the trial has been noted by other parts of Shell’s Gulf of Mexico operations.
“They got very excited,” Jeavons said.
“We [also] generated a problem for ourselves, because in our proof-of-concept mode, we had done it on a very simple laptop, uploaded it to a server, made it available in an HTML5 web portal, and it worked very well.
“But if we wanted to expand that, we had to make it repeatable. Shell literally has millions of spare parts stocked in different warehouses around the world, and very quickly they wanted to make it a global tool.
“And so we had to think about an approach that would allow us to take the work and scale it.”
Going global
The team still takes inventory data from the SAP HANA ERP database, but now uses Alteryx to move it to an S3 bucket, from where it is then routed through a Databricks Spark cluster.
The results are written back to HANA and displayed through the HTML5 web portal.
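The article does not go into pipeline details beyond the components named above, but a minimal Databricks-style sketch of that flow might look like the following, assuming the Alteryx extract lands in S3 as Parquet and the SAP HANA JDBC driver is available on the cluster; all paths, table names and credentials are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spares-optimization").getOrCreate()

# Read the inventory extract that Alteryx has dropped into the S3 bucket.
inventory = spark.read.parquet("s3://example-bucket/inventory/extract/")

# ... run the stock-level simulation per material here ...
recommendations = inventory  # stand-in for the simulation output

# Write the recommendations back to HANA over JDBC so the HTML5 portal can read them.
(recommendations.write
    .format("jdbc")
    .option("url", "jdbc:sap://hana-host:30015")      # placeholder connection string
    .option("driver", "com.sap.db.jdbc.Driver")
    .option("dbtable", "SPARES_RECOMMENDATIONS")
    .option("user", "HANA_USER")
    .option("password", "HANA_PASSWORD")
    .mode("overwrite")
    .save())
```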
The additional computing power behind the models means the time required to simulate inventory levels has dropped from 48 hours to four hours.
“What’s also good is that you don’t need to explicitly state how you parallelize the problem. Spark does it implicitly for you,” Jones said.
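As an illustration of that point, the per-part search from the earlier sketch could be expressed over a Spark DataFrame, and Spark would decide how to distribute the groups across the cluster. The `consumption` DataFrame, its column names and the `smallest_stock_meeting` helper are assumptions carried over from the sketches above:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

# `consumption` is assumed to hold one row per material per day:
# (material_id, daily_demand, service_level).

@F.udf(returnType=IntegerType())
def recommended_stock(demand_history, service_level):
    # Reuses the hypothetical smallest_stock_meeting() helper sketched earlier.
    options = smallest_stock_meeting(demand_history, thresholds=(service_level,))
    return options.get(service_level)

per_part = (consumption
            .groupBy("material_id")
            .agg(F.collect_list("daily_demand").alias("demand_history"),
                 F.first("service_level").alias("service_level")))

recommendations = per_part.withColumn(
    "recommended_stock",
    recommended_stock(F.col("demand_history"), F.col("service_level")))
```

No explicit partitioning or threading appears in the user code; the same simulation logic is simply applied per material and the cluster handles the distribution.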
Databricks came up with another way to restructure how the tool works, which saved the team further processing time.
The run time is now down to 45 minutes per cycle, and the tool has the architecture to be deployed globally.
“We were able to take a tool that previously would have been fairly localized in one region and turn it into a global product that is actually becoming the basis of how our inventory analysts will now do their jobs,” Jeavons said.
“What’s also important is that by making it available as a digital solution, we were able to change the way they do their jobs on a daily basis.
“And because they were involved in this project from the start and dictated the data they wanted to see around the algorithms, [the analysts are] starting to use the tool that we developed in ways that we had not considered before.”
Jeavons said additional uses included reviews of dormant inventory to identify parts that are not moving quickly, calculating bills of materials (BOMs) for different pieces of equipment based on parts costs, and performing proactive inventory reviews.
“We are now working in partnership with our global IT organization to deploy this upstream as a first step – which is in the process of going live at the moment – and we will also deploy it shortly in the global downstream business,” Jeavons said.
“We are also looking at an independent use of the algorithm – it potentially has many other applications for ourselves and our partners.
“We talk to them about whether we can help them if they have similar problems.”