AI adoption in biopharmaceutical supply chains
Early findings from our research into the application and usage of AI in the biopharmaceutical sector
HASH
Safeguarded AI for Safety-Critical Biopharmaceutical Supply Chains
In support of HASH’s Safeguarded AI initiative, we interviewed a diverse group of 35+ supply-chain leaders and practitioners from across the biopharmaceutical industry. Their perspectives reveal clear patterns in how artificial intelligence is being conceived, developed, and deployed in manufacturing, logistics, and end-to-end supply-chain management — an arena where the cost of failure is measured in patients’ access to life-saving therapies.
Our study is ongoing and we welcome additional organisations to participate. We have now entered the programme’s next phase, working directly with biopharma supply-chain partners to co-develop AI-enabled solutions. Read on to learn more.
Key Takeaways
1. Mixed visions of the potential benefits of AI
Ambitions vary widely. Some organisations envision fully autonomous supply chains that sense disruptions, decide on counter-measures, and execute corrective actions with minimal human oversight. Others are experimenting cautiously — using generative AI for basic productivity tasks such as drafting meeting minutes. Notably, this spread is not determined by company size: we have seen both large multinationals and niche cold-chain logistics providers leading the charge.
Regardless of maturity, virtually all respondents expressed an interest in redirecting human capacity away from administrative firefighting and toward proactive, strategic work.
2. A (mistaken) belief that existing data “isn’t enough” to realize benefits
Teams without a clear AI vision often cite “immature systems and data” as their primary obstacle. Yet firms already further along told us they started from the same place — and still realised quick wins. Even when the backbone is little more than Microsoft Teams and Excel, AI can sit atop existing infrastructure and deliver tangible value while data governance and platform capabilities evolve beneath it.
3. Varied implementation approaches
Across the organisations we examined, the maturity of AI implementation planning varied widely. Only a handful of firms—typically those with the boldest aspirations—had a clear, staged roadmap that identified which segments of the supply-chain would be automated first and how each initiative would mesh into a coherent end-to-end solution.
For most companies, even those with a well-articulated vision, individual AI use-cases were conceived and executed in isolation. Interviewees frequently expressed frustration with this siloed approach.
We also observed two contrasting implementation patterns. In some businesses, AI adoption is driven bottom-up, fuelled by grassroots enthusiasm and opportunistic experimentation. Elsewhere, projects have been green-lit simply because “AI is the topic of the moment,” rather than on the basis of a problem-value assessment. Both tendencies carry risk: without a unifying strategy, grassroots initiatives struggle for sponsorship, while hype-driven projects can drain resources without delivering meaningful impact.
4. Frustration with lack of transparency around data security, privacy and governance
At larger firms in particular, many practitioners were wary of using external AI solutions, or knew that their organization imposed strict rules governing the adoption and use of new technology.
However, not all AI solutions are equally suitable for use: while some run entirely within an organization’s existing infrastructure or “on device”, others require transmitting sensitive data to external clouds (triggering lengthy cyber-security certification and slow procurement processes). Providers’ differing approaches to data handling are often not immediately apparent, and this opacity makes it hard to know which solutions are viable to explore. Even when AI firms aren’t “training” on customer data, requiring sensitive data to be transmitted externally for processing can be a compliance nightmare.
Despite these challenges, the most effective industry users of AI were those who combined in-house teams of domain experts with outside services and support, generally provided by specialist startups – as opposed to traditional consultancies – reflecting the fast-changing nature of AI and need for continuous experimentation.
HASH CEO, Dei Vilkinsons, notes, “At this stage in the AI lifecycle, few services built are ever considered ‘done’, because within months or weeks significantly cheaper, better, and more reliable approaches are inevitably available to incorporate. Firms are partnering with product-led startups who they can rely on to stay on top of this for them, rather than sinking money into conveyor belts of consultants.”
5. A desire to keep “humans in the loop”
In many cases, full automation is not seen as desirable, at least initially. Solutions are more readily deployed when humans can verify and approve AI decisions or actions before they take effect, and modify them to better reflect domain experts’ understanding. However, these capabilities, while widely desired, are rarely supported: AI-generated outputs and recommendations are often not easily editable.
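To make the pattern concrete, here is a minimal sketch of a human-in-the-loop gate in Python. It is purely illustrative (the types, field names, and `review`/`execute` functions are our own assumptions, not any vendor’s API): recommendations start in a pending state, a reviewer must explicitly approve, modify, or reject them, and execution refuses anything without human sign-off.

```python
from dataclasses import dataclass, replace
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending"      # awaiting human review
    APPROVED = "approved"    # accepted as-is by a reviewer
    MODIFIED = "modified"    # edited by a domain expert, then accepted
    REJECTED = "rejected"    # declined; must not be executed

@dataclass(frozen=True)
class Recommendation:
    """An AI-generated supply-chain action awaiting human sign-off."""
    action: str
    quantity: int
    status: Status = Status.PENDING

def review(rec: Recommendation, approve: bool,
           new_quantity: Optional[int] = None) -> Recommendation:
    """A reviewer explicitly approves, modifies, or rejects the recommendation."""
    if not approve:
        return replace(rec, status=Status.REJECTED)
    if new_quantity is not None and new_quantity != rec.quantity:
        # The expert's edit is preserved, not silently overwritten
        return replace(rec, quantity=new_quantity, status=Status.MODIFIED)
    return replace(rec, status=Status.APPROVED)

def execute(rec: Recommendation) -> str:
    """Refuse to act on anything a human has not signed off."""
    if rec.status not in (Status.APPROVED, Status.MODIFIED):
        raise PermissionError("human approval required before execution")
    return f"executing: {rec.action} x{rec.quantity}"
```

The key design choice is that the editable, reviewable state lives in the data model itself, so every deployed action carries a record of whether a human accepted or amended it.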
6. Opacity undermines trust in decision-making
Many practitioners said that a lack of visibility into the reasoning behind AI recommendations and decisions limited their willingness or ability to rely on them, particularly in safety-critical contexts such as those where a patient could be impacted. Even when error rates were negligible, the absence of decision-making traceability and assurance prevented reliance, so the work was duplicated rather than made redundant.
7. Trialled solutions fail to reflect the complexity of real-world supply chains
Across our conversations, it is clear that supply-chain practitioners often ignore the “predictions” and “forecasts” in their tools, and find themselves manually adjusting decisions to account for factors they know aren’t captured. For example:
→ “this market always overforecasts that particular product”
→ “black swan events in our historical data — e.g. COVID — mean all of the forward-looking estimates the system generates are now reliably wrong”
→ “changeover times encoded in the planning system don’t reflect reality on the ground”
→ “tariffs are coming into effect, so we want to build safety stock of specific materials beyond normal levels”
Mind the Decision-Making Gap
Leah Pickering, Head of Supply Chain Research at HASH, reflects on these findings:
“Many interviewees talk about ‘decision intelligence’, yet, as a discipline, we rarely document how supply-chain decisions are made, which options were considered, or how effective those choices proved. We obsess over outcome metrics like customer service and inventory levels, but almost never look at the actual decisions and how they were made. Consequently, post-mortems become scavenger hunts through red KPIs at month-end. Root cause analysis is impossible. How can we entrust sophisticated AI with decision-making when we ourselves cannot quantify what good decisions look like today?”
In order to address this challenge, HASH is developing a solution that captures and analyses decision pathways, laying the groundwork for truly intelligent, auditable AI-based decision support for use in supply chains.
Today, because software-provided recommendations (some of which are “AI-enabled”) are often ignored or overridden, it's not always clear in retrospect why certain decisions were made at the time, and what factors contributed to them. In addition, AI and recommendation systems can't learn and get better if they don't understand why their recommendations were wrong.
The tool we’re developing helps solve these issues by capturing the context around “unexpected” decisions. We’re looking for supply-chain practitioners to work with as we develop it further and give us feedback.
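To illustrate the underlying idea (this is a hypothetical sketch, not HASH’s actual schema or implementation), a decision record can pair what the system recommended with what the planner actually did, and insist that every override carries a stated reason. That reason is exactly the context that is lost today, and what a recommendation system would need in order to learn why it was overridden.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One planning decision, logged alongside the recommendation it
    accepted or overrode. All field names are illustrative only."""
    decision_id: str
    recommended_qty: float            # what the planning system suggested
    chosen_qty: float                 # what the planner actually did
    override_reason: Optional[str]    # free-text context when they differ
    decided_at: str                   # ISO-8601 UTC timestamp

def log_decision(decision_id: str, recommended_qty: float, chosen_qty: float,
                 override_reason: Optional[str] = None) -> str:
    """Serialise one decision; refuse an unexplained override."""
    if chosen_qty != recommended_qty and not override_reason:
        raise ValueError("an override must record why the recommendation "
                         "was not followed")
    record = DecisionRecord(decision_id, recommended_qty, chosen_qty,
                            override_reason,
                            datetime.now(timezone.utc).isoformat())
    return json.dumps(asdict(record))
```

For example, the tariff scenario above would be logged as `log_decision("d1", 100.0, 130.0, "tariffs: building safety stock beyond normal levels")`, making the rationale auditable after the fact instead of leaving a month-end scavenger hunt through red KPIs.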
Understanding (and evidencing) decision-making is necessary both for operating safety-critical supply chains and for developing AI solutions that partly or fully automate their management (e.g. “touchless” or “self-healing” supply chains). We see this tool as one step closer to our long-term goal of developing provably-safe AI solutions for safety-critical domains, including biopharma.
Join the pilot
We’re setting supply-chain practitioners up with our decision-intelligence tool to gather feedback that will guide its further development. To express an interest in participating in the pilot, or to find out more, fill in the form below and we’ll get back to you within 24 hours:
For more information, please contact:
Dei Vilkinsons - CEO [email protected]
Leah Pickering - Head of Supply Chain Research [email protected]
Create a free account
Sign up to try HASH out for yourself, and see what all the fuss is about
By signing up you agree to our terms and conditions and privacy policy