From a Thousand Flowers to Nipping It in the Bud: What J&J Teaches Us About Evaluating AI Use Cases
“Let a thousand flowers bloom” used to be Johnson & Johnson’s approach to generative AI innovation. In short order, nearly 900 projects sprouted across the company.
But a subsequent internal review revealed that only 10–15% of those projects produced 80% of the value. In response, J&J pivoted: it narrowed its focus to high-impact use cases and scrapped the rest. The remaining efforts were tightly aligned with business strategy and judged on execution quality and adoption.
The broader market tells a similar story. The pressure to build, buy, or use AI has been undeniably crushing. One survey found that 1 in 5 workers feel they have to use AI even in situations where they are uncertain it is appropriate, and 1 in 6 admit to pretending to use AI just to keep up with the times. AI overzealousness has also started to wear on investors, as fear of an impending “AI bubble” recently injected uncertainty into the U.S. stock markets. Meanwhile, legal and AI governance committees in many organizations are inundated with approval requests and added responsibilities to vet AI tools. Use cases may well be outpacing organizations’ ability to develop governance mechanisms: only about 25% of companies appear to have fully implemented AI governance programs.
The lesson from J&J is simple: stop measuring progress by the number of projects. As organizations mature in their AI journeys, the age-old adage of quality over quantity applies. Success in AI innovation doesn’t mean doing as much as you can, and it doesn’t mean doing it all as fast as possible (or at least faster than the next guy). It means focusing on what is valuable.
To prioritize value over progress for its own sake, AI use case evaluation must filter for outcomes. J&J’s experience highlights the need for a structured evaluation framework within organizational AI governance. The following strategies can help organizations refocus their AI evaluations.
Evaluating AI Use Cases with Rigor
1. Define the Business Value
Every AI initiative must start with a clear answer to a simple question: What problem are we solving, and why does it matter? Rather than beginning with technology, begin with business outcomes such as revenue growth, cost reduction, risk mitigation or customer satisfaction. Once the objective is defined, translate it into measurable metrics. For example:
- Efficiency use cases: reduction in cycle time, error rate or cost per transaction.
- Revenue use cases: increasing average customer spend, enabling more successful cross-selling.
- Risk and compliance: reduction in manual reviews, false positives or time-to-detection.
One tough pill to swallow is that if a use case can’t be measured, perhaps it shouldn’t move forward.
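As a purely illustrative sketch (the data structures, metric names, and intake rule below are hypothetical, not drawn from J&J’s framework or any particular tool), a use case intake process might capture each proposal’s objective and metrics in a simple record and screen out anything that arrives without a measurable target:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A measurable target tied to a business outcome (hypothetical structure)."""
    name: str        # e.g., "review cycle time (days)"
    baseline: float  # value before the AI tool
    target: float    # value the use case commits to delivering

@dataclass
class UseCase:
    title: str
    business_outcome: str  # e.g., "cost reduction", "revenue growth"
    metrics: list[Metric] = field(default_factory=list)

def passes_intake(use_case: UseCase) -> bool:
    """A use case moves forward only if it names an outcome and at least one measurable metric."""
    return bool(use_case.business_outcome) and len(use_case.metrics) > 0

# Example: an efficiency use case with a concrete, measurable target
proposal = UseCase(
    title="Contract-review assistant",
    business_outcome="cost reduction",
    metrics=[Metric(name="review cycle time (days)", baseline=10.0, target=6.0)],
)
print(passes_intake(proposal))  # True; a proposal with no metrics would be screened out
```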
2. Assess Organizational Readiness
A mismatch between AI ambition and organizational maturity is a common reason why AI use cases flounder. Before investing in any AI use case, organizations must gauge their ability to make the technology work in practice. This “readiness assessment” is the bridge between ambition and execution. The aim is to measure whether a proposed tool can operate effectively within the company’s existing ecosystem. There are three dimensions to the readiness assessment:
- Tool Readiness: Does the model or product perform the task reliably under realistic conditions? One of the best ways to discern tool readiness is a proof of concept, pilot, or limited deployment that enables a lower-risk, lower-cost trial in a controlled experiment. We’ve touted the benefits of pilots for AI tools in a past article. And yet, many organizations continue to fall into the trap of scaling untested tools quickly, or locking themselves into large financial commitments with AI vendors that ultimately fail to perform. Giving your organization the opportunity to test a build or vendor in a lower-risk environment remains one of the best ways to avoid undue cost or risk.
- Technical Readiness: Can the organization’s infrastructure support the AI tool? Factors include data quality, integration with core systems, cybersecurity posture, and scalability. Unfortunately, a major hindrance to technical readiness is the skills gap within an organization’s IT team. One study of UK financial institutions showed that two-thirds of the firms surveyed do not have a complete understanding of how AI technology actually functions. There is no magic pill for this issue, but as a start, organizations should combine targeted training, practical experience, and a culture of continuous learning, grounded in role-specific needs and business objectives. Nonetheless, assessing overall technical readiness, and choosing tools that meet your organization where it is, is the best path to successful AI deployment.
- Organizational Readiness: Does the company’s AI maturity align with the sophistication of the proposed tool? Maturity assessments that evaluate governance, talent, data literacy, and ethical standards can help reveal whether an enterprise is ready to absorb, adapt, and sustain the technology. As with technical readiness, organizational buy-in is key to AI success, so your organization should have an honest understanding of its maturity level and strategic goals to ensure they align with the AI use case in question.
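By way of illustration only (the 1-to-5 scale and minimum bar below are assumptions, not an established maturity model), the three dimensions above can be reduced to a simple scorecard that flags the weakest link before any money is committed:

```python
# Hypothetical readiness scorecard: each dimension is scored 1 (low) to 5 (high),
# and the use case proceeds only if no dimension falls below a minimum bar.
MINIMUM_SCORE = 3

def ready_to_proceed(scores: dict[str, int]) -> bool:
    """Return True only if every readiness dimension meets the minimum bar."""
    return all(score >= MINIMUM_SCORE for score in scores.values())

assessment = {
    "tool_readiness": 4,            # e.g., pilot performed well under realistic conditions
    "technical_readiness": 2,       # e.g., data quality and skills gaps remain
    "organizational_readiness": 4,  # e.g., governance and buy-in are in place
}
print(ready_to_proceed(assessment))  # False: technical readiness is the weak link
```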
3. Track and Adapt
An AI use case is not complete when it goes live; it enters a new phase of learning. Once the tool is actually deployed, success depends on disciplined tracking against the same metrics defined at the start. Tracking should extend beyond raw numbers. Regular audits should test data integrity, model accuracy, and drift. Comparing current outputs against initial benchmarks reveals whether the AI remains fit for purpose. When deviations appear, teams should analyze whether they stem from changing inputs, user behavior, or shifts in the business context. Governance routines help maintain alignment with the organization’s AI maturity level.
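To make the benchmark comparison concrete, here is a minimal sketch, assuming hypothetical metric names and an illustrative 10% tolerance, of how a periodic audit might flag drift against the baselines recorded at launch:

```python
# Compare tracked metrics against the baselines recorded at deployment and flag
# any that have drifted beyond an agreed tolerance (all values are illustrative).
BASELINES = {"accuracy": 0.92, "false_positive_rate": 0.05}
TOLERANCE = 0.10  # flag if a metric moves more than 10% relative to its baseline

def drift_report(current: dict[str, float]) -> dict[str, bool]:
    """Return, per metric, whether the current value deviates beyond the tolerance."""
    report = {}
    for name, baseline in BASELINES.items():
        observed = current.get(name)
        if observed is None:
            report[name] = True  # a metric that is no longer tracked is itself a red flag
            continue
        report[name] = abs(observed - baseline) / baseline > TOLERANCE
    return report

# Example audit run: accuracy has slipped and false positives have climbed
print(drift_report({"accuracy": 0.81, "false_positive_rate": 0.07}))
# {'accuracy': True, 'false_positive_rate': True}
```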
The results should loop back into the business strategy. Metrics that prove meaningful should inform new use cases; those that fail to capture impact should be refined. Readiness and tracking work in tandem: Readiness gets an AI initiative off the ground, and tracking keeps it airborne. The organizations that treat deployment as the beginning of a feedback loop rather than the end of a project are more likely to extract sustained value.
Finally, it is okay to admit defeat. Reserving the right to shut a project down—whether during the pilot phase or after—is actually a strength and not a weakness.
The Best Outcome Is Responsible Deployment
While a focus on business outcomes is imperative for successful AI deployment, so, too, is a focus on the intangibles. Any AI “north star” should not only prioritize revenue or efficiency but should also focus on responsible governance.
Legal and compliance teams should be seen not merely as gatekeepers but as partners in the process. Especially when AI systems interact with customers, robust legal review helps shape design choices around transparency, consent, and accountability. For AI acquired from third parties, contracts should allocate risk in accordance with organizational values and market standards.
AI governance mechanisms can be seen as a steering wheel rather than a brake. In other words, the purpose of an organization’s AI governance team should not just be to approve or disapprove of AI use cases. Once deployed, AI tools should continue to be monitored under the remit of an established AI governance team to ensure that metrics remain aligned with business goals while safeguarding against unintended harms, which may not always be measurable by a service level or ROI.
Conclusion
J&J’s pivot in AI strategy raises an interesting question. Should we be nipping rampant AI deployment in the bud? The era of AI for the sake of AI should be over. Instead, we should focus on clear evaluation and accountability structures, and maybe even pruning a few of the weeds. Practical governance, rooted in business goals, user needs, and risk awareness, makes that shift possible.