This is the final post in a 3-part series on building an Artificial Intelligence and Machine Learning (AI/ML) capability for the first time. In case you missed last week's post, the second article covered how to set a targeted objective, carry it through development, and communicate results. This final installment focuses on deploying and transitioning the AI/ML capability to operations, governing the capability, and establishing monitoring and maintenance routines to ensure performance holds up over time.
The stakeholders have decided to move the AI/ML capability onto operational systems, and now it's time to execute. At this point, ensure that both the data used to build the capability and the code or model itself have been validated. Establish a specific plan with the Operations team to either transfer the AI/ML capability into the system or connect to it through an API call. Depending on the final data needs, new data pipelines may need to be established.
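One lightweight way to make that pre-deployment validation concrete is to fingerprint the artifacts being handed to Operations, so that what was validated is provably what gets deployed. The sketch below is a minimal illustration using file hashes; the manifest format and function names are illustrative assumptions, not a prescribed standard.

```python
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def validate_release(manifest: dict) -> list:
    """Compare current artifact hashes against the recorded manifest.

    `manifest` maps artifact path -> expected hash (recorded at validation
    time). Returns the list of artifacts that no longer match, which should
    be empty before the handoff to Operations proceeds.
    """
    return [path for path, expected in manifest.items()
            if fingerprint(path) != expected]
```

At handoff time, the team records a manifest of the validated training data extract and serialized model, and Operations re-runs `validate_release` before wiring the capability into the target system or exposing it behind an API.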
Deployment, monitoring, and governance can be handled either with a pre-packaged solution or with a custom-built stack of open-source tools and frameworks (Pachyderm, etc.). Many pre-packaged solutions are optimized for a specific application; others can handle many types of models and be customized. If you already have internal tools and frameworks in place for other models, consider extending what is currently in use to ease adoption of your AI/ML capability.
As with any new business process or capability, change management activities preceding deployment result in broader adoption and less user frustration. The outcome for users is that they better understand how to incorporate the new AI/ML capability into their workflow and what insights they can gain from it. A combination of training documentation, in-person training, roadshows, and even training material inside the operational tool (if applicable) are all useful methods of knowledge transfer. Because any change to the capability may affect a user's workflow, perform this assessment with each capability update.
A key to a successful rollout is involving users as the capability is transferred to Operations, before it is deployed. They can then provide input on how users will interact with and respond to the new insights, or on how to optimize or modify an impacted business process.
Like any new technology, without appropriate care and feeding the new capability's performance will decay over time and eventually wander into dangerous territory. To keep the AI/ML capability healthy and avoid acting on bad insights, there are two aspects to address. The first is data integrity: the data continues to be updated at an appropriate interval, and its quality remains sufficient. The second is ensuring the model continues to perform as the organization needs. These two aspects are key to your success and may be the “make or break” case for utilizing AI/ML in your firm.
To make monitoring easier, automate both the data quality checks and the AI/ML performance checks in Operations. Dashboards with visualizations are good tools for monitoring performance and data quality at a glance. Set up notifications so key individuals can take corrective action quickly, and have a communications plan in place for when the capability is unavailable or goes down.
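An automated check of this kind can be quite simple to start with. The sketch below covers the two aspects discussed above, data freshness and basic completeness, and returns alert messages that a notification system could forward to the responsible individuals. The thresholds and field names are illustrative assumptions; each organization will tune its own.

```python
from datetime import datetime, timedelta, timezone


def check_data_quality(rows, last_updated,
                       max_age_hours=24, max_null_rate=0.05):
    """Run simple freshness and completeness checks on an incoming batch.

    `rows` is a list of record dicts; `last_updated` is a timezone-aware
    timestamp for the batch. Returns a list of human-readable alert
    messages (empty when all checks pass). Thresholds here are example
    defaults, not recommended standards.
    """
    alerts = []

    # Freshness: is the data still being updated at the expected interval?
    age = datetime.now(timezone.utc) - last_updated
    if age > timedelta(hours=max_age_hours):
        alerts.append(f"Data is stale: last updated {age} ago")

    # Completeness: how many records are missing at least one value?
    if rows:
        null_rate = sum(1 for r in rows if None in r.values()) / len(rows)
        if null_rate > max_null_rate:
            alerts.append(
                f"Null rate {null_rate:.1%} exceeds limit {max_null_rate:.0%}")
    else:
        alerts.append("No rows received")

    return alerts
```

In practice these messages would feed a dashboard or an on-call notification channel, and a similar function would track model performance metrics against agreed thresholds.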
It is a best practice to hold a major review of the AI/ML capability at regular intervals, beyond routine monitoring activities. The state of the business may change enough that the capability no longer accurately represents the current state. There may also be new internal data or external data sources that can greatly enhance performance or applicability to the business.
As this is potentially the first foray into AI/ML for your organization, it will likely receive a lot of attention. To ensure a smooth adoption, provide timely support for special training requests or any issues that surface. Identify subject matter experts who are responsible for addressing data issues and functionality issues; this may be a combination of a business product owner, a data owner, and an infrastructure owner to cover any type of problem. It is also wise to earmark maintenance resources for a period after launch to ensure proper support is available if needed.
There is a repeatable process that occurs with every iteration, starting with training the model on updated data and/or additional data fields. Once proof-of-concept training and testing is complete, the capability is registered as a new configuration, then validated by a team of business and technical subject matter experts. The validated capability then moves to Operations, and monitoring functions start. Once a performance issue is identified or new features or data are desired, another iteration begins.
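The register → validate → deploy cycle above can be sketched as a tiny in-memory model registry. This is a minimal illustration of the lifecycle gates, not a real registry (a production setup would typically use a tool such as MLflow or an equivalent pre-packaged solution); the class and method names are assumptions made for the example.

```python
from datetime import datetime, timezone


class ModelRegistry:
    """Toy registry enforcing the register -> validate -> deploy cycle."""

    def __init__(self):
        self.versions = {}   # version -> metadata about that configuration
        self.production = None

    def register(self, version, metrics):
        """Record a newly trained configuration and its test metrics."""
        self.versions[version] = {
            "metrics": metrics,
            "status": "registered",
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }

    def validate(self, version, approved_by):
        """Mark a configuration as validated by business/technical SMEs."""
        self.versions[version]["status"] = "validated"
        self.versions[version]["approved_by"] = approved_by

    def deploy(self, version):
        """Promote a validated configuration to Operations."""
        if self.versions[version]["status"] != "validated":
            raise ValueError("Only validated versions may move to Operations")
        self.production = version
        self.versions[version]["status"] = "production"
```

The key design point the sketch encodes is the gate in `deploy`: a configuration cannot reach Operations without passing through validation, which mirrors the review step described above.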
Example scenarios where your AI/ML capability may need retraining include: a shift in the marketplace or in your business model or performance (COVID-19 is a prime example), newly available third-party or internal data, or a need to capture the current business environment.
Here at Strive Consulting, our subject matter experts team up with you to understand your core business needs while taking a deeper dive into your organization's growth strategy. Whether you're interested in AI/ML implementation or an overall data and analytics assessment, Strive Consulting is dedicated to being your partner, committed to your success.