As AI proliferates, it is not just data scientists who need to learn AI. AI Literacy is fast becoming a requirement for professionals from all industries. I recently participated in an overview of AI for Finance Professionals, organized by SLASSCOM Sri Lanka for finance professionals in Asia. Here are the key items that I covered:
- AI can seem intimidating. Until recently (and sometimes even now), many people believed that AI is accessible only to those with PhDs and deep knowledge of math. That level of expertise is required if you want to create new types of AI, but not if your goal is to use AI in a domain where you already have relevant expertise. In that case, you only need to understand enough about AI to apply it effectively in your domain, know what tools and services are available to you, and be aware of the AI regulations you will need to follow to use AI safely and securely in your domain.
- The rest of this article addresses these three areas for the finance industry in general.
The AI Lifecycle
While there are thousands of AI techniques and tools available, the AI lifecycle in business tends to follow a predictable pattern – shown in Figure 1. The lifecycle begins with identifying the business need. Next, relevant data is gathered and processed. Once the data is available, an AI algorithm is selected through experimentation and evaluation. A model that works well at the experimental level can then be deployed (put into production) and integrated with the business. Once integrated with the business use case, the AI is monitored to determine whether it has in fact helped address the business need. This cycle often repeats many times, with the AI improving in each iteration based on lessons from previous ones.
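The experimentation and evaluation stage of the lifecycle can be sketched in a few lines. This is a minimal illustration, not a production workflow: it uses a synthetic dataset and two arbitrary candidate algorithms, where in practice you would use the business data gathered in the earlier stages and candidates appropriate to the use case.

```python
# Sketch of the "experiment and evaluate" lifecycle stage: score several
# candidate algorithms on the same data and carry the best one forward.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the business data gathered earlier in the lifecycle.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Evaluate each candidate with 5-fold cross-validation.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}

best_name = max(scores, key=scores.get)
print(f"Selected model: {best_name} (mean accuracy {scores[best_name]:.3f})")
```

The selected model would then move on to deployment, while the monitoring stage later feeds back into the next round of experimentation.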
While the lifecycle itself is generally similar across industries, the specifics within each stage are determined by the industry and its requirements. For example, heavily regulated industries such as Finance will likely enforce security requirements across all stages involving the data and the AI, and require extensive documentation before an AI that can affect people’s livelihoods is allowed into production. As an example, you can see an SEC requirement for model risk management here.
Lots of Tools!
The good news is that there are many tools now available to help carry out the AI lifecycle outlined in Figure 1. Tools range from turnkey services to infrastructure software – so you and your organization can pick the ones that match your (desired) level of expertise. For example:
- If your goal is to have the AIs be created and used by finance domain experts with minimal to no data science experience, there is a range of SaaS (software as a service) options where pre-trained AIs can be adapted to meet your needs. These are usually for more generic services (such as customer-facing chatbots, marketing intelligence, etc.) that do not require custom sensitive data from your organization.
- If you need to build a custom AI that learns from your data, there are still many tools available that range from no-code to low-code to code. You can find some examples here, and there are many more. In addition, the trend of AutoML has made it possible for many professionals to access a large range of AI algorithms without requiring a deep understanding of how they are built (or the coding expertise required to program them). It does, however, help to understand which algorithms are suitable for different use cases, particularly if your organization or the use case is subject to industry regulations.
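To give a flavor of what AutoML tools automate, here is a rough sketch using scikit-learn's grid search to try multiple algorithms and settings in one pass. Real AutoML systems search far larger spaces of algorithms and hyperparameters, but the idea is the same: the practitioner specifies candidates, and the tool finds the best configuration. The dataset here is synthetic and purely illustrative.

```python
# Rough sketch of an AutoML-style search: one search object tries
# different algorithms and hyperparameters and reports the best combination.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A one-step pipeline whose "model" step gets swapped during the search.
pipe = Pipeline([("model", LogisticRegression())])

# Each dict searches a different algorithm with its own hyperparameters.
param_grid = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [DecisionTreeClassifier(random_state=0)], "model__max_depth": [3, 5, 10]},
]

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print("Best configuration:", search.best_params_)
print(f"Best cross-validated accuracy: {search.best_score_:.3f}")
```

No-code and low-code tools wrap this same search behind a visual interface, which is why understanding which algorithms suit which use cases still matters even when you never write the code.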
Risk Management
As referenced several times above, Finance is one of the most regulated industries – not just in AI but in general. Unlike some industries, where AI regulation is just beginning, finance already has regulations for data privacy and model risk. In addition, newer general regulations covering consumer privacy and the right to explanation, in laws such as the GDPR and the CCPA, also apply. Some additional risk management areas to consider when applying AI include:
- Data privacy (and good data practices). Are you allowed to use the data that you are planning to use to train your AI? Are you handling the data carefully to minimize risk? You can find some guidelines for good data practices here.
- Fairness and Bias (AI Trust). What are you doing in your AI lifecycle to ensure that your AI is not biased against any subset of the population?
- AI correctness in production. Once your AI is in production, what steps are you taking to ensure that the AI is making reasonable predictions? See a reference here for an overview of AI integrity.
- AI security. What steps have you taken to make sure that your AI cannot be hacked, and to detect it if it is?
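The fairness question above can be made concrete with a simple metric. The sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups, using toy illustrative data; real fairness audits use many such metrics, and a production monitoring pipeline would track them continuously.

```python
# Minimal sketch of one fairness check: demographic parity difference,
# the gap in positive-prediction rates between two groups.
def positive_rate(preds, groups, group):
    """Fraction of positive (1) predictions for members of the given group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

# Toy data: model decisions (1 = approve) and the group each case belongs to.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = abs(positive_rate(preds, groups, "a") - positive_rate(preds, groups, "b"))
print(f"Demographic parity difference: {gap:.2f}")
```

A large gap does not prove discrimination on its own, but it flags that the model's decisions warrant closer review within the AI lifecycle.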
AI has already delivered tremendous value for finance, and we are likely only at the beginning of what it can achieve. The three areas above will hopefully help finance professionals develop the AI Literacy needed to bring this value to their businesses.