How Financial Companies Can Benefit From Generative AI

Dos and Don'ts

By James Lois

Generative AI is artificial intelligence that generates an output such as text, images, audio, or video. The technology has surged in popularity since late 2022, when tools such as ChatGPT became widely available. We believe that financial institutions of all kinds can reap some of the benefits of this technological breakthrough.

But first, a few words of caution. Yes, generative AI can unlock a lot of value for financial companies, but in this industry getting things wrong can lead to serious consequences, such as fines and damage to your hard-earned reputation. So always be careful when implementing processes that involve AI. Remember that LLMs (Large Language Models such as ChatGPT, Claude, LLaMA 2, and Google's Bard/Gemini) tend to hallucinate (confidently state things that are not true), so they can never be 100% reliable.

“A computer can never be held accountable” (IBM training presentation, 1979)

We also want to dispel the myth that financial companies must spend millions of dollars to benefit from AI. That only applies to big players making risky moves, such as rolling out custom models for mass-market adoption. Instead, we will explain what financial companies and institutions can do today to start benefiting from LLMs. This article focuses only on text-to-text generative AI (LLMs), since we believe it has the highest potential, but keep in mind that there are other modalities, such as text-to-image and audio-to-text, that financial institutions could also benefit from.

Ways To Benefit From AI

Now that you know the limitations, let's get to some good ideas for applying generative AI in the financial industry.

AI to Assist Employees In Communication (Do)

LLMs are good at producing coherent text, so they can draft acceptable blog articles, email replies, and internal memos. This can speed up time-consuming writing tasks: crafting the perfect email to explain something to your boss, getting the wording of a company-wide message just right, or replying to regulators with precise language.

Financial institutions can also use generative AI to iterate on and improve the copy on their websites, their internal materials, and their customer communications, for example, by making complex financial terms and processes easier to understand. AI can write well, fast, and a lot. But it cannot be trusted blindly, so a human must always verify its output.
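
As a minimal sketch of what this looks like in practice, the snippet below uses OpenAI's Python client to rewrite a jargon-heavy sentence in plain language. The model name, prompt wording, and example text are our own illustrations, and any provider with a chat API would work the same way. The important part is the last step: a human reviews the draft before it goes anywhere.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative jargon-heavy text to simplify.
JARGON = (
    "Per the amortization schedule, the outstanding principal accrues "
    "interest at a variable rate indexed to SOFR plus a 250bps margin."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works
    messages=[
        {"role": "system",
         "content": "Rewrite financial text in plain language a retail "
                    "customer can understand. Do not change the facts."},
        {"role": "user", "content": JARGON},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a human MUST review this draft before it is published
```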

AI to Assist Programmers (Do)

Most coders already use some form of AI assistance to speed up their work. As of today, AIs cannot code full applications by themselves, but they are still quite useful: they can produce first drafts of small functions, help programmers come up with algorithms, and even remind them of syntax and the conventions of their preferred language. Programming experience is still required, however, to make complex code reliable and maintainable, especially in large code bases.

Another good use for LLMs is legacy software maintenance. Tools such as GitHub Copilot and ChatGPT can help train programmers on older languages and infrastructure, such as COBOL.
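
As a small, hedged sketch of that workflow (the COBOL fragment and the prompt are our own illustrations), this is the kind of request that tends to work well:

```python
from openai import OpenAI

client = OpenAI()

# Illustrative fragment of legacy COBOL a maintainer might inherit.
legacy_code = """
       MOVE ZERO TO WS-TOTAL.
       PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > 12
           ADD WS-MONTHLY(WS-I) TO WS-TOTAL
       END-PERFORM.
       COMPUTE WS-AVERAGE = WS-TOTAL / 12.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "Explain what this COBOL code does, line by line, "
                   f"for a developer who has never used COBOL:\n{legacy_code}",
    }],
)

print(response.choices[0].message.content)
```

As always, the explanation is a starting point for the programmer, not a substitute for testing against the real system.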

AI for Research & Ideation (Do)

LLMs are surprisingly good at coming up with new ideas. They frequently beat humans in tests like "think of as many uses for a paperclip as possible". This kind of brainstorming is tiring for our brains, but not for LLMs. We could almost call them "creative". So don't be afraid of bringing them in when you need fresh ideas. After all, a human will be the one to judge whether those ideas are any good.

When dealing with large documents, such as regulations and guidelines, LLMs have two useful applications. First, they can summarize the key points of large bodies of text, making them simpler to understand. Second, they can find answers to questions inside the text; this is why so many new applications let users "chat with their PDFs". Since the AI can and sometimes will get it wrong, it's a good idea to ask it to tell you exactly where in the text the answer came from, that is, to cite the specific passage.
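
As a rough sketch of citation-grounded Q&A (the file name, question, and prompt wording are ours, and a real deployment would need to split documents longer than the model's context window into chunks):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical file; long documents would need chunking to fit
# the model's context window.
document = open("regulation.txt", encoding="utf-8").read()

question = "What are the new reporting deadlines?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the document below. After your answer, "
                    "quote verbatim the exact passage(s) you relied on. "
                    "If the document does not contain the answer, say so."},
        {"role": "user",
         "content": f"DOCUMENT:\n{document}\n\nQUESTION: {question}"},
    ],
)

print(response.choices[0].message.content)
# A reviewer can now check the quoted passages against the source text.
```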

A good application of this functionality is using generative AI as a regulation-change consultant. You can, for example, feed it a big document and ask what the main points are and how your organization needs to adapt to them. This is helpful for getting the big picture and knowing what to expect, but it's not enough. Since the cost of errors is usually high in this industry, until the day AI is demonstrably better than humans at understanding nuanced legal documents, someone in the organization still has to read the document and make sure nothing is missed.

AI for Sales and Marketing (Do)

It's not hard to feed a model client data and ask it for financial recommendations. For example, you could give a model a client's current positions, their risk profile, and relevant market information, and ask for 5 investment ideas you can pitch. When using external models that you don't host (such as ChatGPT), always be careful not to give away customer data, since providers such as OpenAI may use conversation data to further improve their models.
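
As a hedged sketch of that precaution (the field names and scrubbing rules are illustrative): strip identifying fields before the data ever reaches an external API, and send only what the model needs to reason about.

```python
from openai import OpenAI

client = OpenAI()

# Full client record as it might exist internally (illustrative fields).
client_record = {
    "name": "Jane Doe",             # PII - must not leave our systems
    "account_id": "ACC-4411-0392",  # PII - must not leave our systems
    "risk_profile": "moderate",
    "positions": {"US equities": 0.55, "corporate bonds": 0.30, "cash": 0.15},
}

# Keep only the non-identifying fields the model actually needs.
safe_payload = {k: client_record[k] for k in ("risk_profile", "positions")}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system",
         "content": "You draft investment ideas for an advisor to review. "
                    "These are suggestions, not advice given to a client."},
        {"role": "user",
         "content": f"Client profile (anonymized): {safe_payload}. "
                    "Suggest 5 investment ideas the advisor could pitch."},
    ],
)

# Reviewed by the advisor; never sent to the client unfiltered.
print(response.choices[0].message.content)
```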

Projects To Avoid

For all the good ideas we have seen, we still believe that some uses of AI in the financial industry are not worth the risk and can be outright detrimental.

Virtual Assistants (Don't)

Research and experience both point to the fact that users don't like chatbots. Companies often use them to paper over bad website navigation, poor processes, or complex policies. Users prefer faster, more reliable workflows, such as clear navigation that makes relevant information easy to find, or well-designed forms to submit. Not that good chatbots don't exist, but they are the exception.

In the financial industry especially, having your users talk to an AI that is prone to hallucination is a recipe for disaster. Consider what happened when Air Canada's chatbot hallucinated a refund policy and told a customer something wrong: a tribunal held the airline liable. A similar mistake involving financial transactions or regulations could imply losses in the millions.

From a CFPB report on chatbots in consumer finance:

“...financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology. Like the processes they replace, chatbots must comply with all applicable federal consumer financial laws, and entities may be liable for violating those laws when they fail to do so. Chatbots can also raise certain privacy and security risks...”

So, you can't blame anything on the chatbot. You're liable for everything your hallucinatory AI employee says.

A non-AI chatbot for your website, on the other hand, is a much better idea. It can simplify navigation on daunting websites by providing shortcuts to the most requested pages.
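
A deliberately simple, rules-only version needs no AI at all. Here is a sketch (the keywords and URLs are placeholders) that either routes the user to a known-good page or hands off to a human; crucially, it never invents an answer:

```python
# A non-AI "chatbot": keyword routing to known-good pages.
# Keywords and URLs are placeholders for illustration.
SHORTCUTS = {
    "fees": "https://example.com/fees",
    "wire": "https://example.com/wire-transfers",
    "statement": "https://example.com/statements",
    "fraud": "https://example.com/report-fraud",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, url in SHORTCUTS.items():
        if keyword in text:
            return f"You can find that here: {url}"
    # No match: hand off to a human instead of guessing.
    return "I couldn't find that. Let me connect you with a person."

print(reply("How do I send a wire transfer?"))
# -> You can find that here: https://example.com/wire-transfers
```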

AI to Provide Information (Don't)

In 2023, Bloomberg announced BloombergGPT, which allows it to, for example, provide quick summaries of earnings calls. Still, we don't recommend that most companies try to build something similar unless they have several million dollars to spend on a risky project, one that might still result in a hallucination-prone AI.

If you want to provide information to users of your website and applications, build dedicated pages that do so. That way, you know you have a reliable, tested answer to your users' queries.

LLMs generate answers by sampling from a probability distribution over possible next words, so there is randomness built into every response. That is why you rarely see the exact same answer twice in ChatGPT: even when the question and the model are identical, the sampling is not. This makes AI a less reliable and credible source of information, since it will give slightly different answers to different users.
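
You can see this for yourself. In the small sketch below (model name illustrative), the same question asked twice at the default temperature will often come back worded differently, while temperature 0 makes the output close to, though not guaranteed to be, deterministic:

```python
from openai import OpenAI

client = OpenAI()
question = [{"role": "user", "content": "In one sentence, what is a bond?"}]

# Default sampling: the same prompt can yield different wordings each run.
for _ in range(2):
    r = client.chat.completions.create(model="gpt-4o-mini", messages=question)
    print(r.choices[0].message.content)

# temperature=0 always favors the most likely next token, so the output is
# near-deterministic (providers do not promise bit-exact repeatability).
r = client.chat.completions.create(
    model="gpt-4o-mini", messages=question, temperature=0
)
print(r.choices[0].message.content)
```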

Big AI Projects (Don't)

Unless you manage a huge organization with a hefty budget, our advice is to start small. We have seen a fair number of fringe projects, such as AI sales agents, AI trading bots, and even AI companies where multiple AI agents interact with one another. These belong, for the foreseeable future, in the realm of science fiction.

Conclusion

Generative AI presents both promising opportunities and significant risks for financial institutions. When used appropriately, AI can enhance communication, assist with coding and research, generate new ideas, summarize complex information, and help craft financial recommendations for clients. However, the tendency of AI models to hallucinate or produce inaccurate outputs demands caution and human oversight. While AI-powered virtual assistants and information providers may seem appealing, their inherent unreliability and the legal liability attached to providing incorrect information make them unsuitable for most financial institutions at this stage. Similarly, large-scale, costly AI projects should be approached with extreme caution.

The prudent approach for most financial organizations is to start small, focusing on well-defined use cases where AI can augment human capabilities without compromising accuracy or compliance. Regardless of the application, human oversight and verification remain critical to mitigating the risks of AI's limitations. By adopting a measured and responsible approach, financial institutions can harness the benefits of generative AI while keeping their reputations intact and minimizing legal and financial exposure.

Ready to start your next project? Contact us