By: Pavel Samiev, General Director of the BusinessDrom Analytical Agency
Today, self-learning neural networks and robotic data processing are increasingly being embedded in the business processes of banks, insurers, and investment companies: trading, underwriting, actuarial pricing, fraud detection, credit scoring, market scenario modeling, and customer communications automated through voice and text chatbots. The deeper AI penetrates a company's business processes, the more likely and the more significant become the situations in which someone must take responsibility for decision-making, risk management, and access to data. Bankers themselves admit that they do not always understand why, for example, credit-scoring robots assign different scores. As a result, it is hard today to see the benefits of AI in banks' accounts and statements: such benefits are difficult to calculate and even harder to formalize.
As AI grows in importance, obvious questions arise. First, who will control the quality of AI: the regulator or an auditor? After all, a robot acts, but it is not a legal person. Who will be liable for the decisions it makes: the developer of the robot, the bank that uses it, or the manager who oversees its performance? A related question is what information an AI developer can disclose to the regulator, given that such information constitutes know-how, intellectual property, and a competitive advantage that cannot be made public. The second question concerns cybersecurity: as soon as the regulator begins auditing and monitoring AI, the threat of industrial espionage and loss of customer data will grow. Third, what fines or other sanctions will the regulator impose in the event of misuse of AI technologies? Will companies need to report such risk exposures separately?
Obviously, financial regulators across the globe, including the Bank of Russia, will soon need to formulate rules for the application of AI in banking and other sectors of the financial market. Regulators will have to recall Asimov's Laws of Robotics and adapt them to AI. Perhaps they will read something like this: a robot must not cause harm to its bank or the bank's clients, must obey only validated algorithms, and must ensure cybersecurity.
In individual cases, however, these principles may come into conflict with one another. That will require setting priorities, and market participants' priorities differ. Regulation should help avoid such conflicts of interest.