Wall Street began restricting employees’ use of ChatGPT amid growing nervousness over the exposure of sensitive data. According to the latest media reports, JPMorgan Chase has restricted its employees from using the ChatGPT chatbot. Accenture, a technology consulting company with more than 700,000 employees, also warned employees not to expose customer information to chat tools such as ChatGPT.
ChatGPT has become an internet phenomenon in recent months, sparking debate about its future potential, from composing Shakespearean-style poetry to creating stock portfolios. Some analysts believe that ChatGPT will have an impact on multiple industries, and Wall Street seems to be no exception. From financial planning to stock trading, ChatGPT will affect Wall Street in many ways.
Some companies are already experimenting with such tools to improve productivity. However, data security and legal experts have expressed concern that information shared with chatbots could be used to fine-tune their algorithms or could be accessed by outsourced workers who are paid to check their answers.
Heavily regulated banks and financial institutions have put up guardrails as employees adopt the new technology. Analysts note that banks and financial institutions are understandably cautious about introducing any new technology. During 2021 and 2022, U.S. regulators issued fines totaling more than $2 billion to 12 banks over employees’ unauthorized use of private messaging services.
There are also concerns about the accuracy of chatbots. While chatbots are designed to write human-like sentences, they have trouble distinguishing fact from misinformation. They can also be tricked into giving bizarre answers, such as threatening the humans testing the service, or producing outright nonsensical responses. At its official launch event, Microsoft’s Bing search chatbot gave completely inaccurate answers that the Microsoft team failed to catch, such as making up numbers when asked to summarize a financial earnings press release.
Last month, Amazon warned employees not to share confidential information with chatbots due to privacy concerns, according to media reports. An Amazon lawyer urged employees to be cautious with data, warning: “We don’t want its output to contain confidential information like ours (I’ve seen examples where its output closely resembles existing material).”
A spokesperson for Behavox, a technology company that works with large banks and financial institutions to monitor internal security risks, said: “Over the past month, there has been an uptick in concerns raised by customers about the use of ChatGPT, particularly when it involves the use of private or proprietary data.”
Jon Baines, a data protection specialist at law firm Mishcon de Reya, said there were also questions about whether companies using ChatGPT risk violating data protection laws when the software churns out inaccurate information.
Baines said: “If the output of the bot involves the processing of personal data, then the question arises as to what extent this unavoidable inaccuracy may violate the GDPR’s requirement for accurate processing of personal data.”