2024 Playbook Series #1, Session #3: What Are the ...
Video Transcription
I'm gonna be moderating the discussion today on AI and compliance. Just a couple of announcements before I introduce our wonderful panelists. First, we have another webinar series coming up throughout the summer, and that's on adding value to outshine your competition. There will be three different webinars in June, July, and August, so look out for further information about that. That's coming up and really exciting. But today we're gonna be focusing on the legal, compliance, and cybersecurity implications of artificial intelligence in 2024. And I'm so excited for today's webinar. We have some awesome panelists, so I just wanna start by introducing them. If each panelist could share who you are, why you're here, and what expertise you bring to the table, that would be great. Can we start with Myles, then Layla, and then Alex?

Sure. Thank you so much, Summer. Welcome, everyone. My name is Myles Bleschner. I'm a director at ACA, Director of Investment Advisory Consulting at ACA Compliance. We're a national compliance consulting firm. I've been in the industry about 30 years. I like to think that I've seen it all, although every day I see something new. These days, what we're seeing a lot of is AI. Like I said, I've been in the industry for 30 years. I deal with wealth managers specifically, and I also have some private fund and registered investment company clients. What I like to do is really understand what the wealth manager's business model is, how we can make it better, and how we can utilize things like AI and some other items to streamline the business. On to you, Layla.

Hey, everyone. My name is Layla Shaver. I'm the founder and managing attorney at My RIA Lawyer. Our firm provides legal and compliance services to the financial services community. As many of you know, the RIA space is incredibly entrepreneurial. So we focus on the RIA space, but where we really shine is helping advisors and firms understand how the brokerage, the funds, the insurance, and the advisory all work together, and the legal and compliance implications of those different lines of business. I'm a self-professed compliance nerd. I love compliance. I love having fun with compliance. And I also like to read through all of the litigation releases and rule releases so that you don't have to, and then help you understand how those rules apply to your business.

Thank you. And Alex.

Yeah, thanks, Summer. Hi, everyone. My name is Alex Nissenbaum. I'm a partner at the law firm Blank Rome, resident in our California office here in Orange County. My day-to-day work spans a number of industries, focusing on data privacy and security compliance and other technology matters. In that context, AI is a very hot topic right now for our clients across industries, who are trying to figure out how they can use it in a way that is legally compliant and works for their business plans.

Thanks, everybody. And I don't know if I introduced myself earlier. Sorry, I forgot. My name is Summer Perry, and I'm a financial advisor, happy to be here for this discussion today. As you can see, we have a wide variety of panelists, and I'm super excited to hear all things compliance today, from the legal components to guidelines and documentation for AI. So we're going to get into all of that. I've just got a brief overview of what we're going to get into.
I've got this slide here to show what we're going to talk about. We really want to hit the specific red flags that advisors should know about when it comes to AI, and then also vetting software for safety concerns. That's a huge topic right now in our industry. And then of course the guidelines around AI: what should we be doing and implementing in our processes? So without further ado, I'm going to stop sharing my screen so you can see the panelists nice and clearly. I want to start by asking a general question to all the panelists, and that is: what are you hearing from advisors about AI? What are the hot topics right now?

I'll start if that's okay. You know, like I said, I talk to wealth managers every day, and a lot of what we're hearing is: can I use this for note-taking? Can I use this internally? Am I already using this without even knowing it? And then the big questions, which are marketing, privacy, and data protection. Those are problems, along with conflicts of interest, but we're definitely hearing a lot of questions on both sides of the aisle, whether it's, can we use this to increase productivity internally, and how do we use this externally?

Great. So for us, we're hearing a lot from advisors and firms that are very interested in understanding how they can implement AI, what tools they may have now that have added an AI component, and if they don't, what opportunities there are. We also talk about how to review from a vendor due diligence perspective, how to implement, how to train your team, and really, operationally, how do you implement these new tools? How do you train your team on how to communicate the use of those tools with clients and prospects? And then ultimately what everyone's worried about is: what the hell do I tell the regulator during my next exam about how I use these tools? So we also want to make sure we're setting them up to be confident during their next exam, because they will be asked about AI, how they're using it, and how they're talking about it with clients.

Absolutely. I can say I've heard similar things from clients across every industry that we talk to. I mean, there's interest and pressure to use AI to make work more efficient, improve services, and stay competitive with others that are using the technology. And clients come to us to ask: how can we do that and stay compliant? And it's a real challenge, right? Because the legal landscape is changing extremely rapidly, and in many cases we don't even know what the regulators are going to focus on just yet. So we help them think pragmatically about checks on AI, whether it's before use, thinking about the specific use cases and the risks, or after, putting in processes to make sure there's appropriate oversight.

I appreciate your insights here. I really hope we can touch on all the topics that you mentioned today. And for advisors on this call, we do have a Q&A feature. I have a lot of questions to ask these three awesome panelists, but if you have more to add or you want some clarification, feel free to add a question there, and I'll be looking out for that. So let's start with some legal issues. Alex, what kind of issues exist with putting personal information into generative AI tools? Generative AI tools, just to clarify, are usually a ChatGPT type of AI, correct me if I'm wrong, but that's my understanding.
And tell us about the potential compromises when we're putting client data into an AI tool like that.

Sure. I mean, privacy is a really big issue when it comes to AI in general, particularly the large language models that power generative AI. And that's because training these generative AI tools requires massive data sets. Common sources the vendors are gonna be using to train these models are data scraped from the internet and licensed data sets, and all of that can include personal information. A lot of the AI tools that are gonna be available to advisors in the market might allow the business to further train the tool with their own data sets, which might also include personal information for which they're directly responsible. For example, a use case that we've seen really improve efficiency across a variety of industries is using it to enhance customer support delivery. And to do that, you need to input historical customer support journey and outcome data that might include customer and personnel names, customer account data, contact information, demographics, and purchase information. And you can see how a tool used to assist with the creation of a financial plan, or something else the advisor would be more familiar with, would also need that sort of financial and demographic information to profile the plan.

And using personal information in an AI model in general makes it very challenging to comply with the touchstones of privacy that we're seeing here in the U.S., the generally accepted privacy principles that underpin many U.S. and global privacy laws, like data minimization, purpose specification, and transparency, that is, telling people what you're doing with their data. So certain state laws, for example, including comprehensive privacy laws in California, Virginia, and Colorado, require that meaningful notice be provided to consumers at or before collection of the personal information. And that's really what we're trying to do: identify what you're collecting and why you're using it. So you can see all kinds of issues as this field evolves so rapidly. For one, a notice prepared before a business starts using a particular AI tool may not cover all of the use cases that you're using the AI tool for. Also, vendors very commonly take the data from customers and use it for secondary purposes beyond whatever the primary purpose of the tool is, and that secondary use might not be appropriately captured in the notice. As a result, the individual might not actually get the notice they're entitled to under applicable law, which is supposed to give them the information they need to decide whether to give you their personal information. And another common problem that we see across clients is that business teams are often taking a bigger-is-better approach when selecting what data to put into AI tools and how to use them. That can result in selection and use of irrelevant data, or personal data that is not needed for the specific use case. And that's going to be counter to a business's obligation to use the minimum necessary data for a specific purpose.
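To make the data-minimization point concrete, here is a minimal sketch in Python of scrubbing obvious client identifiers out of text before it goes to a generative AI tool. The patterns and placeholder labels are hypothetical; a real deployment would use a vetted PII-detection library and human review, not a handful of regexes.

    import re

    # Hypothetical patterns for obvious identifiers; real PII detection
    # needs a vetted library, not a few regexes.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "ACCOUNT": re.compile(r"\b[A-Z]{2}\d{8}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def minimize(text: str) -> str:
        """Replace obvious identifiers with labeled placeholders so the
        prompt carries only the minimum data needed for the task."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = minimize(
        "Summarize the plan for Jane Doe, SSN 123-45-6789, "
        "account US12345678, jane@example.com."
    )
    print(prompt)  # identifiers are replaced before any API call is made

Note that names and free-text details still get through in this toy version, which is exactly why the review and oversight the panel describes matters.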
And to the extent businesses have to respond to data subject access rights or things like that, which are becoming increasingly common in state comprehensive laws, it can be very difficult when that data goes into a giant data lake for training to unwind that data or separate it, to delete it or modify it, because that has an impact on the model and then on the output that it's providing. So I will say that, interestingly, the buzz about AI and the speed at which new AI solutions are being developed and deployed actually seems to be helping nudge the federal government, and the states to a lesser extent, to tackle privacy issues in a more comprehensive way, because you really need clear rules of the road to understand how you're able to use the data for these models. And that's very important here in the US so that we can maintain our desired leadership role in AI technology.

Yeah, and I appreciate that insight. I think as an advisor using AI, for example ChatGPT, sometimes that interface feels like you're chatting with a friend almost, so you feel comfortable putting in information that you wouldn't put into Google. And from what you're saying, it sounds to me like we should be looking at this like Google. Would you put your client's date of birth and net worth into Google? Probably not. AI is similar. It's not necessarily a search engine, but that data can be used and given to other people. So we need to be really vigilant in our use of it and make sure we're treating it like it is a search engine. So I appreciate that insight. I wanted to ask, and anyone can answer this, but someone mentioned meeting notes earlier, and there is now AI software that can take meeting notes. Are those off the table entirely when it comes to privacy issues, or where does that type of software land?

Well, I can speak a little generally before Myles and Layla weigh in on specific advisor compliance concerns, but again, you have to consider the data and the situation just as you would any other sort of input. Just as an example, at our law firm we have these types of tools, and we have client confidentiality concerns just like advisors. So we have to consider the context and the risk, right? If there's a meeting with a client regarding a confidential representation, use of a recording tool that makes the communication available to a third party could have a negative impact on attorney-client privilege, and that's a big risk. Conversely, if there's an internal meeting on an administrative or operational matter, it could be very helpful to summarize the meeting, make it available to those who were not able to attend, and capture action items and other notes. So really looking at the use case and the risk associated with it will help you evaluate whether it's a good idea to use any tool, including note-taking.

And implementation-wise, I have heard about some things that can be done, like an agreement beforehand with the client to let the note-taker on. Myles or Layla, could you speak to that?

So before I do, before everyone rushes out to go get a note-taking tool, I'm gonna say this: you don't always want to capture a simultaneous transcript or a recording of what's happening in your meetings.
And I say this because, from the RIA perspective, from the financial advisor perspective, anything you create becomes part of your books and records. So if you have this note-taking tool and there's a situation where you have a client complaint, that note-taking tool and its output, the document it created that documents everything you talked about, could be a double-edged sword. It could work in your benefit; it could also work against you. So I wanna say that first: be careful, before you rush out, about what the implication of using that tool is. You also have to think about things like this: I had a call with a vendor today, and the guy was using a free AI-powered note-taking tool. What safety are you getting with a free subscription to a note-taking tool? They're offering it for free, so what the hell do they get out of it, right? They're getting something out of that relationship if they're offering this tool to you for free. So those are opportunities for you to ask: is there a paid subscription that would work, that would offer me more protections, that would offer my clients more protections? I want everyone to think about those things when they're thinking about rushing out and utilizing these different tools.

The other thing I want you guys to think about is that AI is the hot, sexy thing on the street right now, but there may be other tools that you have in your arsenal already that can help you and your firm be more efficient without exposing all that data to AI tools that you may not completely understand, right? So to Alex's point, there are primary uses for these tools, and these companies also have secondary uses for that data. And so one of the concerns that comes up is: when you're doing your vendor due diligence, how much do you understand about what those secondary uses are? And let's be real, once it's on the internet, it doesn't go away. I don't care who says they've deleted it; it's there forever. Even if you remove an account, you are not 100% sure that they don't continue to house that data. So those are things that you have to really think about and consider before you rush out and start using these tools.

I appreciate that. So are you saying that there's still no such thing as a free lunch when it comes to AI products, that they have some sort of incentive?

Never a free lunch. All of us girls who've ever been to a bar and been offered a drink know that. There's nothing that's ever truly free. And that's why regulators don't want you to use that term either, because everything has a string attached.

Absolutely. Myles, did you have something to add?

Yeah, I just wanna expand a little upon what Layla said. What we need to understand is that AI is being baked into most of the major companies' products already. Salesforce, Microsoft, Google, Apple, they all have AI components already inside them. Now, that being said, I'm a little less concerned about internal controls or cybersecurity risk with a Microsoft or a Google or a Salesforce or an Apple. One of the things that we're always concerned with is, just like Alex said, brand-new companies are coming up every day, popping up out of the woodwork, and you need to make sure when you are vetting, and we'll get to vetting and we'll get to cybersecurity, that you understand the operating history.
Because if there's no operating history, you might wanna consider something a little more established, something that's been around a little bit longer. I have a lot of clients who talk to me about meeting recording and note-taking integrations, and I remind them that anytime you're doing a meeting recording, you should be considering the privacy issues and everything else. I mean, the transcription tool in Microsoft Teams could capture the same amount of information. And like Layla said, that does go into your books and records. So the point here is, we are going into almost uncharted territory, and we need to be very careful and vigilant. As compliance people, that's kind of our motto: be careful and vigilant. But we wanna make sure that you guys understand that while there are some incredible uses for artificial intelligence, which again has been around for the better part of 30 years in our industry, now, like Layla said, it is the hot topic, and it will absolutely be a focus of the regulators for the next few years moving forward.

Thank you so much. These comments are really helpful in painting the picture of the concerns we need to have as we look at these types of tools, the note-taking tools, recording tools. Let's jump right into vendor vetting, because I think this is something a lot of people are on this call to hear about. What kind of red flags should advisors look for in software they want to use? What kind of questions should we be asking? Alex, if you don't mind answering this directly, and if anyone else comments, that'd be great.

Yeah, sure. I would suspect that most advisors, whose main business is providing financial advice to clients and who aren't giant, are not going to be in the business of developing large language models, so the AI they're going to get is probably from vendors. And diligence is going to be extremely important. The types of questions to ask vendors, or things to review, have to do with data use and the controls in place. Myles mentioned a good one: their operating history. Do they seem trustworthy? If it seems too good to be true, it probably is, right? So, asking whether the vendor uses all data that's input into the tool to train the model. Does the vendor de-identify data before it's input into the training data pool? Does it offer a variation of services? As Layla mentioned, maybe there are tiers of service, a paid service and a free service, where they commit to not using data to train an algorithm that's going to be used for other people. I mean, the variation among vendors is substantial. It ranges from products and tiers that don't use customer input data at all to train a fixed model that's the same across everyone, or that use data only to train a customer-specific instance of a tool (Microsoft is an example of a company that offers that kind of option), to the types of companies that just suck up all input, and potentially any other data that might be associated with or touchable by the product, and even transfer all ownership of the data to the vendor.
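One way to operationalize the diligence questions Alex lists is to capture each vendor's answers in a structured record that goes into the due-diligence file. This is an illustrative sketch only: the schema, field names, and vendor are assumptions, and the questions simply mirror the discussion above.

    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    @dataclass
    class AIVendorDiligence:
        # Hypothetical schema; fields mirror the questions raised above.
        vendor: str
        reviewed_on: date
        trains_on_customer_input: bool      # is our data used to train a shared model?
        deidentifies_training_data: bool
        customer_specific_model_only: bool  # training confined to our own instance?
        data_ownership_retained: bool       # do we keep ownership of inputs/outputs?
        independent_security_audit: str     # e.g. "SOC 2 Type II, 2024"
        secondary_uses_disclosed: str
        notes: str = ""

    record = AIVendorDiligence(
        vendor="Example NoteTaker Inc.",    # hypothetical vendor
        reviewed_on=date(2024, 5, 1),
        trains_on_customer_input=False,
        deidentifies_training_data=True,
        customer_specific_model_only=True,
        data_ownership_retained=True,
        independent_security_audit="SOC 2 Type II, 2024",
        secondary_uses_disclosed="Aggregate usage analytics only, per contract",
        notes="Paid tier only; the free tier trains on inputs. Confirmed with two reps.",
    )

    # Print (or file) the record so the review is memorialized, as the
    # panel recommends below for books and records.
    print(json.dumps(asdict(record), default=str, indent=2))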
So it's really important to get your head wrapped around how the vendor is using the data: what rights they are given, whether personally identifiable information is involved, ownership rights, how they secure it. Make sure a third party is independently vetting that they're upholding what they say they do around security. And it can be daunting sometimes to try to get these answers. I mean, you've got to review the contract terms. You might have to talk to the salespeople about whether there are specific configurations or offerings that might help you limit secondary uses. You might have to review privacy notices. I know a common tactic of vendors is to say all these things in the legal T's and C's and then refer to a privacy notice, changeable at will, that talks about how they can do just about anything with the data. So you've got to be a little bit of a detective, but figuring out that data use is really important.

Yeah. Sounds like a bit of a process, and maybe something we need to take a deep dive into when we're looking at a vendor. There are a lot of concerns at play. Layla, Myles, anything to add here?

Yeah, I think the big thing for me is documentation. There are a lot of people like, oh, I looked at stuff, I called someone, I read this. And I'm like, great, where did you document it? You want to make sure that any due diligence you're doing on vendors, whether it's an AI-based platform or a custodian, is documented. You want to do your due diligence, and you want to document that due diligence; that's part of your books and records, that's part of your compliance testing. And taking it even further, you want to make sure that that process, and what it looks like if it's a little bit different for tools with AI components, becomes part of your compliance manual update, so you build out what those policies and procedures look like. And let's be real, there are a lot of big companies doing a lot of shady stuff. I know we rely on companies like Microsoft and Google that have been around and are behemoths in terms of the services they provide, but you have to question everybody.

We're just about done with annual amendment season, and with annual delivery the privacy policy should be going out. If you aren't already, you should be making sure that a copy of your privacy policy is present when you're giving your ADV to prospects. Those are opportunities to talk to your clients and prospects about how you're using technology and their data. And if you're looking at your privacy policy and it's a page long, that's not to say it's not adequate, but it's a good sign that it might not be. And if you're talking about utilizing AI in your business, that's an opportunity to flesh out that piece of it in the privacy policy. So all of this is to say: you've got to do your vendor due diligence to be able to speak to clients about how you're using these tools and how you're using their data.

The other thing about artificial intelligence that we kind of gloss over and assume everyone knows is that it's always learning, right? This isn't like my college coding class, where I created code where you input English and it outputs pirate speech. It's nothing like that. It's constantly learning, in real time. So there are implications to that in terms of, what is your turnaround time to get your data out, or can you get your data back at all?
If you have a system that's automatically and continuously learning from input, those are all things you want to talk about with your consultant, with your attorneys. Call Alex, pick his brain. Think about what questions you need to ask these vendors. And I am someone who likes to call back multiple times to make sure there's consistency in what I'm being told. If there are any inconsistencies based on who I'm talking to, that's a huge red flag for me.

Totally. That's great.

You know, just one thing I want to state on due diligence before we move on, Summer: we need to do due diligence on every new third-party service provider, AI-related or not. You need to make sure that there are internal controls for privacy and cybersecurity, whether it's reviewing a SOC 2 report, requesting a business continuity plan, or even going as far as doing a basic Google search on the vendor to see if there are complaints, to see if there are lawsuits. That's something simple that a lot of people just disregard. Do a basic Google search, print up your findings, make a note on it, put it in the file. That way, like Layla said, you are memorializing your due diligence. Back in the day, due diligence on vendors used to be: can they do what we're paying them to do? Over the last 15 years or so, it's morphed into: how are they going to protect our clients' and our firm's data? So in the case of AI vendors, my particular thought is that privacy and protection of private information are your number one concerns. But also, again, look at operating history, and do not strictly rely on those amazing fact sheets the vendor is sending you, saying how awesome they are and all the awards they've won. Like everything else in this business: trust but verify. That's the best piece of advice I can give right there.

Appreciate that. It definitely sounds like, before we just jump into AI tools, we really need to be having conversations as a team and with our compliance officer, making sure we're not testing out these technologies without taking the necessary precautions. I want to circle back really quickly before we move on. Someone asked a question in the chat regarding the client note-taking AI. She asks: does the entire transcript of a client meeting become part of the books and records, or just the condensed notes that you save in your CRM? Does anyone have an answer to that? I think maybe it depends, but.

If your note-taking tool outputs an email, a PDF, a Word document that says Layla said this, and then Myles said this, and then Layla said this, that is now part of your firm's books and records. Now, if you then take that, or you are handwriting notes or typing notes yourself, and you put that into your CRM, that is also part of your books and records. So think about books and records as anything your business generates to conduct business. If you're using a tool like a note-taking tool and it outputs line by line what everyone said, that's part of your books and records. It's the same thing as if you didn't have the note-taking tool but recorded a Zoom webinar or a call on Zoom or Teams or WebEx or whatever platform you're using; that recording has also become part of your books and records.

Totally.
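One practical implication of that books-and-records point: if a note-taking tool emits a transcript, it should land in the firm's archive like any other record. Here is a minimal sketch, assuming a local folder stands in for whatever archive system the firm actually uses; the metadata fields are illustrative, not a prescribed format.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    ARCHIVE = Path("books_and_records")  # stand-in for the firm's real archive

    def archive_transcript(meeting_id: str, transcript: str) -> Path:
        """Store AI-generated meeting notes with basic retention metadata."""
        ARCHIVE.mkdir(exist_ok=True)
        record = {
            "meeting_id": meeting_id,
            "archived_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(transcript.encode()).hexdigest(),
            "source": "ai_note_taker",  # flags the record for exam requests
            "transcript": transcript,
        }
        path = ARCHIVE / f"{meeting_id}.json"
        path.write_text(json.dumps(record, indent=2))
        return path

    print(archive_transcript("2024-05-01-client-review",
                             "Layla said this. Myles said that."))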
Just be super careful, everyone out there, if you are using these note-taking or recording tools for things such as investment committee, valuation committee, or executive committee meetings, because remember, these are things that you are going to be asked for in your SEC exams, and we want to be sure that there's nothing extemporaneous there that might have gotten onto a transcript that really shouldn't have. So please be careful. Utilize caution, especially for committee meetings, because these are things that are going to be requested by the regulators during pretty much any SEC exam.

Thank you so much. Appreciate the insight on that question. Very helpful. Before we move on from vendor vetting, I wanted to ask Alex a follow-up on the data. We've talked a lot about the concerns, and maybe you feel like we already answered this question, but I want to get a little more specific. How can advisors find out where data is being pulled from and what it's being used for in that process? What's the best way to go about that?

Well, I mean, there are a number of places, right? As a lawyer, I generally start with the contract and related documentation, but I also like my business users to send me a data flow. If there's going to be some sort of integration, where is that data going? You need to ask questions. When you have a data flow and it points to this little circle that says the vendor's servers: where are those? What are those connected to? What are their rights to use that data for other purposes? It has to be reasonable based on the risk at issue, right? When client data is involved, you're going to be a lot more diligent than if client data is not involved, but it's not a one-size-fits-all answer. It's going to depend for any vendor. It depends on the vendor's level of maturity and the documentation they have to answer the questions. You might end up, like Layla said, calling four people, because that is their documentation; it's in the minds of the people at the vendor, right? It just all depends.

Yeah. So really address that vendor directly and try to get the information from them, but you may need to dive deeper, and hopefully get the same person on the phone, from what Layla was saying. So I want to jump into compliance now that we've talked a bit about vendor vetting. Layla, what questions are compliance officers not asking? Can you fill us in on where we're going wrong there?

So there are a lot of questions that aren't being asked. Part of it, right, and I want to set the scene here, is that compliance departments are understaffed. They don't get enough resources and time, because I know everyone here at some point has cursed their compliance department and said it was the business prevention department. So I get it. But your compliance team needs to ask a lot of questions when it comes to vendors. There could be tools that you're using now, so Myles mentioned a couple of different CRMs and aggregator tools, that may have already implemented AI aspects. So that's an opportunity to reach out and ask: do I have to accept this AI piece of your software now? Is this something that you have a toggle for, so I can turn it off? Can I refuse your firm's access to my data, so that you can't use it to build your AI algorithm and let it learn off of my data? So you want to understand which tools you're already using that implement those things, right?
Like, I got a call from my LexisNexis rep: oh, we have Lexis+ AI now. I'm like, oh, I've got a lot of questions. So for the tools you're already using that have already implemented an AI portion, you want to understand how, what, who, where, when. You want to really get down and understand why. How do I turn it off if I want to? Can I turn it off? Because some of these companies are really big, and they're going to tell you, no, you can't turn it off. So you may need to make a decision at that point about whether you want to keep the vendor or not. Those are the kinds of things you want to look at in terms of current tools.

Now, for new tools you're looking at, I think a lot of this we've already discussed: understanding, is this an open-environment AI? Is this a closed-environment AI? Is it just looking at data that you have? I use the example of LexisNexis, which is a tool our law firm uses to research case law. One of the questions I ask is, well, is it closed, limited to just LexisNexis? Because there was an attorney not so long ago who used ChatGPT to do legal research, and ChatGPT gave him fake cases. So you also want to look at the environment for your AI. Is it closed? Is it open? Is it run by ChatGPT? Well, ChatGPT makes up a lot of stuff. It's drawing on a lot of different resources online.

The other kinds of things you have to be worried about are assumptions, right? There's a lot of data out there, a lot of studies, showing that AI can have certain inherent biases, whether that's based on gender, ethnicity, race, religion, whatever. I think we've all been on Instagram or Facebook or TikTok and seen, I asked AI to show me a girl from every state, and it's very stereotypical stuff. So you have to be careful about those things, because if there are biases present in that technology, how does that affect your use of the tool? How does that impact the output you're getting from that tool for your client? So you really want to get a good understanding. And if you're talking to a sales rep who is unlikely to know the answers to some of these more specific questions, you've got to be pushy. You've got to be like, listen, I've got a lot of data and you want access to it. I need more than your sales pitch. It's not good enough to give me all the benefits. I want to know how this benefits you. I want to know what you're doing with my data. I want to know what assumptions are built into it, right? It's kind of like billing software. You get billing software, and you want to know: is the software doing average daily value? Is it looking at the last day or the first day of the month or quarter? You ask these questions about your billing software: what's your algorithm, how are you calculating billing? Extrapolate that to the vendors that are using AI. So you want to dig in deep, and this is not the time to be nice, right? I'm a born-and-raised Southern girl. I know all about being nice, right? But I'm a big proponent of saying: bless your heart, bless your heart, I've got so many questions. You've got to be pushy, because from a regulator's perspective, this is part of your fiduciary duty as a financial advisor: to do what's in the best interest of the client. That means you have to understand the tools and technology you're using and then make that determination: is this in the best interest of my client?

Yeah.
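The closed-versus-open distinction Layla draws comes down to whether the tool answers only from a corpus you control or from whatever it absorbed from the open internet. Here is a toy sketch of the closed case, where the tool must decline rather than improvise; the corpus and the keyword matching are stand-ins for a real retrieval system, not how any particular product works.

    # Toy closed-corpus lookup: answers may come only from documents the
    # firm controls; otherwise the tool declines instead of inventing one
    # (the "fake cases" failure mode described above).
    FIRM_CORPUS = {
        "billing": "Fees are billed quarterly in advance on average daily balance.",
        "custody": "Client assets are held at a qualified third-party custodian.",
    }

    def closed_answer(query: str) -> str:
        hits = [text for topic, text in FIRM_CORPUS.items()
                if topic in query.lower()]
        if not hits:
            return "No grounded answer available in firm documents."
        return " ".join(hits)

    print(closed_answer("How does billing work?"))         # grounded answer
    print(closed_answer("What did the court hold in X?"))  # declines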
I love what you're saying here, how far-reaching AI is. We can't just pretend it's something simple and plead ignorance. We really need to be doing our research. I think one unique thing going on in our industry is that we don't have a lot of guidance from the SEC on what we can or can't use. Myles, can you speak to that? Where is it safe to tread under the current environment with the SEC?

Sure. And you know, listen, the guidance we've gotten is very purposefully vague. We're consistently awaiting guidance. But I just want to bring back one thing that Layla said about what questions compliance officers are not asking. I have found that in many cases, compliance officers are not asking the most basic, simple question. Go to your marketing team, or whoever's in charge of marketing, and say: are you using AI, or are you considering the use of AI? It's the simplest question, but you may be surprised by the answer. There may be people using large language models already without you knowing it, and that could be a potential problem right there. Or it may spark their interest: oh, you know, I've thought about it, what's the best way to do it? Which opens up that compliance conversation. Layla, when I worked at PaineWebber back in the day, we were the business prevention unit as well, so I understand that. We worked very hard to try and change that perception, because we are really business partners.

Now, as far as the guidance from the SEC, again, it has been limited. There was a fact sheet that was put out, which is attached to this session, regarding conflicts of interest. Chairman Gensler has given speeches about AI; he's given speeches about a lot of things. I heard some speeches at regional conferences, at the NSCP, at ACA's conference, about AI. The issue is that because we don't have a lot of guidance, one of the things I'm advising my clients is not to be the first enforcement case, not to be super overaggressive with AI. Now, we've seen a few enforcement cases already, okay, but the majority of them are not specifically about the use and the output of AI. It's more the marketing of the way that they're using AI, using superlative terms like "the most sophisticated AI algorithmic tools" when they're really just using a ChatGPT window. These are the places in which people are getting in trouble. So it is the aggressive adopters. Just like the marketing rule, just like the private funds rule, just like any changed or new rule from the SEC, we need to understand what it is that they're looking for.

Now, that being said, my opinion of what they're going to be looking for is a tremendous amount of disclosure, whether it's Form ADV Part 2 disclosure, disclosure on the marketing piece itself, or disclosure in your manuals. You're going to have to disclose your usage of AI, specifically for marketing purposes. And there are going to be standard disclosures, at least in my opinion, over the next 12 to 18 months on marketing pieces that utilize AI, so that the end user sees you disclosing the fact that some or all of the material may have been generated utilizing a large language model or ChatGPT, is subject to change, and has not been independently verified. These are gonna be important pieces that we believe are coming very soon. There have been some great articles by law firms and consulting groups on AI.
You can look at LinkedIn, you can look at the NSCP. There are a lot of different places and a lot of different resources out there. My only advice to wealth managers and investment advisors is: think before you leap, please. Understand all of the ramifications, all of the concepts, all of the potential conflicts, and all the potential disclosures. We all wanna prepare for the worst and hope for the best, and this is one of those situations where that is absolutely crucial. Layla, Alex, any thoughts?

I wanna put a little asterisk, right, next to what you said, or like a footnote. Because I don't want whoever's watching to be afraid to implement technology, to use these tools, right? A lot of these things we're talking about, like, oh, you might go to your marketing team or talk to someone and they say, oh, I was thinking about it, or I'm already using it: some of these things should already be covered by your cybersecurity policy. Like, no one should be allowed to download a new tool without pre-approval, that kind of thing. So I want you to think about that too. But for those of us who are a little bit older and remember when computers came out, the first time you moved aside your typewriter and started using a computer, the first time that you opened an email account: all of the companies, the SEC, the regulators, the state bars, they were all in a huff. What is this email? How are we gonna use it? Is it gonna create problems? We've gone through a lot of technological advancements in the last 20, 30 years. So for some of this, we can look back at how other technological advancements were treated, the kinds of concerns regulators had, whether at the state or federal level, and what was done in order to mitigate those concerns and those conflicts of interest. So everything that Myles says is very important, but I don't want anyone to feel afraid to start implementing these tools or start looking into them, because we've gone through a lot of technological advancement. Everyone's always afraid when something is new. That shouldn't keep you from trying it out. You just have to think more globally, at a higher level, about how regulators have treated these kinds of advancements in the past and what they wanted to see implemented to mitigate any risks.

Thank you, I appreciate that. Go ahead, does someone have a comment? Sorry.

No, I'll just add: I think what I've seen from the SEC in this industry is similar to what I've seen from other agencies in other industries, right? And that's high-level rules targeted to try to prevent consumer harm and encourage mitigation of risks like bias and consumer deception through misleading advertising. And the way you do that is through transparency and encouraging governance, right? We've seen, for example, bias be at the forefront of consumer harms in terms of AI regulation in the U.S. right now. And some of what the SEC has proposed in their rules essentially says that advisors need to be aware of the potential for AI tools to promote recommendations to clients that are biased away from the client's best interest and towards the interests of the advisor. So the same things that are gonna be important in other industries to protect consumers from other harms are gonna be important here, right? Identification of risks based on the use cases that you're using it for.
Identification of ways that you're gonna mitigate that risk. How you have governance and accountability within your organization to ensure that the policies and procedures you put in place to identify and mitigate those things really work, right? So like Layla said, like Myles said, it's all new, but we've done this before with technology, and keeping with those themes of responsible use will, I think, do people well.

Yeah, absolutely agree. In previous sessions of this webinar series, we talked a lot about the benefits, and so it's good to weigh the pros and the cons of using AI, but it's definitely possible, definitely within guidelines and boundaries, which we're gonna get to shortly. I just want to make sure everybody knows that the SEC fact sheet regarding AI we've mentioned will be available as a resource once this webinar is published, so make sure to check that out and take a look at what we've been discussing today.

We did have an audience question from Jill. She said: if large language models like ChatGPT or Gemini were used to modify content that is human- or advisor-generated, would you treat it exactly the same as if you were using these tools to generate new content? So, you know, it's based off of your ideas. Is that the same as completely new content? Who's willing to take this one?

Does Jill have a clarification on what kind of content she's contemplating when asking the question? A little vague here, but maybe she'll pop on; we'll see. Well, just at a high level, taking a large step back: regardless of the prompt that you give these tools, they're using an algorithm that has been developed using a ton of other data, regardless of whether you ask it to modify an existing asset or create a new one. And certainly there might be existing rights that you have in an asset that you're giving it. But the basic process that it uses to modify, and then to create the new work, is the same. So I don't know if the concerns were copyright-specific or more compliance-based, but.

Okay, yeah, here she goes. She says: for example, an entire blog post, asking the tool to simplify it. Essentially modifying the language used to be more user-friendly, or rewriting it to sound like a specific kind of author.

Yeah, I would consider that as if it were a brand-new marketing piece that was created. I do a lot of marketing review, and if you have a piece that you wrote, and all of a sudden you want to make material changes to it, whether by utilizing a large language model or ChatGPT or by asking the person who sits at the desk next to you, can you throw a couple of comments on here for me, I always review that as if it were a new piece, regardless of where the output comes from, but especially with extra sensitivity to AI and ChatGPT. Certainly, at least for the first six to 12 months, if any of my clients are using these for marketing purposes, I expect to take a deep dive into each piece, regardless of how it's generated, just to be safer.

Yeah, I think what Alex is alluding to is more like copyright issues. Something like ChatGPT is pulling on available resources out on the internet.
So if I want to write a piece and I say, you know, here's something that Myles has written, and here's what I've written, now rewrite mine so it sounds like Myles, there are potential copyright issues, right? Have I infringed upon Myles' rights by having ChatGPT create that post? And you'd have to look at what data it used to generate that rewrite. So there are those issues. And then there's the question from Myles' perspective, in terms of an advertising rule submission.

Yeah, I mean, you would do that in accordance with your firm's policies and procedures. Depending on how much the piece has changed, would it be considered a new marketing piece that needs to go through compliance review?

But I think there are bigger legal issues you have to be concerned with. So, for example, with my team, I say: use the hell out of ChatGPT, but use it for things like creating processes with our tools. We create written processes, like how to open a new client matter in our case management system. We might use ChatGPT for something like that, because those are things that'll help us and make us more efficient as a team, but we're avoiding some of these other issues where we're creating unique pieces that we're then marketing and holding out as our own written pieces.

Absolutely. I think that answers the question pretty well. Appreciate your insight there, because that is a good concern, something to wonder about. We have just 10 minutes left in our webinar, and so I wanna make sure we get to a few more questions. Before we leave compliance, I wanna ask Layla about software developed particularly for the financial advisory space. Is it automatically compliant? Or tell me about that.

Oh, nothing is automatically compliant, right? And I use the example of Morningstar. There are a lot of people that use Morningstar; Morningstar has been around a long time, and Morningstar has all these great disclosures. And we've been through enough exams where someone was using Morningstar that I can tell you right now, it's not exactly compliant. Why do I say that? Because these are companies that create certain model disclosures that may not be sufficient for your business, how you're utilizing the tools, et cetera. So Morningstar is this well-established company, it's been around forever, a lot of firms use their tools, and they assume (and you know what they say when you assume: you make an ass out of you and me) that these products come fully compliant. So the short answer is no, don't assume it's fully compliant. Look at how the tool is working. Look at whether it includes everything that you need to be worried about. Myles has talked about conflicts of interest, disclosure language; these are all things that need to be reviewed. And you need to understand whether you have an opportunity to add to or supplement what's already there, or change what's included automatically as part of the software. There are some tools that won't let you do that, and that becomes a barrier to use, because if you have this output that the tool is delivering, how are you going to be able to add the missing disclosure language if the tool doesn't allow you to do that? So add that to the list of questions to ask when you're vetting out tools, but don't assume things automatically come compliant.
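On Layla's point about tools that won't let you edit their output, one workaround, where the vendor and your compliance process permit it, is to post-process the deliverable so the firm's own disclosure language is always attached. A hedged sketch: the trigger keywords, the naive substring matching, and the wording are all placeholders, and any actual disclosures would need legal and compliance review.

    # Illustrative post-processing step: append firm disclosures to a
    # tool's output before it goes to a client. The substring triggers
    # are for demonstration only.
    DISCLOSURES = {
        "performance": "Past performance is not indicative of future results.",
        "ai-assisted": "Portions of this material were generated with an AI "
                       "tool, reviewed by the firm, and are subject to change.",
    }

    def with_disclosures(output: str) -> str:
        needed = [text for trigger, text in DISCLOSURES.items()
                  if trigger in output.lower()]
        if not needed:
            return output
        return output + "\n\nDisclosures:\n" + "\n".join(f"- {d}" for d in needed)

    report = "This AI-assisted plan projects performance in three scenarios."
    print(with_disclosures(report))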
Absolutely, that definitely makes sense, especially considering all the factors we've talked about today. You've got to make sure that everything's within compliance. So I want to move to some guidelines and documentation, because I know a lot of advisors are wondering about this. I'm going to open this to any panelist who wants to answer: if a firm is ready to put AI guidelines into place to limit use among its team, what specifically should be in that document?

I'll start if you want. You know, like with any other policy and procedure, you could either make it a standalone addendum, which is specifically your AI policy, like some firms have a separate social media policy, or it could be contained in your manual. But you need to lay out the rules of the road. You need to lay out, number one, what the firm's policy is, okay? The firm allows AI usage under the following circumstances; the firm does not allow AI usage under the following circumstances. Then you need to get granular where you really need to put your guidelines into place: guidelines for due diligence. We discussed vetting, we discussed due diligence. Maybe you need separate due diligence guidelines for AI-based or technology-based products, or you could put it under the general standard. But I think it's really important that you lay out those rules of the road. If you are going to be using it, here's the process. If you're going to be using it for marketing, here's the process, here's the approval process, here are the required disclosures. As long as you are providing a roadmap, you're in good shape.

However, there's one caveat: please do not put anything into your manual that you do not anticipate doing every single time, because that's one of the easiest layups for the SEC during an exam. Oh, you say you do this for AI purposes; can you please show me your documentation for the last 24 months? And then you have to get into that whole difficult conversation, tail between your legs: well, it's in our manual; we put it there; we haven't been doing much of that activity; we don't have the documentation. And all of a sudden, you have five different things on your SEC exit letter under Rule 206(4)-7, the Compliance Program rule. So, again, the point here is: add it to your policies and procedures, make it the rules of the road, write it in easy enough language that your employees will understand it, and write it in easy enough language that it might prompt questions from your employees as well, questions that you can easily understand and answer. I'll open it up to Alex.

Yeah, Alex, do you have a comment there?

Yeah, to add on to Myles' comment about making sure you do it, I would say be realistic too. We've seen clients take a whole host of positions, from trying to outright ban it, to using it without really much vetting at all, and neither of those positions is probably tenable. They're probably just trying to get something on paper that is going to sit on the shelf. So there should be clear rules of the road: if certain uses are prohibited except where certain compliance steps are taken, maybe there is an approval by a certain level or a certain committee; if the AI is involved in employment decisions, or if it's involved in marketing, it goes through another level.
There should also be confidentiality requirements in any policy. You want to make sure that employees are well aware that they need to be limiting use of sensitive data. And I think you also have to obligate folks to review AI-generated material. Myles mentioned treating revised marketing material as new marketing material, and even a rewrite could insert erroneous information into a public document, or even an internal document. Employees have to know that these things are just probabilistic. They're not capable of decision-making, really, like human beings are, or of discerning the nuances that we can. So it really needs that review and oversight in every circumstance.

Thank you. And just to piggyback briefly: what other documentation needs to be filled out, as a reminder to all the advisors, and kept on record regarding AI use? What do we need to keep in mind there?

I'm gonna jump in and say that this is an opportunity to go back to your client-facing agreements. In the same way that in an investment management agreement or financial planning agreement a client consents to e-communications, I think this is an opportunity to revise those kinds of documents to insert language saying they're also consenting to their data being used with AI tools that the firm may implement. And then, coincidentally, also updating your privacy policy and your cybersecurity policy and your policies and procedures manual. So I think that's a big opportunity, client-facing wise, in terms of updating those documents. I think too, if you're deciding to implement it, a notification, whether it's a letter or email or official bulletin that goes out.

But my big thing is training, right? There are firms that put together policies and procedures, and there's that little annual attestation that goes out, right, December, January, with a link to your policy and procedure manual. No one fricking clicks on it, but everyone's like, yes, I've read and reviewed and I understand. So you have just done all this work to update your policies and procedures, you made it really idiot-proof so everyone can understand, you put it out there, and everyone's like, yeah, I signed that, I read and reviewed it. So whether it's internally with your team or externally with your clients, you have to have an opportunity for them to sit down and talk with you and understand how it's used. So create those training sessions, or create pre-recorded videos, where you start collecting FAQs from your advisors when they're talking to clients and put together content that you can then deliver. It's really important that you're not just documenting it in writing; you're also just being a fricking human being and saying, let's have a conversation about it, let's answer your questions.

I appreciate that so much. Go ahead. I'm sorry.

So I just want to bring up training. You should absolutely have a training session, documented internally, showing that people attended the training session. If you have a standalone policy, send that around or put it through your attestation system as a separate attestation, not just your compliance manual. Most of my clients do this for cryptocurrency; they do this for texting, where there are separate attestations with the firm's specific policy on that.
I think that between policies and procedures, a review of your Form ADV Part 2 to see if there's anything relevant there, and absolutely training, documentation of training, and attestation, you're in good shape. Sorry, Summer, I didn't mean to interrupt you.

That's okay. No problem. I think this is really helpful, and we are out of time for our webinar, but I just wanted to say that I hope everyone found this very helpful in painting the picture of some of the things we need to be aware of, but also that AI is available to use, and we can implement it under safe guidelines and procedures, as we've discussed today. So hopefully you took away something from the webinar. I want to say thank you so much to our panelists for their time, their input, and all of the value they brought to the table. And without further ado, thanks everyone for joining. Thank you so much.
Video Summary
In the video transcript, the panelists discussed the importance of implementing AI compliance measures in financial advisory firms. They emphasized the need for due diligence when selecting AI vendors and the implications of using AI tools in client interactions. Compliance officers were urged to ask specific questions about data sources, privacy issues, and biases in AI algorithms. The panelists advised creating comprehensive AI policies and procedures that outline permissible uses of AI tools within the firm. Training sessions and documentation of employee training were highlighted as essential components of AI compliance measures. The importance of legal and regulatory compliance in implementing AI technologies was underscored throughout the discussion.
Keywords
AI compliance measures
financial advisory firms
due diligence
AI vendors
client interactions
data sources
privacy issues
biases in AI algorithms
comprehensive AI policies