2024 Playbook Series #1, Session #1: What Is AI?
Session recording
Video Transcription
Hello, everyone. Welcome to NAPFA's first webinar series on AI for advisors. Today we're going to be talking about what AI is. And in the future, we'll have two webinars diving into how to employ AI and the legal and compliance side of AI in our space. So we'll start off with some introductions. My name is Summer Perry. I'm a certified financial advisor based in New York City, and I love using technology in what I do every day. So that's why I'm here moderating the panel. And we have great guest panelists who I'd like to introduce now, who will be joining us today. So we'll have a brief introduction, and then we'll jump right into the webinar. Our panelists are Tyson McDowell, Andrew Smith-Lewis, and Derek Nottman. So we'll start with Tyson, if you could just introduce yourself and say why you're here on the panel, that'd be great. Wonderful. Yeah, I'm Tyson McDowell. I'm a software entrepreneur and AI engineer turned venture capitalist. We build AI-driven companies that help create efficiencies and improve quality of life for people. Great. Thank you. Andrew, do you want to introduce yourself, and then Derek? Sure. I'm Andrew Smith-Lewis. Pleasure to be here today. Excited for this conversation. I run a group called Ally Studios, which is a creative AI studio focused on amplifying human brilliance. So we're on the side of human augmentation versus automation. Very cool. And hello, everyone. I'm Derek. Thanks for joining us today. I'm a CFP, been an advisor for 18 years, cut my teeth in the insurance broker-dealer world. I have my own RIA now. I'm also kind of a recovering advisor, because I'm also the founder and CEO of an AI company now. Great. Yeah, we're excited to hear from each of you today. And just for our audience, this is the outline of what we're going to get into: a background of artificial intelligence, talking about its limits and all that it can do. And again, feel free to use that Q&A feature for questions.
So we'll try to get to any questions. So with that, I would like to start off by asking our panelists, can you define artificial intelligence? It's used everywhere now, but let's get to the heart of it. What is it? I'm not the technologist here out of everybody. I'm like the ideas guy, right? So my definition probably is not going to be technically accurate. So I'd love to hear from people who actually know what it is. I think that's a tough answer, though. AI is whatever you think it is anymore. It's a pure hype term, just marketing, especially as you look to solutions that are going to help you in your practice or solutions that will influence your business, the business you're in, and maybe disrupt it. They're all going to be powered by AI. And many of those won't actually be technically artificial intelligence. It doesn't matter all that much until you get down to the specific thing you're using and wanting to use it for, as to really care about how it does its magic. But I think it's fair to say that AI as a marketing term, it's got nothing to do with the underlying technology at this point. Andrew, do you have anything to add to that? I would agree. It's really buzzword bingo. And everybody is saying they have AI in something, sprinkling AI marketing pixie dust on top of stuff. I think stepping back, though, what AI really is at its core is this drive to emulate human intelligence, to imbue computers and machines with the ability to reason, to learn, and to perceive. And there are various flavors of AI within that, that take a swipe at those various skills of perception, learning, memory, and reasoning. But that's the goal. And I think it's different. For me, it's different from a lot of what we've seen in the hyper-progressive technology over the last couple decades, in that, for the first time, we're dealing with something that quickly starts to approximate, if not surpass, human intelligence in specific areas. 
And that's a unique time and place to be in, in the world. This is the very first time we have something besides humans on our planet that possesses intelligence, and that intelligence is accelerating. Totally, that makes sense. You mentioned, Tyson, that AI has gotten a lot of publicity recently, a lot of traction, too. Why is that? Why are we seeing this huge buzz around AI? Well, it really ties to what Andrew said there. The type of systems that are human-emulating, that is new. That's new to where they're useful enough, to where a general non-engineer can go about using them. They're wildly more powerful in certain situations than anything that ever came before, to where the way that you go about doing certain things, like producing images or editing copy or copywriting or anything like that, it completely throws the workflow out the door and replaces it with something different. So I think that's where the excitement comes from: the opportunity for dramatic acceleration in efficiency, but also dramatic increase, potentially, in quality. But that last piece is really where differentiation is going to come in as you're using it. Those people that just trust AI's intelligence are going to lose to the people that understand AI's intelligence and contribute to it actively. So I think I'll stop there for a second. Derek, you were nodding your head. Did you have something to add? I totally agree. An example would be, let's say you're going to use it to write a blog post for your advisor website. Are you going to put a human filter on the output there and actually see what has come out, are you going to edit it, or are you just going to copy and paste and throw it out there? I think you have to be careful with that. I would argue that AI is a tool, but it's a tool that should be leveraged by a human being instead of replacing one. Skynet's not taking over. I don't think we have to worry about that.
I love woodworking, for example, or working on my car. You have to know how to use the tool to get the leverage out of it to be able to get the outcome that you're looking for. I think AI would be very similar in that way, but Tyson's exactly right. You have to understand it, not just use it. Love that. That's a great point. Speaking of understanding it, there's a difference between generative AI and just typically what we refer to as AI, as far as I understand. Can someone break that down and explain what generative AI is? I can take a shot here. Generative AI is, effectively, when it's creating original work, or at least ostensibly original work. Whether it's written words, whether it's images or video, its job is to create an all-new thing that is unique in the world. The way that it does it tends to be a massive averaging system, although in certain situations, there are some new insights that do come out. That's what generative means. It's crafting brand-new work as opposed to giving you a statistical report of generally what's out there. Interestingly, it's run by a statistical report of generally what's out there underneath, but it's bothering to go that next mile and apply it to something that is a usable piece of content in any of the video, image, or written formats. Got it. Andrew, did you have something to add there? No, I would agree. I think what's interesting about the hype moment we're in now is that we've had this perfect storm of enough data being available to train these new types of processors, these transformer models that were originally created at Google and then released openly. We had the internet, we had the transformers, and we had this perfect storm that allowed the creation of this generative AI bubble that we're in right now. When we play with ChatGPT and different things, that's what we're experiencing. Of course, AI has been around since the 50s, and we've been using AI in all sorts of different fields to do that analysis.
We have data sets, we're looking for insights on those data. What's unique now is we say, hey, learn all this data and then create original or new data from that. That's the hype bubble that we're in right now that's caused all the excitement around AI. Just to clarify there, you mentioned ChatGPT. Is that considered generative AI? Yeah. They built this by taking these transformers, and they trained their system on tens of terabytes of data, which is everything they could slurp off the internet, to basically imbue this with this unique ability to parrot human language and behavior, as we have all experienced up until now. That's very much the star example of generative AI. Got it. That makes sense. For our audience here, financial advisors, where have we seen AI and maybe not realized that that's what we're utilizing? We've discussed ChatGPT, but you mentioned it's been around for a long time. Where are we seeing it? Where is it under our noses? There's a magical company called Netflix. I don't know if any of you have heard of it. Sure. Anybody who's used any of these systems, under the hood is a bunch of AI models that are making recommendations. If you've used an iPhone or an Android device, you are on a daily basis touching a stack that is using AI to make predictions about you, to decide what to serve you next and how to shift your focus and attention in a particular direction. We've all been, for the last n number of years, subjected to AI from these large platforms. Totally. Derek, did you have something to add there? I think it's also cropping up within the actual wealth and insurance space quite a bit. It's funny. Our industry is notoriously old and a bit slower to evolve, and definitely a heavy compliance environment.
I would be willing to bet that AI is probably being used in the background in a lot of places already, because these companies, these different tech vendors, even insurance broker-dealers and whatnot, are gathering a tremendous amount of information. What they're doing with it is really interesting. A lot of these companies, if you look, have a team of data scientists. They're doing stuff with this data, but we may not see it on the consumer side as much just yet. There are some tech companies out there that are starting to push it a little bit more, whether it's compliance or marketing. There's some pretty cool stuff out there. Wealth Management GPT is a relatively new firm that's using AI to help advisors with some marketing stuff. It's out there, but I think advisors are a little leery still, because we tend to be in an older, slower-to-evolve environment. Yeah, absolutely. That's actually one thing I wanted to bring up here: there is a lot of hesitation around AI in the advisory space, as there should be. It's a new tool and people want to be careful. Could you help us dispel some of those myths or fears around using artificial intelligence in this industry? Anyone who wants to touch on that? I'll take that first. I wouldn't be worried for one second. If you go back about 10 years ago, you had all these robo-advisors coming out and everyone's like, oh, the financial advisor's gone. We don't need them anymore. That was a flash in the pan that just ended like that. At the end of the day, as amazing as AI is, and it will continue to get better, AI can't empathize. At the end of the day, human advice is extremely important. We need to be able to cry or laugh across the table with our clients. I think that the risk for advisors and AI is not being replaced by it, but being replaced by advisors who are leveraging AI. Tyson mentioned there's all sorts of efficiencies and cost savings and things we can do by leveraging this new tool, or not new, but new to us.
I think that's the challenge, and the thing I would be scared of is if you're at a firm where you're just stuck in your ways and you're still using the yellow pad and metal filing cabinet. That's where I would be worried. Can I chime in for a second? Of course. I'm not from the tribe of wealth management, but I spent three years working at an alts platform developing educational programs for advisors. We trained about 40,000 advisor teams. What I saw from my, again, limited experience in the independent channel with advisors is that there are two main functions that advisors have. One is the mechanics of portfolio management, investment advice, and then there's that guide side. I think Derek alluded to that when he said AI can't empathize. I think there's two core roles, the mechanic side and the guide side. I believe that AI is going to come at the mechanics very hard. I think robo-advisors were the MVP for what's possible. I think if you are an advisor and you're really just a mechanic, which I'm sure many of the people on this call are not, but if you have colleagues who are just mechanics, I think that AI will come for their jobs. They'll certainly be replaced by people who are using AI, as Derek mentioned. If you're a guide and you double down on that personal relationship, that connection with your clients, I think that's where you can really, as the human, stay ahead of the onslaught of AI. I think that's the really important place to double down. The mechanics will get commoditized, I believe. 100%. Great. I super appreciate those responses. That answers a lot of my own questions. Tyson, I have to get you to weigh in here just because I know this is something that you are working with, human interaction and AI. Did you have anything you wanted to add? We've lost your audio for a sec. I think it's important to understand where you're trying to apply it. There's a lot of disruption opportunity in AI helping you market your service.
That's just a basic thing. That's really disrupting the marketing industry: being able to do a better job of crafting messages and getting them out there, having a higher frequency of contact, more empathetic email messages, more empathetic outreach, automatically generated blog content that validates and substantiates your position on things. The differentiation in the marketing side, that's one piece. I think that's actually a higher risk for businesses like an RIA, where not everyone can afford great marketing, but anyone can be great at marketing with SEO, and AI is really helping with that a truckload. Advice aside, relationships aside, that's a huge deal. The second piece is relationship maintenance and management. I think there's a huge opportunity with a lot of the automation systems that are available to have automated outreach and relationship maintenance do an even better job. So again, that's just another example. When it comes down to robo-advising, or picking, or having some sort of opinion on what's actually the best advice, the data sets just aren't out there to train AI to replace what's in your mind. So I think it's all about how you can get yourself in the position to convince a relationship that you're the best one: you find that pipeline, maintain and nurture that pipeline, and bring that human touch, while simultaneously being able to do a good job of sourcing the best information online that supports your position as you're giving advice. Because there's more and more news information that your customers are gonna be seeing that's full of crap. I mean, the amount of AI-generated content and financial advice and auto-generated YouTube videos and all this stuff, it's a wild west. It'll be that way for the next seven or so years. So doing a better job of curating content online that means something and sending that to your customer base can be of great value. I think I wanna emphasize the point that Tyson's making.
And again, it's an inflection moment in terms of where we've been up until now. So primarily, you can say that all of the data that was on the internet up until these models got released was human-generated, right? It was people behind these things. Doesn't mean it was accurate. We all know what that looks like. But now what we have is machines adding to the content, right? So now you've got all this additional content that was not human-created. And we know that there are wild problems with that content. And that content is now going into the training sets for these models. So it's sort of like a self-licking ice cream cone of information that's going on. And that's a very different point than we were at a couple of years ago, when it was all human-created. And that's gonna just accelerate, as Tyson puts it, the crap that's out there. And it's something we need to be aware of. Sure. Oh, that makes perfect sense. And it's a good thing to keep in mind. We had a question from the audience. Sorry, we kind of got carried away, but I wanna get to this question. Someone asked, what is the difference between algorithms and artificial intelligence? Let's ask the engineer on this call. Yeah. Algorithms is another one of those words that's full of lots of different meanings to different people. The short of it is, with an algorithm, you can effectively trace what it really means and how it's working, for the most part. Algorithms often consume the output from AI. So AI is giving some sort of insight. A very basic, simple example would be your FICO score, your general propensity to pay, or something along those lines. Or, if you're in your TikTok algorithm, the likelihood you'll like the next piece of content that it's gonna feed you. It doesn't know why, okay? It just kind of says, wow, it's sort of magically figured this out. And that magic part, those are the AI parts.
And then the algorithms are the ones that are then deciding what to do with it. So the algorithm is sitting there saying, I'm going to force-feed you this piece, or I'm gonna give you these three and these alternatives, or so on. So algorithms are your friend. They're the ones where the logic comes in and it starts to make heads or tails of how you might use the data. And then the AI side is the insight that is coming up with a piece of information, where an algorithm can decide it has an opinion one way or the other, whether you like something or not, that kind of thing. Great. Did anyone want to add to that? It was a great answer. Great answer. I mean, I agree 100%. I think that, to me, it's simply: algorithms don't learn, right? AI learns. That's a great differentiation. That's a simple differentiation. And Andrew's absolutely right. However, the opportunity is taking AI and people mixed in with an algorithm so that the whole collection rises. Okay, so the AI will learn and it learns new things and super duper, but once it's learned, who cares? It's about how you go apply it in your business. And so it's really this partnership. The best way, I think, to think about AI systems, and by the way, people say AI algorithms, and it's a nuanced question and a lovely question, is that at the end of the day, the AI thing, think of it as an employee and treat it the same. And by the way, it's this really brilliant child. And so it's got miraculous capability and talent in a certain area. And then otherwise it's totally unruly and ridiculous. So if you apply it in the same way as you would think about an employee, put it in a workflow, supervise it, be excited about its new insights, but still be somewhat skeptical. Make sure you have ways of kind of counteracting that and testing it. So, algorithmic children. That's really helpful.
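The split the panelists describe, an AI model producing an opaque score and a traceable algorithm deciding what to do with it, can be sketched in a few lines. Everything below is a hypothetical stand-in: the function names, the toy "model," and the thresholds are illustrative only, not any real recommendation system.

```python
# Sketch: the "AI" part produces an opaque score; the "algorithm" part is
# the fully traceable logic that decides what to do with that score.

def ai_engagement_score(user_history: list[str], content_tag: str) -> float:
    """Stand-in for a learned model: returns a likelihood (0..1) that a user
    engages with content carrying this tag. A real model is trained on data;
    this stub just measures how often the tag appears in past history."""
    if not user_history:
        return 0.5  # no signal, so fall back to a neutral prior
    matches = sum(1 for tag in user_history if tag == content_tag)
    return matches / len(user_history)

def recommend(candidates: dict[str, list[str]], user_history: list[str]) -> list[str]:
    """The algorithm: rank candidate items by the model's best tag score and
    keep the top three. This part is auditable end to end: score, sort, slice."""
    scored = {
        item: max(ai_engagement_score(user_history, tag) for tag in tags)
        for item, tags in candidates.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:3]
```

The point of the split is exactly what the panel says: you can read and audit `recommend` line by line, while `ai_engagement_score` (in a real system) is the learned, "magic" part you supervise rather than trace.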
So kind of speaking to that human element with AI, treating it as an employee, how can advisors think about AI in a way that helps them be more personable and just helps them with their own human element? Is there a way it can do that, or what are your thoughts? I think there's a couple of ways, like Andrew's mentioned some, and so has Tyson. I think one is the efficiencies that Andrew was talking about, the mechanics. If I can spend less time on back office stuff, I can focus more on developing human relationships, having deeper, more meaningful conversations with my clients. And that obviously has a variety of benefits. So that's not maybe using AI directly for that, but it is helping. But then for things like I mentioned earlier, writing a blog post, you can have an idea. And I've written content myself for years, and what would take me a couple hours, I can get done in two minutes now. That's pretty awesome. I still put a human filter on it. So there's things like that that I think can help, or even websites. I mean, you can do almost anything. I'm not a big GPT user, I use Gemini, but being able to put in different things that you want to know more about and ask it to come back with, give me some insights, or tell me more about this, or change this. There's some really cool things you can start doing with it. I wouldn't use it to start writing financial plans or anything like that. I think the back office stuff is really the brilliant spot, but marketing as well could be done really, really well if you start with the nugget of gold that's human. You have that idea and you feed it in there and then work with it. Like Tyson said, it's that kid. So you give the kid this idea, go do something with it, but I'm gonna check in on you. We're gonna edit it a little bit. We're gonna pat you on the back or say, hey, you did this part wrong, go redo it. That's some of the ways I've seen it being used, and it's been pretty helpful.
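The workflow Derek describes, start with the human "nugget of gold" and have the model rework it rather than originate it, comes down to how the prompt is built. Below is a minimal sketch of that pattern; the function name, template wording, and example notes are all illustrative assumptions, not part of any real product, and the resulting string would be sent to whatever chat model you use.

```python
# Sketch: wrap a human-written draft with client context so a chat model
# is asked to adjust tone and framing, not to invent facts or advice.

def build_rewrite_prompt(draft: str, client_notes: list[str]) -> str:
    """Builds a prompt that keeps the human draft as the source of truth
    and uses what the advisor knows about the client to steer the tone."""
    notes = "\n".join(f"- {note}" for note in client_notes)
    return (
        "Rewrite the message below for a specific client.\n"
        "Keep every fact and recommendation exactly as written; "
        "change only tone and framing so the client feels understood.\n\n"
        f"What I know about this client:\n{notes}\n\n"
        f"Message to rewrite:\n{draft}"
    )
```

The design choice matches the panel's advice: the model never sees an open-ended "what should this client do?" question, only a bounded rewrite task, and the human filter still reviews the output before anything is sent.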
Yeah, that makes sense. I've seen that too. It's really helpful with writing emails and such. And we're not focusing today on everything it can help us with, but just making it more personal, using it to enhance the human element, is really fascinating. Andrew, did you have a comment? No, no, sorry. Okay. I just wanna throw in my favorite use of GPT: making text empathetic, not asking it for knowledge, okay? Here's my message that I'm going to send to this customer, who tends to be a poker player, who is also constantly afraid of the end of the world and is generally kind of neurotic, whatever. Putting in information about my client, and then saying, write this email so that they are comfortable and they feel like I understand them and they'll actually listen to it. Even something that simple in the prompting can transform the empathy of that message. So that's, to me, GPT's greatest power, just as you're using it in the way it's made right now, versus asking it for the best advice, or asking it to cite sources. It gets those wrong. Yeah, there's something I call advisor intelligence, AI, and we can use artificial intelligence to enhance advisor intelligence through empathy and things like Tyson just mentioned. And it's amazing what we can do then. Because sometimes an advisor simply can't articulate what they're truly trying to say. Here's a way to help us kind of pull that out. Because you can tell it, as you just said, I think I know these things about my client, help me tell this story better. That's huge, that's huge. Totally. Yeah, and to that point, we wanted to talk a bit about the limits of AI, what it can do, but also what it can't do. So I wanted to ask specifically: we're financial advisors, we're crunching numbers all the time. What are the limitations of AI and math? I've seen some interesting things, and I've looked into this myself.
So I wanna hear what you guys have to say about doing equations, that kind of thing. Well, someone asked about the difference between AI and algorithms, and this is the difference. So AI is a smooshy approximation of a broad set of averages. It is not actually contextually aware. There are some you'll hear about, like doing depth-of-field image generation, where it kind of looks like it's sort of inferred some context and structure. And you could ask it, what have you learned? And it might say that there's a depth measurement or something, but even then it's very generalized. It's smooshy by definition. So when you ask it explicit things and to do procedural things, it doesn't actually know something. You need algorithms for that. And there are plenty of very convenient math tools that use almost no power and energy to solve math problems. It's just not a job for AI. Perfect, yeah, makes sense. Wouldn't you say that's kind of today, but that's a problem being solved, Tyson? I mean, a year ago, you couldn't add three numbers together with ChatGPT, and now it's gotten much better in terms of computational capability. Yeah, today's AI can easily be instructed to say, be aware that this is math, and in that case, go ask a math library for help. And it can be very accurate right now because of its ability to do contextual analysis like that. So you can string together a partner system that covers all this ground, and the AI is more of a traffic cop and an interface and an augmenter. Sure, maybe one day AGI, artificial general intelligence, can do math accurately and all kinds of other things. But the amount of compute power it's gonna take to do it, and the amount of data it's gonna take to do it, is so extreme when you can just turn around and apply math. Sure, maybe AI will do that one day. I think that's a meaningless exercise. Okay, but are we splitting hairs a little bit?
So if you have a generative AI front end to Wolfram Alpha, for somebody on the other side, they don't really care how it was done, as long as it's done accurately. So technically, yes. It's that last part. With the market-available AI, is it done accurately? We're a long way away from anyone being able to attest to that. And even in systems that are getting close to accurate, they still hallucinate enough that, for instance, the defense department wouldn't trust it. Nor would a bank, or indemnity insurance, or any of those things. So I think it's just a little bit of an esoteric kind of question. The fact that AI can't answer math doesn't mean that it's not doing the other things right that it's good at, like empathy, creation, information. But Tyson, you wouldn't trust the ChatGPT Wolfram Alpha plugin to do computation for you? I don't, no, because I wouldn't have auditability into the pathway. It's too unstable in the long term. Why would I trust a new system when there are so many free and available guaranteed systems that we can use? As a business, I just wouldn't bother with that. That makes sense. But you're saying that changes may be able to come in the future to correct that, or with an AI? I think at some point it'll either be certified as good at that or not. And if it's certified as good at that, then go ahead and use it. And it'll be interesting to see who gets to make that call. Sure. Andrew, did you have anything to add there? I'll take that offline with Tyson. So talking about, again, these limitations of artificial intelligence, what would you say: can AI think for you as a financial advisor? Can it write for you? What would you say to those things? We've talked about it a little bit, but I wanna touch on this. Maybe Andrew, we can start with you. I think my issue with a lot of these use cases, especially in finance, is the underlying training data.
So whether you're talking to ChatGPT or Gemini, which is Google's rebranded Bard, you're still speaking to the internet. The training data for this is CNN, Fox News, Reddit, and all things in between. And that's your source of truth. Now, I like Tyson's example of not using this as the primary originator of content, but rather a way of modifying content. I use it all the time. I'll write something, and I'll say, I need to Chris Voss this a little bit, and it'll put this in the context of Chris Voss. So I think that's a really interesting, righteous use. But the blind trust of the underlying AI, and then the training data, is really risky. Now, there are different companies that have stood up their own content on top of one of these large language models. So there are experts in behavioral finance, or there are experts in different topics. There, you're gonna get a bit of a purer source of information. I think you have to be super cautious about trusting these systems to do the primary thinking for you. And it's very tempting to do, because it sounds good, and it writes fast, and you're in a hurry, and you're like, this looks okay. But it's a very dangerous game to play, especially when you're thinking about this lane of wealth management. Totally. Yeah, that makes sense. Derek, did you wanna add to that, or what were your thoughts there? I mean, I use it quite regularly to create content, and never do I just copy, and paste, and post, and see you later. There are regulatory compliance issues that we have to be aware of, and make sure that we're ticking those boxes, but also reputational risk. What if I do something quick because I don't wanna spend five more minutes editing, and I post it, and it's wrong? That's gonna make me look like an idiot. And let's face it, trust is really, really important in the wealth management space.
And the last thing you wanna do is jeopardize that, especially with a piece of content that might be the first point of entry for a potential customer. So just be really, really careful with that. So yes, you can use it, but you have to have the human filter come before and after it, to really make sure you're doing what you should be doing. Yeah, makes sense. So another thing I wanted to ask a follow-up on is, I can't remember, I believe it was Tyson who said, don't ask it to cite its sources, or something to that effect. Besides the math equations and that kind of thing, should advisors be limiting what they're asking AI to do? For example, I have this client in this very specific situation, what's the solution? What should be some guardrails that we're thinking about as advisors? I'd be careful with that one. I mean, that's our job, right? We're supposed to study and become experts in financial planning, retirement, insurance, whatever it is, and be able to know the nuances and actually have a source of truth we can go back to, where AI is not gonna give you that. I would be really careful. I mean, if you want to replace yourself, I guess go ahead. But I don't think I would go down that path. I've never in any of my time created a financial plan or investment proposal and used AI in the way we're talking about it today to do anything with it. Sure. So generally you'd say maybe it's not the best thing to go to AI and ask it to do your job, right? To ask for financial advice from the tool. But to enhance your job, right? Rewrite this email, or, this is the answer, can you fix the way it sounds? That kind of thing. How would you feel if your surgeon asked AI how to perform a surgery? I don't know.
Maybe some real specific guardrails I'd recommend in general, when using any of the GPT-like interfaces: asking it to give you an ordered list, asking it what's the best, or what's number one, it lies about that. It doesn't really know. It'll present you with the best, or tell you the top five. Or, I'm advising a single mother in her forties and she wants to retire, what would be the right risk balance? None of that is it going to know. Now, the problem is it sounds like it does. And then on top of that, there is now a lot of linking to, effectively, Google searches or other search engine searches, to where theoretically it will know. So really this thing is just an interface that bothers to do the search for you. That's now two layers of making stuff up, right? And if you're asking for any kind of sorted list, or best-of, or most important, or anything like that, it's going to give you strident answers, and they're going to be wrong. Now, what I will tell you is that that's what your customers are going to be doing. I think it's actually a really important exercise to ask your customers how they're getting their information. And if they present the search and tell you they did this thing, I asked GPT for the best stock to pick this year, whatever, it's important that you come back and educate them a little bit to the fact that the AI experts say it doesn't know best. But beyond that, bring your alternative sources and say, look, I worked hard to search for these and validate them. That thing didn't work hard at all to validate anything. It's not even in its model. So anyway, those are some nuance points. I think it really depends on the system that you're using.
So I agree with my colleagues on the panel a hundred percent, if you're talking about these generally available models: because they only have intrinsic knowledge, the references aren't great, and there's a real problem. But, you know, Derek made a point about, would you trust your doctor, your surgeon, who was talking to ChatGPT? Well, hell no, not talking to ChatGPT. But I am working with a team of doctors, and we're standing up an embedding database where we're doing some very advanced semantic search against their own knowledge, right? So they've entered hundreds of their own procedures into this, and they're using this semantic search to query their own content. They're not asking ChatGPT how to fix a deviated septum. They're asking themselves, through these models. Now, you still need a bit of caution, because it's still being interpreted by these transformer models, but the source of truth is very different from asking the internet how to fix a deviated septum. And I think it's just a matter of time before the margin of error is reduced to the point where people are going to really rely on these things. It may not be today, and it may not be tomorrow, but it isn't 10 years out. Sure. I agree with that. The important thing for you all to understand is that you're probably not going to be qualified to tell the difference, unless you clearly are qualified to tell the difference. Meaning, you will know the difference if you're talking to someone like Andrew's company about a topic that you're close to, and you'll know that that is a trusted source of innovators who are paying attention and building a business of quality, who understand your domain, and you understand it too. And by the way, it's probably going to cost money. Any of these things that are free... there will be lots of things that claim this stuff. Yeah.
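The approach Andrew describes, semantic search over a practice's own vetted knowledge rather than asking a general model, can be sketched roughly as follows. This is a minimal toy illustration: the "embedding" here is just a bag-of-words vector, and the documents are invented; real systems use a learned embedding model and a vector database.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real system would call a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing terms, so the dot product just works.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The firm's own vetted knowledge base, not the open internet.
documents = [
    "Roth conversion checklist for clients in their forties",
    "Required minimum distribution rules for inherited IRAs",
    "Rebalancing policy for conservative retirement portfolios",
]
doc_vectors = [embed(d) for d in documents]

def search(query):
    # Return the document from our own corpus closest to the query.
    scores = [cosine(embed(query), v) for v in doc_vectors]
    return documents[max(range(len(documents)), key=scores.__getitem__)]

print(search("rules for an inherited IRA distribution"))
```

The key design point is that the model only ever retrieves from content the professionals entered themselves, which is why the source of truth differs from an open-internet query.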
And so Andrew's right about where the technology is going and can go, especially in subdomains. As a consumer of it, you're going to know it when you see it, when it's real. And if it otherwise feels too good to be true, just ignore it. Sure. Well, that makes sense. It seems to me a point that we're kind of getting at here is that you as an advisor and AI are on a team, right? You're coming with the knowledge, and the AI can be an assistant. It can help you put that knowledge to work, or put it in a way that maybe a client could really well understand. So it's a team effort, but one can't replace the other. Speaking of replacement, one of the questions that I had was, can AI replace an administrative assistant or an associate advisor? If you're not familiar with what that is, maybe I could let Derek start with the associate advisor, and then we can get to the assistant. But what would that look like? Could it truly replace those positions within our world? I don't think so. I mean, maybe mechanically it could do a bulk of the lifting. But in my experience, my clients don't just talk with me. They talk with another advisor in my practice, or support staff, and they build relationships with those people. And so, can AI be a tool to help us get more things done? Yeah, it can do it faster. But is your client going to feel good about almost marginalizing the human relationship that they built with you and trusted with you and your team? So I don't know. I would really struggle to replace them, because I have another advisor on my team, I have a staff person, and I would never think about using AI to replace either of them. Sure. I did just give them access to Gemini so they can do tasks and get more done with it and be more efficient. I'll do that all day long to support them, but I wouldn't use it to replace them.
And it just goes back to, are you going to replace yourself? Sure. Yeah. Especially regarding that associate advisor, because they're typically doing a lot of financial analysis, it'd be difficult to replace them, whereas an assistant is generally more administrative. Right. So I just kind of wanted to hear thoughts weighing in there. Andrew, Tyson, if you have anything to add. Derek, I have a question for you in return. So I agree with you, you wouldn't replace them. But would you double down on the human? So, the right associate advisor who's really fluent with Gemini and other related tools: do you need to keep adding to that human team, or is there some economy of scale there with the AI? Definitely economies of scale. And obviously it depends how much you want to grow your firm and your client base. So I would say you'd want to add humans, but maybe you don't have to add as many humans as you would have in the past, because each one that you're empowering with this new tool that has exponential scale is able to help get more things done. Yeah. Makes sense. Tyson, did you want to say something? No, I'll just triple down on that point. Hiring people that come with experience and an understanding of this stuff and your domain is just going to get you the best hires. So I think it becomes more interesting, and then they'll end up scaling themselves in a certain way and switching things around for you. Definitely want to hire for this. You don't have to learn it, just hire for it. And there are young people that have figured it out, and Andrew knows a lot of them. That makes sense. I just want to remind the audience that we're approaching the end of our webinar here. We're going to open it up to the Q&A in just a minute after I ask another question, but I just want to remind you that that function is available. So one thing I wanted to ask about are chatbots. Can someone touch on that, and what should we be wary of or aware of?
Tyson, do you want to start? Just chatbots in general, like across the web. Are they affecting AI specifically, or what does that look like? That's a pretty broad question. Chatbots were awful, and for the most part, the way they're implemented in the world is still awful. They're going to get a lot better. So the idea of talking to a chatbot now, like if you're calling into Disney and trying to sort out an issue, or typing or talking to stuff online, or even having a text-back function for your own practice: the chatbots very soon will be very good. And they will effectively remove that first-tier requirement so people can service themselves. But I'm not sure that's where your question was coming from, Summer. No, no, I think you got it. Yeah, sorry, a little broad there. Yeah, the promise of chatbots has been there for 15 years, and it is finally coming true, and it is thanks to this stuff, and it will probably be widely available in such a way in the next two or three years. So I'd suggest starting to experiment with them in real ways. Great. You have something to add there, Derek? I agree. They've been really annoying for a while, but I think we're on the cusp of something pretty cool. And let's face it, in the world we live in today, most consumers like to be able to do some self-service, whether it's just an updated beneficiary or whatever, where they're like, man, I don't want to have to schedule a meeting and meet with my advisor just to do this one little thing. And that's where a chatbot could be helpful: if it's smart enough, it can help you do that, or point you in the direction to go do it, versus having to get a human involved. And it's super interesting, because now you're gathering data, the advisor sees what's going on, but you're also getting some efficiency out of it at the same time. So we're on the cusp there. I don't know if we're quite there yet, but we're close.
Cool. Andrew, did you want to add? I think with the chatbots that we've all been exposed to, that we hate, the primary reason is that you know immediately that it's not a person. And these things were rules-based. So the way it worked is, for the companies behind these things, there's a decision tree: if they ask this, go here; if they do this, go there. And so they were kind of bulletproof, because you couldn't mess them up, because they were following a rule, and that was fine. Now you start to blend in a little bit of generative AI, and they start to take on more human-like characteristics, which is great. The challenge comes in the integrity of the underlying data, because these GPT models are people pleasers. They just love to make you happy and give you an answer that they think you'll be satisfied with. That's not necessarily what you want when you're communicating about a situation with a chatbot. So there's a real challenge for these companies to keep the thing in the lane of what it should be talking about, but do it with a human-like GPT interface. So it's early days, and you're starting to see a couple of examples of these things that are better, but it's going to take a while for that to settle out, I believe. Cool. Wow. That's really informative. It sounds like there's a lot on the horizon for artificial intelligence. It's kind of overwhelming, you think? Yeah. You would know best. So I want to open it to the audience to ask any questions in our last couple minutes here. And while we wait for a question to come in, I just want to ask, where are we seeing AI crop up in our space? We've talked a lot today about where we've come from and where we're going, but today, what tools are available in the advisory space? Maybe Derek, you want to start here? I know there are some things going on with Morningstar and such, but I just kind of want to see what's out there. Yeah.
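The rules-based design Andrew describes, a fixed decision tree where "if they ask this, go here; if they do this, go there," can be sketched as below. This is a minimal illustration; the states, intents, and replies are invented, and real rules-based bots add intent matching on top of this skeleton.

```python
# A fixed decision tree: "bulletproof" because every input maps to a
# scripted branch, but also incapable of handling anything off-script.
TREE = {
    "start": {
        "prompt": "How can I help? (beneficiary / statements / other)",
        "branches": {"beneficiary": "beneficiary", "statements": "statements", "other": "handoff"},
    },
    "beneficiary": {"prompt": "You can update beneficiaries under Profile > Beneficiaries.", "branches": {}},
    "statements": {"prompt": "Statements are available under Documents > Statements.", "branches": {}},
    "handoff": {"prompt": "Let me connect you with a human on the team.", "branches": {}},
}

def respond(state, user_input):
    """Follow the scripted branch if one matches; otherwise hand off to a human."""
    node = TREE[state]
    next_state = node["branches"].get(user_input.strip().lower(), "handoff")
    return next_state, TREE[next_state]["prompt"]

state, reply = respond("start", "beneficiary")
print(reply)
```

Blending in generative AI replaces the rigid `branches` lookup with a language model's interpretation, which is exactly where the "keep it in its lane" challenge comes from: the model can wander anywhere, so the underlying data and constraints have to hold it in place.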
That was kind of why I was at that conference, when they revealed, I forget what they called it, but they were having a conversation on stage with their new AI persona. Pretty wild. And then there are other tools out there. Again, not a lot, I would say, that are consumer-facing yet, though. There's still some question from a compliance perspective, and having been in a very, very strict compliance environment for a number of years, the answer is almost always no, we're just not going to do it. So it was tough to get access. So as I mentioned, Wealth Management GPT is an actual tool that's using it. There are others. I think they're using it almost in stealth mode a little bit, whether it's compliance software, income-type calculations, or there's another company called Bento Engine that is doing some interesting stuff around life events, or even the folks at AssetMap, who have something called Signals, where based upon information coming in, it's starting to output. Again, I don't even know if they're using AI as we're talking about it today, but there's definitely a massive amount of data where they could. So again, I think we're right on the cusp. I think, honestly, we need to get the industry more comfortable with it, which is why we're chatting today, right? To get the education out there. Totally. Andrew or Tyson, do you have other tools that you've heard about, not necessarily just in the advisory space, but general tools that you think a financial advisor should be aware of in this AI space? Tyson, what do you think? I think Derek built a tool, and I think, Andrew, you guys engineer some cool stuff. Any tool that is marketing itself and is coming from people in your space, or that addresses the specific thing you're looking to solve, is really worth a look. When you're using the GPT environments personally, it's changing so fast. It's like the app store.
And going on to forums and searching for what your peers are using and trying it out is really helpful. We could list a couple, I suppose, but they'd be either out of date or too esoteric. I think it's more like: really search for your peers. There are those of you out there that are blazing ahead with these things, and getting familiar with those tools and playing with them is great. I'm sure there are Reddit groups on that or whatever. Yeah, I'm sure. Andrew, did you have something you want to say? No, not a whole lot to add. I mean, I think it's an emerging space. I'm sure that advisors are using just general tools; everything's getting sprinkled with AI magical power. So I think AI is coming into everything you're using, from HubSpot to whatever else. It's interesting, in these early days, for some of these emerging new technologies to offer bespoke solutions in wealth management. It's kind of like thinking about the early days of the internet and how many of those companies are still standing. So it'll be very interesting to see what happens a little bit down the line. Sure. Well, I appreciate all of you so much, your time today, and the value that you've added for those listening. I just want to remind viewers watching today, if you're watching this live, there's going to be a survey here at the end. But I just want to thank Derek, Andrew, and Tyson so much for their time. And with that, I'll just mention that we do have two additional webinars coming up in March and April. We're going to be diving into using AI more specifically in the advisory space, and then getting into all of the guardrails around that in our final presentation. So I just want to make our viewers aware of that, and again, thank these gentlemen for their time. And with that, I appreciate all of you coming out to watch, and I hope it was really helpful. Awesome. Thanks for having us, Summer. We really appreciate it. Yeah, you're welcome.
Video Summary
In this webinar, financial advisors discuss the role of AI in the industry. They emphasize that while AI can enhance efficiency and improve tasks like marketing and content creation, it should not replace the human element in client relationships and decision-making. The panelists caution against blind trust in AI, highlighting the importance of human oversight and critical thinking. Chatbots are seen as a promising tool for enhancing client service and self-service options, but the integrity of underlying data and the ability to keep chatbots within their designated lanes are important considerations. The panelists suggest that advisors should be cautious in using AI for tasks that require a deep understanding of financial planning and client relationships. They recommend exploring AI tools developed by industry experts and peers, staying informed about emerging technologies, and approaching AI as a complementary tool rather than a replacement for human advisors. Overall, the discussion underscores the potential benefits of AI in the advisory space while emphasizing the importance of balancing human expertise with technological advancements.
Keywords
financial advisors
AI in the industry
efficiency
marketing
client relationships
decision-making
chatbots
client service
financial planning