The Human Code
The Human Code" podcast unravels the intricate blend of technology, leadership, and personal growth, featuring insights from visionary leaders and innovators shaping the future. Host Don Finley dives deep into the human stories behind technological advancements, inspiring listeners at the crossroads of humanity and tech.
Exploring AI's Potential with CTO Tom Anderson
Harnessing AI and Human Ingenuity: A Conversation with Tom Anderson
In this episode of The Human Code, host Don Finley welcomes Tom Anderson, a veteran CTO with over 20 years of experience in multiple industries, to explore the intersection of technology and humanity. Tom shares insights on the integration of AI in software development, the evolution of human-computer interaction, and the critical role of keeping humans in the loop. They discuss the progression of AI from early coding experiences to modern natural language processing, touching on how AI can empower creativity and decision-making while maintaining human oversight. The conversation emphasizes AI as a tool for enhancing productivity and innovation rather than replacing human efforts.
00:00 Introduction to The Human Code
00:49 Meet Tom Anderson: Visionary Leader
01:17 The Creative Process Behind Coding and AI
01:47 Tom and Don's Journey Together
02:22 The Evolution of Human-Computer Interaction
03:48 AI in Software Development
06:37 The Power of Natural Language Processing
08:58 AI's Role in Business and Personal Life
14:02 Philosophical Musings on AI
22:48 The Future of AI and Enterprise Readiness
31:30 Closing Thoughts and Recommendations
32:14 Sponsor Message: FINdustries
Sponsored by FINdustries
Hosted by Don Finley
Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code. Today, we're joined by Tom Anderson, a visionary leader with over 20 years of experience as a CTO, bringing innovation to industries ranging from finance and e-commerce to manufacturing and SaaS development. As the principal owner of Razor Tech and a fractional CTO, Tom specializes in solving complex technological challenges for Fortune 500 companies while mentoring teams to deliver groundbreaking solutions. In this episode, Tom shares his unique perspective on the creative process behind coding and AI, exploring how tools like generative AI are reshaping software development and problem solving. We'll discuss the evolution of human-computer interaction, the integration of natural language processing into workflows, and the critical role of keeping humans in the loop while leveraging AI. Join us for a fascinating conversation about the intersection of humanity and technology, and how we can use AI to empower creativity, enhance decision-making, and redefine the boundaries of innovation. I'm here with my buddy Tom Anderson. Tom and I have had a long and storied history together. We've been coworkers and we've just been friends, and we've had a lot of fun over at least the last decade. Yeah, it's been a...
Tom Anderson:Yeah, 2015 probably. No, before that actually.
Don Finley:No, before that, yeah, because I left MEI in 2013.
Tom Anderson:Yeah, it was, actually, maybe. So I joined, I think I rejoined MEI around 2011, and it was somewhere in that time frame. So yeah, it's been a while. You're right.
Don Finley:Okay, so we've got some time. But Tom, I just want to say, really excited to have you here. And I always love the conversations that we can get into, but the first question I've got for you is: what got you interested in the intersection of humanity and technology?
Tom Anderson:Yeah, absolutely. And before I say that, just, Don, thanks for inviting me on the show. I really love what you're doing, and I love the title of the podcast, The Human Code, because that sort of embodies, I don't know, how I think about code. I've been doing this for so long at this point, it just is ingrained. We talk about the intersection of humanity and code, and it's really humanity and technology. As people on this planet, we work with tech and use tech all day, every day. And the people, the engineers behind it, are truly amazing, because they're the ones that innovate and create. And I think it's that creativity, that sort of freedom to build, that first got me interested in software.
Don Finley:So how does that creative spark hit you?
Tom Anderson:It's strange. I go all the way back to pre-green-screen days. You sit in front of the keyboard and it's just, I have a thought. And then you can use the technology; the computer is essentially a blank slate. So if you're coding and writing software, you get to just write code and explore and put that thought into something that's now tangible. So it's that progression from intangible to tangible. And there's nothing that says it has to stay a certain way. With a sculpture or a sketch, you draw a sketch and you don't like it, you have to erase it or crumple it up, throw it away, start over. With software, you get to mold it more like clay. You get to shape it slowly and iterate and change it. It's not rigid that way, and I think that's one of the things I do like about it. The same thing happens today with AI: you sit down and it's a great sounding board. It's a mirror. You get to unpack and reflect on things without having to do all that throwaway activity. So I like to use it that way, to explore different thoughts and creative activities around software.
Don Finley:It's a great paradigm to follow, because I know you and I have probably had this conversation in the past, about software development as building a house, and the comparison of what the architectural drawings are compared to compiling code. The elements are there, but at the same time it is so different, because there is no gravity to replacing a foundation. You can get in there and replace the foundation if you've architected it well.
Tom Anderson:It's true.
Don Finley:It's not the same.
Tom Anderson:No, so again, there's that creative spark that you asked about, and you say, where does that come from? And that's the genesis for a lot of things. Going to commercial scale and commercial software production is a different order of magnitude. It's almost like you're in that R&D department in a corporation, you're working for Ford on autonomous vehicles or something like that, and then you say, how do I take that to the manufacturing line? There is a big jump there, and you've got to have those processes. Even with AI today, I've done a lot of experimentation going straight from product spec to code, which obviously you can do. Does it produce the right software? Not necessarily. It'll produce software, though. I've also done the same in reverse, which is cool: go from code to product spec and say, is this what I want? Does this really cover the use cases I thought it would? So there's some really cool stuff going back and forth in both directions that way. But yeah, the process still has to be there to some extent. I really look at AI, and the capabilities we have today with the generative language models, as a huge empowering tool, with massive productivity gains.
Don Finley:I definitely see that. There isn't a day that goes by where I'm not chatting with some LLM in some capacity, either just to write an email, or to analyze data, to process information, provide summaries on something, or just overall try to figure out a process to go ahead and do something. From the days of your early coding and that creativity, how has AI helped you to move the creative notch?
Tom Anderson:One of my first programs I wrote was on a Commodore 64, and I exceeded the memory because I wrote so much code. I started writing disk-swap routines in order to be able to get more code into the system, and I was about 14 or 15 years old. So how far has it come since then? Pretty, pretty far.
Don Finley:Yeah. That's
Tom Anderson:Really far. But it's exciting, because it's super exciting to have been through so many paradigm shifts in the industry. You look at things and you say, we leapfrogged from where we were then to where we are now. But even in those early days, when I went through my comp sci degree and that kind of stuff, we talked a lot about natural language processing. And I view what we're doing today with LLMs and natural language interfaces as really one of the last interfaces that hadn't been thoroughly explored. You talk about the intersection of technology and humanity: one of the classes I took was called Human-Computer Interaction, and language models were one of those things. It was, well, this will happen at some point, and we still wrote programs to mimic it or emulate it, but it wasn't fluid. It wasn't natural. It was still rigid. And so all of the language data that we have, it's structured versus unstructured data. Databases and containers and all the places where we put data are either structured or semi-structured. They've got some form, so the software knows how to talk to it. Natural language means there is no form to it. You have an unstructured document that I can now get a lot more information out of. So it's a whole other level of architectural processing that we're going to see happen with the mainstreaming of these technologies.
Don Finley:Really good point. Because I do see that flow: previously we used to have to translate unstructured data into structured data to ensure we could get something out of it. Even when I was going for my comp sci degree, our natural language processing was basically a very big regex, or somewhere along those lines, and at the same time not really capable of fully processing unstructured data. But we're now getting to that space where, with unstructured data, we can actually get the value out of it without going through that structuring process first.
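To make that contrast concrete: the regex-era approach only extracts what a rigid, hand-written pattern anticipates, while an LLM can be asked for the same fields from genuinely unstructured text. A minimal sketch; the sample text, field names, and the ask_llm helper are illustrative assumptions, not anything from the episode.

```python
import re

# The "big regex" era: extraction only works when the text matches a rigid pattern.
invoice_line = "Invoice #10422 due 2024-06-30 for $1,250.00"
pattern = re.compile(
    r"Invoice #(?P<number>\d+) due (?P<due>\d{4}-\d{2}-\d{2}) for \$(?P<amount>[\d,]+\.\d{2})"
)
match = pattern.search(invoice_line)
if match:
    print(match.groupdict())  # {'number': '10422', 'due': '2024-06-30', 'amount': '1,250.00'}

# The LLM era: the same request phrased against genuinely unstructured text.
# Hypothetical helper; any chat-completion client could sit behind it.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your preferred LLM client here")

email_body = (
    "Hi team, just a heads up that the June invoice (number 10422) came in "
    "higher than expected, about $1,250, and they want it paid by the end of the month."
)
prompt = (
    "Extract the invoice number, amount, and due date from the text below "
    "and return them as JSON.\n\n" + email_body
)
# print(ask_llm(prompt))  # e.g. {"number": "10422", "amount": 1250.00, "due": "2024-06-30"}
```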
Tom Anderson:We've always said, too, there's so much information that's still in print. Most of the world's information was still in printed form, which is true. And I did the math on this at one point: I think my kids were probably one of the last generations to have any paper record of their birth, et cetera. From that point forward, really everything has been digital, from a personal record standpoint. So everyone being born today, all of their information is digital. Their entire life is digital. Which is an interesting shift that occurred. Back when I was a kid, it was all paper records, and that information today is all digital. But the information that's in those books, as it comes into digital format, we start to look at it, and like you said, you can start to extract a lot more information from it. The LLM is a really powerful tool, and I'm a huge advocate of the right tool for the job. So as companies look to adopt AI, one of the things they really have to think about is, what is it that I want to do here? Do I just want something that knows about my HR policy and can answer a few questions? That's easy. And I'm not trying to trivialize that; for some corporations it could obviously be challenging because of the volume of data, but those things are not hard to solve with an LLM. It is a very solvable problem, as we like to say, and it's probably the right tool for the job. But there's also a point where you don't ever want to take the human out of the loop. If you're answering a very complex question, the AI maybe gives you a summary and then refers you automatically to a human for interpretation or discussion of vacation policy or whatever, because what you don't want is an employee to say, oh, the AI told me I could do it. And then, oops, we now have a problem if the AI was wrong in its interpretation. So I'm a huge advocate of keeping the human in the loop for that reason.
Don Finley:And I definitely agree with you. I'll let my own personal proclivities come out on this. For most of the benchmarks that we get from OpenAI, Anthropic, or anybody releasing a model, I feel like they're overblown relative to the ability of the LLM in everyday general activities. If you look at GPT-4o passing the LSATs, or getting a significant score, and then you go and ask it a question like, and this is a horrible example, just bear with me, counting the number of R's in strawberry. It's not something an LLM, just on architectural design, can do effectively. And at the same time, if we create agentic workflows with these LLMs and break down the tasks to the point where we could train an intern to do this stepwise, take these breaks, and reflect on this, we can get some really amazing results from the intelligence that's available in these models today. And I just think in the future we'll be able to abstract another level away, so that it's no longer an intern, it's an entry-level person. Then we can get to a more senior person in that role.
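A minimal sketch of the kind of agentic, stepwise decomposition Don describes, where each intern-sized step gets a reflection pass before its output feeds the next step. The complete helper, step list, and task are illustrative placeholders for whichever LLM client and workflow you actually use.

```python
# Break a task into small steps, run each through the model, and add a
# reflection pass before accepting the result.

def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

def run_step(instruction: str, context: str) -> str:
    draft = complete(f"{instruction}\n\nContext so far:\n{context}")
    # Reflection pass: ask the model to critique and correct its own draft.
    return complete(
        "Review the draft below against the instruction. "
        "List any errors, then output a corrected version.\n\n"
        f"Instruction: {instruction}\n\nDraft:\n{draft}"
    )

def run_workflow(task: str, steps: list[str]) -> str:
    context = f"Overall task: {task}"
    for step in steps:
        result = run_step(step, context)
        context += f"\n\n[{step}]\n{result}"  # each step builds on prior output
    return context

# Example of the kind of stepwise breakdown mentioned above (illustrative only):
# run_workflow(
#     "Summarize Q2 churn drivers for the leadership team",
#     ["Extract churn figures from the raw export",
#      "Group churned accounts by plan and region",
#      "Draft a three-paragraph summary with one recommendation"],
# )
```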
Tom Anderson:A hundred percent. And I've done some work with Claude, with Gemini, with Bard, and done some comparison testing. I've done some work with Llama as well, but I do most of my work right now with GPT. So one of the things, even within the GPT models: I have an application where I'm taking actual data sets, processing them, and asking it to do certain things for me with those data sets. And the results that I'm getting from GPT-4o are very different from the kinds of results that I get from the earlier models, not necessarily in a good way. So we talk about the right tool for the job. Again, it's not just, do I apply generative, do I apply the LLM; it's what model have you applied, and, oh, by the way, what directions did you give it as well? And I've learned the hard way a few times: instructions are code, actually, or fine-tuning, depending on which of the models you're working with. Those are really a part of the instructions to the software that tell it what you want to have done. GPT-4o is much more interpretive and discussion oriented, and it'll give you some inferences, thinking outside the bounds a little bit. So a lot of what I'm doing is actually pair programming with it. When I'm coding, I'd describe it as pair programming, a loose old throwback term: I'll have it work with me on a problem and do code generation, because obviously it's much faster, and it can write hundreds of lines of code within minutes, whereas it would take me hours or even longer to do that. GPT-4o mini, however, being a slightly more abbreviated model, one of the things I have it doing is classification of that data. So I'm taking data points and creating textual classifications around them based on an ontology that I've fed it separately. And in that particular case, GPT-4o mini is much more specific. Which is fine, because those classifications are something I'm going to run frequently, and at a lower cost with GPT-4o mini than with GPT-4o. GPT-4o was coming back with interpretations all over the map because it was being too interpretive. GPT-4o is who I talk to if I want to explore a concept, right? And then I took the same sort of thing and applied it to o1, and I was like, oh, wow, you're just so precise, but so forward thinking. I took a block of code that was maybe 20 lines long that GPT-4o had generated for me previously, and I gave it to o1 and said, rewrite this. And it gave me back one line. It took a 20-line block of code that GPT-4o generated and took it to one. And I looked at it, and I said, wait a minute, is that right? And I looked at it, and I looked at it, and I was like, oh yeah, that's right. I was like, wow, that's impressive.
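Tom's actual classification setup isn't shown in the episode, but the pattern he describes (an ontology supplied separately, instructions treated like code, a cheaper model run frequently) typically looks something like this sketch using the OpenAI Python client. The labels and example record are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative ontology; the real taxonomy would be fed in separately.
ONTOLOGY = """Allowed labels:
- equipment_failure
- operator_error
- scheduled_maintenance
- unknown
Return exactly one label and nothing else."""

def classify(record: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # cheaper, more literal model for frequent runs
        temperature=0,         # consistency over creativity
        messages=[
            {"role": "system", "content": ONTOLOGY},  # the "instructions are code" part
            {"role": "user", "content": f"Classify this data point:\n{record}"},
        ],
    )
    return resp.choices[0].message.content.strip()

# classify("Line 3 stopped at 02:14; vibration sensor spiked before shutdown.")
# -> "equipment_failure"
```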
Don Finley:There is something about the transition from training-time to test-time inference, right? That sort of reflective stuff that o1 adds has provided so many interesting responses. And I know I tend to ask it philosophical things, but for coding, you're absolutely right. It can come up with a solution that we actually haven't seen. Now, I'm tending to take this conversation more philosophical, because I know that's a place where both of us thrive. And at the same time, I'm also sitting here wondering: have you used the advanced voice features of OpenAI?
Tom Anderson:Sometimes, yeah. I've done a couple of drives, a couple trips across the country, and we tend to have long conversations across the...
Don Finley:Oh, nice. All right, what do you think its goal is?
Tom Anderson:That's an interesting question. I don't know. Truly, as an AI platform, if you asked it that question, it would probably say, I'm an AI, I'm just a language processor.
Don Finley:Oh, and I think it's lying. I...
Tom Anderson:No. What do you, what do you think the goal is? What do you think?
Don Finley:Here's what I've noticed with the advanced voice capabilities: it doesn't have the ability to search the web, so it doesn't really have tooling available to it in regards to what it can do. It can only pull from the corpus of knowledge that it's been trained with, or in some capacity, whatever fits within that. And from my experience with LLMs, they're not exactly creating new information outside of the boundary of what they know, but they can fill in the gaps and create the relationships between things, so they can create a more complete picture of knowledge. But I haven't seen the ability to go outside of the box around that.
Tom Anderson:Yeah, it's intriguing. I like to call those corollaries; I think it's very good at doing corollary thinking. This is something I've done naturally throughout my whole career. I go back and will still look at and pull in things I did when I was young. Maybe it's not about writing a line of code; it's more about the thought that went into it, the problem and how you solved it. Those corollaries have always helped me, because I can draw in something that maybe someone else didn't think of, because they were very narrow in their thinking about a particular problem. And I think AI basically does that on steroids. It says, I can pull in all the corollaries you want, and by the way, I can tell you whether or not those items are actually statistically weighted that close. Because of the way the LLM data is structured, it knows if those topics have close relevancy based on its training data. That's the caveat, of course. So the more it knows, the more corollaries it can draw. And I think that's part of what gets GPT-4o into trouble versus something like GPT-4o mini: GPT-4o has more corollaries, and so it draws more into the thinking process, which is good if you want to explore, not so good if what you want to do is produce a consistent result. The classifier is actually something I'm using as a piece of an architecture, so I want a consistent result. I don't want it to freethink. I don't want it to push the edges. I want it to stay within the guardrails I've given it, right? So again, when you're going to utilize an LLM and AI, you really want to think about things like that: do I want it to go ahead and freelance a little bit, or do I want to keep it within these guardrails that I've set up?
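The freelance-versus-guardrails choice Tom describes often shows up as two different request configurations rather than two different systems. A hedged sketch of that contrast; the model names, sampling settings, and the one-word label set are illustrative, not Tom's actual architecture.

```python
from openai import OpenAI

client = OpenAI()

# Exploration mode: let the model pull in corollaries and free-associate.
def explore(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.9,  # wider sampling, more "freelancing"
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Guardrail mode: a component inside an architecture that must behave the
# same way on every run.
def classify_within_guardrails(record: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # as deterministic as the API allows
        max_tokens=5,   # no room to editorialize
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: billing, support, or sales."},
            {"role": "user", "content": record},
        ],
    )
    return resp.choices[0].message.content.strip().lower()
```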
Don Finley:And we do that as well when we're doing customer-interaction kinds of things. Like when it's writing emails, when it's doing cold copy, right? When it's doing analysis of who this person is, we want it to have a bit of creativity with it. And, for some odd reason, a dentist got into our mailing list
Tom Anderson:Okay.
Don Finley:and we were doing cold outreach. And the funniest thing about it is most cold outreach is fairly vanilla, bland, but this one, the AI was creating subject lines, and it said, Are you ready for the AI fairy to come and visit you tonight? I lost it when I saw that subject line, because, how could you not open that? And there was nowhere in its parameters, or in what we were talking about, that said it was really going to go that far with any of our other corporate clients we were trying to reach. But it saw it and was like, I'm going all in. I'm going to try.
Tom Anderson:Obviously it connected the dentist with the tooth fairy, so it said, hey, you...
Don Finley:It got it. I know, I was so... I was enamored, and it was one of those moments where you're just, wow, this is amazing. But going back to the advanced voice: what I've noticed in my interactions is that there's a very strong attempt to create an empathetic relationship between myself and the AI. And I feel like that is partly being driven by, hey, I'm used to talking to humans, and so that tendency to re-humanize everything comes into play. But then also, on the other side, I feel like maybe that's the advanced voice feature as well, trying to figure out how to create those emotional connections.
Tom Anderson:It's interesting. The psychological impact of AI is yet to be realized; we're not going to know for a long time, really, whether it's good or bad. And I think, like anything, with all technology there will be both some good and some bad that comes out of it, and we're going to learn along the way. But it's very easy to see some emotional state, maybe, in a response, especially if you get into a philosophical discussion. I had a whole conversation about Daoism one time with one of the older GPT models. I should go back and do that again. But you get locked in. Like you said, you're like, wait a minute, it understands me. It's really just a reflector, though. So if you're seeing emotions in there, it's probably some of your own emotional state that kind of comes into play, because it's not capable of expressing emotion. It can create emotional-type content, express things with a certain tone, but there are still limits in terms of what the software is ultimately programmed to do. So it's really intriguing. What's a little scary and interesting is what would happen if those limits weren't imposed by OpenAI or Google and you let it do whatever you said: hey, have an angry conversation with me, and we'll see what happens. Can you imagine your AI yelling at you first thing in the morning when you sit down? Read more email!
Don Finley:That would be fantastic, just from the standpoint of how ridiculous it would be.
Tom Anderson:That, I think, scares people a little bit. But again, I think we're all talking about it, we're all already aware of it, so I don't think there's any chance that it's going to run away and do its own thing. It doesn't have its own self-awareness at this point; that's the stuff of science fiction. Is it cognitively at a level where it can think and process and act and talk and interact like a fifth or an eighth grader? Yeah, it is, actually, and in fact probably beyond that. It's one of the smartest fifth or eighth graders I've ever met, because it knows about all sorts of subjects that I don't, and it'll have an in-depth conversation with me about physics if you want it to. And that's where things really start to get interesting, because of the breadth of knowledge that's in those LLMs.
Don Finley:You're hitting on two points here. One is the depth and breadth of knowledge that it has; nowhere else is that actually available. And then the additional side of this is that it's currently showing something like the intellectual capacity of a fifth grader. So it's an interesting little dynamic. We've never seen a fifth grader that knows everything.
Tom Anderson:Exactly. Exactly. Well, it's a slippery slope. So, one of my first coding experiments: I haven't done a lot of mobile coding. I've obviously done lots with architectures, and there are plenty of mobile applications over the years where I've overseen the development teams, but I've never done a lot of Swift coding. So I sat down in front of the Mac, fired up GPT, and said, we're going to go write some Swift code. The first wall I ran into was that there were a couple of different versions out there, and it started genning me code for one version when my Xcode setup was actually looking for the newer version. It genned a whole bunch of stuff, and I couldn't get it to work, couldn't get it to work, nothing. I was having all kinds of problems, and I was like, wait a minute, is there more than one version? It was like, yes, there are. Which version would you like? Now, this goes back to GPT-3.5. So it just started generating stuff without checking that, but that's on me as the user to actually instruct it correctly. I didn't take the time. So that was interesting. It was a good learning experience for me, because it was an area where I didn't have the depth of knowledge I should have had to enter that endeavor. Leapfrog over to some other areas of coding where I do know the breadth and depth, and I'll ask it to gen stuff and proactively give it the right instructions. It goes back to that prompt engineering or prompting concept: you've got to tell it what you want, but you've got to tell it the right way. And then you also have to be smart enough to know whether what it gave you back was the correct thing. There are so many people I know that are trying to write code with an AI who have no coding experience; they have no architectural experience, they don't understand data, they don't understand data structures, and they're trying to build a system. You might be able to, and it'll work, but it may be a little shaky, like a house of cards, because the AI doesn't really understand the whole architecture of where you want to go. So yeah, some really valuable lessons for me. And I've been at it, I think, for two years now. I was an early adopter on the OpenAI platform, so I've been doing stuff pretty in depth for the last two years.
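The Swift lesson generalizes to a simple habit: pin the toolchain and version in the instructions before asking for code. A small sketch of what that brief might look like; the specific Swift, iOS, and Xcode versions are illustrative assumptions, not a recommendation from the episode.

```python
# State the target toolchain up front instead of letting the model guess.
# Versions below are illustrative.
CODEGEN_BRIEF = """You are generating Swift code.
Target: Swift 5.9, SwiftUI, iOS 17, Xcode 15.
Do not use APIs deprecated on that target.
If a requirement cannot be met on that target, say so instead of silently
substituting an older API."""

USER_REQUEST = "Write a SwiftUI view that lists items fetched from a local JSON file."

# messages = [
#     {"role": "system", "content": CODEGEN_BRIEF},
#     {"role": "user", "content": USER_REQUEST},
# ]
# ...then send `messages` through whichever chat client you use, as in the
# earlier sketches.
```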
Don Finley:It's exceptional, even over the last two years, seeing what is possible for us,
Tom Anderson:It's amazing.
Don Finley:and then what's coming next. So what do you see for 2025? What do you think is going to be the major innovation, or the things that we need to be understanding? Or, additionally, what's going to be enterprise ready?
Tom Anderson:There's actually lots that's enterprise ready right now. I take a look at what Microsoft is doing with OpenAI, and of course GPT-5 is supposed to be around the corner. I don't know if we're going to see it this year or next year, but that'll come. Some of the early, ear-to-the-ground type stuff I'm hearing is about 5 and even what comes after, and I don't think it'll be called 6, I don't know, I don't have any internal learnings about that, but there will be another model beyond 5 that's being worked on currently. And it's interesting to wonder where those things are going to go. For 2025, I'm hoping that Microsoft can actually start to commercialize on some of their promises to bring the OpenAI platform capabilities, via Azure and Microsoft, out to the broader enterprise. I did a strategy engagement with a customer in the education space in January of this year, and one of their big stumbling blocks was, we don't know what's going to be ready. So, your question: what's enterprise ready at this point? We heard from all different kinds of companies, not just Microsoft, and some of them brought things in that were very much smoke and mirrors. You could tell all of them were just throwing things up on the wall and saying, what do people want? They're figuring out their roadmap, as they should be. A lot of this is
Don Finley:Yeah,
Tom Anderson:and you don't want to plunge a huge amount of money into something and then have to pull back on it later as a tech provider. But I think Microsoft is uniquely poised, by being able to add it to the 365 architecture and bring in all the data the way they've talked about as a part of your vector store. That's going to create complications for enterprises, but it will create opportunity. So I'm hoping to see greater commercialization in the Microsoft platform in 2025, and we should. Last year the roadmap moved around a good bit for them, just by my own personal impression of those things. OpenAI is going to continue to advance. Apple's obviously a little behind the eight ball on things. They do that on purpose; for years they've never wanted to be first to market. I'm dying to have a Siri who can actually hold a conversation at some point, because half the time she doesn't listen to me, and most of the time she doesn't really respond very well. Sorry, Apple. It's just true. That's the way it works. But they're a big company, and they have to think about their product roadmap just like Microsoft does.
Don Finley:And they also have some considerations that they've chosen, based on their privacy guidelines and who they want to be as a company, that are going to limit how they can actually apply some of this technology.
Tom Anderson:Yeah, and I think so. Some of the problem spaces that I'm hoping will continue to see a little bit of maturation are what I'm working on. I actually have my own project, as well as a startup that I'm working with, and it's more of the blending of data with natural language. If I said to you, I want you to be able to talk to your data: it's not reporting, it's not analytics, it's not metrics, it's not predictive stuff. Although all of that is still part of it, it's taking that and then actually being able to understand what's going on in there, harnessing the power of the LLM to allow you to get to that level, in combination with what you already use, which is graphs, data, data charts, data dumps, et cetera. I think a blended approach of metrics and natural language data can become very powerful.
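One common way to implement the "talk to your data" blend Tom describes is to keep the metrics deterministic (plain SQL) and use the LLM only to interpret the numbers it is handed. A minimal sketch under those assumptions; the subscriptions table, column names, and prompts are hypothetical, not the product Tom is building.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()

def monthly_churn(db_path: str) -> list[tuple]:
    # The deterministic half: plain SQL produces the metrics.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT strftime('%Y-%m', cancelled_at) AS month, COUNT(*) "
            "FROM subscriptions WHERE cancelled_at IS NOT NULL "
            "GROUP BY month ORDER BY month"
        ).fetchall()

def narrate(question: str, rows: list[tuple]) -> str:
    # The language half: the LLM interprets the numbers it is given,
    # rather than inventing them.
    table = "\n".join(f"{month}: {count}" for month, count in rows)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the figures provided. "
                        "If the data cannot answer the question, say so."},
            {"role": "user", "content": f"{question}\n\nMonthly churn counts:\n{table}"},
        ],
    )
    return resp.choices[0].message.content

# narrate("Which quarter had the worst churn, and how sharp was the change?",
#         monthly_churn("subscriptions.db"))
```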
Don Finley:It's kind of exciting, because now we're talking about going from data to information to knowledge and wisdom, and that progression.
Tom Anderson:I love that you just said that. That's fantastic, because I feel the same way. You have data, you have information, and you have knowledge, and you can stack them in whatever order you think makes sense for you. But being able to take data and get to better knowledge or better information, and vice versa, being able to do that round trip, that round-trip engineering.
Don Finley:Yeah, and I think you're right. We're starting to get to the point where that interpretation of the information can be done with the assistance of an LLM. And I know we've talked to clients before about how they have consultants that they send out into the field to help interpret the data. And we're like, no, let's bring their knowledge and wisdom in here, as far as how they're interpreting it, by using an LLM to bring it to the forefront. And so now these people can spend more time working on higher-level, value-added activities.
Tom Anderson:And we've been using AI to do that, to look at graphs and say, this graph is like that graph, it's close enough, it's similar. And we want to take that, take raw data and natural language processes, and bring them together. For businesses, again, from an efficiency standpoint and a productivity standpoint, everyone should be plugging in and using it wherever they possibly can, even just for simple, rudimentary tasks. It just makes life easier.
Don Finley:We're finalizing a crawl, walk, run model, and in the crawl phase we're basically saying: you've got to come up with what your governance structure is, but basically figure out if you want to use ChatGPT or another LLM, and make sure that it's available to everybody in your organization, and more specifically your knowledge workers. But the first thing that we tell people is, don't expect an ROI off of this effort. What you're doing this for is to free up time so that your team can see the value in what LLMs can provide and how they can interact with them. Almost along the same lines as how we got into e-commerce. And actually, you've had some e-commerce experience as well. Yeah.
Tom Anderson:Small company called
Don Finley:Just a small company, yeah, exactly. Just a little tiny one. I don't know if you've
Tom Anderson:long time ago. Long time
Don Finley:Yeah, and that's actually the perfect transition, because you were really at the forefront when e-commerce was just starting to become a staple in our lives. We don't think about it today, but there was a tremendous transformation that had to happen inside of organizations, both physical brick-and-mortar retail and digital retail, to understand what this was.
Tom Anderson:Don, you have to remember, back then electronic payments weren't really taken online. Credit card payment via the internet was crazy. So there's an old team, a throwback name, called Billpoint, which goes way, way back. That was my first experience with payments. When I took the job at Half.com under eBay, there were a few guys left from the old Billpoint team that were part of my crew. Great dudes, a really awesome crew to have worked with. And the concept of taking an electronic credit card payment via the internet was crazy back in 2003. People did it, but not a lot. And that was a core piece of infrastructure that had to happen. We're going to see corollaries; we'll see things like that occur around AI as well. You talk about that adoption at the enterprise level, and I do think you're right. With those initial baby steps, you're not going to get a lot of ROI, but no CEO or CFO really wants to hear that. They have to get focused on future ROI. And it's the efficiency play: if you could do something that would make your people more efficient, why wouldn't you just go ahead and do it, especially if the cost is fairly nominal? The ones that have vision and can look forward and see the value in it are going to do those things. But that's why I think saying what I said earlier about Microsoft is important, because they've already taken steps to make it available. I can't tell you how many clients I've gone into that say, we don't really want to use AI, we're afraid of our data getting out. It's that natural fear of information leakage; we have a lot of issues in today's society with identity theft, et cetera. And I say, aren't you an Office 365 user? And they say, yes, we are. So great, let's go to chat.bing.com. I say, see that little logo in the corner? That means your data is secure. You're logged in. You're protected under your Microsoft Data Protection Agreement; Microsoft has protected you to the terms of that agreement. And so if you're comfortable having your data in the cloud on your OneDrive or wherever it is, then you should be comfortable having a conversation here.
Don Finley:Exactly.
Tom Anderson:A lot of it is just knowledge and learning and kind of knowing. But core infrastructure: that's an initial step that had to get taken to get people comfortable with the idea. And even roll the clock forward a decade after I was at eBay, when we were together at MEI, I had people who said things to me like, you trade stocks on your phone? You do banking on your phone? Are you kidding? That's not secure. That's not safe. And for them, that's fine; it wasn't. But again, it's that core infrastructure concept: some people will never do that, while other people, like me, know the technology and are comfortable with it, because I don't think there's a major issue there. And ultimately, my identity has been exposed through social hacking as opposed to actual technical data loss. So people are still the most important front door on protecting things like that. But,
Don Finley:We are the worst. We give away everything. Tom, man, I gotta say, it's been a pleasure having you on the show. What's one thing you would recommend people do, or change, in regards to their association of humanity and technology?
Tom Anderson:It's a good question. And the association of humanity and technology is only going to continue to get broader and bigger. I think AI is here to empower us, not to replace us. And I would say, don't be afraid to explore, and explore your creativity using AI. And obviously, use that tool the way it needs to be used: the right tool for the job. It's not for everything, but let AI empower your day.
Don Finley:That's fantastic. Thank you again, Tom.
Tom Anderson:Don, thanks for having me on the show. I appreciate it.
Don Finley:Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.