The Human Code

Personalization in the Age of AI: Jay Van Zyl on Transformative Technology

Don Finley Season 1 Episode 61

Understanding Human Behavior in AI: A Deep Dive with Jay Van Zyl

In this episode of The Human Code, host Don Finley engages in an insightful conversation with Jay Van Zyl, a seasoned expert in artificial intelligence, machine learning, and business transformation. Jay discusses his intriguing journey into the intersection of humanity and technology, emphasizing the importance of understanding human behavior to build meaningful AI systems. They explore the nuances of hyper-personalization, the role of trust and reliability in AI-driven solutions, and the integration of computational social science with generative AI. The discussion also touches on practical applications of AI in enhancing customer experiences, especially in regulated industries like finance. Jay shares his philosophy on balancing the analytical and empathetic aspects of AI, aiming for solutions that are both technically sound and human-centric. This episode offers valuable insights for tech enthusiasts, entrepreneurs, and businesses looking to leverage AI responsibly and effectively.

00:00 Introduction to The Human Code
00:49 Meet Jay Van Zyl: AI Pioneer
01:55 Jay's Journey into Human Behavioral Sciences
03:50 The Philosophy Behind AI and Human Behavior
06:19 The Role of AI in Personalization
10:32 Challenges and Trust in AI Systems
15:10 Practical Applications of AI in Business
21:27 Future of AI and Human Interaction
29:34 Ecosystem.ai: Enhancing Customer Experience 
38:22 Conclusion and Final Thoughts

Sponsored by FINdustries
Hosted by Don Finley

Don Finley:

Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code. Today, we're thrilled to welcome Jay Van Zyl, a pioneer in artificial intelligence, machine learning, and business transformation, with over two decades of experience shaping industries like finance, cybersecurity, and automation. As the founder of ecosystem.ai, Jay leads the charge in developing no-code platforms that provide actionable insights, enabling organizations to make smarter, real-time decisions. In this episode, Jay shares his philosophy on understanding human behavior as the foundation of meaningful AI systems. We'll explore the intersection of computational social science and generative AI, the importance of trust and reliability in AI-driven solutions, and how hyper-personalization can redefine user experiences across industries. Join us for an insightful conversation about the role of AI in empowering businesses and individuals, and how understanding the complexities of human behavior can lead to transformative outcomes. I've got Jay Van Zyl here with me. The man is awesome. I am really looking forward to this conversation. Jay, you've got such a rich background, and I'm going to start you off with the classic question of what got you interested in the intersection of humanity and technology?

Jay Van Zyl:

Don, the way that I grew up, in the context of science and math and in a world of absolutes, you get to a point where you really believe that everything can be modeled, can be framed in some very specific outcome with highly deliberate and accurate answers and frameworks. And it's only when you realize eventually that, oh dear, this human in the system doesn't do what it's supposed to do. It completely confuses you, because nothing that it says it's going to do will it eventually do. There are high degrees of uncertainty as it goes through cycles of change. People get angry about things they're not supposed to. They get happy when they might not need to. They are just very dynamic beings. And in the 90s, I got exposed to the concept of human behavioral sciences. And as I got more into it, I realized that there were very few actual frameworks and capabilities that we could leverage reliably to build algorithms. And it became a journey. And my journey started in learning, human learning. So that means single-loop and double-loop learning, from Argyris and Schön, and people like Peter Senge with The Fifth Discipline. That was my foray into the discipline: to try to make sense of, not necessarily the psychology of the human itself in their own right, but how they fit within a collective, that collective being their family, their work, their friends, and how that fits within a society. So it deeply intrigued me how these various elements are used to make sense of how humans behave. So what transpired over time was this notion that if you want to invent any technological tooling capability, or you want to create some kind of way in which you're going to assist a human in achieving better outcomes, then you need to first understand, or attempt, let's call it attempt, to understand the human, or empathize with an individual. And it's tough to empathize, because you can't really think the way that people think who are nothing like you, but you attempt to understand who they are, and then you want to make sure that the tooling that you provide can service that individual. They must feel great about what they experience. And then, if they are in a society, or let's think about it: they are with their friends and their family, and they're in their local coffee shop of people that they meet only on Mondays, and then they're in a restaurant of people that only meet on a Friday. You might find that people have got these different swirls of relationships. They form in different ways, and people shift their personalities almost to fit these communities. So you need technological capabilities to allow people to deal with these ongoing changes. And if you don't make sense of any of that, then the easiest thing, the kind of opt-out, is to make the assumption that all of society, based on a stereotype, are all the same. And I think that really dawned on me: the fact that many parts of the research that I observed over time were heavily focused on genericizing, or stereotyping, or simplifying. All humans that look like this, all humans that speak like this, all humans that read these things, all humans that engage in the following activity must be the same. And that really scratched at me; I really couldn't make sense of it. And I wanted to understand, who am I as the individual in that system? That's what intrigued me. That's what got me into this. And I kept on reasoning about it, and it's been keeping me busy for 20-something years.

Don Finley:

It's amazing, because it sounds like a very spiritual process as well as an analytic process too. Because for that individual in a system, we all know that we can get attached to the team. We can get attached to the society. We have an identity that reflects the society that we live in, the geographic region that we're in, as well. And so we do it to ourselves, but you're also saying, from the opposite side, that in trying to understand who your customer is, or who that population of people is, you want to get to a hyper-personalization of that space. Or at least that's what I'm...

Jay Van Zyl:

Yeah, I think maybe to reflect on a word I've thought about over time, and maybe slightly differently than most: we don't really think about a word like spiritual, or it's like a spiritual experience. It's more, do I have a key philosophical hook from which I can drive my reasoning, so that if I want to take that reasoning into some kind of technological invention, it will provide me enough guidance about the output of what I produce? And I think that a philosophical tone, for us, made more sense. Because if I think about all the elements of deciding how I will engage with, let's say, a one-on-one chat, let's take a couple of examples. If I chat on Facebook chat or Telegram or WhatsApp, or whatever the tool is that I use to have a one-on-one conversation, or iMessage, whatever that might be, the technological platform is purely a carrier of the intent that sits in my mind in communicating with another human who needs to receive it reliably and communicate back with me. So that means that there is nothing in the platform itself, in the technological invention itself, that assists me in becoming better or poorer at who I am or what I am. It is the fact that I'm connecting with another human at the end of the line that plays the role of connecting with me. So that means that we often confuse this technology. Because I also deal with technology that's going to get me to connect with my immediate friends. And if you think of the strength of weak ties, that is a breakthrough bit of reasoning in the economics of graph theory. So if you take the social graph by Meta, or you take the knowledge graph by Alphabet, or you take the economic graph by LinkedIn, or any one of the companies who built an entire business on the fact that there's a collective that believes certain things: the collective has a far greater impact on you as an individual, because you keep on reviewing, and essentially get bombarded with, content that is produced by people that are one, two, three, four degrees removed from your immediate relationships. So that means that if I think of, on the one end, having a personal conversation with somebody over a chat, and it's only about us, you and I, in this situation, that's very different from all of us, where everybody can contribute and I might not know half the people. And the unfortunate thing is that humans behave very differently when they go from this very private conversation to a conversation that gets more public, because they put on a face. They change the entire style of engaging. They behave differently. And I think that deeply intrigued me. Because if you think about any new invention, and I want to deliver something that's better to you as an individual, I need to have at least a philosophical view of how people behave differently under these different situations. Because if I think it's all just the same, then I'll be in trouble. I won't be able to create something that's truly impactful and meaningful. And that's the thinking. Yeah.

Don Finley:

I can really appreciate that. That's a nice, solid foundation in order to take a look at the world, take a look at how you can actually make somebody's life better. And the one word that I continue to resonate with is intention. And you are right, it takes intention for us to get onto this call. It takes intention for us to go. And the technology doesn't move itself in that regard, right? Years ago we probably would have recorded this in person, right? But now we have the opportunity to record from halfway around the world. And that's the absolutely fascinating aspect of where it is, but the intention that you and I have to connect hasn't changed. The technology has opened up an opportunity. Now we're sitting in a wave of AI, and I think that we're starting to see a lot of fascinating solutions come out of it. How has your philosophy played out in this latest tech rush of artificial intelligence, and having a shovel that can now show some choice in where it's applied?

Jay Van Zyl:

Yeah. Listen, that's a great entry point into, I think, any technologically enabled company figuring out how they think about the tooling. For us, we want to make sense of a human situation, or an intent, reliably. And I think the really key thing is reliability as the key construct: if I behave in a way, and I know that I want a response because of a certain set of behaviors, I don't want an engagement with the technology that's going to make up things on the fly. So that's why we really think about the AI world in the way that the discipline is broken up. If you think about the latest movement, we think of generative models and discriminative models maybe differently than most. Because we truly believe that if I have a model that's going to detect that I have spent money in a certain way, that I am still engaging, that I have connected with certain people, that I have been to the office today, that I am driving my car down a freeway at this speed, all those are bits of evidence, and you don't need a model that is going to make up stories about what you're doing. I need the evidence of what is accurate and precise about the things that you are engaging in and doing over time. And I want to then enable that using various kinds of generative tooling. So I want to enable it with something that is linguistically useful, that is aesthetically appealing, and that might also be, from an auditory point of view, something that I can turn into a voice that is soothing, or that makes me feel great about my situation. So that means it's incredibly important for the generative piece to be good at what it does. In our world, we have an intent detection engine. And an intent detection engine is: if you're busy spending, let's say that I detect that you might be getting yourself into trouble, because the algorithm is showing me that people who spend at this velocity, based on the current debt situation, income ratio, whatever that might be, are showing signs of stress. I need to now take that, based on who you are as a person, and translate the language into something that will make you act in a way that is appropriate for you, without feeling like you are being treated wrongly. So let's say we take a well-known framework like the Myers-Briggs Type Indicator, the MBTI, and you are, let's say, a thinker or a feeler. If you are a thinker, and we have detected from your historical behaviors that you are a thinker, I want to talk to you in facts and figures. I want to say to you: you do know that if you continue to spend at this rate, you're going to have a 20 percent deficit at the end of the month. You're going to have to earn more money. You are 100 short of this particular payment; you need 60 to solve that. And I'll give you the facts. But if you are a feeler, you want to be able to say to the person how bad they're going to feel when they can't pay their bills this month, because their family are not going to be happy with them. So I think what is happening is this intersection of accurate and precise evidence-based activity of what you're doing, and then using generative AI inventions. In our case, we fine-tune our own models. We've got quite a big stack of tools that we bring together to make sure that these tools can function together into something that
will serve the customer at the end of the day, or serve the employee, who is also a human. Because what we found is that if I'm on the phone with you, and let's say that I'm a private banker or an insurance representative, or whatever the role is. The first line, if you think about classical support, is: I'll just speak to some agent, and they don't understand me. They don't really care about me. They're really friendly, they're trying to solve my mechanical problems. But as soon as I have an investment portfolio, I have an advisor. I have a far more client-centric relationship. I need and expect the person on the other end to know me better, because I'm the one who's spending millions in my portfolio with you. I'm the important one as your customer. So how are you going to talk to me? It's tough for companies to make sense of this. So now what we've been working on is saying: let's say I can reliably categorize, predict, determine your personality in the context of, let's say, your spend or your money behavior, or your behavior in the context of your retail spend, whatever the algorithm is in the company's situation. You want to then enable the representative of the organization to extract from those behaviors something that they can use as a cue to speak to that human. But it must be reliable. So we've been bringing the generative part of the AI world that we know works well, that is linguistically useful and the like, together with accurate and precise outputs, and morphing them together so that the person receiving it can see: oh, this really works for me, because I can see that it is not making up something about a client that it doesn't know anything about. Yeah. So that's maybe a longer explanation, but that's how we see that.
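
To make the two-part pattern above concrete, here is a minimal Python sketch of what Jay describes: a simple, auditable discriminative step that detects spend stress from transaction evidence, feeding a framing step that words the same finding differently for a thinker versus a feeler. The data shape, thresholds, and rule are illustrative assumptions, not ecosystem.ai's actual models.

```python
from dataclasses import dataclass

@dataclass
class SpendSnapshot:
    monthly_income: float
    monthly_spend: float
    debt_ratio: float  # total debt relative to annual income

def detect_spend_stress(s: SpendSnapshot) -> bool:
    """Discriminative step: a deliberately simple, auditable rule over
    evidence. A real system would use a trained classifier on full
    transaction history; these thresholds are illustrative only."""
    projected_deficit = s.monthly_spend - s.monthly_income
    return projected_deficit > 0 or s.debt_ratio > 0.4

def frame_message(s: SpendSnapshot, personality: str) -> str:
    """Framing step: same evidence, different wording. A 'thinker' gets
    facts and figures; a 'feeler' gets consequences framed in relational
    terms, as described in the episode."""
    deficit = s.monthly_spend - s.monthly_income
    if personality == "thinker":
        return (f"At this rate you will be {deficit:.0f} short this month "
                f"({deficit / s.monthly_income:.0%} of income).")
    return ("If spending continues like this, covering the bills at the end "
            "of the month is going to be hard on you and your family.")

snapshot = SpendSnapshot(monthly_income=5000, monthly_spend=6000, debt_ratio=0.5)
if detect_spend_stress(snapshot):
    print(frame_message(snapshot, personality="thinker"))
```

The point of the split is that the factual finding never changes between audiences; only the delivery does, so the generative layer has no room to invent evidence.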

Don Finley:

I think there's a couple of beautiful notes that you have in there. One, we've got to be careful as far as how much trust we give to technology in any one space; hallucination in generative AI models is still a challenge. There has been some work to alleviate some of that, but it isn't an oracle of information. And then additionally, just as a human does, right? Like, we have processes, we have things that we go through. We synthesize data from other pieces of data to create knowledge and information. There's no reason to throw those processes out because a new tool has come by. The ability for us to audit information is incredibly important for financial services. Like, we were looking at AI to assist in underwriting, which is a highly regulated space, and you definitely want to understand why you're making decisions on who you're lending money to and who you're not lending money to. And so the technology isn't, I would say, mature enough today to be doing AI underwriting, but at the same time, there are certain aspects of it that can be done in this moment. And I think you've laid out a good paradigm that we all can learn from: we have to use a tool for what it's good at. It is good at synthesizing information from a subset of data and presenting it back to you in a personalized way. In fact, that's one of my favorite uses of LLMs. Are you familiar with the I Ching, the Book of Changes? It's an ancient Chinese...

Jay Van Zyl:

Yeah.

Don Finley:

Basically, what I'll do is I'll throw the coins. And for the listeners who aren't familiar, it's a divination tool, in some capacity. You use it to ask a question, you throw the coins, and then it gives you a hexagram that is representative of the situation that you're in. But I'll tell you, Jay, all the language is highly culturally Chinese, and I don't have that background, but I do have an appreciation of it. So what I'll do is I'll send it over to ChatGPT, or I'll send it over to one of the other LLMs, and I'll be like, hey, translate this into a Western context for me. And it does really well with that. And so I think, yeah, you've got a good idea there: base the information that we're using on hard, objective, provable, factual processes, and then use the LLMs, the generative AI, for that last mile of helping to be that emotive type of capacity, of how we can connect to it culturally, but also as individuals.

Jay Van Zyl:

No, definitely. And I like your example, because think of how the models are trained. If you take a list of tokens from an input source, you tokenize them, and you go through an encoding process, then the latent space basically only contains the patterns and the common behaviors of what came out of the input data. It has no understanding, reference, or link to the evidence that was used to create it in the first place. So you'll see that we have a RAG framework: retrieval-augmented generation frameworks and the like, to make sure that the knowledge that we provide comes from a source that we know has been pre-approved, pre-vetted, and that has some kind of empirical grounding in the hypothesis that you're busy implementing in that moment. Whereas a generative model generates tokens based on a cost function, and the cost function is trying to generate the next token in the cheapest possible randomized way based on its context. And obviously the context is what's known to the world as a prompt. So that just means that the model decoding process, in generative versus discriminative models, needs to be at the forefront of your thinking when you design any of these capabilities. Most of us, in my team at least, are deeply technical. We train models, we build architectures at that level. We make sense of it. But the people who don't, who just believe it blindly, think that a language model is intelligent, and it's not really. It's a probabilistic model that generates the next token. Or they believe that it's sentient, which it's obviously not. It does create some confusion in our client base that I often have to be careful of, because I have to stop myself from saying certain things. Somebody might have a belief and say, hang on, I'm talking to this thing and it does like me. And I would just say: the linguistic engine determined, through the cost function, that somewhere in the input data, in the latent space, there's somebody that talked about that, and now it can present that to you. It doesn't mean that's actually what it feels, because it doesn't feel; it is not a biological system. And I think that's the whole point, maybe also in the context of my business: we really want to agonize over whether there are better ways to understand the biological system, the biological system that is genetically enabled, being the human. Is there a better approach and understanding to make that person feel better about who they are, what they're doing, where they're going, what they're spending money on, how they're conducting their lives, how they're dealing with their families? Is there a way that you can understand that better? Now, people might say in the humanities that, oh, we've all figured it out. We've got psychoanalysis, we've got genetics, we've got social sciences, we've got anthropology. But not really, because most of the studies done historically were done with students at universities and in closed-group communities. It's only now that we have access to this enormous evidence base of human engagements that we can construct new hypotheses. That's why I like computational social science as a discipline, because you don't believe what people say. You only believe the shreds of evidence that they are leaving behind. So there's no point in me telling you I'm vegan, all right,
and telling you that every day, if, when you look at my financial transactions, you see that I buy my Wagyu beef and all that sort of thing every morning. It might just mean that I keep on giving my financial transactions to somebody else. But if I want to communicate with you about a certain eating habit or a style or whatever it is, and I don't look at the gap between what you're trying to tell the world you are versus what you actually are, then I've missed the point. I think there's a lot to be done in that space. In the previous era, it almost created a creepy situation for a lot of people. We'd say: are they listening to me? They're hearing everything I do. And personalization was often seen as something that people don't like. So the question is: how do I create it to make sure that you feel safe, that it's engaging, and that you know it's not something that wants to exploit you? That's what we really agonize over.
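
As a rough illustration of the retrieval-augmented generation idea Jay raises, the sketch below grounds a prompt in a pre-vetted document store before any generation happens. The bag-of-words similarity is a toy stand-in for learned embeddings and a vector index, and the documents and function names are invented for the example.

```python
import math
from collections import Counter

# A pre-vetted knowledge base: the only source the generator may draw on.
VETTED_DOCS = [
    "Accounts whose spend velocity exceeds income for two consecutive "
    "months are flagged for a budgeting review.",
    "Customers can set real-time alerts on any spending category.",
]

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Rank vetted documents by similarity to the query."""
    q = _vec(query)
    ranked = sorted(VETTED_DOCS, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Constrain the generator to phrase only retrieved, vetted evidence,
    rather than free-running next-token generation over its latent space."""
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Why was my spending flagged?"))
```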

Don Finley:

And God, I've spent a lot of time agonizing over that as well. Because social media was the big aha of this, right? You get the advertisement for the thing that you were just talking about, or the thing that you just Googled. And so we all had that little, is it listening to us? And now the data is a lot cheaper to get from somewhere else, and people are just trading data sets constantly in order to get you that advertisement and build it out. But additionally, Facebook, Instagram, any of the social media networks are optimized for attention. And the AIs that they were using recognized that attention is easier to hold when you're angry, fearful, upset, or something else. So it has a negative impact on our emotional state to be engaging with these systems that want to keep us attached to them. That level of trust that I think we all have for these algorithms is really a bit of mistrust, being that they weren't aligned with what our intentions were: being able to connect with family, connect with people, to share our information, to share what we have with this world. And the algorithms seem to want us to do something else, right? So how do you play in this world, where you're creating algorithms and also creating AI systems that help to engage with people, and that level of trust is low because the system has its own intentions and goals that it's been designed for? Whereas in today's world, we're getting more emotionally... well, emotionally capable isn't the right term. It's definitely an imitation of emotion, right? It's the mimicry that we use to connect with humans, and the machine is now able to mimic us in a way that actually creates that emotional response. Which speaks to how we have to be careful talking about sentience, and about the actual intelligence that's there, when it is a probabilistic machine that just knows, hey, I can do this. So I guess the question is: in the systems that you're building, how do you approach making it a system that is there for the end user?

Jay Van Zyl:

Yeah. Maybe just to stand back and look at this holistically: in the social media era, if we can call it that, the evidence that is collected about a society or a community is self-declared. As we know, it's self-exposed, and it is mostly reflective. If I go to Instagram and I'm busy shaping a story about a persona, the persona that I'm creating is a persona that I think the world needs to see about me, not necessarily who I am. And all social media platforms are essentially working on the back of that core assumption: that the humans who participate will provide content, and evidence basically as content, that will show some things that they are doing. So it means that if you become an influencer and you're at some island in Italy or in Greece and you take a picture, it is likely that you are there. That's the assumption that you have to work from: you create the image. And if you are a person who's always happy about a situation, you can never be sad in your public persona. So you create this image. You essentially work off the construct that most of our large clients worked on for years as the basis of design, and that is a concept called a persona. It's a generic description that is an abstract representation of a collective of people. Now take the world we're moving into, because people are pushing back on that, and take your bank, your telecommunications company, your insurer, your retailer. What they initially thought about the data that they collect about you is that it's really just transactional. And what we believe is that the human sits in that data somewhere. There is evidence that is far more reliable, and far more closed off and private, than what you have in your public situations. So when you engage with your bank or your telco or whatever companies you're engaging with, they cannot expose it, they cannot sell it, because they are regulated. They are forced to make sure that they handle it in a safe way. So that means you think about the evidence that you have collected far more carefully than you do if you are just a generic public tech company. So, what we've been doing: our clients are mostly large enterprises who really want to service their clients better. They really believe it. They find that, if I'm going to implement customer lifetime value practices, can I give some things on the journey to you for free? Can I assist you to get to a better outcome? And if you are in a better place, we will be in a better place. Why? Because I've got evidence, reliable evidence, of the things that you are doing day by day that I can use to make those decisions, within my risk and compliance and all the things that I need to conduct my business safely. If you look at public media companies, you have none of that. In actual fact, you have maybe a fraction of it, to be honest, because you might have the money that I spent on getting to you through some kind of campaign: I want to target a person who's at a coffee shop, at this age, doing these kinds of things, and I want to pay, and then what I want to do is put up an advert that might want to manipulate you. And there's no way the platform can stop you, because it's considered free media. You don't know if any of the evidence is true, or whether what it's presenting you is even practical, or whether it's been made up. You have no understanding.
I think we're moving into a space now where companies realize the evidence that they collect about you. When you paid for that coffee, you paid for it: there's evidence somewhere in there. They might not know that you had a cortado with almond milk, or soy milk with your cappuccino, or whatever that might be, but they do know that you were there, and they can rely on that far more accurately as an input data point to determine your behaviors than they would in the social world. So that's what we've been working with. We've been working on a technological invention, a series of them. In fact, that's why we call it ecosystem: an ecosystem of platforms and technologies that come together and keep on solving those kinds of problems. So that means that if I know nothing about you, I've never engaged with you, you are new to my business, my algorithms cannot be the same as if you have been engaging with me for many years. If I know nothing about you, should I just treat you as some generic persona, or should I attempt to get to understand you based on your preferences on the platform that you are engaging with me on? So if I'm a bank or whatever, and you perform certain transactions, and I see that this person is never taking up a loan, but I can see that they are doing home renovations: are they doing it on their own home or on a rental? I see they're not paying rent, so wherever they're going, you could make far better decisions. One of the algorithms we've been working on is called the money personality, which works out your debt situation and your protection situation: do you have insurance and the like on your vehicle? So there are better ways to get to understand the person, and then make sure that you can service them. That's what we've been working on. And the outcome is what we call the real-time behavioral prediction engine. Let's say you go to a website or an app, or you phone a call-center agent or whatever that might be. The moment that you connect, it needs to know that you are, let's say, a person that is highly ritualistic: you go check your balance, you pay the bill, you leave. You are intentional, you're ritualistic, and you go. Or you're the person who likes to look around: there's a banner to say there's a new band in New York. I know that Chase Bank has been doing this: it puts up banners to say there's some show that you can go and attend, click here, and it can just take it off your account. It needs to know at least that level of engagement with you. And because those are baby steps, you need a technological platform to automate it for you, because it's impossible for a human to figure it out across 10, 20, 30, 50 million, a hundred million customers. So you want something to do it automatically for you.
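
Jay's ritualistic-versus-exploratory distinction can be phrased as a small behavioral signal. The toy classifier below labels a customer from repeated session paths; the event names and thresholds are assumptions for illustration, standing in for what a real-time behavioral prediction engine would learn from far richer evidence.

```python
from collections import Counter

def classify_engagement(sessions: list) -> str:
    """Label a customer from their recent session paths. If most sessions
    repeat the same short path, treat them as ritualistic and keep the UI
    minimal; otherwise surface exploratory content such as event banners.
    The 0.7 repeat rate and length cutoff are illustrative assumptions."""
    paths = Counter(tuple(s) for s in sessions)
    _, top_count = paths.most_common(1)[0]
    repeat_rate = top_count / len(sessions)
    avg_len = sum(len(s) for s in sessions) / len(sessions)
    if repeat_rate > 0.7 and avg_len <= 4:
        return "ritualistic"   # check balance, pay the bill, leave
    return "exploratory"       # show offers, banners, events

history = [["login", "balance", "logout"]] * 8 + \
          [["login", "balance", "offers", "logout"]] * 2
print(classify_engagement(history))  # -> ritualistic
```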

Don Finley:

We have a similar view of how we treat some of these problems between our two organizations, and I really appreciate that. And at the same time, I'm going to offer you how we talk about this. I basically say: look, what you're looking to do 30 million times is one thing, done 30 million times. So if you can define how you want to approach this for one person, with that level of personalization that you're talking about, then we can train an AI, or instruct an AI, to interact with somebody enough to get that information out, so that you can personalize it for the 30 million, instead of trying to make population assumptions and grouping and bucketing people before you have the necessary information. But coming back to this: how would you recommend that either companies or individuals take their next step with this new technology and the new AI, and/or how can they engage with ecosystem.ai?

Jay Van Zyl:

We have a series of what we call prediction stories that we've been using to help companies make sense of it. And maybe just to get really practical for a moment: let's say that an organization is busy with its digital transformation journey, as most companies are these days. And digital transformation is mostly about generic automation: get a person off paper (obviously nobody faxes anymore), or try to get them off email, and get them onto some kind of app where they can take a picture of their ID, or just scan it and send it in, identify who they are, and engage with you, all the way through to, if you're an existing customer, just going to check the balance on the account and the like. So we've created a set of prediction stories that basically go: predict X from Y for the purposes of Z, using a behavioral science. And we've created some custom GPTs, by the way. So if you're interested and you go to OpenAI's custom GPTs and type in ecosystem.ai, you'll see that we've got quite a few of these published now. We've got a value proposition canvas, a use case designer, a smart message recommender, and a couple of the lessons that we've learned in companies over time. We've now turned those into publicly, freely accessible GPTs, so people can go and do queries on them. So say you go to the value proposition canvas and you have a problem that you're working on. Let's say that you want to engage with customers that have just joined you. Let's say that you're a bank and they've just opened an account, they've just engaged with you. They're called new-to-business, so they're NTBs. And you want to activate them. You want to make them feel comfortable that they belong, that they've made the right choice coming to you in the first place. What do I need to do to service them? If you use, let's say, our value proposition canvas, it will tell you: the job to be done of the person using that device will be, can I sign up, and, because I'm new here, can you guide me in a lot more detail than if I've been here the second, the third, the tenth time, because by then I've learned how everything works. And as we know, most of the specialized technologies are not really publicly accessible. Your banking app is not something that anybody can just go and access; it's purposeful for the job that it does. Or you're going to top up your data bundle at your telco provider, or buy additional services; it's quite specific to that purpose. So that means you need a way to guide this person. In those GPTs, we've created some of this to at least help people understand it. It'll help you to say: the job to be done is to smooth the journey for the customer. The pains that they experience are that it's difficult to navigate, it's not always the best language to use, they don't know if it's for them or not. And the gain would be: can I figure out from the beginning who this person is? Can I learn at a rate that is faster than if I had to do it manually? And can I get models to converge on who these people are? So the pain relievers are the essentials, and the gain creators are the things that will separate me from the rest. So we've created a set of these gain creators. Let's take a classical example.
Take a menu option on a gambling site; it's one of the cases we did recently. You always go to check your balance, and the menu option is on the far right, because it's not for everybody. But for you, the person who goes there every single time, how quickly should the menu converge to move that to the left, to move it right into your line of focus? You can set that up. So in our world, you can configure it so that for every single human, what you provide digitally converges on them. You can get banners and panels and language and a whole lot of things to converge. And what you offer them, what you're about to sell them, will then follow slightly different guidance, because you now have pricing rules and everything that goes with it, but you want to figure out in real time whether this person is likely to engage with this action or not. So the starting point for us is understanding what the problem is, knowing what actual pains I need to relieve and what gains I'm attempting to derive from this technological platform. And at the end of that, it will say to you: predict the menu option from the person's continuous behavior, for the purposes of immediate action, using a behavioral science called, let's say, loss aversion or continuous engagement or systematic desensitization, whatever the algorithm is. Once you know that and you put it into the use case designer, it will tell you exactly how to construct it in our product, right down to actually doing it.
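
One plausible way to implement the "how quickly should the menu converge" question is a per-customer bandit over candidate layouts. The sketch below uses Thompson sampling with Beta priors; the layout names, reward model, and simulation are assumptions for illustration, not how ecosystem.ai's product actually works.

```python
import random

class MenuPlacementBandit:
    """Per-customer Beta-Bernoulli bandit: each arm is a candidate position
    for a menu option, and the reward is whether the user engaged with it
    quickly in that session."""

    def __init__(self, layouts):
        self.wins = {layout: 1 for layout in layouts}    # Beta(1, 1) priors
        self.losses = {layout: 1 for layout in layouts}

    def choose(self) -> str:
        # Thompson sampling: draw from each arm's posterior, pick the best.
        return max(self.wins, key=lambda l: random.betavariate(self.wins[l],
                                                               self.losses[l]))

    def update(self, layout: str, engaged: bool) -> None:
        if engaged:
            self.wins[layout] += 1
        else:
            self.losses[layout] += 1

bandit = MenuPlacementBandit(["balance_far_right", "balance_front_left"])
for _ in range(200):  # simulate a ritualistic balance-checker
    layout = bandit.choose()
    # Assumed reward model: the option is found faster when it sits in the
    # user's line of focus.
    engaged = random.random() < (0.9 if layout == "balance_front_left" else 0.5)
    bandit.update(layout, engaged)
print(bandit.choose())  # almost always the front-left placement by now
```

The posterior sharpens with every session, so the layout converges quickly for a ritualistic user while staying exploratory for one whose behavior is mixed.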

Don Finley:

Basically, your product is like having a team of people available to you that have an understanding of some of these use cases and can interact with you. And I think that's really an awesome piece of user experience that you just described: hey, I'm finding collaborators in the problem-solving aspect of this. Whereas with most no-code tools that we come across, you're still relying on all of the human ingenuity to be the problem solver, where now you're bringing tools to the forefront that help you to solve the problems as well. Jay, I've got to thank you so much for being on today. It's been an absolute pleasure getting to know you better, and also to share your story and your insights with the group. So thank you once again.

Jay Van Zyl:

Thanks Don. I love talking to you. It was a great conversation. Thank you.

Don Finley:

Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.
