The Human Code

The Mirror of Beliefs: A Conversation with Mindset Engineer Molly Brown

Don Finley Season 1 Episode 23

Navigating Life and Technology with Mindset Engineering: A Conversation with Molly Brown

In this episode, the host sits down with Molly Brown, an engineer-turned-life coach with a deep philosophical mindset that intertwines with her engineering background. Molly shares her journey from aerospace engineering to life coaching, emphasizing how her experiences with data-driven decision-making highlighted human biases in interpreting data. She discusses the implications of these biases in both personal and professional contexts and explores how AI technology can mirror this process. Throughout the conversation, Molly delves into the significance of understanding our beliefs and biases to create meaningful changes in our lives. Listeners are encouraged to reflect on their own belief systems and how they influence their perceptions and decisions. Molly is reachable through her website, mindsetengineering.com, and on LinkedIn as the Mindset Engineer.

00:00 Introduction to Molly Brown 
00:42 Molly's Journey from Engineer to Life Coach 
02:38 The Role of Data in Engineering and Life 
03:57 Understanding Human Biases in Data Interpretation 
06:32 Applying Bias Awareness in Problem Solving 
12:45 The Intersection of Technology and Personal Development 
23:51 AI and Human Judgment 
32:03 Conclusion and Contact Information

Don Finley:

Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code.

In this episode, we're thrilled to have Molly Brown, an engineer turned coach who combines her analytical mind with deep philosophical insights. Molly's unique journey, from taking apart machines as a child to serving as an intelligence analyst in the U.S. Army and eventually becoming an expert in reprogramming the human mind, provides a fascinating backdrop for our discussion. Today, Molly and I will share how understanding and reprogramming the mind can transform your life and help you achieve your highest dreams; the critical role of self-awareness and critical thinking in personal and professional growth; and the exciting potential of integrating AI with human potential to overcome biases and enhance decision-making. Join us as we delve into these thought-provoking topics with Molly Brown. This episode is packed with insights that will inspire you to rethink how you approach your own mind and the technology around you. You won't want to miss it.

Don Finley:

I'm here with a friend of mine, Molly Brown. Molly has an interesting background: a career that has transitioned into a new career, a deep philosophical mind that combines with an engineering mindset, and a curiosity about what humanity brings to the table and how we're actually playing in this reality today. Along those lines, Molly, I just want to thank you for being here. To kick it off, I'd love to hear a bit about your story of how you got to be the Molly Brown of today, and I'm really interested in having this conversation on humanity and technology.

Molly Brown:

Yeah, sure. I wasn't aiming to necessarily be here, meeting people online, on a podcast, as a life coach. I wasn't aiming really for any of these things. One starting point I could choose to look at would be back during my career as an engineer, in normal root cause investigations, the very standard engineering procedure of a root cause investigation. I did tons of them on hardware, and there's a pattern in how they would all go. And so as an engineer and a generally, relentlessly curious person, you just keep pulling that thread. From my perspective, the work I'm doing now as a life coach is along the thread that I started pulling as an engineer on aerospace hardware. And it kind of goes like this: the hardware is broken. It's part of a system. How is this system treating the hardware? All right, this system is driven by software. How is the software instructing the system to run such that it's breaking the hardware? Then we look into the software, and the software is a coded representation of our understanding of reality, so that we can imitate reality in tests and design something to run within reality. So if the software is such that the system is driving the hardware such that it's broken, then somewhere in the software, in the requirement for the software, in our understanding of reality, we have misunderstood reality. And so you pull all the way from this broken hardware to asking: now, what is our misconception as engineers and technical people about the technical environment where this airplane is, that is wrong, such that our designs based off of that are breaking? That kind of led into the business, and the industry at the time had this hypothesis of data-driven decision-making, of designing for real-world applications using more data to inform it. And I bought it. Hook, line, and sinker. Made total sense to me.
If we have more data, better data, enough data — and the word enough, now as a life coach, the word enough sends up tons of red flags for me, but back at the time, it made total sense. We just need enough data to make these decisions. And it's like, yeah, cool. So then we go out and do a data project. I was at a great place in the company; I had a great boss who gave me a sandbox, and me and a small team of people got to go play in that sandbox: go get a bunch of data and start putting it together in a low-code, no-code way so the engineers could interface with it. And then we got to go on this little dog and pony show around the company with different engineers and technical teams with the high-profile kinds of problems and say: literally for decades, for entire careers, people have been begging for this amount of data, thinking, we will make different decisions with it. It will solve our problems. It will lead us. The data will show us a solution; it will lead us to a different place; we'll find in it reality, truth, all of these things. And what I saw happen, over and over again, is people argued to support all the data they anticipated and expected to find, and argued against any data that they didn't expect to find. And basically the data that is what you don't know you don't know — the data that would solve that problem of changing our understanding of reality so that we're not breaking parts in the software and the system and everything — that was getting thrown away. It was getting argued against: it must be an artifact in the data, it must be bad data quality, it must be anything wrong in the digital data space, which is the new space, and our old understanding of reality, uninformed by data, is true and right. Which was frustrating for a point, but as an engineer and a problem solver, it's really fascinating, because it's a new problem.
So I'm watching the human mind work and realizing that this is what the human mind does. It shows up with an arrow placed and then it interacts with data to draw a target around the arrow. So in the absence of data, I can tell you what you're going to find in the data by talking to you and figuring out where your arrow is placed.
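Molly's arrow-and-target image can be sketched as a toy model. This is purely an illustration of the mechanism she describes, not anything from the episode; the function name, tolerance, and numbers are all made up:

```python
def filter_through_belief(data, belief, tolerance=5):
    """Naive confirmation-bias model: points near the prior belief are
    kept as 'good data', everything else is discarded as 'bad data' --
    the target gets drawn around wherever the arrow already landed."""
    kept = [x for x in data if abs(x - belief) <= tolerance]
    discarded = [x for x in data if abs(x - belief) > tolerance]
    return kept, discarded

# Whatever the true signal is, an observer holding belief=50 walks away
# with only the readings that confirm it.
kept, discarded = filter_through_belief([10, 12, 48, 50, 52, 90], belief=50)
print(kept)       # the "confirming" data
print(discarded)  # written off as artifacts or bad data quality
```

The point of the sketch is that the filter runs before any analysis does: change `belief` and the same raw data yields a different "finding."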

Don Finley:

So while you were an engineer, you were basically seeing people add a whole slew of biases into their analysis of what was coming out, basically using it to confirm their own beliefs, and additionally discounting the rest. But you just hit on a really cool idea there. You're seeing them as part of the system, and what you really hinted at at the end is that you're able to talk to people and see where their belief patterns are coming up, and where their beliefs are getting in the way of them seeing what's in front of them. Is that a fair summary? Wow.

Molly Brown:

Yeah. So instead of looking at it and judging how we're doing it — saying the mechanism should be that we should be able to approach data unbiasedly, that we should be able to follow the story and the objective reality that is in the data, and there's something wrong with us, we need to train us, we need to do something with us so that we don't do this — instead, I'm going to show you how you can use this bias thing, and recognize: this is how the system works. So how do I work within the way the natural system of the brain works — I'll use mind and brain interchangeably; I have tons of feelings about that, but that's neither here nor there — this drawing of the target around the arrow, collecting data based off of your bias? Instead of saying, I need to go fix this person who is biased, say: how do I help this person design the correct bias, so that when I let them loose in the data, or out in the real world where we're collecting data passively all the time, they will then find the path that they need? The data, the tools, the resources are clustered around the arrow that they actually want to be holding on to. And so it's to become aware of our biases, to become aware of the assumptions with which we're approaching things — and it's technical or non-technical, it really doesn't matter. So if you're an engineer, we approach technical problems thinking, maybe at the baseline, this is going to be hard, because it's come to this level, because we've been working on it for this long, whatever it is. So we approach it assuming this is hard, and any easy solutions, we're then blind to, by our own mechanism of how the mind works. So instead of believing it's hard and then trying to solve a hard problem, back up and interrogate: why do I believe it's hard?
Don't look at the data, because the data is going to show you exactly what you already believe. So there are kind of two ways to play in the system, if you buy in that that is how the system works, instead of saying we need to fix these biases. I don't know if we can really do that.

Don Finley:

Okay. And so, finding this pattern and seeing what we could do: we can create systems that kind of understand what the biases we're bringing to the table are, and create awareness in those systems, so that they will be on the lookout for those biases as well. And I think I'm like you, I really dive into personal development. Part of me is like, no, no, don't design the system around my bias and allow me to continue to have that bias. I personally would love to be made aware of: hey, you might want to gut-check yourself on this, because you're performing this and you have that bias. Like, we do project reviews on a regular basis, looking both internally and with clients at what's necessary to complete a project, and whether that investment is necessary. And typically when a project is going off the rails — so let's say it's running out of budget — we tend to have this sunk cost fallacy, right? That bias of what our previous efforts were in getting to that point, and then what's going to be necessary to get to the end. And it is one of those where you have to take a step back, take a deep breath, and then look at the information almost as if making the decision fresh: if this project needed another million dollars, would I make this decision, whether or not I've already invested a million dollars into it? Am I hearing you right? That the system should be aware, the system should understand the biases, but the system is also raising its hand going, hey, you're bringing in something here?
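Don's million-dollar gut check can be written down as a tiny decision rule. This is a hypothetical sketch with made-up numbers; the point is simply that the sunk amount never enters the comparison:

```python
def should_continue(expected_remaining_cost, expected_value, sunk_cost):
    """Go/no-go decision using only forward-looking numbers.

    sunk_cost is accepted as an argument but deliberately ignored:
    money already spent should not change the decision.
    """
    _ = sunk_cost  # listed only to make the omission explicit
    return expected_value > expected_remaining_cost

# A project that has already burned $1M, needs $1M more, and is expected
# to return $800k fails the forward-looking test -- regardless of how
# much has been sunk into it so far.
print(should_continue(1_000_000, 800_000, sunk_cost=1_000_000))
```

Framing the check this way makes the bias visible: if dropping the `sunk_cost` argument would flip your gut answer, the fallacy was doing the deciding.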

Molly Brown:

Yes. And maybe bias is a word that has implicit judgment in it, and we could say belief or identity, whatever the context is, instead of the bias in the system. So whether the system is your business, or a software system, or your life as a system, or your relationships as a system, it will by necessity mirror your identity, the belief or the bias you bring to it. So in that way, the symptom — the pain, the error code in your software, in the business, in your life, in your career, wherever it is — is exactly the symptom you need in order to map and point you to the belief, bias, identity that you are holding that is not in line with the results that you want to get. So in that way, all symptoms are useful, and we don't treat symptoms as something to just turn the volume down on. If you treat the symptom with a root cause analysis of, what is it that I'm believing, you take the analysis and the design process upstream of the data. Because we recognize that whatever I'm already believing, I'm going to find in the data. So we don't need any data for a design process, or really for a root cause investigation. It all comes back to the understanding of the system, the intentions, the beliefs, the inherent limitations that we're bringing into it — to explore all of those things, and to design in that space with the intention of, what are you trying to create, in the absence of data, meaning resources and all of the physical, subsequent things that you need. That design phase comes way upstream of all of that, and I think we tend to want to inform the design phase with data, and at that point we're burying our beliefs. We're hiding them, and then we misunderstandingly reverse it: we think, my belief comes from the data. It never does. That's never the direction it goes.
It works the other way: we find the data based off of the belief, and we become blind to, or hidden from, anything that counteracts our belief. So as soon as we think, I believe this because of the data, because of my experience in life, because of other business models, because of the industry, because of the economy, because of anything like that, now we're hiding, and we're rooting in the completely wrong domain, and you can spin all you want trying to solve the problem there. And I've watched that a lot, definitely as an engineer, and now since being a life coach, I've watched that a lot in a couple of different domains, and it's always a myth.

Don Finley:

I'm just taken aback, because I'm going to take the conversation in this direction. I think what you're describing is that there are some biases we can use to our advantage, right? Some beliefs that we can use — essentially using beliefs in that structure to our own benefit — knowing that we are creating this. It's like the idea that if I go out and buy a new car, I tend to see that car ten times more than I did before the acquisition of it. But I think what you're also describing, from any sort of reality creation, is to take a step back before we have the data and say: all right, what is it that we're looking to create here? And if we are going to use that kind of selection bias in our own world, hey, we can actually structure the system to create those results, both in business and in our personal lives. And I can tell you from my own experience as an entrepreneur that I have found that 90 percent of the time that I have a problem in the business, it is really around beliefs. It's around something that is holding me back — that fear of success, that fear of failure, the imposter syndrome that comes around, right? All those mental blocks where I'm making it much more difficult for myself, instead of allowing life to happen — but additionally, using that benefit of our own conscious and subconscious engineering to create it. How has this played out in your coaching business, then, of understanding these biases that we're bringing to the table?

Molly Brown:

So, in life — which is why it then turns into all of life coaching, and not business or any one kind of sector — your life is a constant mirror of your beliefs. And to take the judgment out of it, which I have had to learn not through coaching education, but by being coached, by embodying it, learning to do it, and having a person holding me accountable and really showing me how to do it — to take judgment out of things. As soon as we see something as good or bad — say, in your business, it's just how much money you're making, or whether your money's meeting your bills, whatever it is — then that mind drawing the target around the arrow starts battle-planning a story of why, to protect yourself. There's the "bad," and then all of this story around it, which we've learned, and it's a pattern. But all of that is just confirming — I keep coming back to the one analogy — the location of that arrow. Instead, take the judgment off of it, which allows you to go upstream into that design process and just use the data point: my life isn't how I want it to be, my business isn't making the money I want it to make — something really tangible and measurable. And instead of judging it and saying, this is a problem I need to fix, I need to work on, just use it as a data point. Oh, okay. The money that I want isn't there. What in me is believing that the money isn't there, that I can't make the money? All the things that come up, all the seeming data that gets assembled around that arrow — recognize that your mind is populating that list. So instead of falling for the idea that the source of that list is all the places it came from, realize your mind is populating that list, and go there to solve the problem. What is it? What story?
What is in your mind that supplies these data points and is really connected to the story around where this arrow is? And when we can dissolve that story — usually by recognizing, always by recognizing, that it has nothing to do with space and time, like right now in space and time — we see that the story already existed. And that's how we showed up to this point in business, this point in life, or a relationship, or a career: we already knew what to do. We already knew how to play it out, who the cast of characters was going to be, how we were going to be treated in it. And all the things that we're either excited about or worried about in the future — we're making those up based off of a pattern that we already have. And instead of trying to solve it out here in this instance, go back to the cookie cutter that is making these instances and understand: why am I committed to telling this story? And that's a cool thing to do with somebody that's a coach, because I don't know the answers. I just understand the mechanism. So I just serve as a reminder to go back to the actual place and stop working in the reflection. I get to point out to people when they are working in the reflection, and to go back. You use what you're finding as symptoms in the reflection, the pain points in the reflection — you use those to understand the story that is creating them, outside of that.

Don Finley:

Ah, okay. I love it. It's always fun talking to you, because while you're talking, I'm also processing this and seeing how these things are playing out in my own life. What is the reflection that I am seeing? What is the belief that's participating in the creation of the reality that is here today? And then the software guy in me asks: how do we use technology to help with this process, to be that reflective mirror as well? Or, additionally, if I could just have a Molly that hangs out next to me all day long and says, hey, is that the reality that you're creating? What's your take on that? Where do you see the opportunity, or the curse, of this happening?

Molly Brown:

Yeah. To be somebody who writes code — and I write very little code — but to write software: you write something, you compile it, you get an error back. In other places in life, we can argue with the error codes a lot, and people will cosign on that and believe it and go off on big adventures of, "the error code is probably wrong" — with any kind of physical symptom, say. But as soon as you start engaging with a computer and it gives you an error in your code, you have to start accepting, and accepting quickly, without judgment and emotion: the computer, the error code, is right, and your code is wrong. For me, there was a training process in that. It was really frustrating to just get the error code and be told, you're wrong — and there was no argument about it. There's no chance whatsoever that the compiler was wrong and my code was right. And I think that's where a lot of people, on an emotional level, won't keep going with learning how to write code, because it's so offensive to be told: your code is wrong, you made the mistake, go fix your mistake. So there's this threatening-ness about being wrong. That's a program we learned early in life: if you do a wrong thing, you are wrong; if you do a bad thing, you are bad. There's a misconception, a misunderstanding of the symptom, of what's going on, that gets accidentally mapped back onto identity. And it's threatening to have it mapped all the way back to your identity, so people don't want to participate in doing something that gives that kind of feedback. They won't make it over that hurdle into knowing how to write software, because their own mind is making that error code be about them. And those are really broad strokes.
Maybe people aren't going with me all the way there — I could paint it out in smaller strokes — but I like to see it in software, where it's so obvious, because you always made the typo. You always made the mistake. And the faster we can just accept that, the better: it's not a problem that it threw an error code. If throwing an error code in business is, "I didn't make the revenue," or "I didn't make the sale" — it's not a problem. I just go back and iterate again. I go find the error in my logic, and then I try again. But as soon as we say it's a problem that I got feedback, I make the feedback mean something that stops me from taking the action and making the progress I want to make. The reason we do that is a belief — an entire system and program of beliefs in the mind. And if we go back through those and parent them, give them a new script to speak through, such that getting an error code carries no judgment — it just tells you to iterate again — then people can go take a lot of actions that they're stalling on, procrastinating on, doing other little tasks instead, scrolling on their phone instead. A lot of these kinds of filler behaviors take up that space when we can't do something that would possibly give us this feedback, because we've been designed to take the feedback and make it have a meaning that it doesn't have.

Don Finley:

Okay. So the kind of system that you're describing is basically: we have a desired outcome, or something that we desire, and then something happens and we get feedback, right? Either the error code happens or something else. And what you're pointing out is that our interpretation of that feedback is really crucial. If it went well, we'll associate that interpretation with it. If it didn't go well, we can be prone to interpreting it in a way that abdicates responsibility for it — that doesn't hold ourselves responsible; that might not be the right word, but that's what's coming to mind. And this plays with an analogy for technology. We're getting to the point with AI now where we're starting to see reasoning capabilities come through, right? We now have a shovel that can make decisions, as a really rudimentary analogy. Do you see this technology being able to help in that process? Or how does somebody need to come to the table, understanding that these systems can be in place, and where their beliefs may be interacting? I would just love your opinions on this as well.

Molly Brown:

Yeah. So the superpower that AI has that we don't is the absence of that judgment. It can fail over and over and over again without telling a story about it, without making it mean something. Now, most of us have struggled with that. We can fail maybe once — maybe not even once — but if we iterated something a thousand times in a row, there are not a lot of Edisons who stick with it until they get to the thing. We usually make it into a story and start making it mean things, and then we go off on what these things mean, and loop around. So for us to keep pace with that is to just recognize what it's doing: it's allowing itself to fail without telling a story. And if we can practice noticing when we're failing and telling a story, and just get rid of the story and allow the failure to happen, we will get to grow in kind of the same way, and get the things that we're watching AI have. Which I think is what we're watching when we watch little kids learn to walk or something: they fall down, they don't make it mean anything, they don't make it into a story. And so they have this rapid growth, the way AI can have this rapid understanding and growth and change, until they get old enough to learn how to tell a story. And then there's this maturation process where we think growth slows down — and maybe it's neurological, and maybe it has to do with stem cells, and maybe it also just has to do with the fact that we're making things out of failure. And if we stop doing that, we could go back to a steeper growth slope. I don't think that really answered the question of where the AI technology could go. I'm not sure. I'm excited for it.
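One way to picture "failing without a story" is a plain iteration loop: each miss only narrows the next guess, and no attempt count means anything about the guesser. A minimal sketch — my own illustration, not anything from the episode; bisection stands in for any feedback-driven process:

```python
def iterate_without_story(try_once, max_attempts=1000):
    """Keep attempting until success; each failure is feedback, not a verdict."""
    for attempt in range(1, max_attempts + 1):
        if try_once():
            return attempt  # how many tries it took
    return None  # gave up only because the budget ran out

def make_bisection_guesser(target, lo=0, hi=1024):
    """A guesser that narrows its range on every miss -- failure as pure data."""
    state = {"lo": lo, "hi": hi}
    def guess_once():
        mid = (state["lo"] + state["hi"]) // 2
        if mid == target:
            return True
        if mid < target:
            state["lo"] = mid + 1
        else:
            state["hi"] = mid - 1
        return False
    return guess_once

# Nine "failures" on the way to the answer, none of which stops the process.
print(iterate_without_story(make_bisection_guesser(734)))
```

The loop never asks what the failures "mean" about it; it just uses each one to shrink the search space, which is Molly's point about iterating past error codes.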

Don Finley:

Oh, that's awesome. And I like the answer either way. I guess the question then is, as we look at how this technology is coming into our world — it's coming into our workplaces, it's in our homes. Even just a couple of years ago, the technology was very fit for purpose. It solved a use case, and it did it, hopefully, well. For the most part, Netflix recommends movies that I like to watch, right? It does that well. Amazon is absolutely fantastic at finding products that I would like to buy. These kinds of recommendation engines, and also some predictive capabilities, have worked really well for those use cases. Now we're starting to see LLMs like ChatGPT and some others come into our workspace. They can aid us in getting our work done. I actually use an LLM on my laptop that I have trained to be Carl Jung, and it provides some bit of therapy for me as well, right? I can tell it things that I wouldn't want to be on the internet, and also just work through that space. For anybody that's coming into the workforce, or really taking a step back and saying, hey, what is this technology — and also considering the work that you're doing — what would you recommend? Maybe offer up a rubric of: here's how to take a look at this opportunity and check where your own beliefs sit in this system. Because there are people out there who are basically doomsayers, and then there are also the accelerationists saying, hey, let's move as fast as possible; stuff's going to break, but we're going. How could people actually walk themselves through your process, or your understanding of how to apply AI in their lives, or understand where they're sitting with that?

Molly Brown:

Yeah. If I were to make an AI to do what I do, or to help me do what I do on myself: our normal programming functions on the assumption that something is happening in my life, and how do I solve it, how do I change it into something different, how do I work on that problem out there. If instead we make the assumption that something is showing up in my life, and then had the bot start asking us questions back — saying, instead of assuming something has gone wrong on the outside (or good on the outside, whatever) and I now have to respond to it, react to it, set the assumption that I originated this, that this is somehow a reflection of me — and start lobbing questions, which could be journaling prompts, things to think about on your run, whatever it is. Long prompts like: how did you create this? As a tangible example, I had my auto insurance due, and I had just lost track of it. Every six months it's due, and it's not a big deal, but this time it seemed like it came out of nowhere. And it's like, how did I — with my spreadsheets and all my tracking and all my stuff — how would something surprise me? And instead of writing it off and saying, not a problem, you just pay the car insurance and move on — again, it's just the curious, thread-pulling individual asking: how could you lose track of something like that? This is a recurring thing. So get curious about it and start asking questions around it. And know that it has nothing to do with this instance in space and time, with what's going on — but I am interested in what kind of pattern in me was causing me to be blind to, or turn away from, a series of things, such that something seemingly pops up unknown in my life.
And for me, I have a life coach to help me ask those questions. And I buy into the hypothesis that I caused this, that it is somehow reflecting back to me something going on, some kind of mental ball I've dropped, that had nothing to do with this instance — but I do want to know what that was. So she helps me with a bunch of questions: what did you make this mean about you? Did you make it mean, oh, I'm disorganized, I can't handle money, life is unfair — whatever stories come out of it, what am I making it mean? Because then, basically, I created an instance for me to tell a story of victimhood, a story of inadequacy, of unworthiness, of not-good-enoughness, whatever it is. And there it is — we're finding that's my payoff, right? That's what I really wanted. I really just created an opportunity for me to run this part of my identity. As a pattern, we do that. And then, at a higher level: look at how our patterns serve us, and how the symptoms, the pain points in our world, serve us so much if you're doing this work. Because then everything that pops up, you do the work for it, you find the story. You're like, oh, I was secretly wanting to tell myself the story of, I suck at this. And so now I've had an opportunity in my normal daily life to find: oh, I'm still telling myself that story, so I know to go attend to it. The stories aren't just running under the surface with us falling for them and thinking, no, this is just about that, and I can just pay it and make it go away. Or, that's just a person who was in a bad mood, or maybe they cut me off because they're late for something. We just tell these other surface-level stories to make all the instances go away. Instead, have the AI, the thing, engage in the opposite model, which is: everything in your life is pointing you back to, what do you believe? What story are you trying to tell right now?
What misconception about you as an infinitely powerful being are you believing, such that you're experiencing these things in life? And then the bot would just not fall for your life, the way we fall for our lives.

Don Finley:

Oh, that's so cool. I think that's a great reflection: to come back to, hey, I'm noticing this — what is the belief that I hold that is having me notice it in this way? That is really incredible. I've got to say, I really appreciate you taking the time to talk to us today. I know that anybody can reach you at mindsetengineering.com. Are there any other links that you'd like to offer up to anybody?

Molly Brown:

that's the one. I'm on LinkedIn as the Mindset Engineer, and that's pretty much it for socials for me.

Don Finley:

it.

Molly Brown:

But I really appreciate you having me on and having this conversation with me.

Don Finley:

Yeah, not a problem. It definitely hits home, and I absolutely love what you're bringing to the table. So everybody, we had Molly Brown here today, at mindsetengineering.com. Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.
