Ep. 214
Catching the AI Wave: Staying Ahead in a Rapidly Evolving Tech Landscape
David Hirschfeld, Founder & CEO, Tekyz
Thursday, January 16, 2025

In this episode, we sat down with David Hirschfeld, Founder & CEO of Tekyz, to explore how AI is reshaping the software development landscape, both internally and for client-facing solutions. David shared insights on adopting an AI-first philosophy, leveraging AI for operational efficiency, and integrating cutting-edge tools into products. We also discussed the challenges of deploying AI across industries, navigating geopolitical constraints, and maintaining a competitive edge in an ever-changing tech ecosystem.

Key Insights:

  • AI-first mindset: Leveraging AI in operations and products drives innovation and long-term value.
  • Efficiency gains: AI enhances delivery speed, code quality, and testing, improving productivity by up to 50%.
  • Automation: AI tools enable automated microservices and scalable software development processes.
  • Adoption challenges: Larger firms struggle with AI scaling due to complexity and cautious risk-taking.
  • Privacy concerns: Addressing data security and compliance is essential, especially in regulated industries.
  • Staying competitive: Innovating with AI helps companies adapt to rapid technological changes.

Explore more industrial IoT insights: IoT ONE database

The Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP).

Transcript.

Erik: David, thanks for joining us on the podcast today.

David: Yeah, and thanks for having me on. I'm excited to talk to you guys.

Erik: Well, before we get into the topic at hand, which is really trying to understand how AI is being deployed by companies today, let's understand a little bit more about your background. So you've been running Tekyz since 2007. I see you've also had your name on a half dozen plus other companies, including a couple of others that you're still involved with. Can you give a little bit of the background of how you first got involved in this work of helping to build up startups so systematically?

David: Yeah, sure. Well, it goes back to my early days, which was way before Tekyz, when I was working in enterprise, in particular for Texas Instruments. Then one of the guys that worked with me, one of the technical engineers, and myself, we decided we wanted to start a software company, really just to kind of get our feet wet in the software world. Windows 3.1 was brand new, to give you an idea of time frames. We created a product for logistics, route distribution, and inventory management that was very niche-specific. And despite every effort on both our parts, we ended up growing it anyway to 800 customers in 22 countries, and sold it in 2000 to a publicly-traded logistics firm out of Toronto. So that was my entree into startups.

I thought I understood what made startups successful since we had a successful exit, and I was VP of Products for that company for the next three years before I left and cast about for a few years before starting Tekyz. Since then, I've worked with a lot of startups, in a lot of technology domains and business domains, and realized I didn't understand what made startups successful or companies successfully scale. Because it wasn't what I thought it was when I started. So I thought, "Okay. I need to focus on how to help my clients be successful," which, of course, to a certain degree means being all over the technology. To a certain degree, it means being exceptional in terms of discipline and protocol, procedures and documentation, and accountability in your own company and how everybody on the team contributes, and constantly improving on that. It also means, because technology changes, and sometimes you get these waves like we're experiencing right now, always trying to be on the front edge of that technology curve, especially when it's demanding change. Like when mobile devices came out, and now with AI, probably the biggest one yet. So, of course, that's why the AI-first approach and the focus on AI. Because there's nothing bigger in technology right now than that.

Erik: Got you. Before we get into the topic of specifically how you support companies, I'd like to understand a little bit more about who you're working with. So you're working with startups. Do you tend to focus on B2B or B2C? Do you ever work with larger corporates? For example, corporate startups. I imagine there are also corporates that say, "We'd like to act a little bit more like a startup. Can you help us do that?" So what does the scope look like in terms of industry focus and company type?

David: Yeah, it has been a lot of startups, and it has been existing companies that are doing a startup in a new area. In fact, I really enjoy those because they usually have a much better sense of exactly what market they want to go after, and why that market needs it, than does the nascent startup. Although nascent startups can do that really well, too. If they come from an industry and there's a gap in that industry that they're struggling with personally, and they have reached other people exactly like them in the industry who are struggling with the same problem, then those kinds of startups have a good sense of what the market needs. Not necessarily how to be successful in that endeavor, but at least what the market needs. So yes to all the things that you just said. In 17 years, you can imagine that a lot of different types of projects and opportunities come our way, most of it through referral and just the typical process of speaking to people and finding out that there are needs, and that they seem to need a company that is really good at what they do. And so naturally, we start to move in that direction of working with them.

Erik: Got you. Okay. Then your working model, is it them typically outsourcing development of certain aspects of their tech stack or solutions to you, or is it you embedding your team into their team? What does that collaboration look like?

David: What makes us exceptional? If you go to my website, it says hyper-exceptional. I don't expect people to believe it just because I put it in the title of the website. But I ask people to ask me for evidence if they're actually interested to know what an exceptional team looks like and how it operates, because I have a lot of evidence. And I think it's important to know that. But if we get embedded inside of a team, in other words the body-shop type of approach, then that exceptionalism sort of goes away. And it's up to the individual and the internal team in the organization to be good or not good, right, as opposed to all the processes and procedures and accountability and automation that we've put in place to run and deliver projects. That's what makes us really exceptional. So we just stick with that model, the project model.

Erik: Okay. Got it. Okay. Great. So you've been, over the past 17 years or so, you've been building up this system for—

David: Coming up on 18 years. That's weird to think about, yeah.

Erik: Right. So you've been building this system for developing software. I can see some of the things that you're emphasizing here, right? Very detailed project estimates, architecting for scale, automation, performance optimization, security and compliance. These are all things that I suppose you had in place before the recent evolution of AI as a critical component of a lot of tech stacks. Now, in addition to this, you're also an AI-first company. So what does that mean? What has been the fundamental shift over the past two to three years in terms of how you've been working with companies around AI development?

David: Well, both with companies and internally. Because we're always pushing the envelope internally about being better, being more transparent, and automating internal processes. We want all our developers to be critical thinkers and work independently, but we also want them to follow very specific protocols and procedures and account for the work that they do. And to do that, you have to continually build automation that scaffolds them so that they aren't mired in all the procedure and protocol, and we keep doing that more and more. So when AI came out, there started to be obviously better ways of doing all the things we'd been doing. And so internally, we've been embracing AI for driving documentation and user stories, for initially generating code, and for evaluating or refactoring code that we've already written. Now we're starting to grow that ability to be much broader, not just a function or a small module, but more complete parts of the system. Because AI is evolving faster than we can really get our arms around. So I think of it like this: we've all been surfing, and our surfing skills have gotten better. The waves got a little bigger as technology advanced, up until about three years ago. And so we went from three-foot waves to confidently surfing five-foot waves, anybody that's been in this industry for a while, or maybe six-foot waves. Right? Maybe as high as we are. Then here's a 100-foot rogue wave coming in, and you have one of two choices. You either swim out to meet the wave and try to stand up on your board and get in the curl, or you let it wash over you. Right? So we're just taking that get-up-in-the-curl attitude. Then you have to step out on the tip of your board to stay in front of that wave, trying to figure out where it's headed and stay in front of it. From a metaphorical philosophy perspective, that's how I think of it. And I try to instill that in everybody on the team: whatever they can do, get farther out on the tip of the board. And that doesn't mean taking massive risk. That means trying to embrace everything that's new and testing all the things that are new to figure out how this can apply to existing customers or apply internally, for example.

And as a result, we've been building our own internal products that will eventually turn into SaaS products. One of them: we have a methodology for startups for figuring out who the early adopter is that they should be focused on, what the top one or two problems are that they should be talking about, how to message all that, and how to reach that stakeholder. It's a methodology called Launch 1st. But it's a tedious process to go through, so we're building an AI tool to automate all that tedium and make it very consumable for a founder to figure this out very quickly. Also, our most expensive internal process is estimating projects, because we do way more detailed estimation than most companies. The estimates are very detailed and very involved. It takes all my top people, when we're doing a big estimate, to put that estimate together and deliver it to a client. And so we're building an AI model for doing software project estimation. This is what I mean when I say AI-first: really embracing it in every aspect of our business. So that's talking about us, not necessarily our clients.

Erik: Let's dwell on that a little bit before we move to the client-facing development. Because I think this is really important, and it's probably the area where a lot of companies can have the biggest near-term impact, right? Especially larger corporates, which are quite conservative about putting new things in front of clients, whereas figuring out how to be more efficient internally is safer ground. But from what I see, at least in the companies we're working with, they're struggling with this. Right? They're kind of looking up at the wave right now. And they're doing something minor, you know. Maybe they're allowing use of Copilot or something, but they're not really out there exploring all the different potential scenarios and tools. What have you found in terms of the — maybe we can think in terms of percentages. Are there areas where you say, yeah, in this particular area, we're 50% more efficient, or we're 50% faster in terms of a process? So where are the big wins there? And then as you look forward, where are these—

David: You mean fast internally in terms of how we build systems?

Erik: Internally. Exactly. Yeah.

David: On the internal side, it's testing. We're now seeing big improvements in delivering functionality faster: building automation around the testing, building the scripts for what we should be testing, and then creating automation scripts around that. So as we're building new functionality, we already have automation scripts for it instead of testing being an afterthought or a back-end process. That's a big benefit for us. It makes the cycle time between dev and test much faster for code validation. As we upload code, part of our process is to make sure the code fits our standards and that no vulnerabilities have been inserted, since this is all automated, but then also to evaluate the quality of the code, the reusability of the code, duplication of code, and things like that. And so we're using AI to automate these processes. So as we enhance existing systems, we're increasing the maintainability of our code.
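
Tekyz's actual pipeline isn't described in detail here, but the step David outlines (check uploaded code against standards, scan for inserted vulnerabilities, grade quality and reusability) maps naturally onto an LLM-assisted review gate in CI. Below is a minimal sketch under stated assumptions: it uses the OpenAI Python client, and the model name, review criteria, and severity scheme are illustrative rather than the tooling described in the episode.

```python
# Minimal sketch of an LLM-assisted review gate that could run when code is
# uploaded. Assumptions: OpenAI Python client; the model name, criteria, and
# severity scheme are illustrative, not the pipeline described in the episode.
import json
import sys
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """You are a strict code reviewer. Evaluate the code for:
1. conformance to common style standards,
2. obvious security vulnerabilities,
3. duplicated or hard-to-reuse logic.
Reply as JSON: {"issues": [{"severity": "high|medium|low", "note": "..."}]}"""

def review_file(path: str) -> list:
    """Send one source file to the model and return its parsed issue list."""
    code = Path(path).read_text()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": code},
        ],
    )
    return json.loads(resp.choices[0].message.content)["issues"]

if __name__ == "__main__":
    issues = review_file(sys.argv[1])
    for issue in issues:
        print(f"[{issue['severity']}] {issue['note']}")
    # Fail the build on any high-severity finding, so problems surface in the
    # dev-test cycle rather than as an afterthought.
    sys.exit(1 if any(i["severity"] == "high" for i in issues) else 0)
```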

One of the areas we're looking at implementing right now is a microservices code generator. Because we try to build everything scalable, building everything as microservices, containerized, is always the goal, even when we're starting with an MVP. We've scaffolded and structured our organization so that this is just how we think and what we do. And there's a lot that's very common across these microservices. So right now we're working on an AI model to generate these microservices for us. We can just describe the microservice, maybe use the AI to say, "Here are the fields that I want," have it produce some kind of structured output, and then feed that into this microservice processor. It integrates the interfaces and all the connectivity and the business rules as well, all nicely packaged and encapsulated. So these are the things we're doing that are speeding things up one step at a time. Can I say 50%? Probably we're achieving a 50% improvement at this point, somewhere between 30% and 60%. It just depends on what we're working on and the teams. People on my teams ask, "How can I use AI to accelerate this particular effort?" Because I think that's the biggest mental shift my team has to make, as well as companies, if they want to start adopting AI throughout their organization: people getting into this culture of asking AI, "What's the best way to approach this problem?" Because they don't do that. They just ask, "Here's the problem, and I want you to do this for me." But they don't step back and say, "What are different ways of approaching this problem?" Or, "Here's my organization. What problems might I have?" Or even stepping farther back: "I want to implement AI in my department, and my department does this. But I'm not sure what the right approach would be, what the right plan is. What questions should I be asking?" And start there. I don't think people are doing that very much: thinking of AI as a really good friend and business strategy consultant and just asking questions.
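
The generator itself isn't public, so the following is only a sketch of the flow David describes: describe the service in plain English, have the model emit structured output (the fields), and feed that into a templated processor. The model name, JSON schema, and scaffold template are all hypothetical; a real generator would also emit the interfaces, connectivity, and business rules he mentions.

```python
# Sketch of the describe -> structured output -> generator flow. All names,
# the schema, and the template are hypothetical, not Tekyz's actual generator.
import json

from openai import OpenAI

client = OpenAI()

def fields_from_description(description: str) -> list:
    """Ask the model to turn a plain-English service description into fields."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract the data fields for this microservice as JSON: "
                    '{"fields": [{"name": "...", "type": "str|int|float|bool"}]}'
                ),
            },
            {"role": "user", "content": description},
        ],
    )
    return json.loads(resp.choices[0].message.content)["fields"]

def render_model(service_name: str, fields: list) -> str:
    """Feed the structured field list into a fixed scaffold template."""
    body = "\n".join(f"    {f['name']}: {f['type']}" for f in fields)
    return f"from pydantic import BaseModel\n\nclass {service_name}(BaseModel):\n{body}\n"

if __name__ == "__main__":
    fields = fields_from_description(
        "An inventory service that tracks a SKU, a quantity on hand, "
        "and whether the item is active."
    )
    print(render_model("InventoryItem", fields))
```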

Erik: Yeah, I think your intuition is right. People are adopting very selectively, is what I tend to see. I was talking with the head of legal for Google here in China a couple of months ago, and he's using AI a lot. They're hiring fewer entry-level legal analysts. But he's personally made this decision, right? I think that's where we are today: individuals are making these decisions, maybe individuals like you that have influence over your organization. But large organizations, it's hard for them to put this mindset into place. Peter, let me ask you. Because Peter has a lot of experience with large-scale deployments, ERP cloud integration, and so forth. There you're really looking at the corporate level, hundreds of people working on a problem. Where's the role of AI in those types of deployments today? Are we anywhere near 30% to 60%? Are we just tickling the surface here?

Peter: No, no. We're just scratching the surface, actually.

David: I'd rate it at 5% maybe.

Peter: There's a long way to go, I would say.

David: It's such a layered thing, you know. When you say 30% to 60%, it might be 30% or 60% if you look at — I mean, maybe you could say something like that, but it'd be at a very basic layer. Whereas the next layer, the deeper you go, it's probably almost nothing. You have to think of it as a layered thing that you evolve into. Tell me what your experience is, Peter. The first layer is just realizing that there's something that can answer questions or do work for you. A lot of people are saying, "Oh, can I ask it to do this work for me," and then it's producing something that you can review. And now your job is thinking about how to give it better instructions and prompt it so that it's giving you the kind of output that is useful for your business. Right? But that's layer one. That's before you start taking any real advantage of what AI can do for you. Because maybe the output you're trying to create is a waste of time when you think about how you could automate the workflow to eliminate the need for that output, for example. But that requires stepping back and thinking of using AI in a much broader way and, again, asking those questions. Peter, what's your experience?

Peter: I think it's more on the delivery side, actually. It's not really on the project management or the customer side. It's more on the delivery side. You're probably a very good example of that. Companies we work with in those projects, they deliver their solutions, and they employ AI technology in their delivery, as you do as well. It's not really, I would say, on the customer side.

David: You're saying, they are using it more on the delivery side than on the customer side?

Peter: Yeah, exactly. Yes. And the bigger the project, the more difficult it is to really generalize the use of AI, because there are so many different areas. But I've got a question for you as well. Do you believe this is an elementary factor in staying competitive? I mean, the adoption of AI in delivery, especially in software development, you were saying you're using it in—

David: Absolutely. It's easy for me to see, somewhere between 6 and 18 months out, a point where we can build entire applications with AI. I can already do it to a certain degree if I sacrifice the sophistication of the architecture of the application. There are some tools that do that now. And for certain types of applications, they're really appropriate. For example, we need a partner portal. Because we have a referral partner program, and we need a portal. There's nothing really great on the market that doesn't cost a lot of money, and that just doesn't make sense because what we need is very simple. I can build that application. It won't be a microservices implementation. But I can build the whole thing with one AI tool — the user experience, the business logic, the database. That's new. That's so new, to be able to do that. Two months ago, I couldn't do that. But the tools are evolving right now to give you that capability. And for our needs for that partner portal, this will satisfy us. We might turn it into a SaaS product for all of our partners that want to use the portal. Even then, to a certain degree, I can do that with this one monolithic app, because the cost is so low to deliver it, right? Now, at some point, if it's really going to grow, then it's got to be refactored. But by then, I'd probably ask the AI to refactor it. So this is what I mean by the tip of the surfboard, right? This is my whole business, and my team is building apps. So we almost have to think in terms of eating our own children, sort of, being willing to just abandon everything that we believe in. Because, clearly, the world is changing so fast that we have to embrace it and figure out how to leverage that to our benefit and to our customers' benefit.

Erik: Yeah, there's an analogy I was reading recently from the manufacturing era, right? Say you come out with a machine that does a particular manufacturing process twice as efficiently. Where does the value of that go? The value goes to the machine builder, who now has a more competitive machine to sell, and to the end consumer, who now has cheaper products. But it can often be very difficult for the manufacturer in the middle to actually capture that value, right? They have to buy the machine to be competitive. But all their competitors also buy the machine, so they're actually not gaining an advantage. So they end up making an investment in new technology, but all the value is passed down to the customer or acquired by the technology developer. So, really, it sounds like you're being very proactive in thinking about how you also become a product owner, right? How do you turn your know-how into products that can then have scalable value? Because this could put pressure on the—

David: Yes, scalable value and some kind of life beyond just doing custom software development, right? Because I just see the world changing so quickly. And it's very hard to predict, too. It may be a much slower change than I'm imagining. But so far, I've just seen how much things have evolved and how quickly. A year and a half ago, we had this ChatGPT that was just hallucinating everything, right? It was writing really bad code 90% of the time if you just wanted a simple function. Well, if it could just run with its own input and output, it would get it right more often than not. But if it had to work inside of, let's say, a Google Apps Script and communicate with a Google Sheet with smart enough logic to do something, it was wrong almost every time. And it took as much time to debug that as it did to write it yourself to begin with. Today it will write the whole thing right the first time, most of the time, and much more sophisticated, complex stuff than it was able to do a year ago or even six months ago. So I'm thinking, okay, where does this curve end? Plus, the smarter it gets, the faster it learns, supposedly, right? I'm trying to figure out where exactly it's headed. How do I point at something that's 18 months away and not get swept past in three or four months because I aimed way too short? So that's what I mean when I keep saying the tip of the surfboard, right?

(break)

If you're still listening, then you must take technology seriously. So I'd like to introduce you to our case study database. Think of it as a roadmap that can help you understand which use cases and technologies are being deployed in the world today. We have catalogued more than 9,000 case studies and are adding 500 per month with detailed tagging by industry and function. Our goal is to help you make better investment decisions that are backed by data. Check it out at the link below. And please share your thoughts. We'd love to hear how we can improve it. Thank you, and back to the show.

(interview)

Erik: If we turn to the customer-facing topic, walk us through your thought process. When we think about building AI tools for clients that are then going to face the client's end customer, we start to think about a lot of other challenges, right? Because this is not just, "Does it work," but, "Does it work up to a reliability level? Is the end customer comfortable with the use of data to enable this application?" Et cetera. So there's a different set of challenges. You also probably have tech stack challenges, where there might be digital-native apps that are really AI from day one versus apps that have been on the market for 10 years and are now adding new functionality. What is the thought process that your team goes through when you're trying to figure out the right technical and business architecture for a new product?

David: Well, if it's an existing product that's been on the market successfully for any length of time, and the tech stack isn't ugly but reasonable and relatively modern, then adding AI capabilities to it is a pretty easy reach. So it just depends. And when we're thinking about adding AI to it, I want to think about what is almost not possible yet. That's what we want to be building now, because by the time we're ready to release something, it will already be possible. Or what might be really difficult now won't be that difficult six months from now. So yeah, I encourage all my clients to just embrace it wherever they can. First of all, you said something about whether customers are worried about the data being used in an inappropriate way and some of those privacy issues. But remember where we were with the cloud seven or eight, maybe 10 years ago, and the trust factor that we had to overcome. Nobody trusted the cloud. How could you possibly want to take it off my server and put it up someplace where everybody has access to it? Right? And of course, today it's like, you're still running something on your server? What a flip. So AI is not going to be any different. People will start to trust it. Because unlike the cloud, it's way more invasive in terms of how it's working its way into everybody's life.

My wife said, "I don't know if I really understand this. Would I trust it?" and all that when ChatGPT first came out. Obviously, the technology has been around a lot longer than that. But when ChatGPT came out, I kept talking about how consumable it now was. I had been using OpenAI for about a year before that, and I was telling her what I was able to do with it and how amazing it was that it could produce stuff that sounded human when it was still relatively new. Then ChatGPT comes out, and all of a sudden it's consumable by anybody. I'll give you an example of how invasive — invasive is the wrong word, but it's kind of not the wrong word — how much and how quickly it'll just penetrate everything. This was a year and a half ago, when ChatGPT was still hallucinating a lot. But it was still very decent for certain things, like conversation, asking questions, getting information. So we were standing in our backyard. She wants to do gardening. She's a big gardener. We had just moved into the home we're in now. She said, "How many 4x4 beds do you think we need if I want to grow all our vegetables?" I said, "I don't know, but maybe I can ask ChatGPT," and she's rolling her eyes. Then her sister calls, and she's on the phone talking with her sister about the move and about the backyard. While she's doing that, I'm having a conversation on my phone with ChatGPT: we live in Vista, California, so considering our climate, and considering we're two people in our early 60s who aren't vegetarians but love vegetables, how many 4x4 beds would we need, doing square-foot gardening, to grow all our own vegetables? And it comes back with a number, and it gives me the reasoning behind the number. Actually, it was pretty close to what I was thinking it should be. I said, okay, great. Now, considering companion planting — because when you're gardening, you want to make sure the things you put in one bed are compatible with each other. Certain things hate to grow together, and certain things love to grow together. I said, for each bed, come up with a companion planting plan. It did, and it said, "You can do this for each season, because then you want succession planting." Because you don't plant something in soil that onions were just growing in and expect it to thrive, unless it likes the soil onions were growing in, right? So it came up with that whole plan, and it did it by season. Then I said, "Okay. What companion flowers should each bed have for each season?" Because you want certain flowers there to draw the bugs away, but only for certain plantings. Other flowers will draw bugs that actually like the vegetable planted there even better, right? And this is about all I know about gardening, these topics. So I said, "Okay. Now create a table for each bed, for each season, to show us all of this." The whole thing took me about 8, 9, 10 minutes, partly because it was a lot slower back then and because I kept asking more and more questions. It was done after 10 minutes. She had just hung up with her sister, and I said, "Here. Take a look at this. Is this what you're talking about?" Her mouth fell open, and she turned her head and looked at me like, "You've got to be kidding me." And ever since then, she wants to ask ChatGPT everything, right? This is the thing.
It's so seductive, if that's even the right word. Once you start to realize what you can do with it and how to do it, you start to realize this is like your best friend and business coach and advisor who happens to know everything about everything.

Erik: What tools are you using? Are you still using ChatGPT as your primary tool when you're thinking about this, not coding specifically but project management, thinking through problems? Are there other tools that you found more useful for specific types of challenges?

David: It's my first go-to. If I don't get the answer I want, then I go to Claude, or I go to Perplexity. Those are probably the three. I go to Gemini sometimes. But for the most part, I'll just ask ChatGPT because it's an app on my phone. It's just consumable. Most of the time, I'm getting a lot of what I need out of it, but not always. Sometimes I need something more in-depth and more structured, and I can pull that out of ChatGPT, but it'll just be more readily available in one of the others. Or if I need to research a topic in more depth, I go to Perplexity. It's just really good at that; it organizes everything from a research perspective, right? That's just conversational stuff. It's different if we're building something that's going to use a large language model, or it's going to be a RAG model, which means we build our own machine-learning database and use it to inform a large language model — retrieval-augmented, I can never remember what the G stands for. But that's what a RAG model is. Our estimator product and our niche analysis product are both RAG-style AI tools that we're building. But for daily question-asking, it's just whatever tool is most readily available and consumable. And sometimes I ask all three or all four if it's something that's really important. I'll take the result from one and say, "Here's the question I asked, and here's the response I got. Do you feel it's complete and correct? How would you expand on this?" So I go back and forth between the tools all the time.
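
For the record, the G he's reaching for is "generation": RAG stands for retrieval-augmented generation. The design of the estimator and niche-analysis products isn't public, but generically the pattern reduces to a retrieve-then-prompt loop: embed a private corpus, pull the chunks closest to the question, and hand them to the model as context. A minimal sketch, assuming OpenAI embedding and chat models; the corpus strings are placeholders, and a production system would use a vector database instead of in-memory arrays.

```python
# Minimal retrieval-augmented generation (RAG) sketch: embed a private corpus,
# retrieve the chunks nearest a question, and pass them to the LLM as context.
# Model names are assumptions; the corpus lines below are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "Placeholder document one: internal estimating guidelines.",
    "Placeholder document two: microservice scaffolding conventions.",
]

def embed(texts: list) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(corpus)

def answer(question: str, k: int = 1) -> str:
    """Retrieve the k nearest chunks by cosine similarity, then ask the model."""
    q = embed([question])[0]
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(corpus[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("What do the scaffolding conventions cover?"))
```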

Erik: So what you've described here is a decision-making use case, right? When I talk to our clients, often those are the more challenging things. The front-end interface is pretty easy; there are great applications now. And if it's content generation or something, you're using something off the shelf. Those are more standardized, so it's easier to find a product that does what you want. But when it comes to decision making for corporates, you have unique data sets and unique types of decisions. What is your sense, maybe not just of today but of the future: is everybody going to be building RAG models with proprietary data for this? Or do you see niche products coming out for each kind of vertical or horizontal decision?

David: That's a really good question. But even with the tools today, for decision making in a corporate environment, considering all the different data, it's just a matter of how you get the data into the model. You may get a report that's giving you insights and has a lot of detail. If you export that and load it into one of the large language models, you can have it do some assessment of the data and of the insights you got back from your — maybe you've got a predictive analysis tool that you're using. Take the report from the predictive analysis tool and load it in there. Then just start asking questions about the validity of the prediction: are there other patterns I'm missing? These tools, just the readily available stuff right now, are really, really useful when you start to get creative in how you use them. And yeah, I think there will be RAG models produced that are very niche-specific, because that will make it really consumable and really quick for a decision-maker to surface important insights and make decisions faster. But it's not that hard to do right now. And with all the workflow automation capabilities available today, you can wire up the ability to pull all that data into your own, not a RAG model, but an input-based model using NotebookLM or something like that, and load all the data in automatically with very little effort from a coding perspective.
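
As a concrete instance of the "export the report and load it into the model" workflow he describes, a few lines are enough. The file name, its contents, and the question below are hypothetical; the model name is an assumption.

```python
# Sketch of the "export a report, load it into the model, interrogate it"
# workflow. The CSV path and question are hypothetical; the model name is an
# assumption.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# A hypothetical export from a predictive-analysis tool.
report = Path("predictive_analysis_export.csv").read_text()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a data analyst reviewing a predictive report.",
        },
        {
            "role": "user",
            "content": (
                "Here is the exported report:\n\n" + report +
                "\n\nHow valid do these predictions look, and are there "
                "patterns I might be missing?"
            ),
        },
    ],
)
print(resp.choices[0].message.content)
```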

Erik: David, just one more question from my side, and maybe Peter has some things that we haven't touched on yet. But I'm sitting here in Shanghai, China today. And so top of mind for me is also the geopolitical aspect of AI.

David: Oh, wow.

Erik: You know, we don't need to get into export controls and so forth.

David: No, I just had somebody else ask me that same question. That's why I'm laughing, yeah.

Erik: But this is a challenge, right? Not just for China. I think for others too, the EU, for example. So do we have to have different tech stacks, different functionality for different markets based on their regulations? How much complexity is there today? Is it just China, maybe Russia, Iran, and a couple of other countries versus the rest of the world? Or do you really have to start looking at the EU and treating it differently from the US in terms of how you architect?

David: That really depends on who you are as a company and what your exposure to these various countries is, right? If you have a global footprint — well, let's just talk about the US and Europe, right? There are different privacy requirements in Europe, which are pretty strict, and the US is getting stricter but requires nowhere near as much compliance around privacy as the EU. So that's something you have to consider when you're architecting a solution if it's going to have a global footprint. But that's more of a data set problem than an AI problem, at least I think so. Because whatever AI you're using in that environment, you're probably not building your own large language model. You are probably building your own RAG data set to inform it, though. So whatever large language model you use has to already be EU-compliant for you to have access to that data in that environment. It's got to be SOC 2 Type 2 compliant. It's got to be restricted in terms of what data can be seen by the large language model outside your own little walled-off garden of data. These are all architectural things you have to consider. But companies with big global footprints already have a lot of this factored in, at least I think so, from a data management perspective, to make sure they're staying compliant on privacy. Peter, what's your experience with this?

Peter: I think that's still a big question mark, yeah. I'm currently running a project as well which spans the globe. This is not a topic at all, because there is no answer to it, honestly. I mean, every company is taking a different turn there, I guess. But for the one I'm working with right now, it's still rather a big question mark. There's no move in that direction at all.

David: Is it a big question from the perspective of, how do we protect the data, or how much data can we expose to the LLM? Which one?

Peter: I would say it's the fundamental question. The fear is still there: what would be the impact? We haven't even gone into any details there. And it's a multinational company, yeah.

David: Right. Okay.

Peter: They have a lot of science data. So it's a science company. They're very, very careful in exposing their secrets, you know.

David: Yeah, their intellectual property.

Peter: Intellectual property, yeah.

David: So that's a tough one, yeah. I try not to spend too much time thinking about the geopolitical landscape when it comes to AI. Because AI is evolving way faster than any legislation is even pretending to do anything about it.

Peter: Yeah, for them, the biggest challenge is actually not the difference between the US and the EU. It's more between US and Asia, China now. It's a big topic.

David: In terms of exposure of their intellectual property? Yeah, that's different. I thought you meant in terms of just privacy and customer data and things like that as opposed to intellectual property.

Peter: This all leads to the result of not adopting anything right now.

David: Right. And so they're just staying away from AI at the moment?

Peter: Yeah, they're looking at the wave, yeah.

David: Yeah, they're looking. They're watching.

Peter: They're looking at the wave.

David: And it's getting bigger and bigger.

Peter: I've got a question. I do a lot of development as well. So what is the key component, the key tool, that you would say gives you the biggest competitive advantage in your business, that you use on the delivery side?

David: I don't know. There isn't any one tool that gives us the — well, okay, what gives us the competitive advantage is when I show people evidence of what it means to be an exceptional software development team. We produce a lot of artifacts in the process of doing all the things I was talking about that other software development teams, and internal teams, don't typically produce. Because it's hard to do. You don't just decide one day that you're going to produce all this stuff. You evolve it over a long period of time, and you have a couple of key people who constantly drive it forward. That is what our competitive advantage is. And I've found that people really connect with it when I show them this. They go, "Yeah, we don't do that," or, "No, we've never seen anybody do estimates quite as detailed as that," or, "The way you track all the information about a project so that your status reports are so rich, other companies don't do that." I heard that enough that I thought, "Why don't I lead with this? It seems to be what people really care about." And that's why I put "hyper-exceptional software development team" in the banner of our website. There isn't a single tool. I'd be surprised if anybody could point to a tool today that gives them that competitive edge, unless it's in a specific niche, maybe a specific RAG tool they're using to accomplish something that was really hard to accomplish previously. But even that won't last very long as the world evolves.

Peter: Of course.

David: So I'll ask you the same question. What tools have you seen that give you that big, competitive advantage?

Peter: Yeah, competitive advantage. I'm mostly managing large projects, or also delivering. But to be honest, it helps with finding bugs. You basically paste a lot of code in there, you hit enter, and, oh wow, there's the solution, which before would sometimes have taken days or weeks. That's one big gain. Then it's getting the essentials out of a lot of information. A very hands-on case would be, let's say, a meeting across continents involving China, India, Australia, the US, and someone in the UK. 15 people, two or three hours. Well, you can summarize everything from that meeting within a minute.

David: Yes, you can summarize it. You can pull out insights. You can ask all kinds of questions about things that happened in the meeting. You can combine it with documents that were discussed in the meeting to enrich the — I mean, it takes seconds to produce that and then have it give you a plan going forward. And what you said about the code, and how powerful that is: that's a capability available in many tools now, right? Pasting code into something and having it figure out the best way to refactor that code.

Over a year ago, a year and a half ago, when we were still getting our arms around what we could do with this, one of my developers took a query, a stored procedure of 50 or 60 lines that had been written pretty inefficiently. This was ChatGPT before it got as good as it is now. He asked it to refactor that one stored procedure to run more efficiently. It came back with 20 lines of code instead of the 50 or 60, and in this case it worked the first time. And it ran four or five times faster. Now, somebody who's a real expert in stored procedures might have taken many hours to do that. A merely good developer might have taken a couple of days of writing and testing, writing and testing, trying to come up with a better approach and thinking about efficiency from a database perspective: how the queries are being called, in what order, what the indexes are, and everything else. He didn't go further, but he could have asked, how should I re-index or restructure the database so that it performs even faster? So, things like that. But that goes back to what you were saying: ChatGPT was capable of simple refactoring like that quite a while ago, and it often got it right as long as the code wasn't too big or too complex. Yeah, it's hard to put your finger on any one thing.
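
The original procedure isn't reproduced in the transcript, so the SQL below is only a stand-in, but the developer's workflow amounts to a single prompt along these lines (the model name is an assumption):

```python
# Sketch of the refactoring workflow described above: paste an inefficient
# stored procedure into a model and ask for an equivalent, faster version.
# The SQL is a placeholder; the 50-60 line original is not in the transcript.
from openai import OpenAI

client = OpenAI()

stored_procedure = """
CREATE PROCEDURE dbo.GetOrderTotals AS
BEGIN
    -- placeholder for the original 50-60 lines of cursor-based aggregation
END
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Refactor this stored procedure to run more efficiently while "
                "producing identical results, and note which indexes would "
                "help:\n" + stored_procedure
            ),
        }
    ],
)
print(resp.choices[0].message.content)
```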

Erik: Okay. But this is a good conclusion for today—that it's not about finding the magic tool. There are going to be a lot of good tools out there. It's really about building the processes and the mindset to use them. That's where the competitive advantage comes in.

David: One last comment. Probably the most important use I get out of large language models is this: I forget what something's called, and in a few seconds I'll get the name of that thing by giving it the most cryptic description of what I'm looking for. Somehow it knows: "Oh, are you talking about this?" And sometimes I'll say, "No, it's not that, but it looks like that. But it's for this purpose." And it'll go, oh, and then it tells me. Or the name of a movie. I'm able to get it every single time with very little effort. So that's probably the most important tool I get out of large language models. Everybody thinks I'm so smart because I remember these things, and I don't remember any of it.

Erik: Great. Guys, with that, I suggest we wrap up. David, thank you for taking the time to walk us through your thought process for how you're adopting AI today. Really appreciate it.

David: Yeah, thank you, and I really appreciate being on your show. Thanks guys.

Peter: Thank you, David.
