In this episode, we spoke with Sean Hehir, CEO, and Jonathan Tapson, Chief Development Officer, of BrainChip about neuromorphic computing for edge AI. We covered why event-based processing and sparsity let devices skip 99% of useless sensor data, why joules per inference is a more honest metric than TOPS, how PPA (power, performance, area) guides on-device design, and what it will take to run a compact billion-parameter LLM entirely on device.
We also discussed practical use cases like seizure-prediction eyewear, drones for beach safety, and efficiency upgrades in vehicles, plus BrainChip’s adoption path via MetaTF and its IP-licensing business model.
Key insights:
• Neuromorphic efficiency. Event-based compute minimizes data transfer and optimizes for joules per inference, enabling low-power, real-time applications in medical, defense, industrial IoT, and automotive.
• LLMs at the edge. Compact silicon and state-based designs are pushing billion-parameter models onto devices, achieving useful performance at much lower power.
• Adoption is designed to be straightforward. Models built in standard frameworks can be mapped to BrainChip’s Akida platform using MetaTF, with PPA guiding silicon optimization and early evaluation possible through simulation and dev kits.
• Compelling use cases. Examples include seizure-prediction smart glasses aiming for all-day battery life in a tiny form factor and drones scanning beaches for distressed swimmers. Most current engagements are pure on-edge, with hybrid edge-plus-cloud possible when appropriate.
IoT ONE database: https://www.iotone.com/case-studies
The Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): https://asiagrowthpartners.com/
Transcript.
Erik: Sean, Jon, thanks for joining us on the podcast today.
Sean: Hey, thanks for having us, Erik. I look forward to the conversation.
Jonathan: Yeah, it's a pleasure to be here.
Erik: Yeah, and I'm really looking forward to this one. I think this is one of the most critical technology nodes being developed right now, on-device edge AI. We really are seeing quite a transformation in AI more generally, but I think on-device AI in particular is a very interesting area for the industrial space, where I am, where a lot of the cloud solutions aren't really suitable for the use case. So I'm really looking forward to getting into this market and technology with you.
Sean, before we get deep into the company, I would love to understand where you're coming from. So I understand, from our conversation earlier, you're not one of the founders. But you were brought in as a CEO in 2021. What's the back story for the company? And then for yourself, personally, why did you decide that this is the challenge that you wanted to take on at this particular point in time?
Sean: Sure. BrainChip has an amazing story. It has two founders, Peter van der Made and Anil Mankar, both of whom have retired from the company at this point; although Peter is still on our board of directors and on our scientific advisory board, which is an industry group of people who advise on technology direction. Peter, particularly, had a vision about how computing should be done. He wanted to challenge the traditional way of doing computing, which is this very inefficient kind of computation around AI using classic matrix multipliers, the traditional DLA model that the whole world uses.
He simply wanted to try something different. He said, "I want to do something that's very efficient." Of course, the most efficient computational device known to man is the human brain, which is why our industry is called neuromorphic. So Peter set out to design AI model acceleration in a neuromorphic fashion, which is the most efficient way. Anil Mankar was a three-decade Silicon Valley veteran who had built dozens of chips. They got together, and hence the brand name Akida, which is the brainchild of the two of them together for our neuromorphic technology. So they always had a vision: let's do it differently, let's do it better and more power efficient. I mean, you cannot read the paper today for more than one minute without seeing the challenges in the world around power consumption in AI. They were among the early thinkers, and they continue to lead that charge: you can do it differently and much more power efficiently. And we'll have a lot of conversation here about why the edge makes sense as well. But that's really what drove the company.
You asked me a little bit about myself. You know, I'm a long-term tech person. I've been in this industry for many, many decades. I've worked with some of the biggest companies in the world and some of the smallest startups in the world. You asked me, why did I want to take this challenge on? When I was given this opportunity by the board of directors, I took some very important steps. I was living in Silicon Valley at the time and had access to some of the brightest minds in the industry. I asked them to take a good hard look at the technology, and they all gave me a resounding thumbs up. Incredibly forward-thinking. Incredibly sound. Breakthrough, et cetera. Then I talked to the team here, and I saw we have some of the very best scientists and engineers in the world. So I was very enthused by that. Then I had the opportunity to meet some of the prospects and customers. Seeing the passion in their eyes to deploy our technology, it was just something, quite frankly, I could not resist at that point.
Erik: Yeah, great. And really a timely period to be getting into this technology, given the trajectory of the fundamental technology base. I think as we go through this podcast, we're going to have to define a few terms. You mentioned neuromorphic computing; this will certainly come up. Can you help us understand what a neuromorphic chip is? How does it differ from a more traditional chip? You've already mentioned that a little bit, but I think it'd be important to understand in more detail. Then, what's the difference between TOPS and SOPS in terms of how we measure performance here?
Sean: I think this would be a good time to let Jon talk a little bit. Jon, why don't you take a definition first and I'll add to it if I disagree.
Jonathan: Yeah, sure. So, neuromorphic computing is very straightforwardly trying to build chips that sense the world and compute like the human brain. The argument goes that the human brain is still five orders of magnitude more efficient than the computer chips we have right now. So if we can understand how the brain works and instantiate that in silicon or in other compute technologies, we'll get some of that efficiency and some of that marvelous creativity going on.
Then the question about TOPS and SOPS. TOPS would be tera operations per second, describing the rate at which a regular chip can perform computation. SOPS is not a widely used term, but it could be spikes per second. It's more a sense of how many operations a neuromorphic chip is having to do to get the job done. So they aren't really equivalent terms. Most regular silicon people would say a chip that does a high number of TOPS is good: it's doing a lot of compute. Most neuromorphic folks would say a chip that's doing high SOPS is bad: it's using a lot of spikes to get the job done. So there is a little difference there in interpretation. Generally speaking, we find both of those terms quite unhelpful because, firstly, they can be gamed. Secondly, they don't connect to how good a chip is as well as you would expect. So we prefer to just ask: how much energy are you using to do an inference? How many joules per inference? That's probably the most ungameable, closest-to-real, common-sense assessment of how well a chip works.
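[Editor's note: the metric Jonathan describes is simply average power multiplied by the time one inference takes. A minimal sketch in Python; the chip figures below are made-up placeholders for illustration, not vendor measurements.]

```python
# Joules per inference = average power (watts) x inference latency (seconds).
# All numbers here are illustrative placeholders only.

def joules_per_inference(avg_power_watts: float, latency_seconds: float) -> float:
    """Energy consumed by a single inference, in joules."""
    return avg_power_watts * latency_seconds

# A high-TOPS accelerator can still lose on energy if it burns power on
# computation the workload never needed.
accelerator = joules_per_inference(avg_power_watts=25.0, latency_seconds=0.004)   # 0.1 J
event_based = joules_per_inference(avg_power_watts=0.05, latency_seconds=0.020)   # 0.001 J

print(f"accelerator: {accelerator:.4f} J/inference")
print(f"event-based: {event_based:.4f} J/inference")  # 100x less energy despite higher latency
```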
Sean: I hope that makes sense to you, Erik. In the end, people are just looking for efficiency, right? And so when Jonathan says joules per inference and things like that, it's really how little energy you can use to run the inference on your model. And that's where we focus: exceptionally efficient.
Erik: Yeah, it makes sense. I was a little bit struck by a comment I read maybe two months or so ago. I'm not a deep expert in this industry, right? But if we look forward 10, 15 years at the competitiveness between countries, it really all comes down to power, because everything else will hit these bottlenecks, right? Power is going to be the fundamental bottleneck that, at least in the West, we will have trouble getting around. I think, in China, where I'm sitting, they're able to set up power infrastructure pretty well, and so efficiency is less important. In the West, that tends to be quite difficult, right? So we really have strong technology in many layers of the stack, but the power bottleneck is going to be really significant.
Sean: If I could just make a comment on that, it's not just absolute power. I agree with all your comments: East versus West, the difference is there. But in the particular use cases themselves, even if you had unlimited power, it doesn't mean you have unlimited power for the device and the application that you're doing, right? And so that will always matter. So if you're doing something that's on a battery, or in the field, or something like that, that efficiency matters for that device. And so that metric will never go away. I understand the comment when you're talking about absolute power. Yeah, that makes sense for big models. But when you talk about edge, absolutely, efficiency matters. And we'll get into this, I'm sure, as we go through. That's the beauty of AI. In the end, it starts out like a lot of compute models: very centralized and very data center-centric. But it becomes very apparent to people that, at some point, you want to move some of the workload out of that data center. I always like to say: the right tool for the right task. You don't have one size fits all for any other compute in the industry, so why should you for AI? So efficiency will always matter on the edge.
Erik: Yeah, great point. Well, maybe it's a good time to get into the use cases. We'll come back to the technology and get more into detail, but I think it'll be helpful to understand where is this technology applicable. So, on your website, you prioritize or highlight hear, see, sense. So those are the three kind of clusters of use cases. There's one that I found quite interesting, which is a seizure prediction smart glass that you've co-developed with a partner. And so when you talk about some of these things that you just mentioned about power efficiency and the importance of all-day battery life for high-performance solutions using a tiny form factor, I think this could be a great example. But can you walk us through a bit at a higher level? What are the clusters of use cases? Maybe we can then dive into this seizure prediction smart glass and understand why is your technology important for specific applications such as this.
Sean: Sure. I mean, one of the pleasures of my job is I get to talk to many different prospects and customers around the world on a daily basis. I'm constantly amazed by the use cases that are put before us: "Hey, we're going to do this," whatever that this happens to be. What we do here is supply enabling technology for companies that are building devices to do something, right? So whether you're creating a seizure detection glass or a piece of defense industry gear that's going to be out in the field, you have a vision: "This is what I want to accomplish with that device. In order to do that, I need some AI models running locally, and I need to run them really well." That's when they engage with us. The use cases we constantly see trending are the ones where mobility matters, latency matters, battery life matters. Of course, that covers the things we just said: medical devices for sure, defense industry for sure, industrial IoT for sure. Even, quite frankly, automobiles. Automobiles are nothing but a group of computers and sensors rolling around right now. Since the whole world is going to electric cars, the more efficient these devices in the car are, the better it's going to be. So those kinds of industries are all interested in this technology.
Erik: When you look at the critical decision factors: you're talking to a potential customer, and they're evaluating it. They're probably trying to educate themselves also about how they should be considering their tech stack, right? The industry is evolving quickly in many areas, so they're probably not crystal clear on how they should be prioritizing the different specifications. But as you're discussing it through, what tend to be the critical one, two, or three specifications that direct them? You've already mentioned this a bit: latency and so forth, power. But help us understand a bit more. How does that factor into the business case for your customers? How does that factor into the functionality or the type of capability that they're able to bring to the end user?
Sean: When you're talking about running models, you're going to run them on a piece of silicon. And so every customer in the world is trying to get the most performance out of the smallest piece of silicon possible at the lowest power. The industry calls that PPA: power, performance, area. And so when we talk to prospects and customers, they're always saying we need to optimize that specifically for their use cases. You don't want to overbuild it, because you'll use too much power and take too much area. You don't want to undersize it either. So you really want to get the PPA absolutely correct. That's one of the first criteria. And it comes back to that efficiency: what can you do in a very small piece of silicon?
The next thing is ease of adoption. Even though we've got groundbreaking performance, most of the industry's engineers have been trained on a couple of industry frameworks for creating models. If you create those models in an industry-standard framework and deploy our software stack, you can then put them onto our platform very easily. So world-class PPA and ease of adoption: those are the biggest criteria. Then I would also say, for customers, when I talk to them, one of the things they like about BrainChip quite a bit is that this is all we do. There are other companies that perhaps do something like this and then have a bunch of other offerings. But this is all we do. We're highly, highly focused on this mission of bringing the very best edge AI technology to market. So you have the entire company focused on it.
Then one more thing, and then I'll pause and ask Jonathan if he wants to roll in more there. We've also made some noise as of late. We have a proprietary set of models, state-based models; our proprietary name for them is TENN. So now we have the most efficient silicon, and we're also working on exceptionally efficient models for that silicon, for those customers who want to take a little bit of both. Jonathan, anything you want to add to that?
Jonathan: Yeah, sure. Obviously, we're a neuromorphic chip company. The customers, by and large, don't much care what inspired the technology. They care only about: does it give them a significant advantage in their system? One of the things we have is that our neural compute engine can run without any intervention at all from a host processor. Almost all of these systems consist of a host processor that's doing the high-level stuff and a compute engine that does computation efficiently. But in our case, they can put the host processor to sleep. Our compute engine can just run in the background. That's kind of our secret for, as you mentioned, these situations with tiny form factor and long battery life. The host is asleep, and the Akida chip or IP is just running in the background, waiting for the particular event that will cause it to wake up the larger system. And so, as Sean was alluding to, we're now looking at a couple of extremely small form factor systems, particularly the one we call Pico, which will be this kind of tiny little piece of silicon. They can run in the background and wake up the big system when that is required. And so I think that's a tremendous competitive advantage. And as I say, customers don't much care how it does it. They just know that they're getting a tremendous performance advantage.
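[Editor's note: a hypothetical sketch in Python of the "host asleep, neural engine always on" pattern Jonathan describes. None of the names below are BrainChip APIs; they stand in for whatever always-on model and wake-interrupt mechanism a real system exposes.]

```python
# The low-power compute engine runs continuously; the power-hungry host
# processor sleeps until the engine detects an event worth waking it for.

def npu_detects_event(frame: bytes) -> bool:
    """Stand-in for an always-on model (e.g., keyword spotting) on the NPU."""
    return frame.startswith(b"\x01")  # placeholder trigger condition

def wake_host_and_handle(frame: bytes) -> None:
    """Stand-in for the wake interrupt plus the heavyweight application."""
    print("host woken, running full application on:", frame)

def always_on_loop(sensor_frames) -> None:
    for frame in sensor_frames:        # host CPU stays in deep sleep here;
        if npu_detects_event(frame):   # only the low-power engine is active
            wake_host_and_handle(frame)

always_on_loop([b"\x00silence", b"\x00silence", b"\x01keyword"])
```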
Erik: I completely agree. Customers care about the performance advantage. But I would like to understand a little bit more: how does it do this? I mean, I guess there are other systems using more traditional hardware that could have one chipset running in the background, doing a kind of preliminary process on data and then waking up another chip. So how do you differentiate your capabilities from those? What makes you uniquely able to do this in a more efficient way?
Sean: So I'll give you the high-level one. I think it's always good to start high. I don't know if you've ever heard the term event-based. Basically, we take full advantage of sparsity. Meaning, unless there's a change, we don't do any compute on it whatsoever. When you think about the edge environment, everything on the edge is typically streaming, because that's the nature of the edge environment. But if there's no change, we don't do any compute. That allows it to be incredibly efficient. That's probably the first and foremost element: take full advantage of sparsity.
Now, we'll talk a little bit later on about the models we're developing now, which give you the same level of accuracy. We're developing models that are a third the size of competitor models. So if you've got a really efficient piece of silicon that takes full advantage of sparsity and doesn't compute when there's no activity, and then you put a very efficient model on it, you've got something that's much, much more efficient than anything in the industry. Now, to contrast that with the more traditional way of doing these things: a classic matrix-multiplier kind of accelerator is always going to compute, whether something's happened or not. So you can see the inefficiency. Where we ignore it, they're computing. Jonathan, anything you want to add to that?
Jonathan: I think you covered it pretty much all. Just to say, I mean, we say, internally, the most efficient computation is the one that you didn't do because you didn't need to do that computation. So if you can actually avoid doing computation at all, it's tremendously efficient. Our systems are just looking for change. If something changed, then they'll compute. If nothing changed, they won't compute.
Sean: And then I'll throw in a couple more refinements, Erik. You want to minimize data movement. We've got memory much closer to the compute element, so you're minimizing the data movement. Again, all these efficiencies add up over time.
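[Editor's note: a minimal Python sketch of the event-based principle Sean and Jonathan describe: compare each input to the previous one and spend compute only on what changed. This illustrates the idea; it is not how Akida is implemented in silicon.]

```python
import numpy as np

def delta_events(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Indices of inputs that changed enough to be worth computing on."""
    return np.flatnonzero(np.abs(curr - prev) > threshold)

prev_frame = np.zeros(10_000)          # last sensor frame
curr_frame = prev_frame.copy()
curr_frame[42] = 1.0                   # a single input changed

events = delta_events(prev_frame, curr_frame)
if events.size == 0:
    pass                               # nothing changed: zero compute spent
else:
    # downstream compute touches 1 of 10,000 inputs instead of all of them,
    # whereas a dense matrix-multiply accelerator would process every input
    print(f"processing {events.size} event(s) out of {curr_frame.size} inputs")
```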
Erik: You mentioned one other key aspect of your proposition, which is ease of adoption. I understand that that's critical. Can you help us understand a bit more, how do you provide that ease of adoption? How does working on your platform differ from working on a more traditional AI accelerator or a TOPS-oriented platform?
Sean: Again, I think this is a good one for Jonathan and me to tag team. But just as I said earlier, most of the industry creates models on a couple of traditional frameworks, and we support them. Then you basically utilize our platform called MetaTF, which is our software stack; it ingests the model and maps it to the hardware. It sounds pretty simple, but it really is just that simple. I like to give examples of what it can do. We bring students in here all the time, and they learn to deploy models just like that. We have, I think, about a dozen interns here this summer. They learn MetaTF and importing models, no big problem. So it's really that simple. If you know an industry framework, we train you a little bit on MetaTF, and then you map it to the hardware. You want to keep model creation in the hands of the hundreds of thousands or a million machine learning engineers in the world, but make it simple to put onto our platform. That's it at the very basic core of it.
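[Editor's note: a sketch of the MetaTF flow Sean describes: build and train in a standard framework such as Keras, quantize, then convert and map to Akida. The package and function names below follow BrainChip's public MetaTF documentation, but the exact APIs vary across MetaTF releases, so the conversion calls are left as hedged comments rather than presented as the definitive interface.]

```python
from tensorflow import keras

# 1. An ordinary Keras model, built and trained the way any ML engineer
#    already works (placeholder architecture, untrained here).
model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),
])
# model.fit(train_images, train_labels)   # train as usual

# 2. With BrainChip's MetaTF packages installed, the model is quantized to
#    low-bit precision and converted, e.g. (API names vary by release):
# from cnn2snn import convert
# quantized = ...                        # MetaTF low-bit quantization step
# akida_model = convert(quantized)       # maps the network onto Akida
# akida_model.predict(test_images)       # same inference call, simulated in
#                                        # software or run on a dev kit
```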
Erik: Okay. Got it. Got it. Okay. So it's really just around adhering to industry standards and avoiding adding complexity.
Sean: You know, it's interesting, Erik. Actually, I did a podcast. We do our own podcasts here. I did one about a year or so ago with a very famous business author, a friend of mine, Geoffrey Moore. His most famous book was Crossing the Chasm, right? In that book, he talks about technology that is undisputedly better but sometimes doesn't cross the chasm to mass adoption, because people don't want change, or it's too hard to change and adopt. One of the key principles we have here at BrainChip is: we want that breakthrough performance that we've been stressing in the first part of this podcast, and we want to make it incredibly easy and friendly for people to move their models onto our platform.
Erik: Makes complete sense. Makes complete sense. One other aspect of this that I think is quite important for our listeners to understand is to what extent the work that you're doing is pure on edge. So, let's say, it's a pacemaker. It's making a decision and it's acting. It's 100% confined within that device. Versus on edge, plus cloud—meaning, that it's doing some kind of pre-processing. Maybe it's then communicating to a cloud system—what have you seen in terms of the mix of the use cases that you're working on, on a pure on-edge versus a hybrid model?
Sean: I mean, most people approach us for the pure on-edge elements, because that's where we excel. But there's absolutely nothing in the technology that would prevent you from doing exactly what you're describing. It's what I said earlier: do the right compute at the right place. And if you've got to do more later, then you do it later, up in the cloud. But no, right now, the vast majority of the conversations are driven by on-edge computing.
Erik: Got you. And so if we drill into on-edge, can you help us understand where the cutting edge is? Where are we pushing the boundaries in terms of capability on the edge? Because of course, there are compute bottlenecks because of the space available, and there are power bottlenecks. So what are we able to do on the edge in the more ambitious use cases that your customers are bringing to market today? Then, if you can, help us understand a bit more of the trajectory: if we look forward 12 months, 36 months, what do you anticipate will be possible on the edge that might not be possible today?
Sean: We've got about three hours to talk about that. I would start by saying this is an industry of rapid change. It does not stand still. Now, I've had the honor of being the CEO of BrainChip for about three and a half years. The model types that we support today were just dreams when I took this job. Now these model types are mainstream. That pace of innovation is not slowing; it's coming quicker and quicker. And so what it requires of companies like us that offer acceleration technology is to ensure that we keep developing platforms that can support all the model types of the future. And so those capabilities are limitless, depending on how people can dream, what kind of models and capabilities they can bring out.
You know, I'm glad Jonathan is here, because we just did our shareholder meeting last month, and Jonathan showed our roadmap. In there, we've got some very interesting future stuff. When I say future, it's not the real long-term kind of future; it includes LLMs on the edge. I believe you're going to ask me some questions about that a little bit later, but it'd be a good time to introduce it right now. Just the thought of those capabilities maybe eighteen months ago was really conceptual. Now it's very, very practical. We've got very strong customer interest. The development work we have there is unique in this industry. We are beating the industry on all those PPA metrics I talked about earlier by a factor. We're not talking 10%; we're talking X number of times better. And so the ability to deploy the functionality of an LLM on an edge device, I think that says a lot about a few short years. Jonathan, anything you want to add to that?
Jonathan: Yeah, sure. There's a kind of threshold with LLMs, for example: if you have less than a billion parameters, you just don't get a really human kind of response. It's not grammatical. It's not logical. It's words, but it's word salad. And above a billion, suddenly, there's almost like a phase change. You get a human-quality response. And so our ability to put a model above that billion-parameter threshold on the edge, with no web connection or cloud connection, is transformative. I think that's the most exciting thing that we see coming along in the next two or three years. And again, to Sean's point, these models have been getting bigger at a rate of four or five times per year. That's 125 times in three years if you compound it. We really have seen that growth and been able to keep pace with it and deliver at that scale, which is great.
Sean: I would say one thing, Erik, if I could. The vast majority of this company, I don't know the exact percentage off the top of my head, is nothing but scientists and engineers. We have world-class scientists who do nothing but keep abreast of and lead model advancements, and world-class engineers building the best silicon and software stack. That's all we do. We're thinking night and day about how to get these models running exceptionally well on the edge.
Erik: So, Jonathan, it sounds like LLM on the edge is not quite here yet, but you're expecting it within one or two years to be capable. Can you explain?
Jonathan: Yeah, I would say less than that. We've demonstrated it, and we expect to have a product by the end of the year.
Erik: Can you help us understand what would that first product look like in terms of battery life, in terms of constraints? Is there any more detail about that first generation of LLM on the edge capabilities that you can describe?
Jonathan: I think I'm going to be cagey about that because the sales guys will kill me if I oversell or overexpose anything. I might give Sean that one to tackle. He's probably got a better picture of that. I can give technical detail.
Sean: I think the right answer to that, Erik, is what I said earlier: we know for a fact that we will be able to hit a given performance level, a token rate or something like that, at a power level that will be many times lower than the industry right now. If you think about that, that's a big, bold statement. Where most technology companies make a claim of being 10% or 15% better, I'm making a claim that we'll be many times better.
Erik: Right now, there's a company, for example, that we're working with that does a little snap-in clip. You go into a meeting and you say, "Hey, I'm going to be recording this meeting." There are others that will do a direct translation. They tend to be a bigger device, like a pen-shaped device. I'm actually not familiar with the tech stack of those. Are those all connected to the internet? I guess either the pen doesn't actually do any processing, it's just recording and will translate later, or the pen actually does the processing in real time, so it can translate. Is that done via cloud computing, to your understanding? Or is it just a larger form factor that has more sizable compute on the device and is therefore able to do these types of jobs today?
Sean: I think it's a little bit of both, but I think the predominant model is a cloud model. That's why the latency is usually pretty bad on these kinds of things. But there are some devices coming to market, or out in the market right now, that can do it locally. Not quite as well yet, but they will get there.
Erik: There's a topic that's come up in a couple of conversations I've had recently. I don't know if this is pertinent to your tech stack or not. It's the concept of a mesh network of edge devices that are doing AI in consortium. So if you're talking about drones, for example, in a military use case, yes, each device has its own compute, but they're also coordinating together. Is that something that is important to the use cases you're working on, or are you more typically working on use cases where the device is acting independently? I'm asking because, hypothetically, if you have a hundred drones communicating together, you would have a hundred times the compute. Potentially, you could do more sophisticated computations than you could with one individual device.
Jonathan: We're not actively working on any projects like that. We do keep an eye on that space, and we are very interested specifically in the drone space. My observation of that field is that it's not nearly as mature as it's sold to be. One of the specific problems is: if you have a single device that's computing to the fullest extent of its ability, and then you network it to a bunch of other similar devices, well, none of them actually has any capacity left over. You can swap information, but you have to have spare capacity to make use of that information. That tends to get forgotten in the rosy picture of networked intelligence. For opportunities where that data is getting fed back to a central AI system somewhere, I see immediate value, and it's doable in the short term. But the kind of distributed swarm intelligence is a much harder problem than it's generally painted to be.
Sean: Just to amplify what Jonathan said at the front end: we are in conversations with several drone companies. Because if you think about the value proposition we talked about, exceptionally high performance in a small piece of silicon with long battery life, it's ideal for those kinds of devices that are trying to have extended life out in the sky.
Erik: Got you. Well, guys, I'm obviously not an expert in AI on the edge, and so I'm sure there's a lot that I'm missing in this conversation. What are we missing here that's really important for folks to understand, whether it's on the technology side, or understanding which applications are feasible, or how to optimize the outcomes for the solutions that your customers are building?
Sean: You know, I think it's an awareness thing. I mentioned earlier I've been in the tech industry for several decades, and what I said earlier is the right tool for the right task. If you think about the current bloom of AI in the last five, six years, and all the energy and excitement around it, the edge market is still a little bit behind that. But it's growing very quickly. All the analyst reports are talking about the edge market being the fast grower. In fact, I read an analyst report the other day that put it simply: the last 10 years were all about big data centers and training; the next 10 years are about edge and inference, right? So it's just the right tool at the right time, as people start to wake up and think of the kinds of things they can do that they never could do before. You talked about one: the seizure prediction glasses. Who would have thought of that? You can't do that from a data center. We have a partner right now that is flying a drone, scanning a beach, looking for distressed people in the water. Couldn't do that before. These use cases keep coming up, more and more. So I think we're really just at the beginning of this, and we're going to see a lot of growth in this market. Again, every analyst in finance or technology is saying edge technology, edge AI, is the future right now.
Erik: Sean, when you're having sales conversations, where do they typically start in the organization? Is it at the point where engineering has already specced in that they need certain edge AI capabilities, and then procurement contacts you and says, "We're doing a tender. We evaluated the top five vendors. We'd like to invite you"? Or is it more about having early discussions with the engineers: "Hey, here's what's possible today. Let's educate you and work together on the product development"? How mature are the organizations you're working with in terms of understanding the landscape of what's possible and who the vendors are, and how are you approaching those conversations?
Sean: Sure. If you think about it, if somebody's going to buy AI acceleration technology, they're doing it to put into their core product, which is their revenue stream. So it's a very, very strategic decision, right? To answer your question directly, most of the conversations originate out of the product management function, where they have a vision for this thing they want to build and recognize they need certain capabilities. And so they're trying to figure out who can do that. It's not at that procurement level, like you said. It's the product management function, usually with some kind of support from the CTO office or somebody technical, saying, "I picture my product doing this. Mr. or Mrs. CTO office, help me find the right technology with the right characteristics." So it's usually what we call a technology scout plus product management that we work with. Usually, when they come to us, they say, "Hey, we've got this thing. We want to do this. Will it run this kind of model at this performance?" We have a lot of simple ways to try that. You know, I mentioned MetaTF. They can simulate it on MetaTF. We can give them a little development kit with one of our chips on there, and they can do it themselves. Or they can send it to us, and we verify it for them and say, "Yeah, here are your results." Then you go into the business conversation.
Erik: Got you. So you have a development kit. They can build, do a POC, and make sure that it functions. Then are you typically selling this from a project perspective, "We're going to work with you together on some kind of fee plus cost per device"? Or is it typically just, "Once we figure out that this works for you, you're paying per chip"? I know you also have the software stack. What does that typically look like? Is it just per chip, or is it integration plus chip plus software license?
Sean: A couple of things. Our primary business model is as an IP provider. For the people who want to build their custom chips, we sell them an IP block called the NPU, or neural processing unit. So our primary business model is licensing that IP to people who design a custom chip around it. All of our IP licensing has a license fee associated with it, and then there's a royalty when those chips come off the production line: we get a percentage of sales, or ASP, for every chip that goes forward. We also typically charge some NRE to help them with the project, things like that.
Depending on the customer, some have outstanding machine learning capabilities, and they're totally fine themselves. Some ask us to help them with their models, and we can charge them for that. But that's our primary business model. It's around IP. We sell on the license, get royalty, but we also help them with the models if they need to as well. Because in the end, we're experts in all of that, so we can help them in that case.
Erik: Yeah, got it. And for the customers you're working with right now, does it tend to be more the medium-sized OEMs that are building drones or smart glasses, what I would call new-category products? Or does it tend to be the very traditional companies: "We're doing construction equipment. We have sensors. Now we'd like to do some on-sensor AI"? Who is the first adopter wave? Does it tend to be more the mid-sized, new-category type companies or the traditional products?
Sean: I don't like to give vague answers, but I will this way: it's a little bit of both. If I had to pick one, the medium-sized companies are more innovative. But typically, large companies are very interested too, because the larger companies know that their business models are under threat if they don't step up and keep making competitive products. Meaning, their products have to have more and more functionality. So we see it from both sides: very large companies, and more aggressive mid-sized companies that are trying to penetrate markets with innovative products.
Also, I want to draw another dimension, since we're talking about business models. Some companies are building products, so they'll build a custom piece of silicon that goes into that product and stays captive. There are also companies that say, "I want to build a custom piece of silicon to sell to other companies," and they'll make it so it can go into multiple companies' products. So the way to look at it is almost a 2×2 matrix: large companies versus innovative mid-sized companies, and those building for their own captive use versus those building chips to sell into the market.
Erik: Yeah, got it. We actually invested in an event-based platform, probably six or seven years ago. And so I guess they would be that type of company: they're building a platform, but potentially they could also be designing a chipset on your IP to use with their platform, and then selling that through to their customer base, as that additional category of customers.
Sean: I'll take that sales lead, Erik, after the call.
Erik: The reason I'm asking, absolutely, Sean, is that we work quite a bit with the larger companies. We really do see that there is a significant challenge at a lot of them. They've had a lot of success in the past. They have certain business models that have worked well. And so it's often much more difficult for them to figure out how to modify their business model to make use of this new capability, whereas these medium-sized companies are a bit more like digital natives. The business model of using data and providing services on top of hardware was baked in from day one, right? So it's much more intuitive for them to figure out how to use these technologies as they mature. Whereas for the more traditional ones, it's just harder to shift once you've had a model that's been successful for 30, 40, 50 years: selling mechanical engineering in a lot of cases, or electrical engineering, and now selling data-enabled services on top of that, right? But of course, that's such a huge part of the economy. It's really critical, not just for your business but also more broadly for adoption of the technology, that these companies figure out how to make effective use of it as it matures.
Sean: Absolutely. Well said. But I would also say: if you'd asked 20 years ago, would you ever think you'd see the day when the vast majority of these old, traditional car companies' product lines would be electric? People have to change. The whole AI revolution, I personally believe, and many analysts believe, is going to be a greater productivity revolution than anything we've ever seen, whether that's the information age, the industrial age, or the agricultural age. The productivity enhancements to our world are just going to be unbelievable. Every company will be affected. And those that don't decide to modernize their product lines, I don't think, will survive.
Erik: Yeah, agreed. Well, in automotive, we're already seeing that, right? You either adapt or it'll be a slow death, right? Those days are behind us. You adapt, or you're not going to be around in 50 years. Sean, last question. If some of our listeners are interested in working with you or understanding more about your technology, what's the right way for them to reach out to the team?
Sean: Absolutely. I would love to talk. We're at brainchip.com. There's an address, I think it's sales@brainchip.com. That's one way. Of course, you can also reach out directly to me, and I'd be glad to respond or direct you to anybody in our sales organization.
Erik: Awesome. Well, Sean, thanks for taking the time today. I really appreciate it.
Sean: Likewise. It was really a fun conversation, Erik.