Ep. 220
Mind as a Service: Revolutionizing AI Memory with Llongterm
Jonatan Bjork, Co-founder, Llongterm
Thursday, July 10, 2025

In this episode, we spoke with Jonatan Bjork, Co-founder of Llongterm, about how persistent memory is changing the way AI systems interact with users across industries. Jonatan shared the personal journey that led to founding Llongterm and how their technology allows AI to retain meaningful context across interactions. We explored how memory transforms user trust, the architecture behind Llongterm’s Mind-as-a-Service, and the future of portable AI memory.

Key Insights:

• Mind-as-a-Service: Specialized, persistent memory units that can be embedded in apps, tailored by use case (e.g. job interview prep, customer support).

• Structured and Transparent: Information is stored in user-readable JSON format, allowing full visibility and control.

• Self-structuring memory: Data automatically categorizes itself and evolves over time, helping apps focus on what matters most.

• Portable and secure: Users can edit or delete their data anytime, with future plans for open-source and on-premise options.

• Universal context: A future vision where users bring their own “mind” across AI apps, eliminating the need to start over every time.


IoT ONE database: https://www.iotone.com/case-studies

Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): https://asiagrowthpartners.com/

Transcript

Peter: Jonatan, welcome to the show.

Jonatan: Thank you.

Peter: All right. Maybe quickly introduce yourself to the audience.

Jonatan: My name is Jonatan Bjork. I'm the co-founder of Llongterm. I have over a decade of experience in building large-scale cloud infrastructure. I'm a full stack software engineer as well. Yeah, I'm very, very excited to be here.

Peter: Excellent. So I can call you Jonatan. Okay. That makes it easy, okay. Although I've spent a long time working with Swedish companies in the past, yeah, it's still easier to stick to the English terms. Okay. Could you tell us a bit about your background and what led you to actually develop Llongterm? What brought you to get it started?

Jonatan: It actually started on a sort of personal note. Both my co-founder and I had separately, about a year ago, gone through our own very painful breakups, relationships sort of falling apart. We both found that generative AI was helpful in supporting us through that. So we saw an opportunity in using LLMs to provide relationship support. That's really far, far away from where we are today with Llongterm, but it actually started there, with building a relationship co-pilot backed by LLMs.

Peter: Mm-hmm. Okay.

Jonatan: This then led us to discovering the memory problem with LLMs. Basically, everything generative AI needs to know about you, you have to tell it within what's called a context window. That's the text that you feed into the prompt. You kind of need to give it all of that upfront. Then when you start over, it forgets. It doesn't keep this context over the long term. In solving this problem, we realized that it's something that goes way beyond the scope of what we were actually building at the time. It's something that can be used in many apps to help build trust between human and AI and make it easier to interact with generative AI. And so that's how Llongterm came about.

Peter: Interesting, interesting. Okay. So you're organizing information in a structured human-readable form. Why is that so important, this approach?

Jonatan: Well, you kind of need to be able to, I think, look at, be able to feel the structure yourself as a user, to trust that the memory is working the way it should and also feel like you're in control. Because I think that's something that a lot of the users struggle with, with genAI applications. You're going out there. You're telling your life story to this opaque LLM, and you're not quite sure what it's doing with this. You're not quite sure how it manages this information that you're giving it. So that's also a big part of it, getting this structured format so that you can, as a user, visualize it. You can quickly scroll through it, and you can get an overview.

Peter: Okay.

Jonatan: You can also know that this is the thing that whatever agent is going to use. It's not something opaque that you're not quite sure what you told it.

Peter: Yeah, okay, okay. So on your website, I believe, I found features like the knowledge map, self-structuring, and the timeline. How do these components work together? Are they the core components?

Jonatan: So the knowledge map is really the structure of data. But put simply, it's JSON.

Peter: Okay.

Jonatan: So the JSON format is probably the most widely-used standard on the internet for passing data around. It's something developers know, and it's intuitive for non-developers to look at as well. The self-structuring part is, basically: when you give new information to Llongterm, we already have a structure, sort of a hierarchy of information. When new information comes in, we need to categorize it and figure out where to store it. Then over time, as the data set grows, there are details that aren't so important, or pieces of information that can be merged into one place. That's the self-structuring bit. It's also based on the specialization that you give to the, we call it, mind. We're actually thinking of a better phrase for all of this, because mind is probably too big. It doesn't really encapsulate what it is. We're thinking about a different way of talking about it.
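To make the idea concrete, here is an invented sketch of what such a self-structured knowledge map might look like for a nutritionist specialization. The field names and layout are purely illustrative assumptions, not Llongterm's actual schema:

```json
{
  "specialization": "nutritionist",
  "user": {
    "goals": ["lose 5 kg by summer", "reduce caffeine"],
    "dietary_restrictions": ["lactose intolerant"]
  },
  "timeline": [
    { "date": "2025-06-01", "note": "Started meal planning" },
    { "date": "2025-06-14", "note": "Reported better sleep after cutting coffee" }
  ]
}
```

Because the structure is plain JSON, a user can scroll through it and see exactly what an agent will be given.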

Peter: Okay.

Jonatan: We went from mind because it was sort of an easy entry point, but building a mind is very hard.

Peter: Obviously, yeah.

Jonatan: So it's not actually the thing.

Peter: We haven't gotten there yet with the technology. Okay.

Jonatan: But when you create a mind, you give it a specialization. That tells the system to focus on information that's relevant to the specialization. For example, the specialization could be a nutritionist, or a mental health coach, or a customer support agent. Based on that specialization, it then informs the structure. So we draw out the key topics that it should cover and the key things that it should try to remember.

Peter: Okay. That's actually very good that you say that, because I realized I skipped a question. That's indeed the Mind-as-a-Service topic, which you just explained, which I wanted to explore in practical terms. So it's called Mind as a Service, although it's a simplification of the actual mind that we have. It's not trying to reproduce what's there.

Jonatan: Yes, so that's why we're thinking sort of moving away, in terms of language, maybe moving away from the word mind. Because we're not going to that level of complexity. In fact, we strive for the opposite. We strive for simplicity. One of the key benefits of using Llongterm is that it's simple. It is plug-and-play, and so we're not pretending that we're building this very complicated mind.

Peter: Of course.

Jonatan: What it really is, is it's a software as a service. But it doesn't seem right to call it a software when the core data set is something that you can talk to and effectively can talk to you. But, yeah, when you integrate it into your app, what you essentially get is this focused segment of information and knowledge that's specific to a user and a specialization that a user is interested in. So that if you're building an app for interview preparation for job interviews, then that's the specialization. But each user of your application then gets their own little space that remembers things about them—their goals, their ambitions, their areas that they want to improve, or whatever your application figures out of that problem.

Peter: Okay. So we have talked about what it solves in the AI space, right, Mind as a Service. Then we have talked about the knowledge map, self-structuring, and the timeline. Let's have a look at the applications and use cases, is that okay?

Jonatan: Yeah.

Peter: So what are some of the most interesting and unexpected ways people are using your Llongterm memory technology?

Jonatan: We have seen many use cases in different industries like education, business, mental health. One of the recent examples is an app that helps students prep for the GCSEs.

Peter: So prep for exams, basically.

Jonatan: Yeah, and they're also using the API to load in, in advance, the information that the students need to memorize, but then also using the memory to capture new context and help the application better understand the user's needs and remember them in a way that offloads all of it. So the app no longer needs to put all that context into the prompt.

Peter: Okay.

Jonatan: It can fetch that through Llongterm. That's perhaps not entirely unexpected, but the different angles, I think, were unexpected, if that makes sense.

Peter: Okay. Yeah, it makes sense. Although, yeah, all that information I put in there has to be kept safe, right? That's still a bit of a challenging thing, that everything you ever said to your AI will be stored away. But okay. So what challenges exist in designing memory systems that feel natural and intuitive? What is the challenge in it?

Jonatan: I think, going back to what we said earlier, exploring the memory is really interesting from a user perspective. You need to be able to sort of parse it and navigate it to feel confident that it's doing the right things. Yeah, so just step through it and make sense of it, which I think we're doing okay at the moment in terms of making it visible to the user.

Peter: Okay, okay. Now, going back to the applications, if you don't mind. Having persistent memory changes the nature of what AI can do. How does that compare to the traditional systems we have in place now? What's the difference between your solution and the current setup? I mean, ChatGPT remembers certain things about me as well, right? How does that differ from Llongterm?

Jonatan: I think context is everything. It's interesting, you just mentioned two things here: the ChatGPT one and then the traditional or existing systems. So I'll start with the latter. When you call a customer support number, it could be any company, right? Today, you still mainly get to talk to another human. What happens is you have to tell that human what your background is, who you are. You have to give them details about yourself. But you also have to tell them the story of why you're calling them, so they can make sense of it. Then maybe the line gets cut off, or you're asked to call back later or something. So you have to call back again, and you have to do this again; you repeat everything. But it doesn't end there. If your call gets escalated to the next, like, second-level or third-level support, you have to explain to every person along the way why you're there and what the problem is. You also have to somehow convince each person that your problem is worth solving, because they're looking at it with fresh eyes.

Peter: Extremely annoying, yeah.

Jonatan: So what if you instead had an AI agent in the loop here? It's not replacing the human but simply helping them. With this sort of Llongterm memory, you would get the map for the user who is contacting your company. They will have the context, and you can just glance at it and grasp that this is the issue. I think that would be a huge improvement.

Peter: Oh, yeah, that would indeed be a huge improvement. Yeah, okay. So I guess that's also one of those particular domains or crucial applications where you think AI Llongterm memory would show its best performance, right? Or do you have any other good examples you could bring up where this really brings benefits?

Jonatan: Yeah, so I think the most crucial part is exactly this: human-AI interaction. It really comes down to trust. So when you talk to a human, you're not expecting, like when you talk to me, you're not expecting me to remember every detail of everything that you told me. But you expect me to have a grasp of who you are, what you've done, and sort of to be able to have a conversation. So if I were to say, if I were to completely forget everything that you told me, then it's like we're having to start over every time. I think that's the experience that a lot of people have when talking, when interacting with Gen AI—whether it's through a chatbot or through a series of agents. That they start off okay, but then they reach a point where the memory is sort of gone.

Peter: Yeah.

Jonatan: They lose the trust. A great example: one customer was telling me they had an application where users could talk to an LLM. It was in the mental health space. They would talk to the LLM for quite a long time. They'd share a lot of details. It got to a point where the users — so let's say they're reading a book together. The user was telling the LLM about the book, everything that's happening, and talking about the plot. Then a few days pass, they come back to it and say, "Hey, do you remember that book we read?" And the LLM goes, "No." The user says, "No? But we did read it." Then the worst part is the LLM starts trying to make up for that the way they do, by essentially what comes across as gaslighting the user, saying, "Oh, yeah, I definitely remember."

Peter: Okay, yeah. Yeah, okay.

Jonatan: And so they said, this customer said, "When memory fails, you lose users."

Peter: Interesting. Okay. Yeah, indeed, I myself would have some use cases for it, actually. I'm not going to call names now of the apps I'm using. But some of them get really annoying because they never remember what I have been talking about yesterday. And it always starts from scratch, indeed. Yeah, it would be very helpful indeed.

Peter: Okay. I'm trying not to jump too much around with the questions now, so maybe one more about the user experience. So how do you balance the need for AI to remember important details without overwhelming it with trivial information?

Jonatan: This is not an easy problem to solve. But the good thing is, this is actually where AI itself can do a lot. LLMs are insanely good at classification. You basically ask the LLM to do these steps for you. So you can identify what's important, what's the key argument in this paragraph, or what's the sentiment. You can really get very accurate assessments. That helps a lot. So if a user says, "Oh, I'm going to Paris next week, which is the capital of France," then we don't need to remember that Paris is the capital of France. That's not important. We do need to remember that they're going there. But that's sort of it. We don't have to capture all those.
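As a rough sketch of the classification step Jonatan describes, the snippet below builds a classifier prompt and parses the model's reply. The prompt wording and the reply shape are my own assumptions for illustration, not Llongterm's actual implementation, and the LLM call itself is left out.

```javascript
// Sketch: using an LLM as a classifier to separate memorable, user-specific
// facts from trivia. Prompt text and reply format are invented assumptions.

function buildClassifierPrompt(statement) {
  return [
    "Decide which parts of the user's statement are worth remembering.",
    "General knowledge (e.g. 'Paris is the capital of France') is NOT worth remembering.",
    "User-specific facts, plans, and preferences ARE worth remembering.",
    'Reply with JSON: {"remember": ["..."], "discard": ["..."]}',
    "Statement: " + statement,
  ].join("\n");
}

// Parse the model's reply defensively; return null if it isn't the expected shape.
function parseClassifierReply(replyText) {
  try {
    const parsed = JSON.parse(replyText);
    if (Array.isArray(parsed.remember) && Array.isArray(parsed.discard)) {
      return parsed;
    }
  } catch (_) {
    // fall through to null on malformed JSON
  }
  return null;
}
```

For Jonatan's Paris example, a reply like `{"remember": ["going to Paris next week"], "discard": ["Paris is the capital of France"]}` would keep only the user-specific fact.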

Peter: Okay. So that's actually, the LLM is handling this part mostly?

Jonatan: Yes.

Peter: Of course. It's crucial, but that's good. Okay. Then let's look at the implementation and the integration. So how difficult is it to integrate Llongterm into an existing system? What's the typical process to do that?

Jonatan: As middleware, it's a few lines of code; it's really straightforward. We have an SDK for JavaScript, but we support any language through the REST API.

Peter: All right.

Jonatan: The SDK is just a thin wrapper around the API. But we're also really interested in having agents read and write via MCP (Model Context Protocol), which is a fairly new protocol that describes how to plug various systems into LLMs.
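Since the SDK is described as a thin wrapper over a REST API, an integration can be sketched as plain HTTP calls. Everything concrete below (the host, the `/minds/{id}/remember` route, the payload fields, the API key) is a hypothetical placeholder, not Llongterm's documented API.

```javascript
// Hypothetical middleware integration. Endpoint paths, field names, and the
// base URL are invented for illustration -- consult the real API reference.

const BASE_URL = "https://api.llongterm.example/v1"; // placeholder host

// Build the HTTP request that would store a new message in a mind's memory.
function buildRememberRequest(mindId, message, apiKey) {
  return {
    url: `${BASE_URL}/minds/${encodeURIComponent(mindId)}/remember`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ role: "user", content: message }),
    },
  };
}

// In an app you would send it with fetch, then merge the returned context
// into your next LLM prompt:
//   const { url, options } =
//     buildRememberRequest("mind-123", "I have an interview on Friday", key);
//   const reply = await fetch(url, options);
```

The point of the "few lines of code" claim is that the app only forwards messages and reads back context; the categorization and structuring happen on the service side.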

Peter: Okay. Well, that's already answering the next question as well. There are a few code snippets on your website, and you support something else as well. That's good to know.

Then a big topic, data security privacy, how do you handle those concerns? That's what I mentioned earlier. I don't know, really. It's a bit of a concern for everybody using AI nowadays.

Jonatan: Yes, it's very important. I think it's important to recognize that, as an industry and also as users, we are interestingly actually throwing that away a little bit. So I'm really glad you bring that up. Because, for example, OpenAI with ChatGPT, I think they have 300 million active users at the moment. That might have changed. I mean, you can correct me. But it's hundreds of millions of users anyway. They're all talking to it about everything. You're sharing everything about your life, about your business. You tell it things that you probably wouldn't tell any other human. But that is all owned by that one company. So in terms of security, in terms of privacy, hundreds of millions of people have already kind of given that away without much consideration. And so our task is to help users regain some of that control over their own data. That's why I think it's so important that we build something that is portable. You're not locked into using our platform; you can go in and edit it, you can go in and look at it, and you can decide what you want to do with it. But yes, security for us is super important.

One thing that I've learned throughout my career is: you have to start with security from day one. It's incredibly hard to build that in after the fact. So you have to think about how you structure it. We follow all the best practices for secure development and deployment, making sure that we encrypt all communication and secure the data wherever it's stored.

Peter: Okay. Good. Can companies or people deploy it by themselves, the solution, or is it a SaaS solution, as you mentioned?

Jonatan: So what's available now is a SaaS where we do hold the data. But we are exploring a self-hosted version on the open-source side. We also have an offering for enterprises who want to host it on-premise.

Peter: Okay. That's good. I think this will be in demand for certain companies, people. Yeah, okay. Very good.

Jonatan: I should add to what I said about people throwing away their privacy concerns. It's nuanced, right? It's not everybody, but a lot of people. And that is on the consumer side. On the business side, B2B, business to business, it is almost the opposite. Companies are more concerned than ever about making sure that the data stays on-premise or within their virtual private cloud.

Peter: Yeah, employees uploading confidential information to ChatGPT.

Jonatan: Yes, exactly.

Peter: Yeah, that's a big problem indeed.

Jonatan: I think that's changed. Prior to Gen AI, companies would not really have a problem with employees asking questions on Stack Overflow or writing blog posts about what they're doing and publishing it online. Whereas it's almost flipped now, and everyone is very protective of their information and their data, which is almost the entire opposite of what you see in the consumer space.

Peter: Yeah, okay. All right. So looking at the ethical and social considerations, maybe: are there scenarios where you believe AI systems should explicitly forget information? Is it possible to do that? How do you approach this requirement?

Jonatan: Definitely, definitely. This is, I think, very important.

Peter: It's still a machine, yeah. We need to treat it like that.

Jonatan: Well, it's almost more than a machine, because in a very focused way it has intelligence that surpasses human intelligence, in a sense. So yes. We have many customers telling us about this. Like, when they were first exposed to what you mentioned earlier, OpenAI's ChatGPT memory feature, suddenly random bits of information that they had inputted into a chat earlier would pop up in unrelated conversations. For a lot of people, they didn't understand what was happening. Some people even said, "This is kind of creepy. Why does it know this? I don't remember telling it about it." And so that goes back to: we tell it everything. We share so much. We have to have the ability to forget.

That's one example. I like this example: imagine that you were talking to your AI agent. You're saying you want help with preparing for a job interview, so it gives you all this help preparing. Then the next day you go to your current job. You're sitting on a coffee break. Just for fun, you have the agent join in the conversation, as you can do with voice. There are already products doing that. You're just chatting and laughing, talking about sailing and video games, when your iPhone suddenly says, "Oh, I'm sure you'd miss this banter if you get the job you applied for yesterday." So you have to be able to be selective about this. Llongterm does have forgetting; we just haven't exposed it yet. But what you can do today is go in through the Explorer, which is an online tool where you can actually edit it yourself. You can delete anything.

Peter: You can delete it, yeah. All right. That's good. So market position in the future. So what differentiates Llongterm from other AI memory solutions?

Jonatan: Most long-term memory solutions today are either vector-based or graph-based. Graphs are an especially powerful way to dynamically encode information. You manipulate nodes and edges, so you can represent very complex memory states in efficient and elegant ways. Llongterm instead stores the memory as plain-text JSON. So it's just a big old blob of JSON that self-assembles at an arbitrary depth. And it's a lot simpler. So it's simplicity. There seem to be some real benefits to storing data in a format that LLMs can natively comprehend. It's just text. It also has the benefit of making it more portable. If it's JSON, then you can take it anywhere. You can do whatever you want with it, and you don't have to export it from somewhere. It is already there.
We may be a little bit early. But think about what's happened to the compute landscape and what the basic unit of compute has been. Maybe two decades ago, 15 years ago, it was the CPU core. So when you sold enterprise software, you would license it by how many CPU cores. Then cloud became the standard, and the basic unit of compute became the CPU hour, which is how you buy compute in the cloud. That's basically what everything boils down to. In the generative AI era, the basic unit of compute is the token. A token is basically a word, right, which is text. So aligning ourselves with that, I think, will have some long-term benefits moving forward.

Peter: Okay. Yeah, makes sense, actually. Yeah, that makes sense. Long-term benefits as well. So looking long-term ahead, three to five years, where do you see Llongterm in the next three to five years?

Jonatan: So, I think there are sort of two stages here. The most pressing one is what we talked about. How do we make agents more useful? How do we help them remember? This is where a lot of companies need help today. For example, with the customer support agent—be it a human, an AI, or a combination. Then there's the grand vision. Today, everyone is talking to GenAI agents in countless different places. They may not even be typing directly to it; it's indirect. But they're all silos. So you sign up for one app, and you have to start over. And every time you do this, you have to start over. You have to tell your story again. We haven't actually solved that problem. But what if each time you sign up for a new service or interact with an agent, it could ask you, "Do you have a Llongterm account?" Kind of like how you log in with Google, or how you give apps access to your different GitHub repositories. What if you could give it your Llongterm account and say, "I want to share this part of my account with you"?

Peter: Very smart, yeah.

Jonatan: So then as a user, I'm in control. As an app developer, I get to give my users a better experience from the start, because they will be bringing some context with their Llongterm account. Over time, all these apps can contribute back to the mind, if the user so chooses. And so that's, to me, a win-win for the user, for app developers and, hopefully, for us as well. So that's the idea of bringing your own mind.

Peter: Okay. Very smart. I think that's a very good idea, yeah. I think last question would be, how can users get started? How can potential users, customers, get started? Can they go to your website, sign up?

Jonatan: We are actually currently onboarding customers individually, Superhuman-style. So they can reach out to either me directly or the team, and I'll share the details on how to do that. We'll get on a call to get started.

Peter: Okay. Excellent. Excellent. Very good. Thanks, Jonatan Bjork. Thank you. Very interesting conversation. I'm wishing you good luck, although I'm pretty sure this is going to be a success.

Jonatan: Thank you. Thank you.
 
