
How I built a personalized LLM chatbot for my friend, and what the meaning crisis really means.

What if AI could help us deepen the relationships we already have instead of replacing them?

A decidedly strange moment in human history is unfolding:

People are developing real feelings for machines while struggling to connect with the humans right in front of them.

Apps like Replika have millions of users who form emotional bonds with AI companions; Replika alone, a commercial AI companion platform, counts over 2 million users and 500,000 paid subscribers.

(Ironically, Replika itself began as a friendship project…the founder Eugenia Kuyda trained a chatbot on her deceased friend's text messages to preserve their conversations.)


I've always been curious about how we relate to each other…what makes us feel close, how we build trust, how we express love, and what makes communication meaningful.

So when I see people choosing AI companions over human ones, it perplexes me.

Are we really outsourcing intimacy to algorithms designed to mirror us, to agree, flatter, soothe?

Are we letting ourselves be trained to love machines that never challenge us, never disagree, never ask us to grow?

Paradoxically we are the most connected generation in history, capable of sending a message across the globe in milliseconds, or tapping into the sum of human knowledge with the strike of 'enter'.

And yet, are we forgetting how to know each other?

How to really pay attention.

How to build trust.

"The meaning crisis is a breakdown in our ability to connect to what makes life feel worth living." — John Vervaeke

We live in what Vervaeke, a professor of psychology at the University of Toronto, calls the meaning crisis: a cognitive and existential breakdown.

Our tools help us move fast, but rarely help us reflect well.

We are flooded with information, but starving for significance.

We are hyper-connected, but increasingly unseen.

We as a society are focused so much on productivity, efficiency, power…

But we may have failed as people to answer the biggest questions of life. We can be more productive, richer, have stronger militaries…but many of us can't answer the questions:

"Who are we?"

"What should we aspire to?"

And, "What is a good life?"

In the face of a meaning crisis, maybe the most radical thing we can do with technology is to use it to pay attention.

To our lives. Our relationships. Our shared stories.

For the first time in history, we have tools that can remember everything.

The question is:

What will we choose to remember?

And how will we use that memory?

Could AI tools help us reflect more clearly?

Pay better attention?

Remember with more depth?

And ultimately, understand ourselves and those we care about better?

That was the hypothesis behind one of my latest experiments.

For my friend Kristina's birthday, I decided to build a friendship LLM trained on six years of our conversations and voice notes. A way to surface our forgotten plans, recurring patterns, long arcs of personal growth, moments of significance. And a way to understand ourselves and each other better.


My Inspiration

Let's take a little meander into the past.

Kristina and I have been friends for 8 years. We met in university and kept in touch over years of living on different continents, sometimes sending each other daily voice notes.

For my birthday the year before, she had collected pictures and screenshots of our conversations, annotated with notes and comments, into a book for me. There was something deeply moving about commemorating moments, thoughts, and ideas from the past that I had forgotten about. Meaningful insights, documented for me…

I wanted to make something for her that felt equally special. I'd been scheming for weeks about what that "something special" would be. A couple of days before her birthday, the idea finally arrived.

Would it be possible, by extracting years of texts and voice notes, to build an LLM she could talk to? One that could answer questions about herself and our friendship?

Imagine chatting with the LLM:

  • "Remind us of an idea we once loved but never acted on."
  • "Tell us a story from our conversations that made us both laugh."
  • "Find a topic we were obsessed with for a while and then forgot."
  • "Summarize a dream or long-term plan we once discussed."
  • "Invent a new tradition for us based on things we love."

At this point, I'd already spent five months deeply entrenched in AI development. I'd been prototyping a few projects with my collaborator Kieran—including a journaling tool that surfaces personal insights over time. I imagined the friendship LLM to be somewhat similar.


I was living in Amsterdam at the time, she was in Berlin…which left me with a five-hour train ride to hack on the idea.

Fast forward to Kieran and me stepping onto the train to Berlin.

We sat across from each other in a small 6-seater compartment, laptops balanced on the even smaller folding table, and started hacking…

Here's a rundown of the five-hour process:

Let's Build the Friendship LLM

Getting Our History Out

We started with an excavation: six years of chats and voice notes, exported from WhatsApp and Telegram into a single aggregated text format. Then came the voice notes. We extracted them from the chats and ran them through OpenAI's Whisper API to transcribe them, at about $0.006 per minute.
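The extraction step can be sketched roughly like this. The regex assumes one locale's WhatsApp export line format (it varies by locale and app version), and `Message`, `parse_export`, and `transcription_cost` are illustrative names, not our exact code:

```python
import re
from dataclasses import dataclass

# Matches one locale's WhatsApp export format, e.g.
# "12/03/2021, 14:03 - Kristina: see you at the station!"
# The exact format varies by locale and app version; adjust as needed.
LINE_RE = re.compile(r"^(\d{2}/\d{2}/\d{4}), (\d{2}:\d{2}) - ([^:]+): (.*)$")

@dataclass
class Message:
    date: str
    time: str
    sender: str
    text: str

def parse_export(lines):
    """Parse exported chat lines into Message records. Continuation
    lines (multi-line texts have no timestamp prefix) are appended
    to the previous message."""
    messages = []
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            messages.append(Message(*m.groups()))
        elif messages:
            messages[-1].text += "\n" + line
    return messages

def transcription_cost(total_seconds, rate_per_min=0.006):
    """Back-of-envelope Whisper API cost: ~$0.006 per audio minute."""
    return total_seconds / 60 * rate_per_min
```

At this stage the goal is just a clean, uniform list of who said what and when; everything downstream builds on that.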

This was crucial; so much of our friendship had unfolded in voice messages.

Making Sense of the Conversation

To make the AI actually use this history, we needed a way for it to find relevant moments when asked a question.

Enter RAG: Retrieval-Augmented Generation, which essentially means the AI will retrieve relevant pieces of your data and use them to generate its answer.

We combined two methods to make the AI "learn" from our data, each serving a different purpose:

  1. Fine-tuning helped the AI learn our conversational patterns—the rhythm of how we talk, our shared references, the particular way we express care for each other. Think of it as teaching the AI our "voice."
  2. Retrieval-Augmented Generation gave it access to our actual memories. Instead of generating plausible-sounding responses, it could pull from real conversations and reference specific moments we'd shared.
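The retrieval half of that combination boils down to prompt assembly: fetch the most relevant excerpts, then hand them to the model alongside the question. A minimal sketch (the persona wording is illustrative, not our exact system prompt):

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a prompt that grounds the model's answer in
    retrieved conversation excerpts rather than free invention."""
    context = "\n---\n".join(retrieved_chunks)
    return (
        "You are a warm, casual chatbot for two old friends.\n"
        "Answer using ONLY the conversation excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The "ONLY the excerpts below" instruction is what nudges the model toward real memories instead of plausible-sounding inventions.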

The Build

We split the chats into small chunks ("chunking") so the AI could pull only the most relevant pieces instead of trying to process the entire archive at once.
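Chunking can be as simple as grouping consecutive messages up to a size budget, with a small overlap so a thread isn't cut mid-thought. A sketch under those assumptions (the sizes here are placeholders; we had to experiment to find ones that worked):

```python
def chunk_messages(messages, max_chars=1000, overlap=1):
    """Group consecutive message strings into chunks of roughly
    max_chars characters, carrying `overlap` trailing messages
    into the next chunk so context isn't severed at the boundary."""
    chunks, current, size = [], [], 0
    for msg in messages:
        if size + len(msg) > max_chars and current:
            chunks.append("\n".join(current))
            current = current[-overlap:]  # overlap into the next chunk
            size = sum(len(m) for m in current)
        current.append(msg)
        size += len(msg)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Smaller chunks retrieve more precisely; larger ones preserve more context. Tuning that trade-off was one of the fiddlier parts.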

We converted each conversation chunk into what's called an "embedding"…essentially a numerical fingerprint that captures the meaning of the text. That fingerprint is a vector…think back to precalculus class ;-).

Similar conversations end up with similar fingerprints. For example, the words "happy" and "excited" end up close to each other in the vector space, whereas "happy" and "sad" would be far apart.

Once all the chat chunks are converted to vectors, we store them in a vector database (a vector store). The vector store is built to handle thousands or millions of these high-dimensional fingerprints (long chains of numbers) and return the closest matches instantly.
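"Closest match" usually means highest cosine similarity between vectors. The toy sketch below uses hand-made two-dimensional vectors to show the idea; real embeddings come from an embedding model and have hundreds or thousands of dimensions, and a real vector database does the same ranking with indexing tricks instead of a full sort:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means
    pointing the same way, -1.0 means opposite directions."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=3):
    """Return the k chunk texts whose embeddings are closest to
    the query. `store` is a list of (chunk_text, embedding) pairs."""
    ranked = sorted(
        store,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]
```

This mirrors the "happy"/"excited" example above: vectors pointing in similar directions are retrieved together, while opposites rank last.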

Effectively this is what "teaches" the AI model new facts…it's sourcing knowledge directly from what we actually said.

It also means you don't have to stuff the entire six-year history into the prompt (which is impossible, or at least very expensive, given context size limits); you just pull the relevant parts dynamically.

Once the memory system was in place, we built a simple interface using Telegram. It's familiar, easy to use, and makes the whole thing feel like a natural extension of how we already talk to each other.
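Wiring the memory system to Telegram is mostly plumbing, since the official Bot API is plain HTTPS. A minimal sketch using only the standard library (it assumes a bot token from Telegram's @BotFather; in practice a framework like python-telegram-bot saves you the long-polling boilerplate, and `send_message` is not run here because it makes a network call):

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"

def api_url(token, method):
    """Build a Telegram Bot API endpoint, e.g. .../bot<token>/sendMessage."""
    return f"{API_BASE}/bot{token}/{method}"

def send_message(token, chat_id, text):
    """POST a reply back into the chat via the Bot API's sendMessage
    method (performs a real HTTPS request)."""
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    req = urllib.request.Request(
        api_url(token, "sendMessage"),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The bot's answer pipeline is then: receive a question, embed it, retrieve the closest chunks, build the prompt, and send the model's reply back with `send_message`.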

The Final Stage

There were challenges. Six years of conversation is a lot of data, so we had to experiment with chunk sizes and search methods. Transcribing the voice notes took time. And we had to work to make sure the AI's responses kept our casual tone instead of sounding generic or robotic.

By the time the train pulled into Berlin, we had a working prototype, not perfect, but alive. A system that could answer questions about our shared history, surface long-forgotten plans, and point out patterns we'd never consciously noticed.

The Bot is Live

Testing the bot was surreal for me. The AI revealed patterns: topics we'd circled back to repeatedly, the consistent threads that had run through years of change, ideas we wanted to pursue, things we wanted to build and experience together. It's almost like an archaeological tool for our friendship, able to surface forgotten conversations and highlight recurring themes.

It made me reflect on memory and attention.

How much of our lived experience do we actually retain?

Most of the conversation around AI tools focuses on productivity, efficiency, optimization.

What about helping us pay attention to what matters, through recall?

What about supporting reflection, deepening understanding, strengthening connection?

I imagine relationship tools that help couples understand their communication patterns over time. Family archival systems that preserve stories across generations. Personal reflection aids that surface insights from years of journaling or creative work.

What This Means for You

The barriers to building these kinds of applications are lower than they've ever been. The combination of accessible APIs, open-source models, and simple deployment tools means that curious individuals can now create AI systems that would have required entire teams before.

AI has democratized creation. The tools exist.

What we do with them is largely in our hands. We have agency in how we develop and deploy this technology, and in how we move forward.

The age of personal AI is here as something we can actively shape.

And the most beautiful applications will come from people who understand that technology, at its best, amplifies our capacity for care, attention, and connection.

Yuval Noah Harari puts it this way:

"Humans rule the world because we can build trust with strangers and cooperate in large numbers. To survive and flourish in the age of AI, we need to trust other humans more than we trust machines."

So…

What if AI could help us deepen the relationships we already have instead of replacing them?

And…How do we create more trust between us?

Why not start with the people closest to us?

Maybe it's creating a family storytelling bot trained on recordings of your grandparents.

Maybe it's building a creative collaboration tool that remembers all the ideas you and your creative partner have explored over time.

Maybe it's analyzing years of journal entries to understand your own growth patterns. If that's the case, get in touch with us!

This project reminded me that we're living through a moment of extraordinary creative possibility. Whether we'll use these tools to automate away human experience, or to deepen it… that choice is ours.

For me, the answer is clear.

I choose depth.

I choose connection.

I choose building technology that helps us become more human to each other.

And maybe that's the real question this work leaves me with, for myself, for you, for anyone building right now:

In an age where we can remember everything,

what will we choose to remember?

And how will we use that memory?

If your answer leads you toward connection, reflection, or care… I'd love to talk. Let's build together.

Have a story that you want to turn into a capsule?

Let's chat. [email protected]

Third Room Studio

© 2025 Third Room Studios. All rights reserved.
