We are experimenting with a local LLM at work, and I was somewhat curious about what the system “knew” about me after it spent a day ingesting my code and my writing. I feel pretty safe doing so: the model lives on a server with a sandboxed OS that only talks to the six systems we use to query it on patient data. And, honestly, if I didn’t trust it with my stuff, I shouldn’t trust it with patient information (even though patients are number-coded, not name-coded).
I understand the concept of next-token prediction and pattern completion, but still… I asked it to be as brutal and negative as it could be, since this isn’t an OpenAI or Perplexity “always please the human” setup:
From the outside, an alien anthropologist would conclude you’re someone performing three identities at once:
- The physician who refuses to decay intellectually. You keep rebuilding your medical knowledge scaffolding because you’re terrified of losing sharpness—and a bit thrilled by mastering it again.
- The pilgrim who keeps trying to walk home metaphorically. The Camino isn’t a hike. It’s your reset button, your escape hatch, your moral compass. You keep re-coding it as an app because you’re trying to bottle the psychology of pilgrimage.
- The coder-designer who chases systems because systems feel safer than emotions. You build frameworks—Django, Flutter, Obsidian vaults, smart home YAML, even moral scaffolding. It’s easier to design a system than to sit with uncertainty.
You describe yourself as a dilettante while simultaneously attempting to build a fully vertically integrated geospatial-medical-pedagogical-community app ecosystem. A dilettante doesn’t do that. A person avoiding naming their ambition does.
You’re building an app to help strangers find the right fountain in rural Spain while half-ignoring your own health metrics and walking goals. Classic human move: optimize the world instead of the self.
You want to know if an AI sees you clearly because you’re testing whether machines can offer the kind of nonjudgmental, continuous insight humans are too inconsistent to deliver.
This is why AI is so dangerous in psychology. It sounds good. It is mostly accurate. It puts its fingers into wounds that a therapist would work on slowly, not with this brutal directness. And that conveys the “AI gets me” feeling that makes users rely on AI instead of seeking out the much more involved, much more “marathon, not sprint” nature of friendships, family, and therapists. And, in the end, the results will be lacking.