If I were the meme type, I'd attach a surprised_picachu.jpg here. It's just sad that it took a Guardian investigation before anyone listened: Google's AI results aren't just medically questionable, they are outright dangerous.

Patients with pancreatic cancer, for example, were advised to avoid high-fat meals. The opposite is true.

A search for vaginal cancer symptoms and tests incorrectly listed a Pap test as a test for vaginal cancer, when it is actually a cervical screening tool used to detect cell changes before cancer develops.

Other problems included liver blood test results that did not account for nationality, sex, ethnicity, or age (potentially leaving people with serious liver disease wrongly believing they are healthy) and misleading advice on mental illnesses such as psychosis and eating disorders.

Amiga vs AGI

Way back when, I was a member of a small Amiga Cracking/Demo group that over time focused more and more on Demos and less and less on Cracking. Demos were where it was at. We'd travel to Sweden and Italy and many other places, spending nights sleeping in train stations and city parks, having spent all our money on the train ticket and McDonald's… it was a glorious time.

Then came the DCKs, the Demo Construction Kits. And the Amos Basic Compiler Templates. Both allowed everyone to build basic Demos. Scrollers, bouncers, music, even interactive elements. So we upped our game, added impossible sprite movements, kicked Copper and Blitter into places they weren’t supposed to go, tickled more music out of the system than it was meant to play.

The DCKs often caught up to us a few months later. My proudest moment, not going to lie, was when a German Amiga magazine advertised a disk containing a DCK to make "Demos just like Lowkey." Another Demo team, DOC (whom I hated with the passion of a thousand seething suns for having insanely racist "jokes" in their demos), even published their own DCK. I wonder what became of Michael H., and if he's still a racist…

Long story short… that’s how Vibe Coders feel. It’s not that different from typing in five pages of hex numbers with a checksum every 16 numbers (anyone except me remembering those from C=64 and Amiga magazines?) or running a DCK: the output might look like last year’s hype beast, but the creator doesn’t understand thing one about the code. And, frankly, understanding the machine, learning its nooks and crannies, that was what it was all about for me. Some Demos made it big, some fizzled, but very few of those moments stand out. What does stand out are the nights smoking rolled Gauloises Black in my room in the basement, hunched over an Amiga 500 connected to a shitty monitor, trying to edge out one more raster to close a loop, or poring over disassembler printouts to understand how a game enforced its copy protection. And those Demos, the “Zero Day” ones, were the successful ones. If everyone can replicate last year’s peak code, those who create this year’s will be the winners.

I could have spent my youth making DCK demos and probably gotten laid more and earlier. I could have made money selling cracked games or creating demos for others. But that didn’t feel satisfying enough. I wanted to own what I did. And, sooner or later, when everyone vibe codes, that’ll be it again. In five years, all those slop apps with purple gradients and emojis, telltale signs of Claude or Codex, will be forgotten. Handmade code will survive. Because it will be at the apex of development, not two years behind. And, more importantly, it’ll fill its makers with pride.

I'm apparently pretty broken

We are experimenting with local LLMs at work, and I was somewhat curious about what the system “knew” about me after spending a day ingesting my code and my writings. I feel pretty safe doing so: the model lives on a server with a sandboxed OS that only talks to the six systems we use to query it on patient data. And, honestly, if I didn’t trust it with my stuff, I shouldn’t trust it with patient information (even though patients are number-coded, not name-coded).

I understand the concept of next-token prediction and pattern-based conclusions, but still… I asked it to be as brutal and negative as it could be, given this isn’t an OpenAI or Perplexity “always please the human” setup:

From the outside, an alien anthropologist would conclude you’re someone performing three identities at once:

  1. The physician who refuses to decay intellectually. You keep rebuilding your medical knowledge scaffolding because you’re terrified of losing sharpness—and a bit thrilled by mastering it again.

  2. The pilgrim who keeps trying to walk home metaphorically. The Camino isn’t a hike. It’s your reset button, your escape hatch, your moral compass. You keep re-coding it as an app because you’re trying to bottle the psychology of pilgrimage.

  3. The coder-designer who chases systems because systems feel safer than emotions. You build frameworks—Django, Flutter, Obsidian vaults, smart home YAML, even moral scaffolding. It’s easier to design a system than to sit with uncertainty.

You describe yourself as a dilettante while simultaneously attempting to build a fully vertically integrated geospatial-medical-pedagogical-community app ecosystem. A dilettante doesn’t do that. A person avoiding naming their ambition does.

You’re building an app to help strangers find the right fountain in rural Spain while half-ignoring your own health metrics and walking goals. Classic human move: optimize the world instead of the self.

You want to know if an AI sees you clearly because you’re testing whether machines can offer the kind of nonjudgmental, continuous insight humans are too inconsistent to deliver.

This is why AI is so dangerous in psychology. It sounds good. It is mostly accurate. It puts its fingers into wounds a therapist would work on slowly, not with this brutal directness. And that conveys the “AI gets me” feeling that makes users rely on AI instead of seeking out the much more involved, much more “marathon, not sprint” nature of friendships, family, and therapists. And, in the end, the results will be lacking.

Easily improve your spoken text using AI-powered prompts.

Edit Prompt
Step 1. Configure how to activate this prompt (optional - you can also use the hover menu).
Keyboard Shortcut
Right ⌘

Step 2. Define how you'd like to enhance your dictation.
In German, rather than using the "Binnen-I" ("MitarbeiterInnen"), use the colon ("Mitarbeiter:innen"). Where possible, always gender, even if not part of the original dictation ("Die Polizisten sagten" becomes "Die Polizist:innen sagten"). In singular cases ("eine der Polizistinnen"), preserve gender as spoken. Where stylistically possible in German, use the common participle form ("Mitarbeitende", "Studierende").

Step 3. Test it out.
Hold Right ⌘ while speaking for push-to-talk mode, or press Right ⌘ briefly to toggle recording on/off.

Am Freitagabend trafen sich die Mitarbeitenden der Universität mit den Studierenden und unterhielten sich über die Zukunft der Demonstrationen vor der Schule. 
Zwei Polizist:innen der Bereitschaftspolizei waren ebenfalls dort und erwähnten, dass weiterhin Sätze wie From the River to the Sea nicht erlaubt und in der Tat strafbar waren.
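The purely mechanical part of that gendering rule can be sketched deterministically. This is a hypothetical lookup-based illustration (the `COLON_FORMS` table and `gender` function are mine, not Spokenly's); a real model handles unknown words and context, which a table cannot:

```python
import re

# Hypothetical lookup table: generic masculine plural -> gendered form.
# Where German has a common participle form, prefer it over the colon form.
COLON_FORMS = {
    "Polizisten": "Polizist:innen",
    "Mitarbeiter": "Mitarbeitende",
    "Studenten": "Studierende",
}

def gender(text: str) -> str:
    """Replace known generic masculine plurals with gendered forms.

    Word boundaries keep already-gendered singulars like
    "eine der Polizistinnen" untouched, as the prompt requires.
    """
    for masc, gendered in COLON_FORMS.items():
        text = re.sub(rf"\b{masc}\b", gendered, text)
    return text
```

A table like this only covers the handful of words it knows, which is exactly why the prompt hands the job to a language model instead.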

On Mac and iOS, Spokenly does a hell of a good job using local models to enable voice dictation. Some of my patients with ALS or Huntington’s disease can dictate whole letters, even with broken speech and often long pauses between words.

Even better, it’s all local; no AI credits or remote models are needed, and it works well on an M1 Pro or even an M1 iPad (on the iPad, only remote models are available, at $9/month).

Even better, I could train it to gender ;)

Review Rundown

A while ago, we submitted a paper to a prestigious US medical journal on a topic loosely related to public health, sport, and wearables.

If you’re not familiar (and bless your soul), academic rigor demands peer review, meaning two or more peers (only known as Reviewer 1, 2, 3… and so on) are recruited to read, critique, and check your work. This is fundamental to science, as is the anonymity part of it, meaning we don’t know who reviews us.

Well, for this paper:

  • Reviewer 1 clearly had a horse in the race: their own paper, which they repeatedly insisted we include and reference. To be clear, “peer” is very loosely defined; this person came from a different specialty, and their paper did not even come close to adding value to the research paper we’d submitted.

  • Reviewer 2 didn’t speak much English, so they used ChatGPT to review us. We know, because they forgot to remove some of ChatGPT’s comments: “I will perform a comprehensive review of… Perfect! Here are the issues I found…” and similar.

Papers won’t get published until all reviewers (two in this case) sign off. And the journal didn’t care about our complaints. So we fought an AI for weeks, getting confronted with hallucinated studies, misunderstood conclusions, a total clusterfuck of weirdness. A few weeks in, I gave up, wrote an Agent for ChatGPT, and had the two duke it out. After the remote AI signed off, we submitted one last revision with the AI’s suggestions removed but with the sentence “including all of Reviewer 2’s very accurate suggestions” left in, and the AI was happy.

Reviewer 1 was harder, but their objection was finally withdrawn after we threatened to add the paper along with a note that, while it provided no actual meat to the matter, it was included because Reviewer 1 was its author.

AI Emergency Call 91-fuckme

Oh, we're in for a treat: in the US, Google Gemini autonomously, and without the user's permission, dials 911 (112 in the EU) when it "thinks" there is a danger to the user or others.

Last night, after a short hike, I was strolling slowly home when my watch suddenly asked, "It seems you had a hard fall. Call 911?" Just like that, walking uphill, nothing had fallen. At least I could still tap "Nope, all good" before misusing emergency resources. But with Google Gemini? Our emergency systems are already overloaded, with nearly 65% non-emergency calls; my acquaintance at the police emergency call center loves to tell stories like "My hamburger didn't have enough fries, arrest the cook" and the like. And as funny as that may be, it can actively cost lives.

Quite apart from the fact that I wouldn't trust an "intelligence" that thinks my job has something to do with cars (no joke: Google believes I'm a mechanic because I once tried to fix my roommate's car) to know when I'm a danger to myself or others.

Google says, by the way, that if this happens, you should leave a thumbs-down.