
On Mac and iOS, Spokenly does a hell of a good job using local models to enable voice dictation. Some of my patients with ALS or Huntington's disease can dictate whole letters, even with broken speech and often long pauses between words.

Even better, it's all local; no AI credits or remote models are needed, and it works well on an M1 Pro or even an M1 iPad (though on the iPad only remote models are available, at $9/month).

Better yet, I could train it to use gender-inclusive language ;)


A while ago, we submitted a paper to a prestigious US medical journal on a topic loosely related to public health, sport, and wearables.

If you're not familiar (and bless your soul), academic rigor demands peer review, meaning two or more peers (known only as Reviewer 1, 2, 3, and so on) are recruited to read, critique, and check your work. This is fundamental to science, as is its anonymity: we don't know who reviews us.

Well, for this paper:

  • Reviewer 1 clearly had a horse in the race: their own paper, which they repeatedly insisted we include and reference. To be clear, "peer" is very loosely defined here; this person came from a different specialty, and their paper did not come close to adding value to the research we'd submitted.

  • Reviewer 2 didn't speak much English, so they used ChatGPT to review us. We know because they forgot to remove some of ChatGPT's own comments: "I will perform a comprehensive review of... Perfect! Here are the issues I found..." and similar.

Papers don't get published until all reviewers (two, in this case) sign off, and the journal didn't care about our complaints. So we fought an AI for weeks, getting confronted with hallucinated studies, misunderstood conclusions, a total clusterfuck of weirdness. A few weeks in, I gave up, wrote an agent for ChatGPT, and had the two duke it out. After the remote AI signed off, we submitted one last revision with the AI's suggestions removed but the sentence "including all of Reviewer 2's very accurate suggestions" left in - and the AI was happy.
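For anyone tempted to try the same trick, the loop itself is simple: feed the reviewer's latest comments to a model, send the generated rebuttal back, repeat until sign-off. A minimal sketch, assuming the standard OpenAI Python client; the model name, prompt, and function here are illustrative placeholders, not the actual setup we used:

```python
# Sketch only: pipe the reviewer's comments into a model and get a rebuttal
# back. Assumes the official `openai` package and an OPENAI_API_KEY in the
# environment; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are drafting a response to an anonymous peer reviewer. "
    "Address every point politely, flag citations you cannot verify, "
    "and keep the requested revisions minimal."
)

def draft_rebuttal(reviewer_comments: str, manuscript_summary: str) -> str:
    """Generate a point-by-point rebuttal to one round of review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Our manuscript, in brief:\n{manuscript_summary}"},
            {"role": "user", "content": f"Reviewer 2's comments:\n{reviewer_comments}"},
        ],
    )
    return response.choices[0].message.content

# Each round: paste the new comments in, submit the generated rebuttal
# through the journal portal, wait for the next round, repeat.
```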

Reviewer 1 was harder, but their demand was finally withdrawn after we threatened to add the paper along with a note that, while it provided no actual meat to the matter, it was cited because Reviewer 1 was its author.


Oh, we've got something to look forward to: in the US, Google Gemini now dials 911 (112 in the EU) on its own, without the user's permission, when it "thinks" the user or someone else is in danger.

Last night, after a short hike, I was strolling home nice and slow when my watch suddenly asked me, "It seems you had a hard fall. Call 911?" -- just like that, walking uphill, no fall whatsoever. At least I could still tap "Nope, all good" before misusing emergency resources. But with Google Gemini? Our emergency systems are already overloaded with almost 65% non-emergency calls; an acquaintance of mine at the police emergency call center loves telling stories like "My hamburger didn't have enough fries, arrest the cook" and the like. And as funny as that may be, it can actively cost lives.

Quite apart from the fact that I wouldn't trust an "intelligence" that thinks I work with cars for a living (no joke: Google believes I'm a mechanic because I once tried to fix my roommate's car) to know when I'm a danger to myself or others.

By the way, Google says that if this happens, you should leave a thumbs-down.
