what i'm reading§
what to do when you and your patient are fucking the same person§
This advice column from Parapraxis covers a situation we've all been in: we're a therapist in a fuck-triangle with our client. Apparently the best solution is actually to just not do anything.
what i'm listening to§
This has got me white-girl dancing in my kitchen.
what i'm watching§
The Sopranos On Transference§
The dramatic depiction of transference in The Sopranos got me thinking about what I think my therapist thinks about me. The version of me in the mind of the version of my therapist in my mind. Basically I just hope he thinks I'm normal and honest. Which I'm not.
what i'm working on§
undisclosed fiction-writing project§
I'm 7k words into the first draft of a fiction project. It's a concentration of effort that's totally new to me, and the knowledge that the bulk of this draft is going to have to be cut and rewritten is anxiety-inducing. Up until college I was a single-draft, no-copy-editing essay writer. Then I started printing my papers, editing by hand, then re-typing the whole thing. It's a humiliating process. But now I write a blog, so there's no shred of dignity for me to cling to.
something i liked§
a funeral§
I was hype as fuck to go to a funeral this week. It's an excuse to call out of work, dress up and eat free food. I want to become a funeral crasher.
something i hated§
peer or tool?§
ChatGPT was used for copy-editing support and locating a few references.
Buried in the Acknowledgements section of an academic article I read[1] is the above disclosure, leaving a very bitter aftertaste to an otherwise commendable work that celebrates the messy and human. I can't stop thinking about it.
There's a tension here: the narrow scope the author claims for their ChatGPT usage, i.e. "copy-editing support and locating a few references," downplays its role to just a tool, but the disclosure follows a list of human mentors and peers in the Acknowledgements section. If the use was so simple, why not also disclose the use of Microsoft Word's built-in spell checker, or the use of Google Scholar, or of a particular brand of laptop?
How can a glorified spell-check-cum-search-engine require both elevation and elision; both acknowledgement among human peers and reduction to "copy-editing support" and "[just] a few references"? Is it a tool? Or is it a research assistant?
Culpability Blast-radius§
The disclosure above is made necessary because of the degree to which LLMs can fuck with your intentions. The framing of LLMs as "intelligence" instead of a tool or, more accurately, a roulette wheel, opens their users up to liability.
This comment in a thread about browser-integrated AI pretty starkly lays out the liability dilemma with LLMs[2]:
...[I]f a user of a website clicks "summarize" on a comment of an article, such that Google's Generative AI Prohibited Uses Policy is violated, who is Google going to go after?
- The user - because they clicked "summarize", which initiated the action that violated the policy, on their machine.
- The author of the comment - because they wrote the content that violated the policy.
- The owner of the website - because they created the facility to feed the violating comment to the user's UA's LLM.
An LLM can violate the policy, but it can't be held responsible. Instead, culpability is a blast-radius for the nearest flesh-and-blood humans.
However, the Trump administration tried to invert this argument in the lawsuit against DOGE's unconstitutional cuts to humanities grants:
[The judge] rejected the government’s argument that there was no constitutional problem because any viewpoint classification was ChatGPT’s doing, and not the government’s. [Emphasis mine]
who's to blame?§
We're going to see the fallout for LLM-generated fuck-ups distributed on class lines: the overworked interns that turn to ChatGPT to write their error-filled deliverables get fired, while the CEO that skims the AI summaries of everyone's emails while he golfs gets another bonus.
The customer service agents that can't do anything lose their (shitty) jobs to AI that does less and lies confidently. Everyone's service requests go straight into the trash: life gets worse, stock price goes up.
The government and its podcaster bottom-feeders pump out industrial levels of AI slopoganda. Your grandma can't tell what's real; she gets convinced to panic-buy survival buckets of food; her bank account is subsequently emptied by AI Jim Bakker.
Teens and tweens generate deepfake nudes of their classmates; the kids get punished instead of the trillion-dollar companies pushing nudify apps to the front page of their app stores.
Confusion and exaggeration about the agency of LLMs benefits the trillionaires profiting from unfettered data-center build-outs and mass lay-offs. Delusional hype around 'AGI' justifies unprecedented venture capital spending for little to no results. In exchange, we get an unprecedented consent-manufacturing apparatus, a highly centralized internet under direct government-crony control, and the death of personal computing to make way for a total SaaS-based rentier tech economy.
a picture:§
my wall spaghetti§

You want to touch my wall spaghetti so bad it makes you look stupid.
Kahn, Ummni, "The Uh-Oh Test: when bad erotics excite you", Porn Studies, 1-13, https://doi.org/10.1080/23268743.2026.2634656 ↩︎
For context, this is in regard to a proposal from Google for a standardized implementation of built-in LLMs in browsers, which would require implementors to agree to Google's AI Policy. ↩︎