AI Slop Revisited
I got some friendly feedback on my previous "AI Slop" post. The sender felt it was unfortunate that I used AI to find a workaround and (more importantly) that the nature of the workaround could somehow come back to haunt me in the future.
I've been around long enough to know that if one person expresses an opinion, there are others "in the shadows" thinking the same thing. Here's my response to them.
vert.x is an abstraction over the Netty async I/O library. It does not have a direct method to detect close_notify. I used endHandler(), which triggers when a server closes a connection. With the vger Gemini server, TLS close_notify was sent but the server inexplicably left the connection open. This caused a minor annoyance.
Lucky for me, vert.x exposes a NetSocketInternal class. The javadoc even says "Extends to expose Netty interactions". Netty has a callback to detect close_notify. I used it. This isn't a low-level hack by any means and is future-proof as far as I'm concerned.
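For the curious, a minimal sketch of that kind of hookup looks roughly like this. It is not necessarily the exact code: it assumes the socket came from a Vert.x NetClient with SSL enabled, the import path for NetSocketInternal differs between Vert.x versions, and the extra handler's position in the pipeline may need adjusting so it sits after the SslHandler. The Netty event it listens for, SslCloseCompletionEvent, is what SslHandler fires when the peer's close_notify has been read.

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.ssl.SslCloseCompletionEvent;
import io.vertx.core.net.NetSocket;
import io.vertx.core.net.impl.NetSocketInternal;

// Sketch: detect TLS close_notify on a Vert.x socket via the Netty pipeline.
// Assumes `socket` is a TLS connection obtained from a Vert.x NetClient.
void watchForCloseNotify(NetSocket socket, Runnable onCloseNotify) {
    NetSocketInternal internal = (NetSocketInternal) socket;
    internal.channelHandlerContext().pipeline().addLast(
        new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
                // SslHandler fires this user event once the peer's close_notify
                // has been read, even if the TCP connection stays open.
                if (evt instanceof SslCloseCompletionEvent) {
                    onCloseNotify.run();
                }
                ctx.fireUserEventTriggered(evt); // keep propagating the event
            }
        });
}
```

The point being: nothing here reaches below the seam that vert.x itself documents for Netty interop, which is why I consider it a reasonable fix rather than a hack.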
Did I need Claude? I could have gone into the Netty documentation and eventually put two and two together - especially if this happened on more than a few capsules. I did do a fair amount of Googling though. vert.x is used extensively in high-throughput server software so it's likely this would be called a corner case in that universe. But who knows? Maybe I just used the wrong keywords.
So long story short, the fix is fine regardless of how I landed on it.
Jul 17 · 3 months ago
4 Comments ↓
"AI" is dividing people into boomers and zoomers faster than anything before. It's sad to see people who think "I don't like AI" isn't a one way ticket to Quiet Acres. I feel bad for them, but there's nothing you can do for people who decided their final stop on the train of technology has arrived.
I'm mostly mixed on AI; I'm mainly just annoyed with AI images being spammed in search results and on some websites on the https internet. I will accept AI for helping me with small things. I will have trust issues towards AI summaries, but that's it.
🐑 thezipcreator · Aug 09 at 08:55:
tbh I think the complete hate on everything LLM is kind of unwarranted. they're useful as like a search engine that understands semantics (but also sometimes gets stuff wrong), as well as some other things (like doing a task that you don't actually care to learn; I have a friend who will probably never be a programmer but gets ChatGPT to write scripts for Google products for them so that they don't have to do a ton of stuff manually. I think this is fine because like, the alternative is that they just don't have any way to automate it and they're stuck doing a repetitive task). I do understand and agree with environmental criticisms tho.
I mostly just use them to find the actual search terms I should look up. I think LLMs are not really a useful tool in this regard if you don't already know enough about the subject matter to do a "sniff test" on what it tells you. you don't have to be an expert, just informed enough to know what "sounds wrong" and not immediately believe what it tells you.
I think some people just hate anything nebulously called "AI" because that's just what's trendy now. which, tbf, I kind of get it; every company on earth seems to be shoving "AI" into people's faces all the time and it's really easy to hate on something so annoying that you don't care about. but that doesn't mean it's actually /correct/ to do so. I think when the AI bubble pops in a few years this general attitude will most likely fade away.
Yeah, if the AI trend bubble pops at some point, companies will move on to something else.