I’ve had captchas with more opportunity for input than the AI feedback form they gave me

Work had one of those “lunch & learn” type things recently on ChatGPT, which I skipped because 1) I’ve been exposed to enough hype on AI/LLM/ML whatever you want to call it, and 2) I’m not a religious man, but lunch downtime is sacred. A couple days later they followed up the lunch session with an email linking to the video of it that more or less read, “Did you see it? Did you see it?” And that email was quickly followed by yet another message with a survey asking for input/interest in our unit using ChatGPT in some way.


So like a tired parent with a hyperactive child obsessed with the latest craze, I put aside what I was doing to engage. The video was fine — an overview of what the various acronyms mean, how the technology works, and a couple demonstrations. The latter consisted of asking ChatGPT to generate Python code for a quicksort algorithm, then asking it to do the same in Rust. They then asked for an explanation of the same in the voice of Jean-Paul Sartre, Ayn Rand, and Kanye West. It managed all of this without a hitch. There was some discussion about how these LLM tools have a tendency to just make things up, as well as copyright issues, but it was only a 30-minute session so it didn’t get too deep.

On to the survey form, which was basically two yes/no questions and a pair of text fields that looked like they would accept maybe 50-100 characters. That may have been enough to write, “ChatGPT and generative AI are bullshit and I want no part of it,” (63 characters) but swearing is kinda frowned upon at work, so I just went on with my day.

But it’s possible I’ve been stewing over it ever since, because ChatGPT and generative AI are bullshit, I want no part of either, and yet they keep getting shoved in my face like the greatest thing since the invention of undo. I suppose I should pause for a moment to acknowledge that there are reasonable uses for machine learning, but I would argue they are limited and best leveraged under the hood. For example, Pixelmator Pro has image tools based on machine learning that are genuinely useful. They are also very different from how ChatGPT and its ilk are sold.

Let me blow through some basic things first. Yes, tools like ChatGPT can spit out code for self-contained, well-defined problems like the above. I would argue that this ability is no more remarkable than being able to punch “quicksort algorithm python” into a search engine, which will give you multiple results with working code and/or explanations. A web search can’t give you the results in the style of, let’s say, Dr. Seuss, but that’s just a parlor trick and not very practical anyway. It can also give you answers to non-technical questions. Many of these answers might even be right! Either way, you can get paragraphs of authoritative-looking text out of it. Tools like Stable Diffusion can even generate images that look vaguely like what you’re asking for as long as “surreal” is part of the mandate. You would be a fool to rely on it for professional work, though (<cough> Secret Invasion).
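For the curious, the demo’s ask is about as well-trodden as programming problems get. A minimal sketch of the kind of quicksort any search result (or ChatGPT) will hand you:

```python
def quicksort(items):
    """Return a new sorted list using the quicksort algorithm."""
    if len(items) <= 1:
        return items
    # Pick a pivot, then partition into less-than, equal, and greater-than.
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]
```

Textbook stuff, in other words — exactly the sort of self-contained problem with a thousand published solutions to draw from.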


Let’s take a recent example of something I had to do for work. One of the apps I’m responsible for has a feature that allows users to send a bulk email to pre-defined sets of recipients. There’s a mail merge function as well, so these messages can be customized. This basic system has been in place for roughly a decade now. This summer, I was tasked with altering this to allow a user to send a message to a group of users, and this message was to contain a hyperlink with a draft email to a different set of recipients. This called for a mail merge within a mail merge, complicated by the fact that one of these had to generate encoded content. The source code for this app is not publicly available, and the frameworks it relies on are either completely custom or not widely used. This is not a problem ChatGPT is going to be able to help with, any more than I can hit Google for a solution.
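The actual app and its frameworks aren’t public, so purely to illustrate the shape of the problem, here’s a hypothetical sketch: an outer merge whose output embeds a mailto: link, where the link’s subject and body are themselves the product of an inner merge and have to be percent-encoded to survive inside a URL. Every name and template here is made up.

```python
from urllib.parse import quote

def merge(template, fields):
    # Hypothetical stand-in for the app's real merge engine:
    # replace {name}-style placeholders with values.
    return template.format(**fields)

def build_message(recipient, inner_to, inner_fields):
    # Inner merge: the draft email that will live inside a mailto: link.
    subject = merge("Update for {project}", inner_fields)
    body = merge("Hello,\nPlease review {project} by {deadline}.", inner_fields)
    # The inner content must be percent-encoded to survive inside a URL.
    link = f"mailto:{inner_to}?subject={quote(subject)}&body={quote(body)}"
    # Outer merge: the message actually sent to this recipient.
    outer = "Dear {first_name},\n\nClick to draft your email: {link}\n"
    return merge(outer, {"first_name": recipient, "link": link})

msg = build_message("Alex", "team@example.com",
                    {"project": "Q3 report", "deadline": "Friday"})
print(msg)
```

The sketch is trivial; the real work was doing the equivalent inside a decade-old, closed-source system with its own templating conventions — which is precisely the context an LLM has never seen.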

Let’s also consider the tendency for tools like ChatGPT to just make things up. I would argue this is not just one of these tools’ greatest weaknesses, but also their greatest danger. If you’re not familiar with the lawyer who used ChatGPT as a research tool to his regret, you should read up on it. In the lunch session, someone did bring this up, and they suggested you should basically ask the tool, “are you sure?” But can you trust that answer any more than the first? The problem is, ChatGPT will sound absolutely confident in its responses, whether there is any truth to them or not. And you can’t really ask for citations without having to then follow up to verify those are real, too. Here again, I would argue you’re better off with a more traditional tool like Google that directs you to source materials instead of synthesizing content and spitting it out context-free.

What really gets me about all this isn’t just that the tools are more limited than the hype would have you think, or that they can be dangerous if you’re not careful. For the limited results we get from them, neither the infrastructure required to make them work nor the changes they would in turn demand from users are sustainable. Remember, in order for something like ChatGPT to work, it not only has to have vacuumed up massive amounts of content, it has to have humans annotating the data in order for the computer to do anything intelligent with it. Further, this data intake and annotation has to be ongoing, because spoiler alert, the world is a dynamic place. But surprise, annotation doesn’t pay well. So what’s already happening? People are using AI to help train AI. But if AI can’t be trusted to get things right…it’s like trying to build a skyscraper by taking girders from the basement to support the 20th floor.

And on the other end, in order to use ChatGPT well, you have to be skilled in how to prompt it, to the extent there are university courses on prompt engineering.

And I’m not even getting into copyright or the ethics/legality of essentially hoovering up the entire Internet without any acknowledgement, let alone fair compensation, to the original creators. Last I heard, OpenAI won’t even disclose where its data set comes from.

So yeah, I don’t think these tools are going to stick around in any kind of generalized form, and I’ll be happy when they go back to being the province of specialized tools and research. Hopefully they’re already on their way. In the meantime, I’ll just stick to what I’m doing, thanks.


Buh-Bye, Facebook

Been meaning to do this for a long time, and finally pulled the trigger. I suppose technically the account won’t go belly-up for a couple weeks, but as long as I don’t log in again my Facebook account should be deleted. Of course, given past history (just Google “Facebook privacy issues” for a Russian novel’s worth of stories) it wouldn’t surprise me if Zuckerberg & company hang on to my data, but at least they won’t be getting any more of it.

Of course, I’m well aware that Facebook is not the only entity out there tracking my internet habits for its own gain. However, there are two things about Facebook that have led me to sever my relationship with it. First, the company is notorious for violating user privacy. I know of no other company that treats its users’ personal data with as little respect. Second, what I get out of Facebook is frankly not worth the potential downside. There’s not much that happens on Facebook that I can’t get either from my own (sadly neglected) blog or from Twitter. Unlike Google, which provides a number of highly valuable services, Facebook has always been nothing more than a diversion. So…buh-bye.


Always read the release notes

New version of BBEdit! Among the changes is this:

The ponies learned that their saronite shoes were not RoHS compliant and had a huge carbon footprint. So, they’ve switched to Five Fingers and Birkenstocks. They’ve also been studying the post-apocalyptic arts, because fortune favors the prepared.


Stay away from French servers

Mon dieu…

France’s new data retention law requires online service providers to retain databases of their users’ addresses, real names and passwords, and to supply these to police on demand. Leaving aside the risk of retaining all this personal information (identity thieves, stalkers, etc — that which isn’t stored can’t be stolen and leaked), there’s the risk of requiring providers to store plaintext unhashed passwords, as Bruce Schneier points out.

“unhashed” of course meaning “unencrypted”. “In the clear”. “Ripe for the picking.” This idea is, how you say, “très stupide.” I can’t imagine that tech companies all over France aren’t now looking to move their operations elsewhere.
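To illustrate why this matters: with salted password hashing, a provider stores only data from which the original password can’t be recovered — which is exactly what a plaintext-retention mandate forbids them from doing. A minimal sketch using Python’s standard library (the iteration count and example passwords are arbitrary):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store only the salt and the derived hash. The password itself
    # cannot be recovered from these -- unlike a plaintext ("unhashed") store.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Re-derive the hash from the attempted password and compare in
    # constant time; no plaintext ever needs to be kept on disk.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("tres stupide", salt, digest))                  # False
```

A leaked database of salts and hashes is an inconvenience; a leaked database of plaintext passwords is a catastrophe. The law as described rules out the safer design by fiat.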


Bring it on, Fireball

I doubt I’ll ever need this, but it looked so easy, how could I not implement a cache system for the blog?
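The core idea really is small. A minimal sketch of a file-based page cache (the directory name and freshness window here are arbitrary, and my actual setup may differ): render a page once, save the HTML to disk, and serve the file until it goes stale.

```python
import os
import time

CACHE_DIR = "cache"   # hypothetical cache location
MAX_AGE = 300         # seconds before a cached page goes stale

def cached_page(slug, render):
    """Serve a rendered page from disk when fresh; otherwise re-render and store it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, slug + ".html")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
        with open(path) as f:
            return f.read()
    html = render()  # the expensive part: database queries, templating, etc.
    with open(path, "w") as f:
        f.write(html)
    return html
```

The expensive render runs once per freshness window no matter how many readers show up — which is the whole trick.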


iPad 2 Smart Cover Teardown – iFixit

Yes, they really tore apart the cover, which is apparently a one-way trip. It’s interesting to see how the magic is done; there are more magnets than I would have expected.

…and yes, “magic.” I think Apple’s been a little silly with their prolific use of the word lately, but the cover does at first appear magical to me.


BBEdit not the same on the app store?

There’s an update to BBEdit today, in which they list the following in their fixes/notes (emphasis added):

If “Make Writeable” fails because you have insufficient (non-elevated) privileges, BBEdit will flag the file for authenticated saving instead of refusing to let you do anything. (Note: This does not apply to App Store versions of BBEdit, which are not able to perform authenticated saves.)

I’ll admit I haven’t paid a lot of attention to the App Store yet, but this concerns me. Are there functionality differences between what the App Store will allow and what might be possible outside of it? If so, that’s a problem.

Politics Tech

Steve Wozniak to the FCC: Keep the Internet Free

Some excerpts:

The Internet has become as important as anything man has ever created. But those freedoms are being chipped away. Please, I beg you, open your senses to the will of the people to keep the Internet as free as possible. Local ISP’s should provide connection to the Internet but then it should be treated as though you own those wires and can choose what to do with them when and how you want to, as long as you don’t destruct them. I don’t want to feel that whichever content supplier had the best government connections or paid the most money determined what I can watch and for how much. This is the monopolistic approach and not representative of a truly free market in the case of today’s Internet.

. . . .

I frequently speak to different types of audiences all over the country. When I’m asked my feeling on Net Neutrality I tell the open truth. When I was first asked to “sign on” with some good people interested in Net Neutrality my initial thought was that the economic system works better with tiered pricing for various customers. On the other hand, I’m a founder of the EFF and I care a lot about individuals and their own importance. Finally, the thought hit me that every time and in every way that the telecommunications carriers have had power or control, we the people wind up getting screwed. Every audience that I speak this statement and phrase to bursts into applause.

Someday the government will begin siding with us instead of corporations. This is not that day.


Perhaps “OS X Poacher”?

Multiple sources are reporting a “Back to the Mac” event Apple is hosting. There’s basically no information on content beyond it being about Macs (duh) and OS X, but what I find interesting is the teaser image (available at the link) indicates this revision’s cat may be a lion. This is probably inevitable given the last decade of cat-based OS names, and I’ve been wondering when they would use it. The question I have is, if “Lion” is indeed the name of this release, what will Apple do for the next release? Where do you go after you’ve used the King of the Jungle that isn’t a step down?


Politics Tech

Google and Verizon sell out the Internet

Not cool:

Google and Verizon, two leading players in Internet service and content, are nearing an agreement that could allow Verizon to speed some online content to Internet users more quickly if the content’s creators are willing to pay for the privilege.

The charges could be paid by companies, like YouTube, owned by Google, for example, to Verizon, one of the nation’s leading Internet service providers, to ensure that its content received priority as it made its way to consumers. The agreement could eventually lead to higher charges for Internet users.

Such an agreement could overthrow a once-sacred tenet of Internet policy known as net neutrality, in which no form of content is favored over another. In its place, consumers could soon see a new, tiered system, which, like cable television, imposes higher costs for premium levels of service.

Emphasis added. As someone who both makes a living on the Internet and enjoys its fruits in multiple ways, I find it beyond disappointing that Google would be a party to this. So much for “don’t be evil.”