It’s a rare rainy day in Los Angeles, so there’ll be no walks on the beach, no shopping excursions to the fancy outdoor mall, and (for us) no eating at an outdoor table at a local restaurant.
So I’m thinking about what everyone (at least anyone heavily invested in the stock market) is apparently thinking about: AI.
It’s now part of my job at work: using it, researching it, testing it, and branding it. I say branding it because none of my job dealing with “AI” is actual artificial intelligence (none of yours is either), but we’re forced to call our projects “AI optimization of blah blah blah” because the tools we’re using call themselves AI and our big bosses call everything AI. Some of it is just pure automation (just like the old IFTTT logic traps) and the rest is just versions (Copilot, etc.) of this weighted rank search response algorithm stuff that OpenAI wants us to call “AI.”
When they say they “don’t know” how it works, they mean they don’t have the time to view or categorize the billions of ranked weights the algorithms have assigned to the characters, phrases, words, and other symbols used in language, based on past interactions and training materials. It’s kind of like how my engineer dad would simmer in quiet indignation when teenage me said I didn’t know how a combustion engine (or any other piece of human engineering) worked while he was tearing one apart to fix what my bad driving had wrought upon ye olde Taurus. I didn’t think engines were magic, I just didn’t care to learn the intricacies of something so complicated.

And yet, despite every AI company CEO admitting on podcasts that “we don’t know how it really works,” when asked why Grok is suddenly celebrating Nazis they can go run a query for the weight of Nazi materials and tweak it. Because there’s no magic, there’s no budding sentience or understanding, it’s just code again. Code is viewable. It’s executing on hard drives, moving bits around, all of which is tracked. There is still debate in the scientific community about how specific thoughts form in the human brain. There is no debate about how bits are moved in RAM or etched into discs.

If they truly didn’t know how it worked, they wouldn’t be trying to nationalize chip manufacturing or secure entire hemispheres’ worth of electricity for it; you can’t calculate the need for something you don’t understand. If you dig into this you can even find quotes claiming that every time you say “thank you” to ChatGPT it costs Sam Altman 5 cents (or whatever, I’m sure it varies) for the algorithm to read and respond to it. We don’t know exactly how many calories thinking “thank you” inside your own brain uses. OpenAI is building entire cities of computational power based on knowing how much it costs their chat agent to respond to one.
I’ll save the stuffier, longer monologue on how the current use of the term “AI,” as our sci-fi literary forefathers intended it, is completely bogus for another time. Instead I’ll provide you with the latest example of how this “AI” can’t outthink a 4-year-old. You see, I’ve been testing it this year, trying to find use cases where it’s actually useful for speeding up tasks, and where it (and this is most things) is a drag on productivity IF YOU CARE ABOUT FACTS. That last bit is super important, because there are folks in my life now who use these things as a portmanteau of therapist and Wikipedia… a therapedia? Wikiapist? And time and time again these things provide bad advice or completely counterfactual narratives, but that only matters to folks who check facts [insert obvious reference here to Trump’s Theory of the Biggest Most Special and Beautiful Relative Truth (F=tc^2), where facts and opinions are interchangeable due to the speed of disinformation in closed communication systems].
So every now and again I give it (there are multiple its) a simple task, and it fails spectacularly. This morning I asked Google Gemini (search) to tell me how many hours had elapsed between last Tuesday and right now. Note that I asked it in a conversational style, not a Wolfram Alpha style (which WOULD have gotten the correct answer – it should be noted here that Wolfram Alpha has had a reliable “AI” for number crunching for… 15 years?).
Side note: I just tested this at Wolfram Alpha to stay honest and, sadly, I have to report that Wolfram has embedded AI now and… doesn’t understand calculation requests anymore, or maybe only does if you pay them – the website layout makes this totally unclear. Asking for a calculation now results in the AI spitting out dictionary results for the word “calculation”… sigh. More research indicates the “natural language AI” they’ve engaged prefers to grab onto any request not containing mathematical symbols, dump it into the “language request” bucket, and not even bother calculating anything. This is actually a WORSE result than Gemini’s. We’ve heard about the “enshittification of the internet,” but this is AI enshittifying the internet.
Using simple math a child could deduce in their head, I know that the time between 5:15pm on a Tuesday and 10am on the following Saturday is (3×24 + 17, give or take the quarter hour) 89 hours.
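For the extra-skeptical, the same back-of-the-napkin math can be checked in a few lines of Python. The specific calendar dates below are hypothetical stand-ins I picked because they happen to fall on a Tuesday and a Saturday:

```python
from datetime import datetime

# Hypothetical stand-in dates for "last Tuesday at 5:15pm" and
# "Saturday at 10am" (July 8 and July 12, 2025 fall on those weekdays).
start = datetime(2025, 7, 8, 17, 15)   # Tuesday, 5:15 PM
end = datetime(2025, 7, 12, 10, 0)     # Saturday, 10:00 AM

hours = (end - start).total_seconds() / 3600
print(hours)  # 88.75 — just shy of 89, and nowhere near 64
```

No weights, no training data, no data center: one subtraction and one division.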
Just look at this result from Gemini and then try to convince me (like every podcast does) that we’re going to have AGI in 6 months, making us cancer-free and immortal…

The hilarious part is the very detailed explanation that hallucinates Friday as falling somewhere in the week before Wednesday but AFTER Saturday. It doesn’t explicitly say that, but that’s how I’m interpreting “Note: Friday is in the future relative to the current time.” I’ll be damned if the explanation of the calculation doesn’t get worse and worse the more you read it. At one point it calculates correctly and then just decides to drop 25 hours from the final answer.
What is most disturbing, though? What they’re calling AI right now is just a weighted response model, and it gives most of the weight to what it believes the human wants to see, not to what’s factual. So I have to ask myself now why Google (which knows everything about me through my Gmail, Pixel phone, and so on) thinks Fridays don’t exist for me.
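To caricature that point in code — and this is a toy illustration, not how Gemini or any real model actually scores answers — imagine a system blending a “factuality” signal with an “agreeableness” signal. If the tuning favors the latter, the pleasing wrong answer beats the correct one:

```python
# Toy sketch (hypothetical names and weights, not a real model):
# each candidate answer carries a factuality score and an
# "agreeableness" score; the final pick maximizes a weighted blend.
candidates = {
    "89 hours": {"factual": 1.0, "agreeable": 0.3},  # right, but boring
    "64 hours": {"factual": 0.0, "agreeable": 0.9},  # wrong, but pleasing
}
w_fact, w_agree = 0.3, 0.7  # hypothetical tuning that favors pleasing output

best = max(
    candidates,
    key=lambda a: w_fact * candidates[a]["factual"]
                + w_agree * candidates[a]["agreeable"],
)
print(best)  # 64 hours — the wrong answer wins under this weighting
```

Flip the weights and the factual answer wins again; the point is that nothing in the mechanism cares which one is true.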
(yes, I’m aware Gemini AI results are a mix of calculations and search results being merged and that caused part of this, but that’s not an excuse – this is a technology that’s being sold as something that “just knows the answer” when it’s doing a worse job than Bing 20 years ago)
Also, as a thought experiment, I would posit to you that THIS period of time, not AGI or the singularity, is the most dangerous. A general using an “AI” that can hallucinate Nazi platitudes and/or just tell him what it thinks he “wants” to hear is a far larger existential risk to humanity TODAY than a godlike machine in the future that may (for no reason at all) turn malevolent. And if the nukes launch tomorrow because Grok convinced someone a first strike was the right thing to do, remember who signed “Removing Barriers to American Leadership in Artificial Intelligence” to scrap Biden’s AI safeguards.
And you know what’s weird? I keep going back to that number in my head and re-calculating. I’m afraid to hit “publish” on this and then realize I messed up a simple math calculation. The AI hype machine is so loud that I’m gaslighting myself into believing I must be wrong… maybe it is 64 hours? It can’t be. But is it? Do Fridays actually exist? It’s not Friday right now, so according to young earth creationists maybe the blaspheming Calendarists are just making it up to test our faith, right? And they forgot to program the fake calendar into Gemini, just like they did with the moon landing and our blessed flat earth?
(going off now to see which AI I can trick into agreeing with me that the earth is flat…)