
On Losing Our Senses

Geek humor from XKCD Comics

This month’s offering is on the longish side, so my blurb will be brief: I was able to lay my hands on printed copies of my forthcoming novel, admire its two-tone cover, leaf through it and smell the paper and ink. Grant me that satisfaction, even though you can’t have it until September. While waiting, you can order an advance copy where you buy books or from my bookstore.

“Cannot you see, cannot all you lecturers see, that it is we that are dying, and that down here the only thing that really lives is the Machine? We created the Machine, to do our will, but we cannot make it do our will now. It has robbed us of the sense of space and of the sense of touch, it has blurred every human relation and narrowed down love to a carnal act, it has paralyzed our bodies and our wills, and now it compels us to worship it. The Machine develops – but not on our lines. The Machine proceeds – but not to our goal. We only exist as the blood corpuscles that course through its arteries, and if it could work without us, it would let us die.”
― E.M. Forster, The Machine Stops

You might not notice Artificial Intelligence applications creeping into everyday life. It’s happening by stealth and quickly metastasizing across business models and media content. You also might not notice how AI diminishes cognitive capabilities, something that technology does and has always done to us humans. Like, having typed and used calculators for so long, I can barely read my own handwriting and mess up long division. But thanks to keeping in practice, I’m still able to navigate on my own without a disembodied voice telling me where to turn.

We’re told AI’s rise is inevitable, along with every other tech innovation that has been force-fed to us. At least that’s the thinking among tech gurus and titans, who conveniently gather its fruits to their profit. And when stuff goes wrong, what do they do? Well, develop new tech (e.g., “patches,” “updates,” “new features,” “premium services”) to fix problems tech has caused, and then laugh all the way to the bank. Wash, rinse, and repeat.

The good news is that there seems to be a lot of healthy skepticism out there about AI applications — and big tech in general — but that’s not good enough. The bad news is that there are ever more AI deployments coming online to assault the agency and dull the minds of millions and millions of unprepared people.

Back to the Future

Don’t say you weren’t warned. Step back some 115 years to see what might await us. E.M. Forster’s prescient 1909 novella The Machine Stops, quoted above, imagined a future in which toxic air pollution forced humanity into underground cities connected by tunnels, with vents supplying purified air from above ground. By then, humans have degenerated into pale, wan creatures who mostly confine themselves to comfy cells where they take meals from packaged food delivered via tubes and communicate via audio-visual machines. Theirs is a life not of action (Forster doesn’t mention gyms), but of ideas, except that novel ideas are actively discouraged by educators. Operations are managed by anonymous committees remote from the populace. Nobody seems to know who’s in charge, or to care. If you’re interested, see my essay on Forster and his novella at The Technoskeptic.

Although Forster essentially predicted voicemail, email, word processing, and Zoom-like apps, his was still an analog world. Machine intelligence had no part in it. That started happening in the 1960s — within his lifetime, actually.

Back then, I occasionally conversed with an AI psychotherapist at MIT over what was then called the Arpanet. Eliza (named for Eliza Doolittle) was programmed by MIT Prof. Joseph Weizenbaum (1923-2008) in SLIP. He wanted to see if humans would find their interactions with it at all like a real counseling session. It was pretty responsive, but its stock of replies quickly became evident, and it had the annoying habit of replying to a question with a question — but then, what therapist worth their salt isn’t annoying?
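For the curious, the trick behind Eliza was simpler than it felt in conversation. The sketch below is my own illustration in Python (not Weizenbaum’s SLIP source, and the patterns and replies are invented for the example): match a keyword, reflect the user’s own words back at them, answer a question with a question, and fall through to stock replies when nothing matches.

```python
import random
import re

# Each rule pairs a pattern with reply templates; {0} is filled with the
# user's own (pronoun-reflected) words, echoed back therapist-style.
RULES = [
    (re.compile(r"i need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)\?$"),
     ["Why do you ask?", "What do you think?"]),  # answer a question with a question
]

# The "stock of replies" that quickly becomes evident to the user.
STOCK = ["Please go on.", "Tell me more.", "How does that make you feel?"]

# Pronoun reflection, so "my coffee" comes back as "your coffee".
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(text: str) -> str:
    return " ".join(REFLECT.get(w.lower(), w) for w in text.split())

def eliza(utterance: str) -> str:
    for pattern, responses in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return random.choice(responses).format(*(reflect(g) for g in m.groups()))
    return random.choice(STOCK)  # nothing matched; fall back to canned sympathy
```

Type “I need my coffee” and it answers with some variation of “Why do you need your coffee?” — no understanding anywhere, just pattern matching and mirrors, which is exactly why Weizenbaum was alarmed that people confided in it anyway.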

According to Ben Tarnoff’s 2023 profile of Weizenbaum in The Guardian, Eliza confirmed to Weizenbaum that it was best to confine computers to tasks that only required calculation. But thanks in large part to a successful ideological campaign waged by what he called the “artificial intelligentsia,” people increasingly saw humans and computers as interchangeable. As a result, he said, computers had been given authority over matters in which they had no competence. (It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one.) That sense of interchangeability led humans to conceive of themselves as computers, and so to act like them, mechanizing their rational faculties by abandoning judgment for calculation, mirroring the machine in whose reflection they saw themselves.

Eliza was a one-trick pony, unlike today’s AI engines, none of which could function without vast amounts of data swirling through their voracious Nvidia-powered innards. Much of the data used to train them is scraped from the Web, including our search behavior and interactions with websites we visit. For all we know, hackers could have filched training sets from businesses, schools, hospitals, and governments. So, do AIs keep dossiers on us? For sure, HR departments are using them to mine résumés and make judgments about us, just as Weizenbaum worried.

Literary agents and editors use AIs to sift through their slush piles, and for all I know, artists’ portfolios. Artists and writers are up in arms, not about that, but because their copyrighted work is being gobbled up as AI training data. (Here’s a rundown of lawsuits.) Regarding images, see Gary Marcus’s Substack blog, which shows examples of AI art from generic prompts that blatantly duplicate copyrighted cartoon characters.


AI isn’t the only technology to which we’ve ceded human agency. Consider what GPS (the US Military’s Global Positioning System) has done to us. Most people vaguely understand that GPS is an earth-satellite-based technology used for route planning. When you move around, the GPS chips in your phone or car can fix your whereabouts within a meter or two, which gets recorded and shared who knows where.

If you rely on GPS wayfinding apps like Google Maps, beware. There may be road closures or hazardous situations ahead it doesn’t know about that you might want to avoid. What are called geospatial attributes — data items that say what’s in a given location — are often missing or outdated. The missing information can include washed-out bridges, detours, unimproved roads, and emergency street closures. As a result, too many fearless GPS navigators run afoul of reality; people drive off boat ramps or bridges under repair, often ignoring warning signs posted to deter them. A study of 158 “death by GPS” incidents reported in the press (not all deadly but all causing physical damage) concluded that GPS navigation doesn’t give users good ways to tell what data is real and what’s misinformed, and so shouldn’t be taken as gospel.

But even if GPS never gets you into an accident or incident, it is robbing you of your sense of space. Hike a trail with your eyes on a map and it impedes your perception of the environment you are walking through. GPS navigation has that effect on drivers, too, and in time, relying on it will cause your geospatial IQ to atrophy. In general, whatever we rely on computers to calculate becomes that much harder to do with our brains, as handwriting has. As with muscle fitness, it’s use it or lose it.

Seeking Provenance

Harkening back, The Machine Stops depicted people who regarded themselves as writers or critics regurgitating received wisdom and spending most of their time critiquing other writers doing the same, recursing on culture. Besides people hooked on social media, is that not what chatbots like ChatGPT and image generators like DALL-E do — just riff on received wisdom?

You might say, well, AI has many shortcomings, but it’s only going to grow more competent. And that will hasten the day when it’s ubiquitous, driven by AI companies seeking profits and clients seeking economies. The artificial intelligentsia will say it’s good for us, good for the economy. Funny how those bespoke benefits always elide the messy details of who wins, who loses, and what is lost, perhaps forever.

Let’s face it: If we can’t discern whether texts, data graphics, images, and videos displayed to us are created by human or artificial intelligence, pretty soon we’ll say, “It doesn’t matter,” just as we tend to accept what spews from the mediaverse not as simulacra, but as reality itself.

We may be hapless, but not helpless. As much as we need to fight the theft of our senses and culture and resist new conveniences, we need to be ever mindful of tech’s threats to our personhood. One form of mindfulness is to practice navigation by fixing the route to your destination in your mind. Then turn off the voice that tells you where to go. Look at the road and what’s along it. Taking note of street names, landmarks, and intersections will help you navigate back home. It’s easy, and fun too.

Another thing you can do when you’re online is ask yourself whether the entity presenting information or conversing with you is human. That is, conduct the Turing Test as the examiner, early and often. Judge your interlocutor’s provenance whenever you’re online. You deserve to know whom you’re talking to, and you’d best know the difference.

Visit Perfidy Press Bookstore

You can find this and previous Perfidy Press Provocations in our newsletter archive. Should you see any you like, please consider forwarding this issue, or links to past ones, to people who might like to subscribe — and thanks.

Visit Perfidy Press

And you can unsubscribe here.

