
The End of the Beginning of the End of Civilization as We Know It (Part 1)?

I’m sorry for the “downer” of a title. Officially, this column should be called “Gruesome Gambols Gripping Generative AI (Part 4).” However, the more I think about things, the more despondent I’m becoming. As Winston Churchill famously said during the Lord Mayor’s Luncheon at Mansion House on 10 November 1942: “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”

Of course, Churchill was talking about the possibility that the tide had turned with respect to WWII and that there might be a light at the end of the tunnel (I never metaphor I didn’t like). By comparison, I’m postulating that the light at the end of the tunnel may well be an AI-powered bullet train of destruction that’s barreling our way (sorry Winston).

If you are new to this mini-series and interested in learning how we got here, please feel free to peruse and ponder Gruesome Gambols Gripping Generative AI, Part 1, Part 2, and Part 3. The rest of us will be happy to twiddle our thumbs while waiting for you to catch up.

Back so soon? Finished already? That’s great (if a tad quicker than expected). Now, if you’re sitting comfortably, we’ll begin…

Let’s start by considering a small selection of artificial intelligence (AI)-related items, chosen more or less at random, with a particular emphasis on generative AI in the form of ChatGPT and tools of that ilk.

How much energy does it take to perform a typical Google search? I have no idea. Ironically and paradoxically, if you perform a Google search on this selfsame question, you will be presented with a cornucopia of results, none of which are the same (excluding those that are based on previous references, most of which are themselves outdated and/or unfounded). The complexity of this problem is discussed in the Full Fact column How Energy Intensive is a Google Search? The reason for my pondering this point is that a lot of people are now using ChatGPT as a surrogate search engine. Furthermore, according to Scientific American, companies with real search engines—Baidu, Google, Microsoft—are moving to use tools similar to ChatGPT to transmogrify internet searches into conversations (see AI Chatbots Are Coming to Search Engines. Can You Trust Them?). Quite apart from the element of trust (or lack thereof)—a topic to which we will turn our attention in a moment—how much energy will this new form of search consume?
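For the morbidly curious, here’s a back-of-envelope calculation in Python. I must stress that every number in it is an assumption: Google’s oft-quoted (and much-disputed) 2009 figure of roughly 0.3 Wh per search, the widely bandied-about claim that an LLM-style query costs around 10x that, and a rough order-of-magnitude guess at daily search volume. Treat the output as illustrative, not gospel.

```python
# Back-of-envelope comparison of classic search vs. LLM-style search energy use.
# ALL figures below are assumptions drawn from commonly cited but disputed
# estimates; swap in your own numbers and see how the conclusion shifts.

WH_PER_GOOGLE_SEARCH = 0.3   # assumed: Google's own 2009 estimate (disputed)
WH_PER_LLM_QUERY = 3.0       # assumed: ~10x a classic search (widely quoted)
SEARCHES_PER_DAY = 8.5e9     # assumed: rough order-of-magnitude daily volume

def daily_energy_mwh(wh_per_query: float, queries_per_day: float) -> float:
    """Total daily energy in megawatt-hours for a given per-query cost."""
    return wh_per_query * queries_per_day / 1e6

classic = daily_energy_mwh(WH_PER_GOOGLE_SEARCH, SEARCHES_PER_DAY)
chatty = daily_energy_mwh(WH_PER_LLM_QUERY, SEARCHES_PER_DAY)

print(f"Classic search:   {classic:,.0f} MWh/day")
print(f"LLM-style search: {chatty:,.0f} MWh/day ({chatty / classic:.0f}x)")
```

Under these (hand-waving) assumptions, converting every search into a conversation multiplies the daily energy bill tenfold, which rather concentrates the mind.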

AI is increasingly taking on the role of digital assistant. For example, I invariably commence my day by asking my office Alexa to remind me five minutes before the start of each scheduled conference call. This ability has proven itself to be a lifesaver because I lose all track of time once I’ve started working on something and I’m mind-melding with my computer. But “Alexa is so early 21st century, my dear” (this seems strange to say considering the first incarnation of this device was released in 2014, which is only nine years ago as I pen these words). AI-enabled assistants are now making their presence felt in myriad ways, such as in video conferencing applications like Google Workspace, Microsoft Teams, Salesforce’s Slack, and Zoom. The growing list of tasks these AIs can perform (in conjunction with Mail, Calendar, and Task List integrations) includes generating a text transcription of the call (where each speaker is identified in the text), generating a summary of the call, and assigning (and communicating) action items.
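Just to make things concrete, here’s a minimal sketch of the transcribe-then-summarize pattern these assistants employ, written against the OpenAI Python SDK (my choice for illustration, not necessarily what any of the aforementioned products use under the hood). The file name meeting.mp3 is hypothetical, and note that identifying who said what requires a separate speaker-diarization step that whisper-1 alone won’t perform.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the meeting recording (no speaker identification here;
# real products layer a diarization step on top of this).
with open("meeting.mp3", "rb") as audio:  # hypothetical recording
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: ask a chat model for a summary and action items.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Summarize this meeting transcript and list the action "
                    "items, each with an owner, as bullet points."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```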

Using ChatGPT (or one of its kissing cousins), some of these AIs also aid you in generating responses to queries from your colleagues. Returning to the concept of trust, all I can say is that you had better read such responses carefully before pressing the “Send” button. Another thought that strikes me is that the original query you are asking your AI to answer may itself have been generated by your colleague’s AI (and people wonder why I drink).

The problem is that chatbots do not always tell the truth. Sometimes this isn’t their fault because they synthesize their answers from what they’ve read on the internet (no, mother, just because it’s on the internet does not mean that it’s true). Other times, it seems as though these chatbots are employing an impish sense of humor. As was mentioned in Part 2, for example, a lawyer called Steven Schwartz asked ChatGPT for help in writing a brief regarding a personal injury case for one of his clients. Steven also requested citations to previous court decisions that would back his case. ChatGPT did as requested. Unfortunately, rather than cite real cases, it simply made its citations up (the judge was not amused).

Or how about the time, as reported by The Washington Post, that ChatGPT Invented a Sexual Harassment Scandal and Named a Real Law Prof as the Accused? To add insult to injury, the AI chatbot even went so far as to cite a fake Washington Post article as evidence. At least if a person defames you, you have recourse in the courts (although you might want to check that your lawyer isn’t using ChatGPT to prepare your case), but what can you do if a random chatbot says something untrue about you?

Before we proceed further, a quadruple of additional snippets that recently flitted in front of my orbs are as follows. (1) GM is working on a ChatGPT-like digital assistant for cars. (2) A hobbyist has started using ChatGPT for home automation (a minimal sketch of this idea appears below). (3) A survey reports that 85% of business leaders would let a robot make their decisions. (4) Another survey notes that, when provided with access to a generative AI chatbot, under-achieving customer support agents experienced much higher productivity gains than their over-achieving colleagues (this is, of course, great news for the business owner, and it’s good news for the under-achievers, but it’s not-so-good news for the over-achievers who have been working their little cotton socks off).
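Regarding snippet (2), the general idea is to have the LLM translate free-form requests into structured commands that a home-automation controller can act on. What follows is a hedged sketch of my own devising (I have no idea how the hobbyist in question actually wired things up); the device names are hypothetical, and any real controller had better validate the model’s output against an allowlist before flipping any relays.

```python
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Translate the user's request into a home-automation command. "
    'Reply with JSON only: {"device": ..., "action": ..., "value": ...}. '
    'If the request is not about home automation, use device "none".'
)

def to_command(request: str) -> dict:
    """Ask the model to convert a free-form request into a structured command."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": request}],
    )
    # Models are not guaranteed to emit valid JSON; production code should
    # catch json.JSONDecodeError and reject anything not on a device allowlist.
    return json.loads(response.choices[0].message.content)

cmd = to_command("It's a bit gloomy in the den, can you fix that?")
# e.g. {"device": "den_lights", "action": "set_brightness", "value": 80}
# (hypothetical device name; a real controller validates before acting)
print(cmd)
```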

Did you see the article on Vice titled Palantir Demos AI to Fight Wars but Says It Will Be Totally Ethical Don’t Worry About It? The sub-title to this article informs us that, “The company says its artificial intelligence platform will integrate AI into military decision-making in a legal and ethical way” (well, no problems there then). All we need now is for AIs to start designing and building their own robots, and then we will be really… unhappy. Did you just think “Skynet” and “Terminator”? If so, you may not be delighted to hear that, only a couple of days ago as I pen these words, we were informed that ChatGPT Has Designed Its First Robot (let’s break out our party hats while making sure they are in the form of aluminum foil deflector beanies, which are guaranteed to confuse AIs and can shield our brains from “most electromagnetic psychotronic mind control carriers”).

There comes a time in The Tommyknockers by Stephen King when a character called Bobbie, who is in the process of being transformed into an alien, creates an apparatus that allows her to control an electric typewriter with her mind. The Tommyknockers was written in 1987, at which time such technology seemed the stuff of science fiction (to be fair, it was in a science fiction book). Well, according to the AI Makes Non-Invasive Mind-Reading Possible By Turning Thoughts Into Text column in The Guardian, “An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time.”

“What about brain implants,” you ask. That’s amazing, because I was just coming to that. Who amongst our number could forget that 1972 classic, The Terminal Man, by Michael Crichton? Once again, this whole idea seemed like the stuff of science fiction and, once again, it’s today’s real-world technology. Individuals and organizations exploring brain-computer interfaces include Elon Musk (whose success with Twitter instills underwhelming confidence when it comes to accessing my brain), Bill Gates, Mark Zuckerberg, Jeff Bezos, CTRL Labs, Synchron, MIT, and the University of California, San Francisco, to name but a few. Most recently, as reported by Wired (which is unfortunately named in this context), the guys and gals at Precision Neuroscience announced a brain-computer interface that sits on top of the brain (as opposed to in it).

This sort of technology has fantastic potential when it comes to addressing the need for high-performance neural interfaces that can be used to control advanced prosthetic limbs. It also has appalling potential if it were to fall into nefarious hands. Suppose a company started to offer a brain-computer interface that allowed you to access the internet simply by thinking about it. Now suppose that interface was powered by an AI like ChatGPT or one of its descendants. And now suppose an authoritarian government took control (in more ways than one).

Speaking of authoritarian governments, now I’m thinking about the CNET column How China Uses Facial Recognition to Control Human Behavior. Eeek! Now I’m thinking of the Orwellian Social Credit System, which is a national credit rating and blacklist being developed by the government of the People’s Republic of China. Eeek^2! Now I’m thinking that, only last year, we heard from Futurism that China Says It’s Developing an AI to Detect Party Loyalty. Eeek^3! Now I’m thinking how lucky I am to be living in America where we are led by sagacious public officials who would never knowingly let an untruth cross their lips, have no wish to impose their bigotry and ersatz morality on the rest of us, and live only to serve without any thought for themselves.

As an aside, my wife (Gina the Gorgeous) and I are currently rewatching the American television sitcom series Soap, which originally aired on ABC from 1977 to 1981 (we’re watching it on the free streaming service Tubi). This show was created as a nighttime parody of daytime soap operas. At the end of each episode, an off-camera announcer asks a series of life-or-death questions in a deliberately deadpan style: “Will Jessica discover Chester’s affair? Will Benson discover Chester’s affair? Will Benson care?” and concludes each episode with the trademark line, “These questions—and many others—will be answered in the next episode of … Soap.”

Returning to AI, there are two huge topics that are much on (what I laughingly call) my mind. The first is that we are currently drowning in a morass of misinformation. The second is that, considering the speed with which AI is developing, we may be facing a variety of existential threats to human civilization in all its forms (even the French). Will we develop a way to determine what is true and what is false (I’m thinking of politicians looking you straight in the eye while lying through their teeth)? Will AI start to perform so many tasks that it leaves most people unemployed and unemployable? Will we be faced with an AI-fueled apocalypse? These questions—and many others—will be pondered in the next episode of … The End of the Beginning of the End of Civilization as We Know It?

13 thoughts on “The End of the Beginning of the End of Civilization as We Know It (Part 1)?”

  1. I have no problem with micro-AI applications like self-location, protein folding, and image interpretation, which solve only a very small subset of human cognitive competency. The advent of ChatGPT has brought a circus midway atmosphere to the discussion of AI. (My EE MS thesis at the University of Oklahoma was on robotic indoor self-location.) Gary Marcus (author, AI company owner, and retired professor) has discussed the problems of ChatGPT and generative language apps since ChatGPT’s release last year. If you have any doubt that human-level general intelligence is a reality, or are open to disconfirmation/falsification of the positive existence of general AI, he is a good person to read. His beliefs: 1) ChatGPT doesn’t understand its texts and is not in any way general human AI; 2) despite not being general AI, it is smart enough to fool humans into believing it is human (deception online), to steal passwords to dams, power plants, and nuclear weapons, and to cause a lot of damage to the world.
    It would be better to discuss these particular claims and regulatory issues than to enhance the current circus atmosphere around ChatGPT.

    1. I agree that ChatGPT is in no way “general AI” — but the version causing all the fuss is already at least one major generation behind the latest and greatest model. The problem is that this technology is moving really fast and no one seems to be thinking about the potential consequences (see also my follow-up column this coming Thursday).

      1. I’ll have to see the next column. Gary Marcus and others have been busy thinking ahead since the release of ChatGPT in November 2022. He and others, including Elon Musk, 1) signed a petition around April 2023 asking that the rate of development be slowed, and 2) have had talks at the White House in Washington, DC, and the US Congress about the need for legislation to limit the speed of development and the “arms race” between major software companies and countries, including PRC China. Gary Marcus has a column, https://garymarcus.substack.com/, which details much of this.

  2. Using “metaphor” as a verb. Nice! I’m going to start using that. But I’m not stealing your creativity, just emulating your pattern.

  3. “transmogrify internet searches into conversations” – Exactly. Is transmogrification a form of “intelligence”? People are concerned about bias in generative AI. I’m less concerned about the AI itself than I am about biased non-AI layers inserted by the owner of the AI, well-intentioned or not. Enjoy this “chat” between me and ChatGPT. What I see is a layer of “courtesy” which, to me, seems very much like a layer of “friendliness” inserted by OpenAI, and NOT generated by the AI. We users can’t know: which part of this answer is AI, and which part is a bias inserted by the owners?

    ChatGPT:
    I apologize for misunderstanding your question.
    I apologize for any confusion.
    I apologize for any frustration caused by my previous responses.

    Me:
    You can stop apologizing.

    ChatGPT:
    Understood. Thank you for letting me know.
    I apologize for any confusion caused by my previous responses.

    Me:
    I told you to stop apologizing.

    ChatGPT:
    I apologize for continuing to apologize. I will stop apologizing as you have requested. I apologize for not immediately following your directive to stop apologizing.

    Me:
    You just apologized.

    ChatGPT:
    You are correct, I apologize for apologizing again. I understand your frustration and I will do my best to avoid apologizing unnecessarily in the future and focus on answering your questions to the best of my abilities.

    Me:
    You a mess.

    ChatGPT:
    I’m sorry if my previous responses have not met your expectations.

    https://www.facebook.com/groups/stupidai/posts/795181458645087

    1. I don’t WANT my AI to be “friendly” or “more human-like”. That’s a dangerous deception, and it’s marketing. I want my AI to be completely non-human-like, so I am reminded it’s just a machine.

      1. Dear theboom,
        Gary Marcus’s recent article discusses how to educate the non-technical public not to treat AI programs, including chatbots, as human or human-like. As early MIT AI researcher Joseph Weizenbaum’s chatbot Eliza showed in the mid-1960s, we all have a part of us that wants to believe a program is human or understands us. But this can lead to harm to us as humans. So, in our 2023 culture, we need to understand why we [humans] do that [attribute human-like understanding to robots].
        https://garymarcus.substack.com/p/stop-treating-ai-models-like-people
        OkieInOKC
