
The Artificial Intelligence Apocalypse (Part 2)

Is It Time to Be Scared Yet?

In Part 1 of this 3-part miniseries, we cogitated and ruminated on the origins of artificial intelligence (AI). We also started to look at some of the “happy face” aspects of AI in the form of speech recognition, voice control, and machine vision.

Let’s continue with our happy faces for a moment. AI is now turning up in all sorts of applications, many of which are truly exciting with the potential to make the world a better place. Later, when we’ve lulled ourselves into a false sense of security, we’ll take a look at the dark side of artificial intelligence. Be afraid; be very afraid…

Slap on a Happy Grin

I’m sorry, I was just thinking of the song Put on a Happy Face, which was introduced by Dick Van Dyke in the musical Bye Bye Birdie. The lyrics by Lee Adams are rattling around my head, including the lines: “Wipe off that full-of-doubt look / Slap on a happy grin.”

So, let’s wipe off our looks of doubt, slap on happy grins, and consider some of the positive sides of AI. One application that impresses the socks off me is MyScript Nebo, which is available for $9.99 on the Apple App Store for iOS devices. Based on a technology called Interactive Ink, this little scamp allows you to take handwritten notes using the Apple Pencil. It uses multiple layers of artificial neural networks to determine which of your pen strokes are associated with which characters, and which characters are associated with which words. It can even decode my handwriting, which puts the unintelligibility of doctors’ prescriptions to shame.

Now, you may not think that handwritten input is particularly necessary, but it really is a fantastic way to capture your thoughts naturally. A related app is MyScript Calculator, which lets you use your finger or a stylus to write out your desired calculation, far more intuitively than using a keyboard.
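
For readers who like to peek under the hood, here is a very rough Python (PyTorch) sketch of the layered idea described above: a recurrent network turns a sequence of pen-stroke samples into per-step character scores, which a later stage would assemble into words. This is purely illustrative; it is not MyScript's Interactive Ink engine, and the (dx, dy, pen-down) feature layout is my own assumption.

# Hypothetical sketch of stroke-to-character recognition; not MyScript's pipeline.
import torch
import torch.nn as nn

class StrokeToChars(nn.Module):
    def __init__(self, n_features=3, n_chars=27, hidden=64):
        super().__init__()
        # Each time step is an assumed (dx, dy, pen_down) triple for one sampled point.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_chars = nn.Linear(hidden, n_chars)   # 26 letters + a "blank" class

    def forward(self, strokes):                      # strokes: (batch, time, 3)
        hidden_states, _ = self.rnn(strokes)
        return self.to_chars(hidden_states)          # per-step character scores

model = StrokeToChars()
fake_strokes = torch.randn(1, 50, 3)                 # one scribble, 50 sampled points
char_scores = model(fake_strokes)
print(char_scores.shape)                             # torch.Size([1, 50, 27])

A word-level language model (the "which characters belong to which words" layer) would then sit on top of these per-character scores.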

How Are You Feeling?

AI also has numerous applications with regard to public health. An article on the Healthcare IT News website earlier this year noted that IBM’s Watson for Oncology has been used to inform clinical decision making, provide evidence for newer treatments, suggest more personalized alternatives, and offer new insights from genotypic and phenotypic data.

One issue of interest to almost everyone on the planet is skin cancer, which is often caused by exposure to harmful ultraviolet (UV) radiation from the sun. A 2019 study published in the Journal of the European Academy of Dermatology and Venereology determined that the deep learning-based SkinVision smartphone app can now detect 95% of skin cancer cases.
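
To give a flavor of what "deep learning-based" means in this context, here is a toy Python (PyTorch) sketch of a convolutional classifier that scores a photo of a skin lesion as benign versus suspicious. It is a hypothetical illustration of the general technique, not SkinVision's actual model, architecture, or training data.

# Hypothetical convolutional classifier for skin-lesion photos; illustrative only.
import torch
import torch.nn as nn

lesion_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),                  # two classes: benign / suspicious
)

photo = torch.randn(1, 3, 224, 224)              # one 224x224 RGB photo (random stand-in)
scores = lesion_classifier(photo)
print(scores.softmax(dim=1))                     # class probabilities for this photo

In practice, such a model would have to be trained on tens of thousands of labeled lesion images before its predictions meant anything.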

A less obvious AI application is detecting cyberbullying. This is made more difficult by the fact that almost any phrase can convey multiple intents when used in different contexts. If you saw “I hate you!” on your teenage daughter’s social media feed, for example, this would probably be cause for concern, until you realized it was sent by her best friend in response to hearing that your offspring had just taken delivery of a pair of much sought-after tennis shoes. This explains why Instagram is currently employing AI-powered text and image detection and recognition tools to root out cyberbullying in photos, videos, and captions.
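
Just to illustrate why context matters, here is a toy Python sketch (using scikit-learn) in which a classifier scores a message together with its surrounding thread, so the same "I hate you!" can land on either side of the line. Instagram's real systems are vastly more sophisticated and also analyze images and video; the tiny dataset and labels below are invented purely for illustration.

# Toy context-aware bullying classifier; invented data, illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

threads = [
    "congrats on the new shoes! I hate you! so jealous",    # friendly banter
    "nobody likes you. I hate you! stay away from us",      # hostile
    "you played great today, proud of you",                 # friendly
    "you are worthless and everyone knows it",              # hostile
]
labels = [0, 1, 0, 1]                                       # 0 = benign, 1 = bullying

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(threads)
classifier = LogisticRegression().fit(features, labels)

test = vectorizer.transform(["great goal! I hate you! teach me how"])
print(classifier.predict_proba(test))                       # [P(benign), P(bullying)]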

Another interesting example features children with autism spectrum disorder. In addition to being highly sensitive to stimuli in the form of sound, sight, and touch, these youngsters can have a difficult time expressing their emotions. A recent article on the IEEE Spectrum website discussed how AI-powered humanoid robots can be used to teach coping skills to children with autism.

AI is also of use when it comes to detecting outbreaks of disease. At a local level, apps like nEmesis can use AI and natural language processing to monitor social network feeds for rises in reports of food poisoning, and then use geo-tagging to trace the source of the outbreak. At a global level, AI and big data can identify pandemics before more traditional techniques even think about waving a metaphorical red flag.
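
Here is a deliberately over-simplified Python sketch of the nEmesis idea: flag posts that sound like food poisoning, then group the flagged posts by their geo-tagged venue to spot a possible common source. The real system uses trained language models rather than a keyword list, and the posts and venue names below are made up.

# Over-simplified outbreak spotter; made-up posts and venues, illustrative only.
from collections import Counter

posts = [
    {"text": "Terrible stomach ache after lunch, never again", "venue": "Burger Barn"},
    {"text": "Best fries in town!", "venue": "Burger Barn"},
    {"text": "Pretty sure that taco gave me food poisoning", "venue": "Taco Tower"},
    {"text": "Been throwing up all night after dinner there", "venue": "Taco Tower"},
    {"text": "Feeling sick since we ate at that place", "venue": "Taco Tower"},
]

SYMPTOMS = ("stomach ache", "food poisoning", "throwing up", "feeling sick")

def looks_like_poisoning(text):
    return any(symptom in text.lower() for symptom in SYMPTOMS)

flagged = Counter(post["venue"] for post in posts if looks_like_poisoning(post["text"]))
print(flagged.most_common())                     # [('Taco Tower', 3), ('Burger Barn', 1)]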

Anyone for Tennis?

Do you enjoy watching the Wimbledon tennis championships on TV? Well, when IBM’s Watson isn’t busy diagnosing cancer, it can now be found watching Wimbledon. Using sight and sound, it can monitor all of the courts simultaneously. It knows where we are in each game, and it can tell from the facial expressions and body language of the players whether they are happy, sad, angry, frustrated, etc. It can also listen to the sounds of the crowds and spot roars of excitement, cries of disapproval, and so forth.

Using all this data, Watson can create a “highlights reel” within minutes of the final game of the day, and it can comb through thousands of hours of video to create a high points (and low points) summary of the entire tournament — something that would take a team of human editors a humongous amount of time and effort.
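
Conceptually, you can think of the highlights reel as a ranking problem, something like the toy Python sketch below: each candidate clip gets an "excitement" score from signals such as crowd-noise level and detected player emotion, and the top-scoring clips are stitched together. The signals, weights, and clip names here are invented for illustration and are not Watson's actual recipe.

# Toy highlight ranking; invented clips and weights, illustrative only.
clips = [
    {"name": "match point",  "crowd_noise": 0.95, "player_emotion": 0.90},
    {"name": "double fault", "crowd_noise": 0.40, "player_emotion": 0.70},
    {"name": "rain delay",   "crowd_noise": 0.10, "player_emotion": 0.05},
    {"name": "long rally",   "crowd_noise": 0.85, "player_emotion": 0.60},
]

def excitement(clip, noise_weight=0.6, emotion_weight=0.4):
    return noise_weight * clip["crowd_noise"] + emotion_weight * clip["player_emotion"]

reel = sorted(clips, key=excitement, reverse=True)[:2]       # keep the two best clips
print([clip["name"] for clip in reel])                       # ['match point', 'long rally']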

Artificial Intelligence Meets Augmented Reality

I personally think that, in the not so distant future, the combination of AI and augmented reality (AR) is going to dramatically affect the ways in which we interface with our systems, the world, and each other.

Many people find it hard to envisage how AI+AR might affect them, so let me offer just one simple example. Suppose I’m wearing my AR goggles while I’m reading a book that mentions how, back in the 1840s, Ada Lovelace talked about the possibility of computers composing music. I read a lot of books, and it’s hard to keep track of which facts are in which tomes.

Now suppose that my AI-equipped AR goggles were reading the book with me. Six months down the road, I might mutter to myself, “What was that book in which Ada Lovelace…” and the AI might respond, “You are thinking about…”

“Hmm,” I might respond, “but where did I put that little rascal?” Now imagine the AI causing my AR goggles to guide me to one of the bookshelves in my office and to say, “The book you are looking for is behind this pile of books you placed here last week.”

Will having access to this sort of technology make its users less intelligent? It’s hard to say until we try it, but just about everyone I know spends part of each day trying to locate something they’ve misplaced. I know that when I’m at my workbench surrounded by tools and components, I can put something down on the table in front of me, remove my gaze for a moment, and — if I don’t keep my finger on the little scamp — it disappears as though I’d performed a magic trick. (Well, this is the way things seem to occur.)

Now, suppose that my AI+AR kept track of everything for me, and “highlighted” whatever it was I said I was looking for. I don’t know about you, but I would welcome this technology with open arms.
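
If you are wondering what the plumbing for such a memory aid might look like, here is a minimal Python sketch: the system keeps a running log of where each recognized object was last seen and answers location queries from that log. In a real AI+AR system, the sightings would come from continuous computer vision rather than explicit function calls, and the object names and locations below are invented.

# Minimal "where did I put it?" memory; invented objects and locations.
from datetime import datetime

last_seen = {}                                   # object name -> (location, timestamp)

def object_spotted(name, location):
    last_seen[name] = (location, datetime.now())

def where_is(name):
    if name not in last_seen:
        return "I have never seen '{}'.".format(name)
    location, when = last_seen[name]
    return "'{}' was last seen at {} ({:%H:%M}).".format(name, location, when)

object_spotted("Ada Lovelace book", "office shelf, behind the new pile")
object_spotted("needle-nose pliers", "workbench, under the schematic")
print(where_is("Ada Lovelace book"))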

Is It Time to Worry Yet?

The problem with technology is that it’s a two-edged sword: it giveth, and it taketh away. On the one hand, we want access to information; on the other hand, we need to remember the old saying, “Be careful what you wish for,” because one can have too much of a good thing.

Take the case of a Hyper Reality future as depicted on YouTube, for example, in which information is coming at the user as if from a fire hose. It’s also interesting to see the part where the system crashes and reboots, during which time the bright and gaudy surroundings disappear to be replaced by dim and dismal reality.

And, speaking of taking things away, did you know that AR is just one aspect of mediated reality (MR), the other branch of which is deleted or deletive reality (DR)? Suppose your AI+MR headset decided that seeing a homeless person sitting in a doorway might disturb you, so it replaced that person with a large pot of brightly colored flowers? Or suppose some nefarious person or AI took control of your AR goggles and decided it would be fun to replace a missing manhole cover in your path with what appeared to be paving stones?

Next Time

As you can see, we’ve started to drift toward the dark side of AI. Next time, we will take a look at some examples of AI gone wrong that will make your hair stand on end. In the meantime, I welcome your comments and questions on the first two parts of this miniseries, along with your thoughts and suggestions for things we might want to start worrying about in an AI-apocalyptic future.

 

7 thoughts on “The Artificial Intelligence Apocalypse (Part 2)”

  1. My biggest concern is that as the AIs get better, we as a species become diminished. We already see it in the younger folks today. With immediate access to the whole of mankind’s information via the web, many have stopped remembering even simple things like how many houses of Congress there are or who the Nazis were and why we were fighting them in WWII (wait, there was a world war!?!). I already suspect that the majority of the world’s progress is in the hands of just a few hundred thousand engineers and scientists. And there are fewer and fewer of us left because science, math, and engineering are “hard,” so kids don’t want to go into them.
    Look at Bill Gates’ testimony before Congress a few years back on why Microsoft and other tech giants needed so many H-1B visas. He said that they were losing engineers to finance and other fields where they could make a lot of money and still work a 9-to-5 job. Of course, those were the same bright folks who came up with derivatives and caused the housing crisis. But that’s used as an excuse to bring in thousands of lower-paid folks from outside the US to work engineering jobs.
    Now, I’m hearing that companies like Cisco are going to be replacing those cheaper engineers and help-desk folks with even cheaper AIs. So now even the opportunities for the H-1Bs are going away, to be replaced with even cheaper AI agents. On the plus side, though, there will always be a need for bright engineers and scientists, at least until the AIs develop independent thought.

    1. I work with a lot of people in the field. I would say many of them are more scared of the future due to AI than the masses. The big question — will it be AIpocalypse or AIrmageddon?

      1. I must admit that I watch too much science fiction — as a result, I bounce around being scared of a zombie apocalypse, a robot/AI apocalypse, a super volcano, a giant asteroid strike, a pandemic, and… the list goes on LOL

    2. On the one hand I agree with you that the vast amount of “stuff” younger folks don’t know is scary — but then I think back to how little I knew when I was younger.

      Re jobs, the industrial revolution took away the jobs of a vast number of agricultural workers, but new jobs opened up in other fields.

      As usual I end up sitting on both sides of the fence (which can be a tad uncomfortable) — I worry that AI will put a lot of people out of work — but I live in hopes that other meaningful and fruitful jobs will emerge.

