
ChatGPT will not replace me – yet

An EEJournal reader recently suggested, eagerly I might add, that ChatGPT can already replace me. This statement was made even before OpenAI announced ChatGPT 4o. It’s not something that particularly worries me, knowing how these chatbots are trained. You can try to dump the entire Internet into a chatbot training session, but that does not give the chatbot wisdom, nor will it transform the chatbot into the Encyclopedia Britannica or Wikipedia. Those online encyclopedias are updated daily with fresh information, sometimes even properly curated information, while a chatbot’s knowledge is frozen at its training cutoff.

However, as an intrepid EEJournal editor, I wanted to test my own prejudices, so I submitted an EEJournal-appropriate question to ChatGPT 3. Here was my query to ChatGPT 3:

“What is the best FPGA?”

Here is ChatGPT 3’s answer:

“Determining the “best” FPGA (Field-Programmable Gate Array) depends on various factors including your specific project requirements, budget, performance needs, power consumption constraints, and the ecosystem surrounding the FPGA. Some of the leading FPGA manufacturers include Xilinx (now part of AMD), Intel (formerly Altera), Lattice Semiconductor, and Microchip (formerly Actel).

“As of my last update, Xilinx and Intel (formerly Altera) have been dominant players in the FPGA market, offering a wide range of products catering to different applications and performance levels. Xilinx’s Virtex and Ultrascale series as well as Intel’s Stratix and Cyclone series are among the popular choices for high-performance computing, networking, telecommunications, and other applications.

“However, the ‘best’ FPGA for your particular project will depend on factors such as the required logic capacity, I/O requirements, processing capabilities (such as DSP slices or embedded processors), available development tools and support, and your familiarity with the vendor’s ecosystem.

“To determine the best FPGA for your specific needs, it’s advisable to thoroughly research and compare the features, performance metrics, power consumption, and support provided by different FPGA families, and choose the one that best aligns with your project requirements and constraints. Additionally, reaching out to FPGA experts or consulting with FPGA vendors directly can also provide valuable insights and guidance in selecting the most suitable FPGA for your application.”

This answer is perfectly reasonable. It mentions the major FPGA vendors, although it omits some of the smaller FPGA vendors such as Achronix, Efinix, and GOWIN. In addition, this version of ChatGPT is not aware that Altera spun back out of Intel on February 29 and now operates as an independent subsidiary with its own CEO, Sandra Rivera. These small problems are symptoms of ChatGPT’s Achilles’ heel: this chatbot cannot do its own research.

ChatGPT’s answer is also essentially useless because it provides a general overview of the FPGA arena that anyone already working with FPGAs will know. ChatGPT explicitly avoids giving any specific advice and suggests that you reach out to an FPGA expert, implicitly admitting that it’s not one.

A few weeks later, after OpenAI announced ChatGPT 4o, I again placed my query to the free version of the chatbot. This time, ChatGPT seemed more ready to chat. To my query about the best FPGA, the new version of ChatGPT answered:

“Determining the ‘best’ FPGA depends on your specific requirements and project needs. Some popular FPGA brands include Xilinx, Intel (formerly Altera), and Lattice Semiconductor. Each company offers a range of FPGAs with varying features, performance levels, and cost points. It’s important to assess your project requirements such as logic density, I/O capabilities, processing speed, power consumption, and budget to determine the best fit for your application. Would you like more specific recommendations based on your project needs?”

Note that ChatGPT is still not aware of Altera’s recent separation from Intel.

“OK,” I thought. “Let’s see how deep this rabbit hole goes.” I responded with this statement:

“I need a low-cost FPGA.”

To which ChatGPT responded:

“For a low-cost FPGA option, you might consider looking into the Lattice Semiconductor FPGAs. Lattice often provides lower-cost FPGA solutions compared to Xilinx and Intel. Their ICE40 and ECP5 series are known for being cost-effective and suitable for various applications, including IoT, consumer electronics, and industrial devices. These FPGAs offer a good balance of features, performance, and affordability. However, always make sure to compare the specifications and features of different FPGAs to ensure they meet your project requirements. Let me know if you need more information or assistance with choosing a low-cost FPGA.”

Well, now ChatGPT was sounding a little more authoritative, and quite a bit like Lattice Semiconductor’s marketing department. Somehow, ChatGPT decided that Lattice’s iCE40 and ECP5 FPGA families “offer a good balance of features, performance, and affordability.” That sounds to me like it was lifted directly from a marketing blurb.

I continued following the rabbit by typing:

“I need PCIe.”

ChatGPT again had a response:

“If you’re looking for an FPGA with PCIe (Peripheral Component Interconnect Express) support, both Xilinx and Intel (formerly Altera) offer FPGA families with PCIe interfaces that facilitate high-speed communication between the FPGA and other devices.

“For Xilinx, you can consider their Artix, Kintex, Virtex, or UltraScale(+) FPGA families which offer PCIe support for various bandwidth and performance requirements.

“On the Intel side, you might explore their Cyclone, Arria, or Stratix FPGA families which also include PCIe interfaces in some of their models.

“When selecting an FPGA with PCIe connectivity, make sure to consider factors such as data transfer speed, number of lanes required, protocol support, and other interface requirements specific to your project.

“Do you need more information on FPGA models with PCIe or assistance in choosing the right one for your needs?”

So, I typed “Yes,” and ChatGPT responded:

“I would love to answer this, but it seems that you are not subscribed to the pro version, and have no more free chats left. Please upgrade to enjoy unlimited access.”
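For readers who would rather script this kind of exchange than click through a web interface, here is a minimal sketch of how the back-and-forth above maps onto the alternating user/assistant message format used by chat-style LLM APIs such as OpenAI’s Chat Completions endpoint. The model name “gpt-4o” and the request shape are assumptions based on the publicly documented API, not anything from this article; actually sending the request would require the vendor’s SDK and an API key, so this only assembles the request body.

```python
def build_conversation(turns, model="gpt-4o"):
    """Build a Chat Completions-style request body from a list of
    (user_text, assistant_text) turns. Pass None as assistant_text
    for the final, not-yet-answered user message."""
    messages = []
    for user_text, assistant_text in turns:
        messages.append({"role": "user", "content": user_text})
        if assistant_text is not None:
            messages.append({"role": "assistant", "content": assistant_text})
    return {"model": model, "messages": messages}

# The conversation from this article, truncated for brevity:
request = build_conversation([
    ("What is the best FPGA?", "Determining the 'best' FPGA depends..."),
    ("I need a low-cost FPGA.", "For a low-cost FPGA option..."),
    ("I need PCIe.", None),  # the turn still awaiting a reply
])
print(len(request["messages"]))  # 5 messages: three user, two assistant
```

The point of the structure is that the model is stateless between calls: the entire prior conversation must be resent with every request, which is also why each follow-up question in the transcript above could only build on what had already been said.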

There you have it. I’d say that ChatGPT is not yet a replacement for an EEJournal editor, but it would make a great used car salesperson. I feel I got precisely nowhere with this chat, which met my expectations. (As of this writing, one month of OpenAI’s ChatGPT 4o costs $20, which is steeper than I’m willing to pay for the answer to one question.)

When you’re dealing with a tool like ChatGPT that’s trained on the contents of a big chunk of the Internet, you have access to a great general-knowledge machine, which somewhat resembles an encyclopedia. ChatGPT is stuffed with facts, most of them general knowledge. If it was not trained on a topic, it has no way of expanding its own knowledge base. It cannot do its own research. That’s why it recommended that I consult an FPGA expert: because it knows it’s not an expert and has no way to become one on its own.

I have no doubt that students in elementary, middle, and high school can use ChatGPT to write essays for them. Students have done this for years using encyclopedias, CliffsNotes, and other knowledge compendiums both large and small. ChatGPT merely makes it easier by writing the entire report for them, given the right question. Is that worth ten or twenty bucks a month? You’ll need to decide for yourself.

Normally, when I write an article for EEJournal, you’re seeing the result of tens of hours of focused research, including interviews with actual experts. To date, ChatGPT cannot do this. Someday, it most certainly will, and I can retire.

9 thoughts on “ChatGPT will not replace me – yet”

    1. ChatGPT is incentivized to always provide an answer. Right, wrong, or made up from whole cloth doesn’t currently seem to matter to the AI.

  2. One of the most insightful articles on LLMs and chatbots is Bender, Gebru, et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (ACM Digital Library). My experience has been that if you ask a chatbot about something you know a lot about, you’ll quickly run up against its confabulations and ‘hallucinations.’ Essentially, it produces beautiful-sounding nonsense.

    Let’s see if it evolves beyond the stochastic parrot model.

    — Dag Spicer

  3. I refer you to Dilbert. The key phrase is ‘expert’. AI output almost certainly appears to be expert knowledge to any manager senior enough to have decision-making responsibility for AI deployment. So we rely on that exec being expert enough in ‘management’ to know that they should not rely on one source, in this case AI, for their decision-making information…

  4. Insightful article, Steven! Thank you.
    Amid the brouhaha of AI drum-beating, people are forgetting that the current prowess of AI is surface level. But people often underestimate what it can do over a 10-year timeframe.
    I’m working on a power product that attempts to answer this question for a very specific application, and I’d love for you to take it for a spin 🙂
    Thanks for your time and hopefully we can talk about it.

