The more I learn, the more I realize how little I know. A lot of people (my dear old mother, for one) think I’m clever. One thing I do know is how wrong they are. I have known a lot of clever people in my time, so I have something to compare myself against, and I’m afraid I don’t come out well.
I’m sad to say that I never got to meet the English mathematician John Horton Conway, who was born on the 26th of December 1937 in Liverpool, England, and who passed away from the coronavirus on the 11th of April 2020.
John was “a bit of a character,” as they say (if you want to learn more, I recommend Genius at Play: The Curious Mind of John Horton Conway by Siobhan Roberts). One pundit described him as “Archimedes, Mick Jagger, Salvador Dali, and Richard Feynman all rolled into one.” John was the originator of Conway’s Game of Life. He also came up with the concept of Surreal Numbers. As we read on Wikipedia:
If formulated in Von Neumann–Bernays–Gödel set theory, the surreal numbers are a universal ordered field in the sense that all other ordered fields, such as the rationals, the reals, the rational functions, the Levi-Civita field, the superreal numbers, and the hyperreal numbers, can be realized as subfields of the surreals.
Well, I don’t think any of us would argue with that. I know I wouldn’t because I have no idea what it means. It makes (what I laughingly call) my mind wobble on its metaphorical gimbals.
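In fairness, the opening moves of the construction are simple enough even for me to follow (what follows is the standard textbook construction, not anything specific to the quote above). Each surreal number is a pair of sets of previously created surreal numbers, written {L | R}, with every member of L less than every member of R. The first few “days of creation” look like this:

```latex
0 = \{\;\mid\;\}, \qquad
1 = \{\,0 \mid\;\}, \qquad
-1 = \{\;\mid 0\,\}, \qquad
\tfrac{1}{2} = \{\,0 \mid 1\,\}
```

Keep going forever (and beyond) and you eventually get everything mentioned in the quote above; it’s the “universal ordered field” part where my gimbals start to wobble.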
I’m reminded of Being and Nothingness by Jean-Paul Sartre. I can open that book at any page and cast my orbs over any sentence. I’ll understand the meaning of the individual words, but the significance of the sentence as a whole will elude me.
Just to add to the fun and frivolity, everything in Being and Nothingness is recursively referential. If you look up something like “being-in-itself (l’être-en-soi)” in the glossary, for example, you’ll find it defined in terms of things like “being-for-itself (l’être-pour-soi).” If you then look up “being-for-itself,” you’ll find reference to something like “being-for-others (l’être-pour-autrui).” And if you are bold enough to peruse and ponder “being-for-others,” you’ll find it defined in terms of “being-in-itself,” at which point you’ll find yourself experiencing a sense of “déjà vu all over again” (did someone just say that?).
But we digress…
The reason for my meandering musings above was that I was just chatting with Daniel Olsher, who is the founder of Integral Mind. All I can say is that I would love to be a “fly on the wall” during a conversation between Daniel and someone like John Conway.
Now, I must admit that I know just enough about artificial intelligence (AI) to be dangerous (very dangerous). I’ve created only one AI/ML app in my life (see I Just Created my First AI/ML App), and that was with a lot of help from someone who had a clue and who we will call Louis (because that’s his name).
When it comes to large language models (LLMs) like ChatGPT, I know they use transformers, where a transformer is a type of neural network architecture that was introduced in a landmark 2017 paper titled “Attention Is All You Need” by Vaswani et al. It’s the foundation of models like GPT (Generative Pre-trained Transformer), BERT, and many others.
At the heart of transformers is something called “self-attention.” This lets the model look at all parts of the input simultaneously, figure out which words (or tokens) are most relevant to each other, and understand context better than older models like RNNs or LSTMs. For example, in the sentence “The animal didn’t cross the street because it was too tired,” to what does “it” refer? A transformer uses “attention” to focus on “animal” when interpreting “it.”
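For what it’s worth, here’s a minimal sketch of that self-attention computation in NumPy. This is a toy illustration under my own assumptions (a single attention head, random numbers standing in for learned projection matrices, and array names of my own invention), not anyone’s production code:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for one sequence.

    X:             (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_k) projection matrices (learned, in practice)
    """
    Q = X @ W_q  # queries: what each token is looking for
    K = X @ W_k  # keys:    what each token offers
    V = X @ W_v  # values:  what each token passes along
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how relevant is every token to every other?
    # Softmax turns each row of scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted blend of all the values

# Toy usage: 11 tokens ("The animal didn't cross the street because it was
# too tired"), with a made-up embedding size of 8 and head size of 4
rng = np.random.default_rng(0)
X = rng.standard_normal((11, 8))
W_q, W_k, W_v = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (11, 4)
```

In a trained model, the row of `weights` corresponding to “it” would put most of its mass on “animal”; here, with random matrices, the weights are meaningless, but the plumbing is the same.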
The thing is that something like ChatGPT, as amazingly clever as it is, is not an example of artificial general intelligence (AGI), where AGI is a hypothetical machine that can understand, learn, adapt, and apply knowledge across any intellectual task that a human can do.
Well, Daniel says that Integral Mind has developed a paradigm-shifting AI technology in the form of a demonstrable, working artificial general intelligence (AGI) model that has been extensively vetted by the U.S. Government, including the Department of Defense and DARPA.
If this is true, then “color me impressed!”
A high-level summary of Integral Mind’s claim to fame from their website reads as follows:
Previously available only to the US Government, Integral Mind’s novel, non-statistical AI is the first to meet the requirements for AGI as set forth by Goertzel (2007), Google DeepMind (2024), Legg and Hutter (2007), Wang (2019), and others.
Built on entirely different foundations than those of traditional AI, the same properties that support the system’s AGI capabilities also enable it to act as the first fielded superintelligence.
Validated in all respects—both theoretically and via the solution of otherwise intractable mission problems—by the Department of Defense, the Intelligence Community, the State Department, and other governments and organizations.
Comprehensively peer-reviewed in venues including AAAI, ICDM, KDD, Neural Networks, IEEE Symposium Series on Computational Intelligence, HumTech, and Cognitive Science.
Our Digital Enlightenment platform enables customers to predict, simulate, imagine, decide, and act. We specialize in problems that no other technology (and in many cases, no human) can solve.
Proven by application in the real world, this AI is the first to be able to think, feel, and understand. It is the first to offer genuine intelligence, provable correctness, and provable safety and morality. It is fully transparent and explainable. Because it is not statistical in nature, it does not use training data. Its computational and power requirements are extremely low, and it is compliant with all worldwide Safe AI standards out of the box. It is therefore ideal for building genuinely intelligent, provably safe autonomous systems. It is profoundly responsible to humanity and to those it serves.
Some related informational sources are the Proof of Achievement of the First Artificial General Intelligence (AGI) paper at Zenodo.org, the Semantically-Based Priors and Nuanced Knowledge Core for Big Data, Social AI, and Language Understanding paper at ScienceDirect.com, and the Cognitive-Cultural Simulation of Local and Host Government Perceptions in International Emergencies paper at IEEE.org.
Also, there’s the Integral Mind Proof Page. The problem is that no matter how many mental warm-up exercises I perform, I simply cannot wrap my poor old brain around any of this. It’s like the AGI equivalent of the aforementioned Being and Nothingness in that I understand the individual words, but not the sentences they form.
I need an AGI to read everything for me and tell me what it means. Unfortunately, I can’t use Integral Mind’s AGI because I wouldn’t know if it was lying to me or not, and I can’t use anyone else’s AGI because no one else has one. A conundrum indeed!
One part of me finds Integral Mind’s claims hard to believe. On the other hand, Daniel has been an AI researcher at Carnegie Mellon University and a research scientist for the Singapore Ministry of Defence, all of which certainly impresses me. Also—as previously noted—Integral Mind’s working AGI model is said to have been extensively vetted by the U.S. Government, including the Department of Defense and DARPA, and you can’t say things like that if they aren’t true because the U.S. Government is unhappy about that sort of thing, and they (whoever “they” are) have ways of making you share their unhappiness.
I’m a bear of little brain. Now my head hurts. Perhaps if you were to visit the Integral Mind website and peruse and ponder the related documents discussed above, then you could explain it all to me in simple words that I can understand, in which case the radiance of my smile will lighten your life, and I can’t say fairer than that!
Proof of Achievement of the First Artificial General Intelligence… I have read most of this paper. It is somewhat naive, and I am surprised at its supposedly high profile. It defines notional requirements for AGI and then “proves” that it meets the requirements it has itself defined. Requirements tend to come from business cases, customers, and contractors; what the authors should be looking for is a sensible basis set on which to build the AGI premise. You cannot use requirements in this case because this is research (hence the need for a basis), and no one yet understands how the human brain works, which would help define that basis. I suggest that computers are nothing like human brains. I would further add that I do not believe intelligence is a computer-programmable property. This is based on my having been a microcode programmer; testing ICL machines at that level, you truly understand how inflexible computers are, yet how it is possible to create complex systems from dumb instruction sequences. It’s the programmer, not the computer, that is intelligent.
I must admit that this one left me confused; it was hard to pin anything down. On the one hand, it’s claimed that various groups in the US government are using this technology, which is impressive until you start to think about who we have forming the US Government. For example, the Secretary of Education, Linda McMahon, recently referred to AI (artificial intelligence) as ‘A-one’ (and she wasn’t joking): https://www.msn.com/en-us/news/politics/education-sec-linda-mcmahon-caught-in-major-gaffe-as-she-fails-to-pronouce-ai/ar-AA1CKn40