TL;DR
Modern AI doesn’t just retrieve text — it synthesizes, reinterprets, and recombines patterns from data. That looks a lot like how humans generate ideas: not from nowhere, but from rich, diverse networks of experience.
I’m going to dip my toe in the AI pool for a moment. I’m not an AI specialist, but like many technical folks, I’ve worked with the fundamentals (e.g., machine learning, deep learning) and I’m consistently amazed — mostly through use of ChatGPT, with some NotebookLM dabbling.
Here’s the question that I keep coming back to: Shouldn’t we consider these tools as rivals to human intelligence?
Quick disclaimer: What you’ll read here is based on what I’ve picked up passively. In other words, I did not spend a lot of time researching this subject and I avoided reading opinion pieces. That takes time, and my time is limited. Plus, I like to provide a raw take on things.
Beyond “spitting back information”
I’ll cut right to it with an example. I just typed “AI is just a model that is spitting back information based on what it was trained on” into Google, and Google’s AI Overview answer captured my thinking precisely:
The claim that AI simply “spits back information” based on its training data is a common but oversimplified misconception. While generative AI models are fundamentally based on patterns learned from immense datasets, their function goes beyond simple retrieval. They do not function as a search engine that directly copies and pastes information. Instead, the models are a complex system that synthesizes, reinterprets, and recombines information in novel ways to create something new.
How humans do it
Is that so different from us? Human brains start with a blank-ish slate and build connections through a lifetime of ongoing, interactive learning. Early development forms connections at astonishing rates, and the mind keeps making new ones as it constantly consumes information from our five senses. The brain quickly becomes a rich, sophisticated network of neurons and connections: a neural network (NN). Where have I heard that phrase before? 🙂 Exactly. An artificial NN is likewise built by learning connections between nodes (neurons), and each connection is "weighted" by a number whose strength training adjusts.
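To make "weighted connections" concrete, here's a toy sketch of a single artificial neuron: inputs arrive over weighted connections, get summed, and pass through a squashing function, and "learning" is just nudging those weights to reduce error. All the numbers here are made up for illustration; this isn't how any production model is actually configured.

```python
import math

def sigmoid(x):
    # Squashing function: maps any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs over the connections, then squash
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

inputs = [1.0, 0.5]
weights = [0.4, -0.6]   # connection strengths ("weights")
bias = 0.1
target = 1.0            # the output we want the neuron to produce

out = neuron(inputs, weights, bias)

# One step of gradient descent: adjust each weight in proportion to
# how much it contributed to the error (sigmoid derivative chain rule).
lr = 0.5
error = out - target
grad = error * out * (1 - out)
weights = [w - lr * grad * i for w, i in zip(weights, inputs)]
bias -= lr * grad

new_out = neuron(inputs, weights, bias)
# After the update, the output has moved closer to the target.
```

Stack millions of these neurons in layers and repeat that weight-nudging over immense datasets, and you have the basic recipe behind the models discussed here.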
New ideas vs. new combinations
Some will argue that AI can’t create “truly new” ideas. But what counts as new? For humans, “original” thoughts typically emerge from novel combinations of prior knowledge — associations that haven’t been made (or noticed) before. We don’t create ideas ex nihilo, we generate them from what we’ve internalized — our lived datasets.
If that’s our metric for creativity, then AI tools like ChatGPT and other LLMs check that box in pen! They’re extremely capable combiners of information. That’s why ChatGPT’s best outputs can feel strikingly human. It’s not magic; it’s pattern recombination, guided by a very large, very pruned network. Sound familiar?
Why care
While I find the metaphysical questions (“Is it thinking?”) very engaging, my focus comes back to the utility of AI, for example:
- Can it help me frame a problem faster?
- Can it open my eyes to alternative approaches?
- Can it suggest workflows geared towards solving a particular problem?
- Can it improve my deliverables (e.g., code, documents)?
When the answer is yes, I use it.
Bottom line: Is AI at the point of human intelligence? No. "Human intelligence" is much more than processing information; it includes other kinds of intelligence, such as emotional and social intelligence. If/when AI masters these, [fill in your end to this sentence 🙂 ].
What are your “thoughts” – LOL.