That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful, but should never be confused with any kind of intelligence (i.e. the ability to draw logical conclusions).
I appreciate that you see my point and admit that it makes some sense :)
Examples where I think pattern recognition by deep learning can be extremely useful:
- re-check medical imaging data of patients who have already been screened by a doctor, and flag some of it for review by a second doctor. This could improve the chances of e.g. early cancer detection without any real risk of a false alarm, because a real doctor looks at the flagged results in detail before a patient is ever told about a potential diagnosis (rough sketch after this list)
- pre-filter large amounts of data for potential matches, e.g. searching for exoplanets by certain patterns (Planet Hunters lets humans do this as crowdsourcing)
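For concreteness, here is a minimal sketch of what I mean by "pre-filter and flag for human review". All the names and the threshold value are made up by me for illustration; this is not a real clinical or astronomy pipeline:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScreeningResult:
    item_id: str   # e.g. a scan file or a light-curve ID
    score: float   # pattern-recognition model output in [0, 1]
    flagged: bool  # True -> route to a human for a second look

# Deliberately low threshold (illustrative): a false positive only costs one extra human check.
FLAG_THRESHOLD = 0.3

def prescreen(items: List[str], score_fn: Callable[[str], float]) -> List[ScreeningResult]:
    """Score every already-screened item and flag suspicious ones for human review."""
    results = []
    for item in items:
        s = score_fn(item)  # deep-learning model inference would go here
        results.append(ScreeningResult(item, s, flagged=s >= FLAG_THRESHOLD))
    return results

# Example: a dummy scoring function standing in for a trained model.
if __name__ == "__main__":
    fake_scores = {"scan_001": 0.05, "scan_002": 0.72, "scan_003": 0.31}
    for r in prescreen(list(fake_scores), fake_scores.get):
        print(r.item_id, "-> second doctor" if r.flagged else "-> no flag")
```

The point being: the model never makes the diagnosis, it only decides who looks twice.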
But what I am afraid is happening with people who do not see why a very simple algorithm already counts as AI, yet consider LLMs to be AI, is that they have mentally decided to call AI only what seems "AGI" / "human-like". They mistake the patterns of LLMs for a conscious being, and that is incredibly dangerous when it comes to trusting the answers LLMs give.
Why do I think they subconsciously imply (self-)awareness / consciousness? Because refusing to count a simple control mechanism like a room thermostat as (very limited) AI means viewing it as "too simple" to be AI - which means a person with such a view makes a qualitative distinction between control laws and "AI", where a quantitative distinction between "simple AI" and "advanced AI" would be appropriate.
And such a qualitative distinction, one that elevates a complex word-guessing machine to "intelligence", can only be made by people who actually believe there is understanding behind those word predictions.
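To show how simple the "very limited AI" end of that quantitative scale really is, here is the room-thermostat control law from above written out as code (parameter names and the hysteresis value are my own illustrative choices):

```python
def thermostat(temp_c: float, setpoint_c: float, heater_on: bool,
               hysteresis_c: float = 0.5) -> bool:
    """Bang-bang control with hysteresis: decide whether the heater is on."""
    if temp_c < setpoint_c - hysteresis_c:
        return True        # too cold: switch the heater on
    if temp_c > setpoint_c + hysteresis_c:
        return False       # warm enough: switch it off
    return heater_on       # inside the dead band: keep the current state

# A handful of lines that sense the environment and act toward a goal -
# by the broad definition, already a (very limited) artificial intelligence.
```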
That’s my take on this.