Researchers photographed children’s retinas and screened the images with a deep learning algorithm that diagnosed autism with 100% accuracy. The findings support using AI as an objective screening tool for early diagnosis, especially where access to a specialist child psychiatrist is limited.
AI-screened eye pics diagnose childhood autism with 100% accuracy
Hold the fuck up. What exactly is the marker?
A big problem with this type of AI is that it’s a black box.
We don’t know what it’s identifying. We give it input and it gives output; what exactly is going on internally is a mystery.
Counterintuitively, that’s also where the benefit comes from.
The reason most AI is powerful isn’t that it can think like humans; it’s that it doesn’t. It makes associations that humans don’t, simply by consuming massive amounts of data. We humans tell it: “Here’s a bajillion examples of X. Okay, got it? Good. Now here are 10 bajillion samples we don’t know are X or not. What do you, AI, think?”
AI isn’t really a causation machine; it’s a correlation machine. Its output effectively says, “This thing you gave me later has some similarities to the thing you gave me before. I don’t know if the similarities mean anything, but they ARE similarities.”
It’s up to us humans to evaluate the answer the AI gave us and determine whether the similarities it found are useful or just coincidental.
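To make the “here’s a bajillion examples of X” idea concrete, here’s a minimal sketch of that workflow. Everything below is a toy stand-in: the data is synthetic and the model is a logistic regression rather than a deep network, but the train-on-labels, score-the-unknowns loop is the same shape.

```python
# Minimal sketch of a supervised "correlation machine".
# All data here is synthetic; a real study would use labeled retinal photographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Pretend each "image" is a flat feature vector; label 1 = X, label 0 = not-X.
n, d = 5000, 64
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Here's a bajillion labeled examples of X. Got it? Good."
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Now here's a pile of samples we don't know about. What do you think?"
scores = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, scores))
```

The model never learns *why* class X looks the way it does; it only learns which input patterns correlate with the label, which is exactly the point being made above.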
Sure, but if we could take the model generated by the AI and convert it into a set of quantifiable criteria - i.e., what is actually being correlated - we could use our human capacity for associative thought to understand why the correlation exists, possibly leading to a better understanding of autism overall.
The problem is that identifying what an AI model is doing is basically impossible. You can’t decompile a model and read off a bunch of logic, and you can’t pull up machine code and reverse engineer it, because it isn’t code in that sense. The best way to suss it out is to throw corner cases at it and look for common themes in the false negatives and false positives.
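In practice, that probing looks something like the sketch below: run the model over held-out cases, bucket the misses, and eyeball them for common themes. This is a hypothetical illustration reusing the toy `model`, `X_test`, and `y_test` from the earlier sketch, not anything from the study.

```python
import numpy as np

def probe_model(model, X, y, threshold=0.5):
    """Collect false positives and false negatives for manual review."""
    scores = model.predict_proba(X)[:, 1]
    preds = scores >= threshold
    false_pos = np.where(preds & (y == 0))[0]   # flagged as X, but actually not-X
    false_neg = np.where(~preds & (y == 1))[0]  # missed, but actually X
    return false_pos, false_neg

fp, fn = probe_model(model, X_test, y_test)
print(f"{len(fp)} false positives and {len(fn)} false negatives to inspect")
```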
No, we just haven’t come up with a way of reverse-engineering AI models yet.
Incidentally, to train AI you need a bajillion samples of X and a bajillion-plus samples of not-X.
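For what it’s worth, when the not-X pile dwarfs the X pile, frameworks let you compensate. A one-line sketch with scikit-learn’s class weighting, reusing the toy setup from above:

```python
from sklearn.linear_model import LogisticRegression

# 'balanced' reweights samples inversely to class frequency, so a model
# trained on mostly not-X examples doesn't just learn to always say "not X".
balanced_model = LogisticRegression(max_iter=1000, class_weight="balanced")
balanced_model.fit(X_train, y_train)
```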
Not so much of a mystery:

> There was no notable decrease in the mean AUROC, even when 95% of the least important areas of the image – those not including the optic disc – were removed.

So we know that it relates to the optic disc.

Edit: Repeated in the conclusions of the study itself:

> Our findings suggest that the optic disc area is crucial for differentiating between individuals with ASD and TD.

Edit 2: Which is given more background as to what may be going on and being picked up by the model:

> Considering that a positive correlation exists between retinal nerve fiber layer (RNFL) thickness and the optic disc area [32,33], previous studies that observed reduced RNFL thickness in ASD compared with TD [14-16] support the notable role of the optic disc area in screening for ASD. Given that the retina can reflect structural brain alterations, as they are embryonically and anatomically connected [12], this could be corroborated by evidence that brain abnormalities associated with visual pathways are observed in ASD. First, reduced cortical thickness of the occipital lobe was identified in ASD when adjusted for sex and intelligence quotient [34]. Second, ASD was associated with slower development of fractional anisotropy in the sagittal stratum, where the optic radiation passes through [35]. Interestingly, structural and functional abnormalities of the visual cortex and retina have been observed in mice that carry mutations in ASD-associated genes, including Fmr1, En2, and BTBR [36-38], supporting the idea that retinal alterations in ASD have their origins at a low level.
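The quoted ablation is easy to picture in code: zero out everything a saliency map calls unimportant, re-score the masked images, and check whether AUROC moves. A hedged sketch of that idea, assuming a hypothetical `model` callable that returns ASD scores and a per-pixel `importance` map with the same shape as the images (neither is from the study’s actual code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_after_masking(model, images, labels, importance, keep_frac=0.05):
    """Zero out the least important (1 - keep_frac) of each image, then re-score.

    `model` is any callable mapping a batch of images to ASD probabilities;
    `importance` holds per-pixel saliency maps, one per image.
    """
    masked = []
    for img, imp in zip(images, importance):
        cutoff = np.quantile(imp, 1.0 - keep_frac)   # keep only the top 5%
        masked.append(np.where(imp >= cutoff, img, 0.0))
    scores = model(np.stack(masked))
    return roc_auc_score(labels, scores)

# If AUROC barely drops with 95% of each image removed, whatever signal the
# model relies on lives in the kept region, i.e. the optic disc area.
```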