I have to disagree with that. To quote the comment I replied to:
AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued
Where's the "turn of phrase" in this, lol? It reads about as plainly as possible: they assume this "AI" can "figure" stuff out, which is simply not something LLMs do. I'm not trying to attack anyone here, but spreading misinformation is not ok.
You can play with words all you like, but that won't change the fact that LLMs fail at reasoning. See this Wired article, for example.