If you mean solutions as in answers to questions, I would be against it. LLMs have a habit of spewing wrong information that looks correct, and it happens way more when you tell them to write code: the result can end up unoptimized, misleading, or straight up wrong. I wouldn't want an AI to answer my question and then feel like I'm forced to triple check its answers to make sure it isn't hallucinating.
There's also the point of "if people wanted AI answers, they would be asking chatbots, not posting on a community of people."