
When I asked "What does the Church teach about the Novus Ordo Mass?" the AI
returned raw JSON instead of an actual answer: you can see {"follow_ups": [...]} printed directly in the chat window. The follow-up
suggestion buttons at the bottom did render correctly, so that part is working, but the JSON is leaking into the visible response when it
shouldn't be.
The bigger issue, though, is that there was no actual answer to the question: just the JSON block and nothing else. The AI skipped the
content entirely and only returned the follow-up suggestions.
The fix is probably a tweak to the system prompt: tell the model to keep the JSON separate from the main response, and to always answer the
question first before generating follow-ups.
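Even with a prompt fix, models will occasionally leak the JSON again, so a defensive parse on the backend might be worth adding too. Here is a minimal sketch in Python (the function name is made up, and the exact {"follow_ups": [...]} shape is assumed from what appeared in the chat window):

```python
import json
import re


def split_answer_and_follow_ups(raw: str) -> tuple[str, list[str]]:
    """Split a raw model response into visible answer text and follow-ups.

    Defensive parse: if the model appended (or emitted only) a trailing
    JSON object like {"follow_ups": [...]}, strip it from the visible
    text so it never reaches the chat window.
    """
    answer, follow_ups = raw, []
    # Look for a {"follow_ups": ...} object at the end of the response.
    match = re.search(r'\{\s*"follow_ups"\s*:.*\}\s*$', raw, re.DOTALL)
    if match:
        try:
            follow_ups = json.loads(match.group(0)).get("follow_ups", [])
            answer = raw[: match.start()].rstrip()
        except json.JSONDecodeError:
            pass  # malformed JSON: leave the text untouched
    return answer, follow_ups
```

This also makes the second bug easy to detect: if the returned answer is an empty string while follow_ups is populated, the model skipped the content entirely, and the backend could retry or log it.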