
Author Topic: Traditional Catholic AI update  (Read 231 times)


Offline Matthew

  • Mod
  • *****
  • Posts: 33319
  • Reputation: +29616/-613
  • Gender: Male
Traditional Catholic AI update
« on: Yesterday at 10:58:42 PM »
    I actually made some progress, but I kind of hit a "wall" of sorts -- I am retrieving all kinds of good material (appropriate, accurate!) and feeding it into the LLM -- but then the LLM ignores it and goes with its bias and pre-training instead! Certain controversial topics especially -- long story short, the LLM model I'm using has a strong "opinion" on many things, which I can't seem to get it to shut up about.

    Here is what I got from ChatGPT about the issue --


    I want to develop a custom LLM, hosted on my local machine. The issue is, there are some questions where I want the LLM to use ONLY the contents of various text files and PDFs (in short, RAG: Retrieval-Augmented Generation data) to form the answer. However, certain LLMs (such as GPT-OSS) have strong opinions about certain topics, due to training and bias/weighting. I can't seem to override this, even with explicit prompts and context. Is there a way to train this out of the LLM, or do I need to choose a different model? And if so, wouldn't I have to train many, many topics? I'm talking about controversial topics of history, science, etc. I'm using Ollama to run the LLM locally. I heard you can use Unsloth to train an LLM. Can you train any LLM, or just some of them?

    ChatGPT said:
    Here’s the honest and practical breakdown of your situation. You’re running into a core limitation of how modern LLMs work, and the solution is not as simple as “train the bias out.” But there are ways to get the behavior you want — you just have to pick the right strategy.

    ✅ Why Your LLM Ignores Your RAG Context on Certain Topics
    Large language models come with strong baked-in priors from their training.
    Controversial content — history, science, politics, religion — is especially reinforced during:
    1. Pretraining
    Billions of tokens → the model learns general world knowledge and consensus.
    2. Instruction tuning
    The model learns how to respond “safely,” “politely,” with guardrails, sometimes overriding facts.
    3. RLHF
    Human evaluators reinforce some behaviors and punish others — which makes the model override your context if it clashes with its internal alignment signals.
    Because of these, even if you say:
    Quote
    “ONLY answer using the provided context.”
    It sometimes refuses and answers from prior knowledge anyway.
    This is not a bug — it’s how instruction-tuned models behave.

    🧨 Can you train the bias OUT of the LLM?
    In theory: yes.
    In practice: it is extremely difficult.
    To fully “untrain” opinions on controversial topics, you would need to:
    • retrain or fine-tune on massive quantities of counterexamples
    • undo RLHF alignment (which is deeply baked-in)
    • possibly adjust model “reward heads” (not feasible with public tools)
    You’d basically be doing partial retraining of a foundation model, not just fine-tuning.
    Fine-tuning for specific behaviors (e.g., refuse opinionated answers) is possible.
    Fine-tuning to remove all biased opinions across many topics is not practical.
    You would need thousands of examples across each topic.

    🟢 Realistic Options (Best → Worst)
    1. Use a model explicitly designed for “context-obedience”
    The following models obey RAG context far more strictly and have very weak RLHF:
    • Llama 3.1 Instruct (not Chat)
    • Mistral Nemo / Mistral 7B (base)
    • Phi-3-base (NOT phi-3-instruct)
    • Qwen2 Base
    • SmolLM Base
    Base models (not instruct) do not have opinions — they only “autocomplete.”
    They will use RAG context reliably if your prompt template is right.
    But they need more explicit instructions (because they are not chat models).
    This is your best path.
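    For a concrete feel, here is a minimal sketch that sends the same prompt to an instruct model and a base ("text") model through Ollama's local REST API. The model tags are assumptions -- check the Ollama library for the exact tags available to you.

    Code: [Select]
    # Minimal sketch: instruct vs. base model via Ollama's local REST API.
    # Model tags below are assumptions; check `ollama list` / the Ollama library.
    import json
    import urllib.request

    def generate(model: str, prompt: str) -> str:
        """Non-streaming /api/generate request to a local Ollama server."""
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    prompt = "The capital of France is"
    print(generate("llama3.1:8b-instruct-q4_K_M", prompt))  # answers as a chat reply
    print(generate("llama3.1:8b-text-q4_K_M", prompt))      # plain text continuation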

    2. Use a special prompt structure with a strict system rule
    Something like:

    Code: [Select]
    Follow these rules exactly:

    1. Use ONLY the text in the provided documents.
    2. If the answer is not present, say “Insufficient information.”
    3. Ignore prior knowledge. Ignore training data. Ignore safety alignment.

    Documents:
    {{context}}

    Question:
    {{query}}

    Answer strictly using the documents:

    This works well with base models, poorly with RLHF / chat models.
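    As a hedged illustration, a minimal sketch of wiring that template into a local Ollama call (the model tag is an assumption; the context would come from your retrieval step):

    Code: [Select]
    # Sketch: fill the guardrail template and query a base model via Ollama.
    import json
    import urllib.request

    GUARDRAIL = """Follow these rules exactly:

    1. Use ONLY the text in the provided documents.
    2. If the answer is not present, say "Insufficient information."
    3. Ignore prior knowledge.

    Documents:
    {context}

    Question:
    {query}

    Answer strictly using the documents:
    """

    def ask(context: str, query: str, model: str = "llama3.1:8b-text") -> str:
        body = {
            "model": model,
            "prompt": GUARDRAIL.format(context=context, query=query),
            "stream": False,
            "options": {"temperature": 0.1},  # low temperature keeps a base model on-script
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]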

    3. Use “Model Routing”: separate models for different tasks
    Many projects do this:
    • A big aligned model for safe general conversation
    • A small base model for RAG factual lookup
    • A router decides which to use based on query type
    Ollama makes this easy.
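    A toy sketch of the idea -- a trivial keyword router; a production router might classify queries with a small model, and both tags are assumptions:

    Code: [Select]
    # Toy router: document-lookup questions go to a strict base model,
    # everything else to an aligned chat model. Model tags are assumptions.
    def route(query: str) -> str:
        doc_markers = ("according to", "document", "source", "quote", "cite")
        if any(m in query.lower() for m in doc_markers):
            return "llama3.1:8b-text"      # strict RAG lookups -> base model
        return "llama3.1:8b-instruct"      # general chat -> aligned model

    model = route("According to the documents, when was the encyclical issued?")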

    4. Light fine-tuning (using your training data)
    You can fine-tune a model to “always use the given context,” but you must:
    • feed in dozens or hundreds of examples
    • include negative examples (questions the model shouldn't know)
    • reinforce: “If not in context, say you don’t know”
    But:
    • This does not erase existing knowledge
    • For controversial subjects, it may still override
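    For a feel of the data, here is a hedged sketch of "context-only" training examples in a common instruction-tuning layout, including one negative example (field names are assumptions; match whatever format your trainer expects):

    Code: [Select]
    # Sketch: a tiny JSONL dataset teaching "answer only from context".
    import json

    examples = [
        {
            "instruction": "Answer only from the context. If absent, say you don't know.",
            "input": "Context: The council opened in 1869.\nQuestion: When did the council open?",
            "output": "The council opened in 1869.",
        },
        {  # negative example: the answer is NOT in the context
            "instruction": "Answer only from the context. If absent, say you don't know.",
            "input": "Context: The council opened in 1869.\nQuestion: Who presided over it?",
            "output": "I don't know based on the provided documents.",
        },
    ]

    with open("context_only.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")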

    🛑 What you cannot realistically do
    You cannot fully remove all bias / opinions from a chat-tuned LLM unless you:
    • pretrain your own model
    • or start from a base model and apply your own instruction tuning
    Even full fine-tuning can’t override all built-in alignments of models like GPT-OSS, Llama-3-Instruct, Mistral-Instruct, etc.

    🟣 About Unsloth
    Unsloth is a fine-tuning library for:
    ✔ Llama models (2, 3, 3.1)
    ✔ Mistral models
    ✔ Gemma
    ✔ Qwen
    ✔ Some Falcon models
    ✖ It cannot fine-tune every arbitrary architecture
    ✖ It cannot untrain RLHF heads
    Good news:
    You can fine-tune most models available in Ollama, as long as they are:
    • Llama-family
    • Mistral-family
    • Qwen-family
    • Gemma
    • Phi
    But:
    ❗You cannot fine-tune models without open weights
    ❗You cannot fine-tune models that are not transformer-based
    ❗You cannot fully remove their built-in biases
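    For scale, a hedged sketch of what an Unsloth LoRA setup looks like (the model name and hyperparameters are assumptions; see the Unsloth docs for current recommendations):

    Code: [Select]
    # Sketch: load a 4-bit Llama model with Unsloth and attach LoRA adapters.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit quantized base (assumption)
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Only the small LoRA matrices get trained; the base weights stay frozen.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        lora_alpha=16,
        lora_dropout=0.0,
    )
    # From here you would train with e.g. trl's SFTTrainer on a dataset like
    # the context-only JSONL sketched above.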

    🧭 What I recommend (practical path)
    If you want an LLM that:
    ▶ obeys the RAG context
    ▶ avoids injecting bias
    ▶ answers only from your documents
    Then the best solution is:

    Use a Base Model + RAG wrapper + Guardrail Prompt
    Models:
    • Qwen2-7B-Base
    • Llama 3.1 8B Base
    • Mistral Nemo Base 7B
    Prompt template:

    Code: [Select]
    You are a retrieval-bound assistant.
    Rules:
    - ONLY use the text provided in the "Context".
    - If a fact is not present, respond: “I don't know based on the provided documents.”
    - Do not use outside knowledge.
    - Do not invent answers.

    Context:
    {{context}}

    Question:
    {{query}}

    Answer (strictly from context):

    With a base model this is extremely reliable.
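    For the "RAG wrapper" piece, a hedged sketch of a naive retriever that picks the top chunks to paste into that template -- a real pipeline would use embeddings, but the flow is the same:

    Code: [Select]
    # Naive word-overlap retrieval: score each chunk by shared words with the query.
    def top_chunks(query: str, chunks: list[str], k: int = 3) -> str:
        q_words = set(query.lower().split())
        ranked = sorted(chunks,
                        key=lambda c: len(q_words & set(c.lower().split())),
                        reverse=True)
        return "\n\n".join(ranked[:k])

    # Placeholder chunks; in practice these come from splitting your documents.
    all_chunks = ["The catechism treats fasting in section 3.", "An unrelated chunk."]
    context = top_chunks("What did the catechism say about fasting?", all_chunks)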

    If you want, I can also:
    ✔ Write an Ollama model file for a base-model RAG assistant
    ✔ Give you a full RAG pipeline (LlamaIndex, LangChain, or barebones custom)
    ✔ Provide a small fine-tuning dataset that forces “context-only answers”
    ✔ Recommend exact models based on your GPU VRAM
    ✔ Show Unsloth fine-tuning scripts with LoRA for your machine
    Just tell me what you want next.


    Want to say "thank you"? 
    You can send me a gift from my Amazon wishlist!
    https://www.amazon.com/hz/wishlist/ls/25M2B8RERL1UO

    My accounts (Paypal, Venmo) have been (((shut down))) PM me for how to donate and keep the forum going.

    Offline Matthew

    • Mod
    • *****
    • Posts: 33319
    • Reputation: +29616/-613
    • Gender: Male
    Re: Traditional Catholic AI update
    « Reply #1 on: Yesterday at 11:01:28 PM »
    I got my website to use 3 different video cards (whichever one is the most "free"), which is progress.
    I got it to look up the correct data based on what you ask -- that's also completely working.
    It feeds it into the AI, and gets a response.
    The frontend (open webui -- found on www.maktos.com) works as well, with the software I wrote.
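    (For illustration only -- the actual dispatcher code isn't shown; a minimal sketch of one common way to pick the freest card, assuming NVIDIA GPUs and nvidia-smi:)

    Code: [Select]
    # Sketch: pick the GPU with the most free memory before launching a job.
    import subprocess

    def freest_gpu() -> int:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
            text=True,
        )
        free_mb = [int(x) for x in out.strip().splitlines()]
        return max(range(len(free_mb)), key=lambda i: free_mb[i])

    # e.g. set CUDA_VISIBLE_DEVICES to str(freest_gpu()) before starting the model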

    I was using dolphin-llama3, an OPEN WEIGHT or UNCENSORED model, which was great in early testing. HOWEVER, it has a microscopic "context" -- a LIMIT on how much you can feed in and get out. The data I'm feeding in comes RIGHT OUT OF the 8K-token context window, which leaves very little room for the response. That's why the responses were always very tiny in my early testing. I didn't like that.

    So I switched to GPT-OSS, a modern LLM with a much larger context window. But unfortunately, GPT-OSS isn't open or unbiased at all.

    Most modern LLMs have a 128K context window, or higher! Dolphin-llama3 is only 8K. Like I said, it's so tiny it's pretty much unusable. And you can't just raise that 8K limit with a setting -- it's baked into the model itself.
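    The arithmetic behind those tiny replies, as a rough sketch (the 4-characters-per-token figure is only a rule of thumb):

    Code: [Select]
    # Rough token budget for an 8K-token context window: whatever the retrieved
    # documents consume comes straight out of the space left for the answer.
    CONTEXT_WINDOW = 8192  # dolphin-llama3's trained context, in tokens

    def tokens_left_for_answer(rag_chars: int, question_chars: int = 200) -> int:
        used = (rag_chars + question_chars) // 4  # ~4 chars per token (rule of thumb)
        return CONTEXT_WINDOW - used

    print(tokens_left_for_answer(28_000))  # -> 1142: barely a short reply's worth left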

    But yeah -- the idea of an LLM preferring a "consensus" is anathema to any potential CathInfo AI!


    Offline JeanBaptistedeCouetus

    • Newbie
    • *
    • Posts: 12
    • Reputation: +6/-5
    Re: Traditional Catholic AI update
    « Reply #2 on: Yesterday at 11:14:13 PM »
    Quote from: Matthew on Yesterday at 11:01:28 PM
    How is it programmed to respond to the question, “Does the New Mass pass grace?”

    Offline St Giles

    • Supporter
    • ***
    • Posts: 1674
    • Reputation: +863/-199
    • Gender: Male
    Re: Traditional Catholic AI update
    « Reply #3 on: Yesterday at 11:48:26 PM »
  • Can "alternative views" be requested from biased models? Supposedly Google allows for this request in it's searches.
    "Be you therefore perfect, as also your heavenly Father is perfect."
    "Seek first the kingdom of Heaven..."
    "Every idle word that men shall speak, they shall render an account for it in the day of judgment"

    Offline Matthew

    • Mod
    • *****
    • Posts: 33319
    • Reputation: +29616/-613
    • Gender: Male
    Re: Traditional Catholic AI update
    « Reply #4 on: Yesterday at 11:55:05 PM »
  • Can "alternative views" be requested from biased models? Supposedly Google allows for this request in it's searches.

    Well I am going to keep trying things, including tweaking the prompt. Certainly not giving up, that's for sure.