If you have decent hardware, try using a local LLM.
You can try Ollama or LM Studio.
Then you can pick the right LLM for what you need and for your hardware specs.
Some LLMs are better for general questions, some for coding, some for math, reasoning, etc.
And because you are running them on your own hardware, there are no privacy concerns.
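For example, once a model is pulled, Ollama exposes a local API on port 11434 that you can hit from a script. Here's a minimal sketch, assuming the server is running with its defaults and you've already done something like `ollama pull llama3` (the model name is just an example):

```python
import json
import urllib.request

# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is listening on its default port (11434) and that
# a model named "llama3" has already been pulled.
def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Explain VRAM in one sentence."))
```

Nothing leaves your machine - the request goes to localhost, which is the whole point of running locally.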
The most important thing to consider is that the model should fit in your VRAM, otherwise performance drops drastically.
So for example, if your GPU has 16 GB of VRAM and the model needs 17 GB, part of it spills over to system RAM and everything gets much slower.
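As a rough rule of thumb you can estimate the memory a model needs from its parameter count and quantization. This is just a back-of-the-envelope sketch; the 20% overhead factor for context and KV cache is an assumption, not an exact figure:

```python
# Rough estimate of how much memory a model needs at a given quantization.
# The 20% overhead for context / KV cache is an assumption, not an exact figure.
def estimated_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # e.g. 8B params at 4-bit ~= 4 GB
    return weights_gb * overhead

vram_gb = 16
for params, bits in [(8, 4), (14, 4), (32, 4), (70, 4)]:
    need = estimated_gb(params, bits)
    fits = "fits" if need <= vram_gb else "does NOT fit"
    print(f"{params}B model at {bits}-bit ~= {need:.1f} GB -> {fits} in {vram_gb} GB VRAM")
```

So on a 16 GB card you'd stick to something around the 8B-14B range at 4-bit, and skip the 70B class models.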
One more thing: all AI models nowadays have a bad tendency to lie to you or make stuff up - they will rarely flat out say "I don't know", they'll just generate something plausible-sounding instead. Part of the problem is that a lot of their training data (and the web results they search) comes from open public forums, and as we know, people there often have no idea what they're talking about.