Bluecondor
Member
This article from Reuters claims that a company (Databricks) is releasing an open-source chatbot that will be comparable to what we see in ChatGPT, but can be trained on much smaller datasets and will require far less computing power:
https://www.reuters.com/technology/...atbot-cheaper-chatgpt-alternative-2023-03-24/
Databricks CEO Ali Ghodsi said the release was aimed at demonstrating a viable alternative to training a kind of AI model called a large language model with enormous resources and computing power.
"The future will be that everyone has their own model, and they can actually train it, and they can make it better," Ghodsi said. "And that way, they also don't have to give away their data to someone else."
"My belief is that in the end, you will make these models smaller, smaller and smaller, and they will be open-sourced," Ghodsi said. "Everyone will have them."
--------------------------------------
So, if I am following this: let's say you wanted to make chatbots based on the characters from the Harry Potter books. The AI model could be trained on the Harry Potter books and movies as the dataset, and chatbots playing Harry Potter characters could then respond based on that model.
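To make my mental model concrete (caveat: this is a toy illustration I put together, not how a real LLM works), here's the basic "train a text generator on your own corpus" idea as a tiny bigram model in Python. A real chatbot uses a neural network, and the corpus here is a made-up stand-in sentence rather than actual book text, but the train-then-generate loop is the same shape:

```python
# Toy illustration (NOT a real LLM): a bigram "language model" trained on a
# tiny stand-in corpus. It just learns which word tends to follow which,
# then generates text by sampling from those learned pairs.
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count, for each word in the corpus, the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    word = start_word
    out = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: this word never had a follower in training
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Stand-in corpus; in the real scenario this would be the licensed book text.
corpus = "the wizard raised his wand and the wand glowed and the wizard smiled"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The point being: the generator can only ever produce word sequences that appeared in its training data, which is (very roughly) why I'd expect a model trained only on canon material to stay mostly in-character.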
For those of you who know way more about this than me, am I correct in my Harry Potter example?
If so, this just makes sense. The next Harry Potter video game could (depending on computing requirements) let you have more natural conversations with Potter characters on a wider range of topics. At the same time, you wouldn't be able to get the characters to talk about non-canon topics, offensive topics, etc. (LOL, not that people won't try to get the characters to say everything under the sun and record it).
Anyway, this sounds really promising. Those of you with AI and/or game development experience, please set my expectations straight.