A quick question: since Vortex is responsible for loading and managing many of my mods, I can't launch the game normally; it needs Vortex to work properly. Is there a way to make the mod work when launching the game through Vortex, or is that impossible?
Please read what I wrote fully. Like I said, I can install it manually, but I still have to launch the game through Vortex, and from what I've read it's the launching with Vortex that's the problem. I cannot avoid using Vortex because of my modpack and the way it works, so I'm just asking whether this mod still works if it's installed correctly but launched through Vortex.
Would it be possible to make the GPT-4 API URL editable, so that we can put our own URL in there, such as http://0.0.0.0:5001/v1? That way we'd be able to connect our local models and other services that accept GPT-4-format calls. Or maybe you can tell me where it can be changed in the mod files? Thank you!
this will allow using uncensored models, too.
Maybe also expose context length setting.
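To illustrate why the request above is cheap to grant: every OpenAI-compatible backend (koboldcpp, textgenwebui, OpenRouter, etc.) exposes the same endpoint paths, so only the base URL needs to become configurable. A minimal sketch in Python using only the standard library; the model name and key here are placeholders, since local servers typically ignore both:

```python
import json
from urllib.request import Request

def build_chat_request(base_url, api_key, model, user_message):
    """Build an OpenAI-format chat completion request for any
    compatible backend; only the base URL differs between services."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # local servers usually ignore this
    }
    return Request(url, data=json.dumps(payload).encode(), headers=headers)

# Pointing the same code at a local koboldcpp instance:
req = build_chat_request("http://0.0.0.0:5001/v1", "not-needed",
                         "local-model", "Greetings, traveler!")
print(req.full_url)  # http://0.0.0.0:5001/v1/chat/completions
```

The same swap works in any language: the mod's C# code would only need to read the base URL from a setting instead of hard-coding api.openai.com.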
OpenRouter is a service that always has some free AI models available. Even its 'paid' models are so affordable that a large amount of generation is really not a concern.
There's a mod for Skyrim called Mantella that allows interactions with pretty much all Skyrim NPCs and even animals. Here is a copy/paste from its config file about alternatives to OpenAI GPT, in case it's helpful.
; alternative_openai_api_base
; If you are using openai's services, leave this alone, otherwise you can change this variable to another base_api that uses openai's api
; For example, if you have a local llm framework or online framework that allows you to use a different url to access openai api functions, you can enter the base_api url here
; Your alternative api_base must support openai's python streaming protocol.
; Examples:
; http://127.0.0.1:5000/v1 for textgenwebui using the default openai extension
; http://127.0.0.1:5001/v1 for koboldcpp (after version 1.46 of koboldcpp which supports the openai API)
; http://localhost:8080/v1 using the default endpoint for Local.ai
; https://openrouter.ai/api/v1 for openrouter
; http://localhost:5001/v1 for using koboldcpp locally or the url you obtain from the koboldcpp google colab notebook with /v1 added at the end.
; Ensure that you have the correct secret key set in GPT_SECRET_KEY.txt for the service you are using
; Note that for some services, like textgenwebui, you must enable the openai extension and have the model you want to use preloaded before running mantella
; Leave this value as none to use the normal openai chat gpt models.
alternative_openai_api_base =
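For reference, a setting like the one above is straightforward to consume. A sketch of how a mod might read it with Python's configparser, treating "none" or an empty value as "use the official endpoint" (the `[Settings]` section name is made up for illustration; Mantella's actual section name may differ):

```python
import configparser

DEFAULT_BASE = "https://api.openai.com/v1"  # official OpenAI endpoint

def resolve_api_base(ini_text):
    """Read alternative_openai_api_base from a Mantella-style config;
    fall back to the official endpoint when it is 'none' or empty."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    raw = cfg.get("Settings", "alternative_openai_api_base", fallback="").strip()
    if not raw or raw.lower() == "none":
        return DEFAULT_BASE
    return raw

sample = "[Settings]\nalternative_openai_api_base = http://localhost:5001/v1\n"
print(resolve_api_base(sample))  # http://localhost:5001/v1
```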
I looked through their project. They are using Python, which has far more libraries available than the .NET Framework. I'm sorry, but I can't achieve this: with my current C# approach, an HTTP request can only send a single message; it cannot hold a conversation.
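For what it's worth, the chat API itself is stateless, so a "conversation" is nothing more than resending the whole message history with every request; no special library is needed, and the same pattern works with plain HTTP requests in C# on .NET 4.7.2. A sketch in Python (the class and the sample dialogue are invented for illustration):

```python
class Conversation:
    """Minimal multi-turn chat state: the API keeps no memory, so each
    request must carry the full list of prior messages."""
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})
        return self.messages  # this full list goes in the request's "messages"

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

convo = Conversation("You are a Calradian villager.")
convo.add_user("What is your name?")
convo.add_assistant("I am Osric, a farmer.")      # reply from the first request
convo.add_user("What do you grow?")               # second request carries all prior turns
```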
When I start a conversation and ask questions, the villagers respond as the raw AI rather than as characters. For example, when asked what their name is, they answer: "Hello, I am the ChatGPT artificial assistant" and the like. The log says this: "ChatGPT didn't pass the test! Exception: Object reference not set to an instance of an object." Perhaps I'm doing something wrong.
This is exactly the kind of concept I was hoping for. Of course it's going to be somewhat limited, as ChatGPT doesn't learn any longer and can't recall past a certain point in the conversation, unless they've changed this more recently. Unfortunately, I'm not able to use the mod properly, since I had a subscription at one point and cancelled due to my needs and the AI's limitations; the paid version only fractionally improved its answers. I did use your API key and it threw no errors; the conversation option was there and I was able to type in a question, but it simply never responded. I assumed the limits had been reached or the like. I really love the concept and will be following the development. Honestly, if ChatGPT could be tied to a companion and somehow get past those memory-loss/recall limitations by storing backstory and archived conversations locally, I would definitely pay for that.
So we must pay to use the service that powers this mod?... I remember there being a mod that used ChatGPT exactly like this on the older version, and it was all free. If we have to pay, that kind of sucks. Not your fault, of course.
I googled around. That mod is called Inworld, and it is free. As for OpenAI, new users get 18 dollars of free credit on their account, and the API fee is really cheap; I only spend about 0.1 USD a week.
I'm not sure why you googled it; you already know it, because you were clearly inspired by my Inworld mod's code :) You even kept the "ModRelayer" folder naming from my mod, which I used because in my version it holds the relayer software. That's totally fine, by the way; the mod is open-sourced for this exact reason, and I'm glad to see that you added and released an OpenAI ChatGPT version of it. I just find it a little strange that you didn't mention it anywhere and instead said you found out about the mod by googling around :)
And it is unlikely that you will find a free API for OpenAI unless you create your own relayer service and cover the costs yourself (which I don't recommend; it can cost a lot). Perhaps you can consider local models, or service models that cost players nothing. You can take a look at the unofficial library for character.ai [here], and you can also have a look at the guide I wrote a while ago about local and service models for LLM modding [here].
@ThePro51773: Not trying to undercut his mod, but just for your information, the Inworld mod still works on 1.2.X+.
Yeah, I actually have your Inworld mod installed right now. It works perfectly and it's free. I first tried Inworld on, I think, version 1.1.5 of Bannerlord. Thank you for your work, Bloc.
Sorry for the misunderstanding; I meant that I googled around before making the mod. It is my first time making a mod, so basically I was referring to other developed mods. I will add the reference later. Thanks a lot for your information!
I just checked character.ai yesterday, and it only supports .NET 7.0 and above. This is always the trouble for me, because Mount & Blade only supports .NET Framework 4.7.2. Do you know how to solve that?
And, as far as I know, Character.ai does not have any official API, so it cannot have a requirement like that. If you mean the unofficial wrapper that requires .NET 7 or above, I think you can easily circumvent that by getting its source and compiling it yourself against a downgraded .NET target, since almost all of these wrappers use some form of Puppeteer and their core is quite basic. For the long term, though, I certainly wouldn't suggest that approach, since even minor changes to their web UI can make your mod unusable. Perhaps you can have a look at local models instead, but then each model would need to be downloaded to the player's computer (quite a large download, since these models are big), and their machine needs to be beefy enough to run such demanding LLMs. Technically, that would be the best-case scenario for some players; even on my mod there were quite a few requests for offline models. But I personally don't see it as worth it.
If you have money to spend, you can spin up a good machine in the cloud, run a local Llama 2 or something similar, and set up a server that takes requests from mod users, runs them through the cloud-hosted model, and answers back. But again, this will cost you as well, and the cost can be significant depending on player usage.
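The relay idea above boils down to a server that accepts OpenAI-format requests from the mod and wraps whatever the hosted model says back into OpenAI response format, so the mod itself needs no changes. A sketch of that core, with the model stubbed out as a plain function (the field names follow the OpenAI chat completion response shape; everything else here is hypothetical):

```python
import time
import uuid

def relay_chat(request_body, run_model):
    """Hypothetical relay core: take an OpenAI-format chat request,
    run it through a server-hosted model, and wrap the answer in
    OpenAI-style response JSON."""
    messages = request_body["messages"]
    answer = run_model(messages)  # e.g. a cloud-hosted Llama 2 behind this
    return {
        "id": "chatcmpl-" + uuid.uuid4().hex,
        "object": "chat.completion",
        "created": int(time.time()),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": answer},
            "finish_reason": "stop",
        }],
    }

# Stubbed model for illustration; a real server would call the LLM here.
resp = relay_chat({"messages": [{"role": "user", "content": "Hello"}]},
                  lambda msgs: "Well met!")
print(resp["choices"][0]["message"]["content"])  # Well met!
```

The cost warning stands, though: every player message becomes an inference call on your machine, so the bill scales directly with player usage.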
So is this mod free to use, or do I need to pay some sort of dough towards the API or whatever it is you guys are speaking of before I can use it fully?
Thanks for the mod, can't wait to try it!