About this mod
This is a full English translation and prompt restructuring of the CoboldAIValley mod found at https://www.nexusmods.com/stardewvalley/mods/33616. I translated everything, including the code and tooltips, updated it to work with Ridgeside Village and Stardew Valley: Expanded (neither is required), and fully restructured the prompts to assist the AI.
More (probably cleaner) instructions at https://www.nexusmods.com/stardewvalley/mods/33616 - the original mod; English instructions are in the bottom half of the page.
MORE N00B-FRIENDLY WITH LM STUDIO! Additional details are under 'IMPORTANT' and 'SUGGESTED SPECS & SETTINGS', marked by the first * highlights.
*For running models LOCALLY!
MOBILE VERSION TESTING NEEDED - This might be compatible with the mobile version if the model is hosted on a separate PC and connected to via 'webtunnel' from your phone to Koboldcpp. If anyone could test this for me and post the results, I'd really appreciate it.
If you want to try:
1. Open Koboldcpp and set it up normally, but before launching, go to the 'Network' tab and tick the 'webtunnel' box. Then launch as normal.
2. Koboldcpp should generate a file and a couple of addresses inside your PowerShell window. Leave the window open and the file intact.
3. Copy the last address in the PowerShell window (it looks like 'xxxxxxx.trycloudflare.com') and paste it into the 'address' line of the mod's in-game settings window on your phone.
Translated and re-prompted by me, for me... posted because I'm just that nice. If you don't like the mod, I don't care or want to hear about it. You'll likely just get told to go **** yourself in a way you've never been told to go **** yourself before.
I did it because I feel like the available mods (ValleyTalk) get bloated with calls that include way too much context for what this game is. Assaulting a model with 1200+ tokens of context about locations etc. before you even add any chat history doesn't always result in the best responses, and it kills generation speeds locally for most people. I don't care if Abigail knows what furniture is in her room and where it's located nearly 100% of the time I'm 'talking' to her. With this mod at the suggested settings, chats should process in one batch and generate a high-quality response in about 5 seconds on a mid-range PC. The original creator did extremely well at creating a lightweight, efficient, and highly customizable framework.
**A better choice for AMD users. With Vulkan and ROCm (LMS) support.**
*Your privacy in mind!*
FINAL VERSION UP! Instructions cleaned up.
UPDATE 1.4.1 - Final Version (unless the creator adds something): All character descriptions are completely done; I tweaked quite a few details that were wrong or off. Prompts to the AI are completely restructured to include []s and various other details that help the AI process context and keep data organized. The AI's time-tracking prompt is fully implemented, and the base system prompts are finalized. Don't expect to see any more updates on this from my end barring any unexpected changes to the original mod. (Changed the version number to stop the update warning.)
*The instructions below are fairly in depth. If you follow them and learn, it's a crash course for running just about any game or mod with local LLMs in the future. Stardew Valley is a really easy game to get set up with AI, it's a great starting point if you're interested in some of the more advanced AI mods that exist for games like Skyrim, and would like some 'practice'. If you'd like more advanced tips or advice not related to SDV, feel free to post and ask.
IMPORTANT - SUGGESTED READING (It's organized, if a bit messy. Sorry... I write code and prompts, not introductions.) Suggested AI models are listed below.
*If you want an easy way to download models, install LMStudio. It has a built-in search function for HuggingFace that lets you browse, download, and test models very easily. Just load them into Koboldcpp from there; LMStudio's backend address SHOULD be compatible as well if you'd rather use it. The address is in the 'Developer' tab; you may need to enable 'developer/power user functions' (bottom of the main window, left side) to see the tab.
*Your backend (Koboldcpp/LMStudio) must remain open and running in the background. Don't close them or you'll eject the model.
*!*!* If you edit .json files - Keep your edits between the []s I've inserted. Example: "[She lives in Pelican Town with her mother. TEXT YOU ADDED]" If you edit while the game is open, you must restart for new descriptions to take effect. *!*!*
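As an illustration of what an edit like that looks like in place (the field names below are illustrative; match whatever structure your copy of characters.json actually uses, and only change the text between the []s):

```json
{
  "Penny": {
    "description": "[She lives in a trailer in Pelican Town with her mother, Pam. She secretly dreams of traveling beyond the valley someday.]"
  }
}
```

The second sentence is the added text; everything before it was already between the brackets.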
FEATURES:
*Chats will affect your relationship with the character you speak to. It can be adjusted or turned off in the settings, and the 'keyword' list can be found in config.json (open with notepad) in the mod folder. You can add or remove words and phrases that will trigger the change. Follow the format that exists and everything will work fine. Positive Keywords will increase the relationship, while Negative Keywords will decrease it.
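A hypothetical keyword section in config.json might look like the sketch below (the exact key names in your copy may differ; follow whatever format is already in the file when adding or removing entries):

```json
{
  "PositiveKeywords": [ "thank you", "love", "that's beautiful" ],
  "NegativeKeywords": [ "hate", "go away", "leave me alone" ]
}
```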
-Please keep in mind AI isn't perfect and may get things wrong if you 'dig too deep'. There's a 99.99% chance that you DON'T have the specs to run a model that knows EVERYTHING about SDV locally. So don't bother trying. The primary function of this is to add 'surface level' interactions and roleplay that further the story between the player and NPC. Don't expect Leah to know everything about Abigail, for example, even though they are friends.
*AI tracks passage of time/season and weather - They'll tell you the season/date, and even reference it being their birthdays accurately now.
*There isn't really any 'long term' memory (it doesn't exist). If you want to add some basic memories, do so via characters.json in the mod folder.
-Example: "Haley danced with player on Spring 1, Year 1." Include the date and the AI will have a general awareness of how long it's been since the 'memory'.
-Chat history resets at the start of each day. This prevents references to old events when interacting with characters you haven't spoken to in a while.
-No mod or model in existence has 'long term' memory in a real sense. At best you'll get AI written summaries written to a .json... and AI can get summaries VERY wrong. If you want real 'long term' memories, you have to write them yourself.
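Putting the memory tip above together, a hand-written memory appended to a character's entry could look like this (structure illustrative; keep whatever fields your characters.json already has and add the memory inside the existing []s):

```json
{
  "Haley": {
    "description": "[A fashionable young woman who lives with her sister Emily. Haley danced with the player on Spring 1, Year 1.]"
  }
}
```

Because the date is included, the AI can gauge roughly how long ago the 'memory' happened.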
*You can also edit the "base prompt" field in config.json (open with Notepad) to tweak the system prompts and the AI's output. Example: get longer responses by switching it to 'short paragraph around 1-5 sentences in length' instead of '1-3'.
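For example, that length tweak in the "base prompt" field might look like this (the surrounding prompt text is illustrative; only the sentence-length phrase is the actual change):

```json
{
  "base prompt": "[You are roleplaying as a Stardew Valley villager. Respond in character with a short paragraph around 1-5 sentences in length.]"
}
```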
*You can edit events.json in the same way to add your own event descriptions for festivals etc.
*Should work with most local backends or web-tunnels if you change the address or just use Koboldcpp's.
*Add details about your farmer for the AI. (See next * below, use PolySweet method)
*Will support any mod that adds characters and will pick up their vanilla dialogue in chat history. If using something like 'dateable Jodi', you will need to edit her description in characters.json. Example: for the 'Jodi' mod, you would just edit her and Kent's descriptions to be ex-wife/ex-husband.
-Support should include mods like 'Valley Girls' and 'PolySweet'. For mods like PolySweet: in config.json, at the end of the 'base prompt' field, add "Player Details: The player is a polygynist. Dating: __________. Married To: __________. Children: __________." (Do not copy the ""; paste it INSIDE the "[ ]", erase the ____, and fill them in.) You can update the information here as events unfold, and all NPCs will be aware of the details. Include extra details like "Caroline (secret affair)" if you don't want all NPCs to know about something openly; (secret affair) tells the AI that only Caroline knows.
-Doing this, I've got 4 wives and 2 girlfriends that all know about each other and will reference doing things together with the whole 'family'. They'll even get excited to see your 'harem' grow. And I'm having an affair with Caroline that she actually references correctly. That divorce is coming, Pierre!
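Assembled, the end of a 'base prompt' field with PolySweet-style details filled in might read like the sketch below (all names are placeholders; the leading "..." stands for whatever prompt text is already in your file):

```json
{
  "base prompt": "[... Player Details: The player is a polygynist. Dating: Leah, Penny. Married To: Haley, Emily, Abigail, Maru. Children: none. Caroline (secret affair).]"
}
```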
SUGGESTED SPECS & SETTINGS: READ ME!
8GB VRAM and 16GB RAM minimum suggested. (12B @ 2048 Context)
*If you have less than listed, you can still try with CPU (BLAS) processing... it's SLOW though. Depending on how much of the AI model fits on your card, it might still perform pretty well. Give it a try. Koboldcpp/LMStudio will handle most of this for you automatically. Models load in 'layers', and you want to put as many 'layers' as possible on the GPU. After selecting your AI model, set your context to 2048, then tweak your 'layers' slider if needed (Koboldcpp does this automatically as you adjust the 'context' slider) until it loads. Have it generate 4 paragraphs of dialogue about 5 times; that will just about fill a 2048 context window and give you an idea of max-load response times.
-If the model fails to load on all layers, or generation is slow, try reloading it and REDUCING 'layers' to put some of the load onto your CPU.
-If you are playing SDV on the same card your model is loaded into, it will slow the generation speed of the model once launched. Shouldn't be much, but still.
-If you have a 2nd GPU: Windows Search > Graphics Settings > find or add StardewModdingAPI.exe (the SMAPI launcher) and set it to launch on a separate video card under 'Options'. The more free space on the AI model's card, the better. You can do this for other apps too to free up extra VRAM if needed; Windows sucks at choosing GPUs.
*Koboldcpp - Under the 'Quicklaunch' tab - 'GPU ID': Select your GPU. 'Context Size' slider = 2048. 'Load' button to select your model.
- 'Hardware' tab - I suggest ticking 'mlock' before launching your model. Then click 'Launch'.
-Test generation speeds in the WebUI that launches if you want to. Koboldcpp's WebUI (the browser that launches in Firefox etc.) can be closed as long as you leave the powershell (black window full of text) open.
- Keep in mind that context will affect generation speed slightly as it builds, and speed will tank if too much gets pushed into your RAM/CPU for inference.
*LMStudio - At the bottom left of the window, enable 'Developer' mode.
-Go to the 'Chat' tab, at the top click 'Load Model' and select your AI model. The settings window will pop up.
-Set context to 2048 and tick 'mlock' (if available), then adjust 'layers' until your model successfully loads. Test text generation in 'Chat'. Get your connection address under 'Chat' in the 'Developer' tab (http://127.0.0.1:1234 or similar, in the mid-right area of the window) and input it into the mod's in-game settings. LMStudio must remain open to act as a 'server' backend.
*(Should be auto-set by Kobold/LMS) Use 'Vulkan' processing if on AMD and CuBLAS for NVIDIA GPUs. The setting is located under 'Quick Launch' (Kobold) or 'Settings/Runtime' (LMStudio, lower right of the window, dropdown selection) but should be set automatically for you. OpenBLAS processing is done by your CPU and is the slowest option if your GPU won't handle it. LMStudio also has the option for 'ROCm' if you're on AMD and want to try it in place of 'Vulkan'.
Breakdown of Mod Settings (in game) and effects:
Temperature: .4 (Default) This setting affects how 'creative' the AI will be. For more descriptive or random responses, increase it. I don't suggest going above .8.
Repetition Penalty: 1.0 (Default) You can increase this if you notice the AI repeating the same words or phrases too much. 1.0-1.2 should be plenty; too much will limit responses and cause 'quirks'.
The other settings, like 'Context', control how many chats are kept in memory. The main context slider ('20') sets the TOTAL number of chats stored. The 2nd context slider ('5', max 10) sets how many of those stored chats with an INDIVIDUAL CHARACTER are loaded into a new chat with that character. (Wipes at the start of each new day.)
SUGGESTED LLM AI MODELS
Updated: 5-27-25
ALWAYS DOWNLOAD: Q4_KM if possible, Q4_KS 2nd, and IQ4(S/XS) as a last resort. If you have a beefy system you could go higher into Q8_0/F16 territory, but at that point you could probably just get a 'heavier' (15B-24B) model at the Q4 quants. The higher the 'quant', the more 'coherent' the AI's output will be generally.
(Recommended): Archaeo 12B V2 (tested) by Mradermacher - https://huggingface.co/mradermacher/Archaeo-12B-V2-GGUF (The best choice if your PC will run it. The model knows what the Stardew Valley characters, events, mods etc. are to some extent, and with the added context from the mod it performs VERY well. It also handles the new passage-of-time prompt very effectively.)
OR
ConvAI 9B (tested) by Mradermacher - https://huggingface.co/mradermacher/ConvAI-9b-GGUF (A good model for conversation, but it relies a lot more on the context provided by characters.json. Not even close when compared with Archaeo.)
Both are uncensored and support NSFW interactions if provoked.
Both (among others) can be obtained on Huggingface. If you have trouble, use LMStudio, it has a built in function to search for and download models for you.
ADVANCED TIPS - CAN IGNORE (Tips if Changing Models)
Always use 'conversational' 'roleplay' models for mods or tasks like this. You can see the tag for a model on its 'card' (homepage) around the top of the page.
*I see a lot of other mods suggesting LLMs like Claude/GPT and 'heavier' models. I can see MAYBE a 22B if you can support it, but there is no legitimate reason to use heavier models if you choose the correct model for the task. Anything newer trained on the formats used by 'Mistral' (check the tags on the base model's page, link mid-right) and tested (ask it) for basic SDV knowledge should do just fine with the mod's added context and for what this game supports... which is simple roleplay chat.
Don't sit there for 30+ seconds waiting for a response that isn't worth it (a heavier model knowing Abigail's hair is dyed blue isn't worth the wait when that detail will likely never come up in chat) if your system can't handle it. If you choose the right 12B model, it will be more than capable of handling this; a lot of newer 9Bs can do it too if you look. If a 12B can't handle the data SDV throws at it, it isn't prompted correctly, has the wrong format, or is deprecated. Heavier models with 'general' knowledge will usually be outperformed by smaller models with specialized training in a specific task.
If trying other models... aim to keep around 20%-25% of your GPU free for inference, which equates to speed. A 12B model should eat up about 7 GB of VRAM at Q4_KM (use Q4_KS minimum, or IQ4 quants if desperate). 2k of context should equate to a little under a GB of RAM, and as context builds (from 500 to 2000, for example) it will affect generation speed. You can help this by lowering context (don't go under 1024) or choosing a smaller model (9B instead of 12B) to process your chats. As an example, I aim to use around 6-6.2GB (most 9B-ish models with a 4096 context window, or a 12B @ 2048) of my 8GB discrete GPU. More is fine, just slower.
Final Tip - Load the model you want to use in and ask it about Stardew Valley and/or the mods. It'll probably get a lot of stuff off, but if it's listing the characters etc. somewhat accurately, it should work well with the additional context the mod provides. Always remember that the models you use, especially smaller ones, will usually be 'specialized' for 'on task' interactions. The more it knows about the 'task' (Role-playing in Stardew Valley) the more effective it will be at performing that task. Bigger is NOT always better.
***Using 'uncensored' models will help with problems like the AI 'being too nice' or 'overly positive'. An example would be a censored model not accurately portraying Kent's PTSD from so many years of war. GPT is censored garbage that steals your data and records your interactions; don't use it.***
INSTALLATION:
Just unzip this mod's folder into your Stardew Valley Mods folder and run while your backend is active.
FINAL NOTES:
Run Koboldcpp and load your model BEFORE launching the game. To use: approach an NPC and press "T" to open the chat window. The AI will turn your input into speech directed at the NPC nearest your character.
You can add or edit your own characters by editing 'characters.json' in the mod's folder. Just follow the template for the other characters and it should work fine.