
LLM-based generation of content with Mistral 7B (CPU only)

This is a Visual Studio solution showcasing how you can integrate LLMs to generate content for your video game. No GPU is required, as we will be using models quantized to 3 or 5 bits.

1. Download this solution

2. Open the project and look for the NuGet plugin

As you will need to install LlamaSharp from NuGet, you need NuGet for Unity to manage the download and installation. The plugin is already installed by default, and you should see a NuGet entry in Unity's top menu bar.

If it is not, just import the package file you will find in AddToProjects.

3. Go to NuGet -> Manage NuGet Packages


Check whether LlamaSharp is installed.

If not, go to NuGet -> Manage NuGet Packages -> Online, search for LlamaSharp and install it.

4. Open Assets/_Scripts/ContentGenerator.cs and check that UniTask is available

UniTask is a library that brings efficient async/await and multithreading support to Unity.

The library should come preinstalled in the solution. Check that the compiler does not complain about the UniTask library; if it complains about it being missing, install it the same way as the NuGet plugin: just import the package file you will find in AddToProjects.
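
A minimal sketch of the pattern used for this is shown below: switch to the thread pool for the heavy generation work, then switch back to the main thread before touching Unity objects. The class, method, and field names (GenerationExample, RunGenerationAsync, outputText) are illustrative, not the actual contents of ContentGenerator.cs.

```csharp
using Cysharp.Threading.Tasks;
using UnityEngine;
using UnityEngine.UI;

public class GenerationExample : MonoBehaviour
{
    [SerializeField] private Text outputText;   // hypothetical UI text field

    // Run the heavy LLM call on the thread pool, then return to the main
    // thread before touching any Unity objects.
    public async UniTask RunGenerationAsync(string prompt)
    {
        await UniTask.SwitchToThreadPool();

        string result = GenerateText(prompt);   // stand-in for the actual LlamaSharp call

        await UniTask.SwitchToMainThread();
        outputText.text = result;
    }

    private string GenerateText(string prompt)
    {
        // Placeholder so the sketch compiles; the real work happens in LlamaSharp.
        return $"(generated text for: {prompt})";
    }
}
```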

5. Download Mistral 7B weights

In the demo I use two quantized versions of Mistral 7B. Let's download both of them.

Go to: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF

On that page, look for the 3-bit and 5-bit quantized GGUF model files.

Click on each model name and then, in the tab that opens, download the file with the download button.

Repeat this for both models.

6. Place the models inside your StreamingAssets/ folder.

Your StreamingAssets/ folder should now contain the two .gguf files.
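
As a rough sketch, the script can then build the model path from Application.streamingAssetsPath and load it with LlamaSharp along these lines (the file-name parameter and ContextSize below are example values, and exact property names can vary between LlamaSharp versions):

```csharp
using System.IO;
using LLama;
using LLama.Common;
using UnityEngine;

public static class ModelLoader
{
    // Loads a GGUF model from StreamingAssets. Pass whichever quantized
    // file you downloaded in step 5.
    public static (LLamaWeights weights, LLamaContext context) Load(string fileName)
    {
        string modelPath = Path.Combine(Application.streamingAssetsPath, fileName);

        var parameters = new ModelParams(modelPath)
        {
            ContextSize = 2048   // keep the context modest on CPU-only machines
        };

        var weights = LLamaWeights.LoadFromFile(parameters);
        var context = weights.CreateContext(parameters);
        return (weights, context);
    }
}
```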

7. Click on Play

It takes a few seconds to load the model into memory. After that, you can type concepts and click on Submit to see what the model generates.
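
Under the hood, a Submit handler along these lines streams tokens from the model. This is a sketch assuming LlamaSharp's InteractiveExecutor and InferenceParams types; the prompt template, class name, and callback are placeholders, not the demo's actual code.

```csharp
using System;
using System.Collections.Generic;
using Cysharp.Threading.Tasks;
using LLama;
using LLama.Common;

public class SubmitHandlerExample
{
    private readonly InteractiveExecutor _executor;

    public SubmitHandlerExample(LLamaContext context)
    {
        _executor = new InteractiveExecutor(context);
    }

    // Streams the model's answer token by token for the concept the user typed.
    public async UniTask OnSubmit(string concept, Action<string> appendToOutput)
    {
        // Illustrative Mistral-Instruct style prompt; adapt it to your own template.
        string prompt = $"[INST] Write a short game item description for: {concept} [/INST]";

        var inferenceParams = new InferenceParams
        {
            MaxTokens = 256,
            AntiPrompts = new List<string> { "[INST]" }
        };

        await foreach (var token in _executor.InferAsync(prompt, inferenceParams))
        {
            // Remember to marshal back to the main thread before writing to Unity UI.
            appendToOutput(token);
        }
    }
}
```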

8. Stop, play with the hyperparameters, and run again.

Do you see the difference?
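
For reference, these are the kinds of knobs meant by the hyperparameters. The sketch below assumes an older LlamaSharp API where sampling settings such as Temperature and TopP sit directly on InferenceParams; newer versions move them to a sampling pipeline, so the exact property names may differ.

```csharp
using System.Collections.Generic;
using LLama.Common;

public static class HyperparamPresets
{
    // Higher temperature and broader nucleus sampling: more varied output.
    public static InferenceParams Creative() => new InferenceParams
    {
        MaxTokens = 256,
        Temperature = 1.0f,
        TopP = 0.95f,
        AntiPrompts = new List<string> { "[INST]" }
    };

    // Lower temperature and tighter sampling: more focused, repeatable output.
    public static InferenceParams Focused() => new InferenceParams
    {
        MaxTokens = 256,
        Temperature = 0.3f,
        TopP = 0.8f,
        AntiPrompts = new List<string> { "[INST]" }
    };
}
```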


Additional:

  • Feel free to download other GGUF-based LLM models supported by LlamaSharp, including Llama2 and Mistral.
  • Do you want to run it on GPU? Let me know!