
LM Studio

The lmstudio provider connects to the LM Studio headless server and lets you run local LLMs.

  1. Install LM Studio (v0.3.5+)

  2. Open LM Studio

  3. Open the Model Catalog, select your model, and load it at least once so it is downloaded locally.

  4. Open the settings (gear icon) and turn on Enable Local LLM Service.

  5. GenAIScript assumes the local server is at http://localhost:1234/v1 by default. Set the LMSTUDIO_API_BASE environment variable to change the server URL.

    .env
    LMSTUDIO_API_BASE=http://localhost:2345/v1
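The override above amounts to a simple fallback. A minimal sketch of that resolution logic, using a hypothetical resolveBase helper (not part of GenAIScript's API):

```javascript
// Resolve the LM Studio endpoint: prefer LMSTUDIO_API_BASE when set,
// otherwise fall back to the documented default.
// resolveBase is a hypothetical helper for illustration only.
function resolveBase(env) {
  return env.LMSTUDIO_API_BASE ?? "http://localhost:1234/v1";
}

console.log(resolveBase(process.env)); // default unless LMSTUDIO_API_BASE is set
```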

Find the model API identifier in the dialog of loaded models, then use that identifier in your script:

script({
    model: "lmstudio:llama-3.2-1b-instruct",
})
  • GenAIScript uses the LM Studio CLI to pull models on demand.
  • Specifying the quantization is currently not supported.
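The model reference above follows a provider:id pattern. A minimal sketch of how such a reference splits, assuming a hypothetical parseModelRef helper (not GenAIScript's API):

```javascript
// Split a "provider:model" reference at the first colon, so model ids
// that themselves contain colons survive intact.
// parseModelRef is a hypothetical helper for illustration only.
function parseModelRef(ref) {
  const i = ref.indexOf(":");
  return { provider: ref.slice(0, i), model: ref.slice(i + 1) };
}

const ref = parseModelRef("lmstudio:llama-3.2-1b-instruct");
console.log(ref.provider, ref.model);
```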

Aliases

The following model aliases are attempted by default in GenAIScript.

Alias        Model identifier
embeddings   text-embedding-nomic-embed-text-v1.5

Limitations

  • Prediction of output tokens is ignored

Follow this guide to load Hugging Face models into LM Studio.

Jan

The jan provider connects to the Jan local server.

  1. Install Jan.

  2. Open Jan and download the models you plan to use. You will find the model identifier on the model description page.

  3. Click on the Local API Server icon (lower left), then Start Server.

    Keep the desktop application running!

To use Jan models, use the jan:modelid syntax. If you change the default server URL, you can set the JAN_API_BASE environment variable.

.env
JAN_API_BASE=http://localhost:1234/v1
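Putting this together, a script targeting a Jan model might look like the sketch below; modelid is a placeholder, so substitute the identifier from the model description page:

```js
script({
    model: "jan:modelid",
})
```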

Limitations

  • Prediction of output tokens is ignored
  • top_p is ignored