Resource

LM Studio

Discover, download, and run local LLMs


With LM Studio, you can ...

  • 🤖 - Run LLMs on your laptop, entirely offline
  • 👾 - Use models through the in-app Chat UI or an OpenAI-compatible local server (see the sketch after this list)
  • 📂 - Download any compatible model files from HuggingFace 🤗 repositories
  • 🔭 - Discover new & noteworthy LLMs on the app's home page
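
For the local server mode, one common pattern is to point an existing OpenAI client at the local endpoint. The sketch below is an assumption-laden example: it uses the `openai` Python package, LM Studio's commonly used default address of http://localhost:1234/v1, and a placeholder model name. Check the server screen in the app for the actual address and model identifier.

```python
# Minimal sketch: querying LM Studio's OpenAI-compatible local server with the
# `openai` Python client. The base URL and model name are assumptions; check
# the app's server screen for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # assumed default local server address
    api_key="not-needed",                 # the local server does not check the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier shown in the app
    messages=[{"role": "user", "content": "Summarize what a GGML model file is."}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```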

LM Studio supports any ggml Llama, MPT, and StarCoder model on Hugging Face (Llama 2, Orca, Vicuna, Nous Hermes, WizardCoder, MPT, etc.). Made possible thanks to the llama.cpp project.

Consult the Technical Documentation at https://lmstudio.ai/docs

TL;DR: The app does not collect data or monitor your actions. Your data stays local on your machine. It's free for personal use. For business use, please get in touch.

Does LM Studio collect any data?

No. One of the main reasons for using a local LLM is privacy, and LM Studio is designed for that. Your data remains private and local to your machine.

Can I use LM Studio at work?

Please fill out the LM Studio @ Work request form and we will get back to you as soon as we can. Please allow us some time to respond.

What are the minimum hardware / software requirements?

  • Apple Silicon Mac (M1/M2/M3) with macOS 13.6 or newer
  • Windows / Linux PC with a processor that supports AVX2 (typically newer PCs)
  • 16GB+ of RAM is recommended. For PCs, 6GB+ of VRAM is recommended
  • NVIDIA/AMD GPUs supported

Are you hiring?

Yes! See our careers page for open positions. We are a small team located in Brooklyn, New York, USA.


model-catalog

A collection of standardized JSON descriptors for Large Language Model (LLM) model files.

<model_name>.json

A single JSON file describes a model, its authors, additional resources (such as an academic paper) as well as available model files and their providers.

Version 0.0.1 of this format attempts to capture an informative set of factors, including:

  • model size (e.g. 7B, 13B, 30B, etc.)
  • model architecture (such as Llama, MPT, Pythia, etc.)
  • model file format (e.g. ggml) as well as quantization format (e.g. q4_0, q4_K_M, etc.)
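
As a purely hypothetical illustration of such a descriptor, the Python sketch below assembles a dictionary covering those factors and writes it into models/. The field names here are made up for illustration; the actual structure is defined by the JSON schema in this repository, not by this sketch.

```python
# Hypothetical sketch of writing a <model_name>.json descriptor.
# Field names are illustrative only -- the repository's JSON schema is authoritative.
import json

descriptor = {
    "name": "example-llama-7b",          # hypothetical model name
    "author": "Example Lab",             # model authors
    "resources": {"paperUrl": "https://example.org/paper"},  # e.g. an academic paper
    "numParameters": "7B",               # model size
    "arch": "llama",                     # model architecture
    "files": [
        {
            "format": "ggml",            # model file format
            "quantization": "q4_0",      # quantization format
            "url": "https://huggingface.co/example/example-llama-7b-ggml",
        }
    ],
}

with open("models/example-llama-7b.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```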

See examples: guanaco-7b.json, samantha-1.1-llama-7b.json, Nous-Hermes-13b.json.

catalog.json

A GitHub Action picks up the .json files from the models/ directory and merges them into one catalog.json file. The contents of each JSON file are validated against a JSON schema by another GitHub Action.
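
Conceptually, the merge step amounts to something like the following Python sketch. The repository's own createCatalog.py and GitHub Actions are authoritative; this is only an approximation of what they do.

```python
# Rough sketch of the merge step, for illustration only: collect every
# descriptor under models/ and write a single combined catalog.json.
import json
from pathlib import Path

entries = []
for path in sorted(Path("models").glob("*.json")):
    with path.open() as f:
        entries.append(json.load(f))

with open("catalog.json", "w") as f:
    json.dump(entries, f, indent=2)

print(f"Merged {len(entries)} model descriptors into catalog.json")
```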

Contribution Process

You're invited to help catalog models and improve upon this description format.

  1. Fork this repo and create a new development branch.
  2. Create a new model JSON file and place it in the models/ directory.
  3. Validate your file against the expected JSON schema using the validate.py tool or by running createCatalog.py (see the validation sketch below).
  4. Open a PR with your change.
  5. Ensure all GitHub Actions complete successfully.

Note: Do not modify catalog.json manually.
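
For a rough idea of what the local validation step involves, here is a minimal Python sketch using the third-party jsonschema package. The schema filename and descriptor path are assumptions made for illustration; the repository's validate.py is the tool to actually use.

```python
# Minimal local validation sketch using the `jsonschema` package; the repo's
# validate.py is the authoritative check. The schema filename here is assumed.
import json
import jsonschema  # pip install jsonschema

with open("schema.json") as f:               # assumed location of the JSON schema
    schema = json.load(f)

with open("models/example-llama-7b.json") as f:
    descriptor = json.load(f)

# Raises jsonschema.ValidationError with a descriptive message if the
# descriptor does not conform to the schema.
jsonschema.validate(instance=descriptor, schema=schema)
print("descriptor is valid")
```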

 
All participants, sponsors, partners, volunteers, and staff of our hackathon are required to agree to the Hack Code of Conduct. The organizers will enforce this code throughout the entire event. We expect the cooperation of all participants in ensuring a safe environment for everyone. Further details on how the event is run can be found under Guidelines in our wiki.

Creative Commons Licence: Unless otherwise noted, the contents of this website are licensed under a Creative Commons Attribution 4.0 International License.