# Ollama Cheatsheet

Here is a comprehensive Ollama cheat sheet containing the most frequently used commands, with explanations:

### Installation and Setup

- **macOS**: Download the Ollama app from https://ollama.com/download, or install it with Homebrew:

```
brew install ollama
```

- **Windows (Preview)**: Download the Ollama installer for Windows from https://ollama.com/download.
- **Linux**: Use the command:

```
curl -fsSL https://ollama.com/install.sh | sh
```

- **Docker**: Use the official `ollama/ollama` image on Docker Hub, for example as shown below.
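
A typical CPU-only invocation, following the image's Docker Hub instructions (the volume and container name are conventional defaults, and `llama3` is an illustrative model):

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3
```
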
### Running Ollama

- **Run Ollama**: Start the Ollama server with:

```
ollama serve
```

- **Run a Specific Model**: Run a specific model with:

```
ollama run <model_name>
```

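For example (the model name is illustrative), you can open an interactive session or pass a one-shot prompt:

```
ollama run llama3
ollama run llama3 "Explain what a Modelfile is in one paragraph."
```
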
### Model Library and Management

- **List Models**: List the locally downloaded models with:

```
ollama list
```

- **Pull a Model**: Download a model from the library with:

```
ollama pull <model_name>
```

- **Create a Model**: Create a new model from a Modelfile (see the sketch after this list) with:

```
ollama create <model_name> -f <model_file>
```

- **Remove a Model**: Remove a local model with:

```
ollama rm <model_name>
```

- **Copy a Model**: Copy a model under a new name with:

```
ollama cp <source_model> <new_model>
```

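A minimal Modelfile sketch (the base model `llama3`, the parameter value, and the name `my-assistant` are all illustrative):

```
# Modelfile
FROM llama3
PARAMETER temperature 0.7
SYSTEM You are a concise assistant that answers in short bullet points.
```

Then build and run it:

```
ollama create my-assistant -f ./Modelfile
ollama run my-assistant
```
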
### Advanced Usage

- **Multimodal Input**: Wrap multiline text in triple quotes (`"""`) and include image paths directly in the prompt; see the sketch after this list.
- **REST API Examples**:
  - **Generate a Response**: `curl http://localhost:11434/api/generate -d '{"model": "<model_name>", "prompt": "<prompt>"}'`
  - **Chat with a Model**: `curl http://localhost:11434/api/chat -d '{"model": "<model_name>", "messages": [{"role": "user", "content": "<message>"}]}'`

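A filled-in sketch of both endpoints (the model name `llama3` and the prompts are illustrative; adding `"stream": false` returns a single JSON object instead of a stream of chunks):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Why is the sky blue?"}],
  "stream": false
}'
```

For multimodal input on the CLI, a sketch assuming the `llava` vision model and a local image `./photo.jpg`:

```
ollama run llava "Describe this image: ./photo.jpg"
```

Multiline prompts use triple quotes inside an interactive session:

```
>>> """Summarize these notes:
... first note
... second note"""
```
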
### Integration with Visual Studio Code

- **Start Ollama**: Start a terminal session and run:

```
ollama serve
```

- **Run a Model**: Start a second terminal session and run:

```
ollama run <model_name>
```

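To confirm the server from the first terminal is reachable before wiring up an editor integration, you can query the API (the `/api/tags` endpoint lists the locally available models):

```
curl http://localhost:11434/api/tags
```
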
### AI Developer Scripts

- **ai_review**: Scans your codebase for specific files, provides suggestions and code examples, and saves them to a `review-{current_date}.md` file.
- **ai_commit**: Suggests a commit message based on your staged changes.
- **ai_readme**: Generates a README file automatically based on your project.
- **ai_pr**: Generates a PR review message automatically.

### Additional Resources

- **GitHub Repository**: Find the GitHub repository for the AI developer scripts at https://github.com/ikramhasan/AI-Dev-Scripts.

### Other Tools and Integrations

- **Lobe Chat**: An open-source, modern-design LLM/AI chat framework supporting multiple AI providers and modalities.
- **LangChain4j**: A Java version of LangChain.
- **AI Vtuber**: A virtual YouTuber driven by various AI models, including Ollama, for real-time interaction with viewers.
- **AI Code Completion**: A locally or API-hosted AI code-completion plugin for Visual Studio Code.

### Community and Support

- **Reddit**: Join the Ollama community on Reddit for discussions and support.

### Documentation and Updates

- **Official Documentation**: Refer to the official Ollama documentation for detailed guides and tutorials.
- **GitHub Topics**: Explore the Ollama topic on GitHub for updates and new projects.

### Additional Tips

- **GPU Support**: Verify that Podman can see your NVIDIA GPU:

```
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi -L
```

- **NVIDIA GPU Support**: Generate the CDI spec according to the NVIDIA Container Toolkit documentation and check that your GPU is detected; see the sketch after this list.
- **OpenShift**: Create a project and deploy Ollama:

```
oc new-project darmstadt-workshop
oc apply -f deployments/ollama.yaml
```

- **Debugging**: Start a throwaway curl pod to test connectivity from inside the cluster:

```
oc run mycurl --image=curlimages/curl -it -- sh
```

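A sketch of generating the CDI spec, following the NVIDIA Container Toolkit documentation (the output path is the conventional default; adjust for your distribution):

```
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
```
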
### Additional References

- **Ollama Cheat Sheet**: Refer to this cheat sheet for detailed information on using Ollama.
- **LLM AppDev Hands-On**: Refer to the LLM AppDev Hands-On repository for more on developing applications with local LLMs.

### Additional Tools and Resources

- **Streamlit**: Use Streamlit to build a web front end for your Ollama application.
- **Podman**: Use Podman to run your Ollama application in a container; the GPU, OpenShift, and debugging commands under Additional Tips above apply here as well, and a sketch follows below.

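A sketch of running the server under Podman, mirroring the Docker invocation above (the fully qualified image name and the GPU flags borrowed from Additional Tips are the parts most likely to need adjusting for your setup):

```
podman run -d --device nvidia.com/gpu=all --security-opt=label=disable \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
```
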
### Additional Tips and Tricks

- **Customize a Model**: Create a customized model from a Modelfile, as shown under Model Library and Management:

```
ollama create <model_name> -f <model_file>
```

- **Customize the Prompt**: Rather than a command-line flag, the default prompt is set in the Modelfile via the `SYSTEM` instruction (see the Modelfile sketch above), or interactively, as sketched below.
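
A sketch of the interactive route (the saved model name `pirate-llama3` is illustrative): inside an `ollama run` session, set a system prompt and save the result as a new model:

```
>>> /set system You are a pirate; answer every question in character.
>>> /save pirate-llama3
```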