Why? #
Efficiency and Convenience: The integrated deployment process allows you to complete Ollama setup, OpenWebUI launch, and browser opening with a single execution, significantly improving efficiency compared to manual operations.
Deployment Preparation #
System Requirements #
- OS Requirements: Recommended to use the latest version of macOS for optimal software compatibility and performance.
- Hardware Requirements: Minimum 8GB RAM for smooth model operation. GPU acceleration on macOS is automatic on Apple Silicon (M-series) Macs via Metal; Intel Macs run models on the CPU.
- Network: Stable internet connection required for downloading Ollama, model files, and Open-WebUI resources.
Essential Software Installation (Ollama, OpenWebUI, Shell environment): #
- Python 3.11–3.12 (required by Open-WebUI)
- Ollama Official Download
- OpenWebUI
pip install open-webui
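Open-WebUI pins its supported Python range, so it helps to check the interpreter before installing. A minimal pre-check sketch (assumes python3 is on PATH and that the 3.11–3.12 range listed above still holds for the version you install):

```shell
#!/bin/bash
# Guard before `pip install open-webui`: report whether the local
# python3 falls inside the 3.11-3.12 range this guide targets.
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
case "$ver" in
    3.11|3.12) echo "Python $ver OK for open-webui" ;;
    *)         echo "Python $ver found - install 3.11 or 3.12 first" ;;
esac
```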
Model Preparation #
Pull at least one model for OpenWebUI to chat with, e.g.:
ollama pull qwen3:8b
Quick Start #
Create Startup Script #
Save the script anywhere on your Mac, e.g.: /Users/username/Desktop/ollama_start.sh
#!/bin/bash
# Configure ports (11434 is Ollama's default)
PORT=11434
WEB_PORT=9790

# Start Ollama service
start_ollama() {
    echo "Starting Ollama..."
    ollama serve &
    OLLAMA_PID=$!
    echo "Ollama Process ID: $OLLAMA_PID"
}

# Start OpenWebUI
start_webui() {
    echo "Starting OpenWebUI..."
    open-webui serve --port "$WEB_PORT" &
    WEBUI_PID=$!
    echo "OpenWebUI Process ID: $WEBUI_PID"
}

# Auto-open browser
open_browser() {
    echo "Opening browser..."
    # Cross-platform support (uname reports MINGW*/MSYS*/CYGWIN* on Windows, never "Windows")
    case "$(uname -s)" in
        Darwin) open "http://localhost:$WEB_PORT" ;;
        Linux) xdg-open "http://localhost:$WEB_PORT" ;;
        MINGW*|MSYS*|CYGWIN*) start "http://localhost:$WEB_PORT" ;;
        *) echo "Unsupported OS, cannot auto-open browser" ;;
    esac
}

# Handle termination signals: send SIGTERM (not SIGKILL) so both
# services get a chance to shut down cleanly
cleanup() {
    echo "Stopping services..."
    kill $OLLAMA_PID $WEBUI_PID 2>/dev/null
    wait
    echo "Services stopped"
}
trap cleanup SIGINT SIGTERM

# Main workflow
start_ollama
sleep 5 # Wait for Ollama to start (adjust if your machine is slow)
start_webui
sleep 5 # Wait for WebUI to start
open_browser # Auto-open browser

# Display access info
echo -e "\nOllama service running at: http://localhost:$PORT"
echo -e "OpenWebUI running at: \033[32mhttp://localhost:$WEB_PORT\033[0m"
echo "Press Ctrl+C to stop all services"

# Wait for background processes
wait
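Ollama will fail to bind if port 11434 is already taken, for example by a previous ollama serve or the menu-bar app. A small pre-flight check you could run before the script, using bash's /dev/tcp redirection (port_free is a hypothetical helper, not part of the script above; it assumes a bash built with network redirections, which Apple's /bin/bash has):

```shell
#!/bin/bash
# Hypothetical pre-flight check: succeed if nothing is listening
# on the given localhost TCP port (bash /dev/tcp redirection).
port_free() {
    ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 11434 9790; do
    if port_free "$p"; then
        echo "port $p is free"
    else
        echo "port $p is already in use - stop the old process first"
    fi
done
```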
Configure Quick-Start Alias #
Make the script executable, then register a quick-start alias in ~/.zshrc:
chmod +x /Users/username/Desktop/ollama_start.sh
echo "alias ollama_start='/Users/username/Desktop/ollama_start.sh'" >> ~/.zshrc
source ~/.zshrc
ollama_start # Launch services
Script Explanation #
- Port Configuration: Defines two ports for Ollama and OpenWebUI (default: 11434 and 9790, customizable).
- Start Ollama: Launches the Ollama service with ollama serve and records its process ID.
- Start OpenWebUI: Launches OpenWebUI with open-webui serve and records its process ID.
- Auto-open Browser: Opens the browser automatically with an OS-appropriate command.
- Main Workflow: Starts services sequentially with delays, then opens browser and displays access info.
- Termination Handling: Gracefully stops services when receiving Ctrl+C.
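The termination handling can be seen in isolation with a toy background process standing in for Ollama and OpenWebUI. This standalone sketch (an illustration, not part of the startup script) shows the same record-the-PID-then-trap pattern:

```shell
#!/bin/bash
# Same shutdown pattern as the startup script, with `sleep`
# standing in for the real services: record the PID, then
# terminate and reap it when a shutdown signal arrives.
sleep 30 &
WORKER_PID=$!
echo "worker PID recorded: $WORKER_PID"

cleanup() {
    echo "Stopping services..."
    kill "$WORKER_PID" 2>/dev/null
    wait "$WORKER_PID" 2>/dev/null
    echo "Services stopped"
}
trap cleanup INT TERM EXIT
```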