For the workshop, we recommend having these models available:
```bash
# Core model for text generation
ollama pull llama2

# Small model for quick tests
ollama pull orca-mini

# Embedding model for vector operations
ollama pull nomic-embed-text
```
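Pulling the models one by one works, but a small loop makes it easy to re-run the setup and see which pulls fail. This is a minimal sketch: the model names come from the list above, and it assumes `ollama` is on your `PATH` (it warns instead of aborting if not).

```shell
# Workshop models from the list above
MODELS="llama2 orca-mini nomic-embed-text"

if command -v ollama >/dev/null 2>&1; then
  for m in $MODELS; do
    # Warn on failure instead of stopping, so one bad pull
    # doesn't block the remaining downloads
    ollama pull "$m" || echo "warning: failed to pull $m" >&2
  done
else
  echo "ollama not found on PATH; install it first" >&2
fi
```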
If you are on a restricted or corporate network, verify that you can reach the Ollama registry, and point npm at your proxy if needed:

```bash
# Check if you can reach Ollama registry
curl https://ollama.ai/api/registry/models

# Configure npm proxy if needed
npm config set proxy http://your-proxy:port
npm config set https-proxy http://your-proxy:port
```
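Beyond the registry, it is worth confirming that your local Ollama server is up before the workshop starts. A hedged sketch, assuming the default port `11434` and the `/api/tags` endpoint that lists locally available models:

```shell
# Default local Ollama endpoint; adjust the port if your setup differs
OLLAMA_URL="http://localhost:11434/api/tags"

# -f: fail on HTTP errors, -sS: quiet but still show errors,
# --max-time 5: don't hang if the server is down
if curl -fsS --max-time 5 "$OLLAMA_URL" >/dev/null 2>&1; then
  echo "local Ollama server is reachable"
else
  echo "cannot reach local Ollama server at $OLLAMA_URL" >&2
fi
```

If the check fails, start the server with `ollama serve` and run it again.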