A step-by-step guide to quickly deploy and manage Ollama on OpenShift or Kubernetes using the Ollama Operator. Install the operator, deploy a model, and test your AI inference server in minutes.
Use Python, kubectl, and a local LLM (Llama3 via Ollama) to automatically diagnose failing Kubernetes pods.
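A minimal sketch of that diagnosis workflow, assuming a local Ollama instance serving llama3 on its default port (11434), kubectl on the PATH, and a hypothetical pod name for illustration:

```python
import json
import subprocess
import urllib.request

# Assumed defaults: local Ollama API and the llama3 model; adjust to your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"

def kubectl(*args: str) -> str:
    """Run a kubectl command and return its combined output."""
    result = subprocess.run(["kubectl", *args], capture_output=True, text=True)
    return result.stdout + result.stderr

def diagnose(pod: str, namespace: str = "default") -> str:
    """Gather describe/log output for a failing pod and ask the model for a diagnosis."""
    describe = kubectl("describe", "pod", pod, "-n", namespace)
    logs = kubectl("logs", pod, "-n", namespace, "--tail=50")
    prompt = (
        "You are a Kubernetes troubleshooting assistant.\n"
        f"kubectl describe output:\n{describe}\n\n"
        f"Recent logs:\n{logs}\n\n"
        "Explain the most likely cause of the failure and suggest a fix."
    )
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "my-crashing-pod" is a placeholder; point this at a real failing pod.
    print(diagnose("my-crashing-pod", "default"))
```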
Ollama RAG app built with Python
Monitor internet uptime with Prometheus and Grafana on your local PC
Load testing with 3scale and Hyperfoil
Automate 3scale monitoring stack deployment
Install Keycloak via the operator. Create a project: oc new-project keycloak ...
How to set up 3scale with the 3scale-operator and create your first product
How to upgrade PostgreSQL v10 to v13 on OpenShift
id_token_hint and how to set it
Changing an existing Kubernetes Operator to be cluster-scoped
Create an Electron app from a web app on Fedora
Build a Kubernetes operator
How to use Keycloak in Express using OIDC
How to run two Kubernetes Operators locally
Use Grafana to plot Express.js app metrics
Get Prometheus metrics from an Express.js app
Setting up Keycloak with a GitHub identity provider in Express
How to set up debugging for operator-sdk v1.0.0 in GoLand
How to debug operator-sdk v1.0.0 locally using VS Code
At some stage in the development of a high-availability application you will want to test what happens when...
Golang REST API using the MongoDB driver and deployed on OpenShift
Change the login theme in the Keycloak Docker image
jq tutorial
git log pretty print aliases
How to set up debugging for the operator-sdk in GoLand
Improve Golang coverage reporting
Debug Kubernetes operator-sdk locally using VS Code
Options for free website hosting
Create a command-line Node application