Introduction
I want to take a look at a very basic Ollama Artificial Intelligence (AI) setup and track the information flow from query to result. For those who are new to AI, Ollama is a tool that exposes an Application Programming Interface (API) for running Large Language Models (LLMs). The combination of the API and an LLM makes up a basic AI system, and I want to examine how data flows through this process.
The basic components of the model include the User, some type of interface between the User and Ollama, the Ollama API itself, and an LLM. It should be noted that there are many different LLMs. Because this model does not call out a specific LLM, the limiting factor will be whether the chosen LLM can run on the available hardware. The larger the LLM, the more capable its responses tend to be; however, more resources, in the form of processor cores and RAM, will be required to run it.
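As a minimal sketch of that flow, the Python snippet below sends a prompt from the User, through a simple interface function, to a locally running Ollama instance via its HTTP API, and returns the LLM's response. The default endpoint (http://localhost:11434) and the model name ("llama3") are assumptions for illustration; any model already pulled with "ollama pull" would work here.

    import json
    import urllib.request

    # Ollama's default local endpoint; adjust if your instance listens elsewhere.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_ollama(prompt: str, model: str = "llama3") -> str:
        """Send a prompt to the Ollama API and return the LLM's response text."""
        payload = json.dumps({
            "model": model,    # assumed model name; use any model you have pulled
            "prompt": prompt,
            "stream": False,   # return the full response as one JSON object
        }).encode("utf-8")

        request = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        return body["response"]

    if __name__ == "__main__":
        # The User's query enters here and the LLM's answer comes back out.
        print(ask_ollama("Why is the sky blue?"))

Setting "stream" to False keeps the example simple: the API returns one complete JSON object instead of a stream of partial tokens, which makes the query-to-result path easier to follow in the sections that come next.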
