Task-oriented dialogue agents help users achieve a goal — here, recommending and booking restaurants from a dataset based on user constraints (cuisine, area, price). This lab uses a local, open-source model (Llama 3.1 8B via Ollama) to avoid paid APIs and keep the setup reproducible.
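As a concrete starting point, here is a minimal sketch of talking to a local Ollama server over its `/api/chat` endpoint using only the standard library. The model tag `llama3.1:8b` assumes you have already run `ollama pull llama3.1:8b`; adjust it to whatever `ollama list` shows on your machine.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local address
MODEL = "llama3.1:8b"  # assumes this tag was pulled locally

def build_request(messages):
    """Build the JSON payload for Ollama's /api/chat endpoint.

    stream=False asks for a single JSON response instead of a
    stream of partial chunks, which is simpler in a notebook.
    """
    return {"model": MODEL, "messages": messages, "stream": False}

def chat(messages):
    """POST a chat request to the local Ollama server and return the reply text."""
    data = json.dumps(build_request(messages)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama server):
# print(chat([{"role": "user", "content": "Recommend an Italian restaurant."}]))
```

Keeping the payload builder separate from the network call makes the request format easy to inspect and test without a server running.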
You implement a small set of APIs (e.g. search restaurants, check availability, confirm a booking) and design prompts so that the model calls the appropriate API at the right time. Because prompt-based control has limits, the lab also introduces simple mechanisms to refine the model's function-calling behaviour and to avoid infinite loops or off-track replies. The agent runs locally (Ollama typically listens on localhost:11434), so use a local Jupyter or VS Code notebook rather than Colab.
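The three APIs and the loop guard can be sketched roughly as follows. The tool names, the toy dataset, and the JSON tool-call format (`{"name": ..., "arguments": {...}}`) are illustrative assumptions, not the lab's exact interface; the `MAX_STEPS` cap is one simple mechanism for stopping a model that keeps emitting tool calls.

```python
import json

# Toy in-memory dataset (hypothetical entries; the lab supplies its own data).
RESTAURANTS = [
    {"name": "Trattoria Roma", "cuisine": "italian", "area": "centre", "price": "expensive"},
    {"name": "Spice Garden", "cuisine": "indian", "area": "north", "price": "cheap"},
]
BOOKED = set()  # (name, time) pairs already taken

def search_restaurants(cuisine=None, area=None, price=None):
    """Return restaurants matching every constraint the user supplied."""
    hits = RESTAURANTS
    for key, val in (("cuisine", cuisine), ("area", area), ("price", price)):
        if val is not None:
            hits = [r for r in hits if r[key] == val.lower()]
    return hits

def check_availability(name, time):
    """A slot is free if no booking exists for it yet."""
    return (name, time) not in BOOKED

def confirm_booking(name, time):
    """Record the booking, refusing a slot that is already taken."""
    if not check_availability(name, time):
        return {"status": "failed", "reason": "slot taken"}
    BOOKED.add((name, time))
    return {"status": "confirmed", "name": name, "time": time}

TOOLS = {
    "search_restaurants": search_restaurants,
    "check_availability": check_availability,
    "confirm_booking": confirm_booking,
}

MAX_STEPS = 5  # hard cap per user turn so a misbehaving model cannot loop forever

def dispatch(raw_call):
    """Parse a model-emitted JSON tool call and execute it.

    Expected shape: {"name": "search_restaurants", "arguments": {"cuisine": "italian"}}.
    Malformed or unknown calls return an error dict that can be fed
    back to the model as an observation instead of crashing the agent.
    """
    try:
        call = json.loads(raw_call)
        fn = TOOLS[call["name"]]
    except (json.JSONDecodeError, KeyError) as err:
        return {"error": str(err)}
    return fn(**call.get("arguments", {}))
```

In the agent loop, each model reply is either a tool call (routed through `dispatch`, with its result appended to the conversation) or a final answer; after `MAX_STEPS` tool calls the agent falls back to a plain response.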
The lab notebook (Lab5) covers Ollama setup, prompt design, and the restaurant dataset. Work through the steps to get a working booking agent and reflect on where prompt engineering succeeds and where stronger control (e.g. explicit state machines or learned policy) might be needed.
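To make the "stronger control" option concrete, here is one minimal sketch of an explicit dialogue state machine for the booking flow. The state names and required slots are assumptions for illustration; the point is that transitions are enforced in code rather than left to the prompt.

```python
from enum import Enum, auto

class State(Enum):
    GATHER = auto()   # collect cuisine/area/price constraints
    SUGGEST = auto()  # offer a matching restaurant
    CONFIRM = auto()  # await yes/no on the proposed booking
    DONE = auto()

REQUIRED_SLOTS = ("cuisine", "area", "price")

def next_state(state, slots, user_said_yes=False):
    """Advance the dialogue deterministically.

    Stay in GATHER until every required slot is filled, then suggest a
    restaurant; from CONFIRM, a yes ends the dialogue and a no returns
    to SUGGEST for another candidate.
    """
    if state is State.GATHER:
        return State.SUGGEST if all(k in slots for k in REQUIRED_SLOTS) else State.GATHER
    if state is State.SUGGEST:
        return State.CONFIRM
    if state is State.CONFIRM:
        return State.DONE if user_said_yes else State.SUGGEST
    return State.DONE
```

With this scaffold, the model is only asked to do what it is good at (extracting slots, phrasing replies), while the legal order of API calls is guaranteed by the state machine.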