Module 4: Large Language Models

Module Overview

This updated module takes you beyond interacting with existing LLM interfaces to building and customizing your own LLM-powered applications. Building on the foundation from Module 3, you'll learn to work directly with LLM APIs and local models to create sophisticated, context-aware conversational agents.

You'll explore how to design and implement local LLM bots with customizable prompts and parameters, experiment with different model configurations, and tackle advanced challenges like implementing memory systems for more coherent conversations. This hands-on approach will give you practical experience in building production-ready LLM applications while understanding the technical considerations involved in deploying these powerful models.

Learning Objectives

  • Develop and customize local LLM bots with parameterized prompts and configurations
  • Implement memory systems and context management for enhanced conversational experiences

Guided Project

This guided project focuses on hands-on LLM implementation and does not include traditional repository materials. If you're interested in additional technical background, you can review the legacy Time Series Forecasting material as supplementary content, though the current guided project and assignment are the primary focus.

Building a Chatbot with Persistent Memory

Module Assignment

This module features a hands-on implementation assignment that differs from our typical structured exercises.

Building an Advanced Local LLM Bot

Objective:

The main goal of this assignment is to develop a local LLM bot with customizable prompts and parameters. As a stretch goal, you will implement a short-term memory model for the bot, allowing for more coherent and context-aware interactions.

The instructions for this project are intentionally open-ended. The purpose is for you to build something of your own design, which presents its own challenges and, more importantly, results in a portfolio-worthy project.

Prerequisites:

  • Python programming experience
  • Basic understanding of machine learning, NLP, and LLMs
  • Access to an LLM API or local LLM setup

Steps:

  1. Initial Setup
    • Set up a basic bot using a local LLM or an API service (see the first sketch after this list).
  2. Experimentation
    • Experiment with various prompts and parameters (such as temperature) to understand their impact on the bot's responses; the first sketch after this list includes a simple parameter comparison.
  3. Refactoring
    • Refactor your bot into a function or class, making sure to parameterize the user_prompt (see the second sketch after this list).
  4. Memory Module (Stretch Goal)
    • Implement a memory system for your bot. This can range from simply feeding back previous interactions (see the final sketch after this list) to a more complex approach such as a vector database for automatic retrieval of relevant context.
  5. Evaluation
    • Evaluate the bot's performance in terms of coherence, relevance, and context awareness.
  6. Documentation
    • Document your design choices, implementation details, and observations.
  7. Peer Review (Stretch Goal)
    • Share your project for peer review, focusing on the bot's design, performance, and memory model.
  8. Final Submission
    • Submit your code and documentation for evaluation.
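
The following sketches illustrate one possible path through steps 1–4. They are not the required solution; your own design may use a different model, API, or structure.

First, a minimal sketch for steps 1 and 2, assuming a locally running Ollama server at its default address with a model named "llama3" already pulled (both assumptions; substitute your own local model or an API service such as OpenAI's):

```python
# Minimal bot: send a single prompt to a locally running Ollama server.
# Assumes Ollama is installed, serving at http://localhost:11434, and that
# a model named "llama3" has already been pulled -- adjust to your setup.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL_NAME = "llama3"  # illustrative choice; any local model works

def ask_once(user_prompt: str, temperature: float = 0.7) -> str:
    """Send one stateless prompt and return the model's reply."""
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": user_prompt}],
        "stream": False,
        "options": {"temperature": temperature},
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    # Step 2: vary temperature (and the prompt itself) and compare the responses.
    for temp in (0.2, 0.8):
        print(f"--- temperature={temp} ---")
        print(ask_once("Explain what a context window is in two sentences.",
                       temperature=temp))
```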
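
Next, one way to approach the refactoring in step 3: the system prompt and sampling parameters become constructor arguments, and user_prompt is a parameter of the method that sends a request. The class name LLMBot and its defaults are illustrative, not required:

```python
# Step 3 sketch: refactor the bot into a class with a parameterized user_prompt.
import requests

class LLMBot:
    def __init__(self, model: str = "llama3",
                 system_prompt: str = "You are a helpful assistant.",
                 temperature: float = 0.7,
                 url: str = "http://localhost:11434/api/chat"):
        self.model = model
        self.system_prompt = system_prompt
        self.temperature = temperature
        self.url = url

    def ask(self, user_prompt: str) -> str:
        """Send a single, stateless prompt and return the reply."""
        payload = {
            "model": self.model,
            "messages": [
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            "stream": False,
            "options": {"temperature": self.temperature},
        }
        response = requests.post(self.url, json=payload, timeout=120)
        response.raise_for_status()
        return response.json()["message"]["content"]

# Usage: swap in a different persona just by changing the system prompt.
pirate_bot = LLMBot(system_prompt="Answer every question as a 17th-century pirate.")
print(pirate_bot.ask("What is gradient descent?"))
```

Parameterizing the prompts this way makes step 2's experimentation repeatable: you can construct several bots with different system prompts or temperatures and compare their answers to the same user_prompt.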
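
Finally, a sketch of the simplest version of the step 4 stretch goal: short-term memory implemented by feeding previous interactions back with every request. A vector database would replace the rolling list with similarity-based retrieval. The class name MemoryBot and the window size are assumptions for illustration:

```python
# Step 4 sketch: short-term memory via a rolling window of recent turns.
import requests

class MemoryBot:
    def __init__(self, model: str = "llama3",
                 system_prompt: str = "You are a helpful assistant.",
                 max_turns: int = 10,
                 url: str = "http://localhost:11434/api/chat"):
        self.model = model
        self.system_prompt = system_prompt
        self.max_turns = max_turns          # how many past user/assistant pairs to keep
        self.url = url
        self.history: list[dict] = []       # rolling conversation memory

    def chat(self, user_prompt: str) -> str:
        """Send the prompt along with recent history, then record the reply."""
        self.history.append({"role": "user", "content": user_prompt})
        messages = [{"role": "system", "content": self.system_prompt}] + self.history
        payload = {"model": self.model, "messages": messages, "stream": False}
        response = requests.post(self.url, json=payload, timeout=120)
        response.raise_for_status()
        reply = response.json()["message"]["content"]
        self.history.append({"role": "assistant", "content": reply})
        # Trim to the most recent turns so the context stays within the model's window.
        self.history = self.history[-2 * self.max_turns:]
        return reply

# Usage: the second question only makes sense if the bot remembers the first.
bot = MemoryBot()
print(bot.chat("My favorite language is Python."))
print(bot.chat("What did I just say my favorite language was?"))
```

For step 5, an exchange like the one above is a quick coherence check: a bot without memory cannot answer the second question, while the memory-enabled bot should recall the earlier turn.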

Evaluation Criteria:

  • Quality of the design and implementation of the bot
  • Effectiveness of the parameterization and customization
  • Implementation and performance of the memory model (if attempted)
  • Peer review feedback (optional)

Resources:

Assignment Solution Video