
Context Engineering with Redis and LangChain

In this 1-hour workshop, you’ll learn how to design scalable AI agents by mastering context engineering — the discipline of structuring memory, retrieval, and reasoning workflows so LLMs behave reliably in production. The session explores how Redis and LangChain enable high-performance agent systems through semantic caching, vector search, and intelligent context management.


Tyler Hutcherson is an engineer, researcher, and builder focused on turning complex AI ideas into production-ready systems. He develops AI/ML-native applications and real-time architectures, specializing in generative AI, vector search, semantic caching, and distributed machine learning using modern cloud and data platforms. Currently working with Redis, Tyler combines deep technical expertise in microservices, end-to-end ML pipelines, and scalable infrastructure with strong design thinking and stakeholder collaboration to help teams build reliable, high-performance AI applications.

Workshop Overview

Modern AI systems are shifting from prompt engineering to context engineering, where developers program how information flows into an LLM rather than relying on a single prompt. Instead of treating the model as a chatbot, this approach builds structured pipelines that assemble instructions, retrieved knowledge, memory, and tool outputs into the context window — turning stateless models into reliable, production-ready agents.

This workshop explores how Redis and LangChain enable scalable context architectures by combining vector search, semantic caching, and agent memory into a unified system. Attendees will learn how to design efficient context pipelines that reduce token costs, improve latency, and maintain persistent state across sessions — key capabilities for building real-world AI applications beyond demos. 
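The pipeline described above can be sketched in a few lines: instructions, retrieved knowledge, and memory are assembled into the context window under a token budget. This is a minimal illustration, not the workshop's actual code; the 4-characters-per-token heuristic and all sample strings are assumptions.

```python
# Sketch: assembling a context window from multiple sources under a token
# budget. Instructions go in first; retrieved docs and memory fill the rest.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def assemble_context(instructions: str, retrieved: list[str],
                     memory: list[str], budget: int) -> str:
    parts = [instructions]
    used = estimate_tokens(instructions)
    for chunk in retrieved + memory:
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            break  # stop once the window is full
        parts.append(chunk)
        used += cost
    return "\n\n".join(parts)

context = assemble_context(
    "You are a support agent. Answer from the provided documents.",
    ["Doc: Refunds are processed within 5 business days."],
    ["User previously asked about order #1042."],
    budget=200,
)
```

A real pipeline would use a proper tokenizer and prioritize sources by relevance, but the shape is the same: the context window is programmed, not hand-written.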

 

1. Foundations of Context Engineering

  • Moving beyond prompt engineering into programmable context pipelines

  • Understanding the context window as working memory for agents

  • Strategies for selecting, compressing, and isolating context signals
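The select/compress/isolate strategies above can be combined into one greedy pass: score each candidate signal, compress it to a fixed size, and skip anything that no longer fits. The scoring scheme, truncation-as-compression, and all names below are illustrative assumptions, not the workshop's implementation.

```python
# Sketch of "select, compress, isolate": keep the highest-relevance
# context signals that fit a character budget, truncating long ones
# (a crude stand-in for real summarization).

def select_context(candidates: list[tuple[float, str]],
                   budget_chars: int, max_chunk: int = 120) -> list[str]:
    selected, used = [], 0
    # Highest-relevance signals first.
    for score, text in sorted(candidates, reverse=True):
        compressed = text[:max_chunk]      # compress: placeholder truncation
        if used + len(compressed) > budget_chars:
            continue                        # isolate: drop what doesn't fit
        selected.append(compressed)         # select: keep it
        used += len(compressed)
    return selected
```

Swapping the truncation for an LLM-generated summary, and the scores for vector-similarity results, turns this toy loop into a production selection strategy.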


2. Memory Architectures for AI Agents

  • Short-term vs long-term memory design

  • Retrieval-augmented generation (RAG) patterns

  • Using Redis vector search and agent memory systems for persistent context 
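The retrieval step at the heart of RAG reduces to nearest-neighbor search over embeddings. The toy below ranks documents by cosine similarity with hand-made three-dimensional vectors; in the workshop this role is played by Redis vector search over real embeddings, so treat the vectors and document names as pure assumptions.

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity between a query
# embedding and stored document embeddings.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = {
    "redis caching guide": [0.9, 0.1, 0.0],
    "langchain agents intro": [0.1, 0.8, 0.2],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.05, 0.1])  # nearest doc to a "caching"-like query
```

A vector database does exactly this at scale, with approximate indexes so the search stays fast over millions of documents.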


3. High-Performance Context Pipelines

  • Semantic caching to reduce token usage and latency

  • Context pruning and summarization strategies

  • Designing for throughput, reliability, and real-time agent interactions


4. Orchestrating Context with LangChain

  • Building multi-step agent workflows

  • Managing tool outputs, conversation history, and state

  • Structuring context flows for scalable production deployments
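The orchestration ideas above boil down to a loop that threads shared state through each step: every tool call appends its output and a history entry, and that state becomes the context for the next step. LangChain structures this for you; the plain-Python loop, the `lookup_order` tool, and the state shape below are illustrative assumptions only.

```python
# Sketch of a multi-step agent loop managing tool outputs and history.

def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"   # stand-in for a real tool

TOOLS = {"lookup_order": lookup_order}

def run_agent(steps: list[tuple[str, str]]) -> dict:
    state = {"history": [], "tool_outputs": []}
    for tool_name, arg in steps:
        result = TOOLS[tool_name](arg)          # execute the chosen tool
        state["tool_outputs"].append(result)    # keep output for later steps
        state["history"].append(f"called {tool_name}({arg}) -> {result}")
    return state

state = run_agent([("lookup_order", "1042")])
```

In a real agent the next step would be chosen by the LLM from the accumulated state rather than from a fixed list, but the state-threading pattern is identical.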

Time and Location

March 31, 2026
10:30am - 11:30am
Cobb Galleria

Who Should Attend

  • AI practitioners and researchers

  • Developers seeking to transition into advanced agent-building roles

  • Organizations looking to implement custom AI solutions
