
[RECORDED] Knowledge and Prompt Engineering Part 2: Focus on Prompt Design Approaches

Written by Earley Information Science Team | Jul 1, 2024 6:44:09 PM

In our last session on knowledge and prompt engineering, we focused on the knowledge design side. In this follow-up session, we explore prompt design approaches and practices in more depth.

We will cover the following topics:

  • Structured prompting based on use cases, scenarios, and the RACE framework (Role, Action, Context, and Examples); see the first sketch after this list
  • Situational factors that affect the prompt framework
    • Capability – Are you interacting with a chatbot or an assistant?
    • Model – Which LLM are you working with: OpenAI's GPT, Mixtral, Llama 2, or another?
    • Use case – Do you need translation, generation, or a Q&A system?
    • Mode – Are you working with text, vision, or voice?
    • Action – Function calling or fine-tuning?
  • How the prompt design framework needs to align with and take signals from the knowledge engineering framework
  • Approaches to optimize your prompts and achieve the best responses from an LLM
    • Working through problems step-by-step using Tree of Thoughts (ToT), Chain of Thought (CoT), or iterative prompting techniques
    • Injecting emotion into prompts to elicit emotionally engaging language
    • Including signals to improve LLM performance (also shown in the second sketch after this list):
      • Corporate signals such as brand guidelines, terminology, and data sources
      • User signals from prior interactions
      • Industry-specific data

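To make the RACE framework concrete, here is a minimal sketch of how the four components might be assembled into a single prompt. The helper function, the product details, and the example Q&A are illustrative assumptions, not material from the session.

```python
# Minimal sketch of a RACE-structured prompt (Role, Action, Context, Examples).
# All content below is invented for illustration.

def build_race_prompt(role: str, action: str, context: str, examples: list[str]) -> str:
    """Assemble the four RACE components into a single prompt string."""
    example_block = "\n".join(f"- {ex}" for ex in examples)
    return (
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Examples:\n{example_block}"
    )

prompt = build_race_prompt(
    role="You are a product support specialist for an industrial parts catalog.",
    action="Answer the customer's question in two short paragraphs.",
    context="The customer is asking about torque specifications for a fastener model.",
    examples=[
        "Q: What is the operating temperature range? A: This model operates between -20 and 120 C.",
    ],
)
print(prompt)  # Send this string to the LLM of your choice.
```
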
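The step-by-step reasoning and signal-injection topics can be combined in a single prompt. The sketch below assumes a chat-style message format; the brand guidelines, terminology mapping, and prior-interaction text are invented placeholders, and the assembled messages would be passed to whichever chat-completion client you use.

```python
# Sketch of chain-of-thought style prompting combined with injected "signals"
# (brand guidelines, terminology, prior-interaction context).
# The signal values below are assumptions for illustration only.

corporate_signals = {
    "brand_voice": "Concise, plain language; avoid jargon.",
    "terminology": {"item": "SKU", "customer": "member"},
}
user_signals = {
    "prior_interaction": "Member previously asked about return policies for SKUs bought online.",
}

# Fold the signals into the system message so they shape every response.
system_message = (
    "Follow these brand guidelines: " + corporate_signals["brand_voice"] + "\n"
    "Preferred terminology: "
    + ", ".join(f"say '{v}' instead of '{k}'" for k, v in corporate_signals["terminology"].items())
    + "\n"
    "Relevant history: " + user_signals["prior_interaction"]
)

# Chain-of-thought instruction: ask the model to reason step by step before answering.
user_message = (
    "A member wants to exchange an online purchase in a retail store. "
    "Think through the applicable policy step by step, then give a short final answer."
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]

# Pass `messages` to whichever chat-completion client you use (OpenAI, a hosted Llama 2, etc.).
for m in messages:
    print(f"[{m['role']}]\n{m['content']}\n")
```
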
Speakers

    • Seth Earley
      CEO and Founder, Earley Information Science
    • Nick Usborne
      Copywriter, Trainer, and Speaker 
    • Sanjay Mehta
      Principal Solution Architect, Earley Information Science