Generative AI with Amazon SageMaker and Vector Engine for Amazon OpenSearch Serverless
Generative AI is powered by large language models (LLMs) that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). Many foundation models can power a variety of use cases, from text generation and summarization to product search. In this session, we cover how the vector engine for Amazon OpenSearch Serverless can be used to improve the performance and accuracy of LLM responses. Learn how an LLM can augment a typical product search use case. We also dive into the architecture and demonstrate how to build a Q&A bot based on an LLM that is fine-tuned on a specific dataset.
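The session's architecture centers on retrieving relevant context from a vector collection before prompting the LLM. Below is a minimal sketch of that retrieval step against an Amazon OpenSearch Serverless vector collection using the opensearch-py client; the collection endpoint, index name, vector field, and the idea that a query embedding is produced elsewhere (for example, by a SageMaker endpoint) are assumptions for illustration, not details from the session.

```python
# Hypothetical retrieval step for a RAG-style Q&A bot: query a k-NN index
# on an Amazon OpenSearch Serverless vector collection for context passages.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

REGION = "us-east-1"
HOST = "your-collection-id.us-east-1.aoss.amazonaws.com"  # placeholder collection endpoint
INDEX = "product-docs"                                     # placeholder index name

# Sign requests with SigV4 for the OpenSearch Serverless service ("aoss").
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION, "aoss")

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

def retrieve_context(query_embedding, k=3):
    """Return the text of the top-k documents nearest to the query embedding.

    Assumes the index maps an "embedding" knn_vector field and a "text" field;
    the query embedding would typically come from the same model used to embed
    the documents (e.g., a SageMaker-hosted embedding model).
    """
    body = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
    }
    response = client.search(index=INDEX, body=body)
    return [hit["_source"]["text"] for hit in response["hits"]["hits"]]
```

The retrieved passages would then be concatenated into the prompt sent to the LLM, which is how the vector engine helps ground and improve the accuracy of the model's responses.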
Speaker: Muhammad Ali, Principal OpenSearch Solutions Architect, AWS