Unlock new value from diverse data types by leveraging multimodal capabilities

Join this session to learn how to amplify chatbot applications by unlocking diverse data types and multimodal capabilities with Amazon Bedrock multimodal embeddings. We explain approaches to complex scenarios such as handling tabular data embedded within Word documents and extracting meaningful insights from diverse data sources. Understand key strategies for seamlessly integrating multimodal inputs and outputs, extending beyond textual interactions to include images so you can enhance the user experience. We walk through the semantic, hybrid, and filter/query-based search strategies you can use for context retrieval. Understand the potential pitfalls in handling structured and unstructured data, including web-crawled content and documents, and ways to overcome them. We explain how to navigate common scenarios and harness the full spectrum of available data to enhance your chatbot applications.
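As a rough illustration of the multimodal embedding step described above, the sketch below builds a request body that embeds text, an image, or both. The model ID and field names (`inputText`, `inputImage`) are assumptions based on Amazon Titan Multimodal Embeddings G1, not taken from this abstract; verify them against the current Bedrock documentation before use.

```python
import base64
import json

# Assumed model ID for Titan Multimodal Embeddings G1 (verify in the
# Bedrock console/docs; this is an illustrative assumption).
MODEL_ID = "amazon.titan-embed-image-v1"

def build_embedding_request(text=None, image_bytes=None):
    """Return a JSON request body for a multimodal embedding call,
    accepting text, raw image bytes, or both."""
    body = {}
    if text is not None:
        body["inputText"] = text
    if image_bytes is not None:
        # Images are sent base64-encoded in the request body.
        body["inputImage"] = base64.b64encode(image_bytes).decode("utf-8")
    if not body:
        raise ValueError("provide text, image_bytes, or both")
    return json.dumps(body)

# The actual invocation requires AWS credentials and is shown only as a
# hedged sketch (boto3 bedrock-runtime client):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId=MODEL_ID,
#     body=build_embedding_request(text="a red office chair"),
# )
# embedding = json.loads(resp["body"].read())["embedding"]
```

Keeping the request-building step separate from the network call makes the payload easy to unit-test without AWS credentials.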
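The semantic, hybrid, and filter/query-based retrieval strategies mentioned above can be sketched minimally as follows. This is not the session's implementation: `hybrid_search`, the `alpha` blending weight, and the `filters` metadata check are hypothetical names introduced here to show how a semantic score (cosine similarity over embeddings) can be blended with a keyword score after metadata filtering.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (semantic score)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query_terms, doc_text):
    """Fraction of query terms that appear in the document text."""
    if not query_terms:
        return 0.0
    words = set(doc_text.lower().split())
    return sum(1 for t in query_terms if t.lower() in words) / len(query_terms)

def hybrid_search(query_vec, query_terms, docs, alpha=0.7, filters=None):
    """Rank documents by a weighted blend of semantic and keyword scores,
    after pruning on metadata (the filter/query-based strategy)."""
    results = []
    for doc in docs:
        if filters and any(doc["meta"].get(k) != v for k, v in filters.items()):
            continue  # metadata filter: drop non-matching documents
        score = (alpha * cosine(query_vec, doc["vec"])
                 + (1 - alpha) * keyword_score(query_terms, doc["text"]))
        results.append((score, doc["id"]))
    return sorted(results, reverse=True)
```

Setting `alpha=1.0` reduces this to pure semantic search and `alpha=0.0` to pure keyword search, which is why a blended weight is a common middle ground for context retrieval.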

Speakers: 
Tristan Nguyen, Senior Solutions Architect, AWS
Isaac Ibrahim, Associate Solutions Architect, AWS