Simplifying modern data pipelines with zero-ETL architecture (Level 200)
Extract, transform, and load (ETL) is the process of combining, cleaning, and normalizing data from different sources to prepare it for analytics and AI/ML workloads. Traditional ETL processes, however, can be time-consuming and complex to develop, maintain, and scale. In this session, we share how zero-ETL eliminates the need for complex ETL data pipelines by enabling direct data movement and federated querying across databases, data lakes, and external sources. Learn how the integration of Amazon Aurora with Amazon Redshift enables near real-time analytics and ML on transactional data stored in Amazon Aurora MySQL-Compatible Edition, without building data pipelines. We then demonstrate the Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service, which supports full-text search, fuzzy search, autocomplete, and vector search for machine learning (ML) capabilities, helping you offer new experiences that boost user engagement and satisfaction with your applications. By the end of the session, you will understand how zero-ETL architecture on AWS empowers users to focus on extracting value from data rather than on pipeline development.
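For orientation before the session, the following is a minimal sketch of what creating an Aurora MySQL to Amazon Redshift zero-ETL integration can look like programmatically. It is not taken from the session demo: the ARNs and integration name are placeholders, and it assumes the RDS CreateIntegration API exposed by recent boto3 releases.

```python
"""
Minimal sketch (placeholders, not the session demo): creating an
Aurora MySQL -> Amazon Redshift zero-ETL integration with boto3.
"""
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Source: the Aurora MySQL-Compatible cluster holding transactional data.
# Target: the Amazon Redshift namespace (provisioned or Serverless) used for analytics.
response = rds.create_integration(
    IntegrationName="orders-zero-etl",  # placeholder name
    SourceArn="arn:aws:rds:us-east-1:123456789012:cluster:orders-aurora",  # placeholder ARN
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/analytics-ns",  # placeholder ARN
)

# The integration is created asynchronously; replication starts once it becomes active.
print(response["Status"])
```

Prerequisite configuration on the Redshift side (a resource policy authorizing the source and case-sensitive identifiers) is omitted here; once the integration is active, you create a database from it in Redshift and query the replicated tables with standard SQL.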
Speakers: Surendar Munimohan, Senior Database Solutions Architect, AWS
Paul Villena, Senior Analytics Solutions Architect, AWS