Tuesday, August 6, 2024 | 10:00 AM - 3:00 PM EDT
Amazon Office | JFK-27 Hank | 12 W 39th St, New York
Learn about small language models (SLMs), which typically have 100 million to 1 billion parameters, offering a compact yet powerful alternative to large language models (LLMs).
Discover how SLMs can outperform LLMs in task-specific generative AI applications.
Save over 90% on inference costs, making SLMs ideal for inference-heavy use cases thanks to reduced compute demand and increased accessibility.
Explore the latest in data multi-tenancy with adapters, enabling robust data management and dynamic per-tenant query routing (see the illustrative sketch below this list).
Gain insights into avoiding overfitting and underfitting during fine-tuning, and understand how benchmark results differ when models are fine-tuned.
Learn how to improve customer question and feedback experiences in generative AI applications using data training techniques.
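As a rough illustration of the adapter-based multi-tenancy mentioned above, the sketch below shows how a shared SLM base model might route each tenant's queries to its own fine-tuned adapter. The tenant IDs, adapter names, base-model name, and the placeholder inference call are assumptions for illustration only, not part of the workshop materials or any specific AWS API.

```python
# Hypothetical sketch of per-tenant adapter routing for a multi-tenant SLM
# deployment. A real system would load LoRA-style adapter weights into a
# shared base model rather than return a placeholder string.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class AdapterRouter:
    """Maps each tenant to its own fine-tuned adapter on a shared base model."""
    base_model: str
    adapters: Dict[str, str] = field(default_factory=dict)  # tenant_id -> adapter name

    def register(self, tenant_id: str, adapter_name: str) -> None:
        """Attach a tenant-specific adapter so each tenant's data stays isolated."""
        self.adapters[tenant_id] = adapter_name

    def route(self, tenant_id: str, prompt: str) -> str:
        """Dynamically select the tenant's adapter for each incoming query."""
        adapter = self.adapters.get(tenant_id)
        if adapter is None:
            raise KeyError(f"No adapter registered for tenant {tenant_id!r}")
        # Placeholder for the actual inference call against base_model + adapter.
        return f"[{self.base_model} + {adapter}] response to: {prompt}"


if __name__ == "__main__":
    router = AdapterRouter(base_model="example-slm-1b")
    router.register("tenant-a", "adapter-tenant-a")
    router.register("tenant-b", "adapter-tenant-b")
    print(router.route("tenant-a", "Summarize my latest support tickets."))
```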
You’ll get 1:1 workshop assistance from AWS and Invisible experts.
Access AWS-funded SI Partner engagement for data preparation and fine-tuning assistance.
Receive AWS credits for workload migration, plus an additional $150K in credits for AWS silicon users.
Gain free access to solution accelerators and pre-built examples to implement immediately.
Learn about the AWS programs for SLM adoption that provide comprehensive support and resources.