AWS LLM SageMaker
1.0.0
This hands-on workshop for developers and solution builders introduces how to leverage foundation models (FMs) with Amazon SageMaker.
In this lab, we walk through some of the most popular Generative AI usage patterns we see among our customers, along with examples of how organizations use GenAI to drive value by improving productivity.
You can achieve this by leveraging foundation models that help you compose emails, summarize text, answer questions, build chatbots, and create images.
This lab material is also distributed through AWS Samples on GitHub; the hands-on materials in this repository are kept more up-to-date than the official AWS Samples versions.
1_prepare-dataset-alpaca-method.ipynb
: Prepares a training dataset from the instruction dataset. This method tokenizes each sample individually (see the per-sample tokenization sketch below).

1_prepare-dataset-chunk-method.ipynb
: Prepares a training dataset from the instruction dataset. This method concatenates all samples and splits the result into fixed-size chunks (see the chunking sketch below).

2_local-train-debug-lora.ipynb
: Debugs the training code with a small amount of sample data in the development environment before running the full job on a training instance (see the local LoRA debugging sketch below). If you are already familiar with fine-tuning, skip this notebook and proceed to 3_sm-train-lora.ipynb.

3_sm-train-lora.ipynb
: Performs fine-tuning on SageMaker training instances (see the estimator sketch below).
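
The alpaca method turns each instruction record into exactly one training sample. A minimal sketch of that idea, assuming a Hugging Face tokenizer, the public `tatsu-lab/alpaca` dataset, and an illustrative prompt template (the notebook's actual model, dataset, and template may differ):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b")  # assumed model

# Illustrative Alpaca-style prompt template, not the notebook's exact one.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def tokenize_sample(example, max_length=512):
    # Render one instruction record into the prompt template, then tokenize
    # it on its own: one record becomes exactly one training sample.
    text = PROMPT.format(**example) + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=max_length)

dataset = load_dataset("tatsu-lab/alpaca", split="train")  # example instruction dataset
tokenized = dataset.map(tokenize_sample, remove_columns=dataset.column_names)
```

Because every sample is tokenized separately, short samples must later be padded up to the batch length, which wastes some compute; the chunk method below avoids that.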
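The chunk method instead concatenates every tokenized sample into one long token stream and cuts it into fixed-size blocks, so no tokens are spent on padding. A sketch continuing from the `tokenized` dataset above; `CHUNK_SIZE` is an assumed value:

```python
from itertools import chain

CHUNK_SIZE = 2048  # assumed block size; pick to match the model's context length

def group_texts(examples):
    # Flatten each tokenized field (input_ids, attention_mask) across the
    # whole batch into one continuous list.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    # Keep only a whole number of chunks; the trailing remainder is dropped.
    total_length = (len(concatenated["input_ids"]) // CHUNK_SIZE) * CHUNK_SIZE
    return {
        k: [t[i : i + CHUNK_SIZE] for i in range(0, total_length, CHUNK_SIZE)]
        for k, t in concatenated.items()
    }

chunked = tokenized.map(group_texts, batched=True)
```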
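Before paying for a training instance, it is cheap to run a few optimizer steps locally on a handful of samples to flush out code errors. A minimal debugging sketch using PEFT LoRA, continuing from the `tokenizer` and `chunked` objects above; the model name, LoRA hyperparameters, and step count are assumptions, not the notebook's exact settings:

```python
from transformers import (AutoModelForCausalLM, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")  # assumed model

# Wrap the base model so only the small LoRA adapter matrices are trained.
lora_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # LoRA trains a small fraction of the weights

tokenizer.pad_token = tokenizer.eos_token  # many causal-LM tokenizers have no pad token
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # labels = input_ids

args = TrainingArguments(
    output_dir="./debug-out",
    per_device_train_batch_size=1,
    max_steps=10,        # just enough steps to verify the training loop runs
    logging_steps=1,
)
Trainer(
    model=model,
    args=args,
    train_dataset=chunked.select(range(8)),  # debug on a few samples only
    data_collator=collator,
).train()
```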
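Once the code runs locally, the same training script can be launched as a SageMaker training job via the HuggingFace estimator. A sketch of the launch pattern; the entry point, source directory, instance type, container versions, hyperparameters, and S3 path are illustrative assumptions:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role of the notebook environment

estimator = HuggingFace(
    entry_point="train.py",           # assumed training script
    source_dir="./scripts",           # assumed source directory
    role=role,
    instance_type="ml.g5.2xlarge",    # example GPU training instance
    instance_count=1,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={"epochs": 3, "lora_r": 8},  # illustrative values
)

# Train on the prepared dataset uploaded to S3 (path is an assumption).
estimator.fit({"training": f"s3://{session.default_bucket()}/llm/train"})
```

SageMaker provisions the instance, runs the script inside a managed Hugging Face container, and uploads the resulting model artifacts back to S3 when the job finishes.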