The End-to-End LLM (Large Language Model) Bootcamp is designed from a real-world perspective, following the data processing, development, and deployment pipeline paradigm. Attendees walk through the workflow of preprocessing the openassistant-guanaco dataset for the text generation task and training on it with the Llama 2 7B model, a pre-trained and fine-tuned LLM. Attendees will also learn to optimize an LLM using NVIDIA® TensorRT™-LLM, an SDK for high-performance large language model inference; to guardrail prompts to and responses from the LLM using NVIDIA NeMo Guardrails; and to deploy the AI pipeline using the NVIDIA TensorRT-LLM backend (powered by the Triton™ Inference Server), open-source software that standardizes LLM deployment and execution across every workload.
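As a taste of the preprocessing step, the sketch below splits one openassistant-guanaco record into role-tagged turns. It assumes the dataset's common layout, where each record is a single text string with `### Human:` and `### Assistant:` markers; the helper name `split_turns` and the sample string are illustrative, not part of the Bootcamp material.

```python
import re

def split_turns(text: str):
    """Split a guanaco-style conversation string into (role, utterance) pairs.

    Assumes the "### Human:" / "### Assistant:" marker format used by the
    openassistant-guanaco dataset.
    """
    # re.split with a capturing group yields:
    # ["", "Human", " utterance", "Assistant", " utterance", ...]
    parts = re.split(r"###\s*(Human|Assistant):", text)
    return [(role, utterance.strip())
            for role, utterance in zip(parts[1::2], parts[2::2])]

# Hypothetical sample record in the guanaco format:
sample = ("### Human: What is TensorRT-LLM?"
          "### Assistant: An SDK for optimizing LLM inference on NVIDIA GPUs.")
print(split_turns(sample))
```

Pairs extracted this way can then be mapped into whatever prompt template the fine-tuning framework expects before tokenization.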
This content contains three Labs, plus a challenge notebook:
The tools and frameworks used in the Bootcamp material are as follows:
The total Bootcamp material takes approximately 7 hours and 30 minutes to teach. We recommend dividing the material across two days, covering Labs 1-3 in one session and the rest in the next session.
To deploy the Labs, please refer to the deployment guide presented here.
This material originates from the OpenHackathons GitHub repository. Check out additional materials here.
Don't forget to check out additional Open Hackathons Resources and join our OpenACC and Hackathons Slack Channel to share your experience and get more help from the community.
Copyright © 2024 OpenACC-Standard.org. This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. These materials may include references to hardware and software developed by other entities; all applicable licensing and copyrights apply.