Developing Generative AI Solutions on AWS

Course 1244

  • Duration: 2 days
  • Language: English
  • Level: Intermediate

This course is designed to introduce generative artificial intelligence (AI) to software developers interested in using large language models (LLMs) without fine-tuning.

The course provides an overview of generative AI, planning a generative AI project, getting started with Amazon Bedrock, the foundations of prompt engineering, and architecture patterns for building generative AI applications with Amazon Bedrock and LangChain.

Gen AI Solutions on AWS Training Delivery Methods

  • In-Person
  • Online
  • Private Team Training (upskill your whole team at your facility)

Gen AI Solutions on AWS Training Information

Training Prerequisites

Gen AI Solutions on AWS Training Outline

Module 1: Introduction to Generative AI - Art of the Possible

  • Overview of ML
  • Basics of generative AI
  • Generative AI use cases
  • Generative AI in practice
  • Risks and benefits

Module 2: Planning a Generative AI Project

  • Generative AI fundamentals
  • Generative AI in practice
  • Generative AI context
  • Steps in planning a generative AI project
  • Risks and mitigation

Module 3: Getting Started with Amazon Bedrock

  • Introduction to Amazon Bedrock
  • Architecture and use cases
  • How to use Amazon Bedrock
  • Demonstration: Setting up Amazon Bedrock access and using playgrounds (see the sketch after this list)
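
In practice, getting started means creating Amazon Bedrock clients with the AWS SDK for Python. Below is a minimal sketch, assuming boto3 is installed, AWS credentials are configured, and model access has been enabled in the Amazon Bedrock console; the region name is an illustrative assumption.

```python
import boto3

# Control-plane client: browse the model catalog (region is an assumption)
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models currently available to this account and region
for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(summary["modelId"])

# Data-plane client: used later for inference calls (InvokeModel, Converse)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
```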

Module 4: Foundations of Prompt Engineering

  • Basics of foundation models
  • Fundamentals of prompt engineering
  • Basic prompt techniques
  • Advanced prompt techniques
  • Model-specific prompt techniques
  • Demonstration: Fine-tuning a basic text prompt
  • Addressing prompt misuses
  • Mitigating bias
  • Demonstration: Image bias mitigation

Module 5: Amazon Bedrock Application Components

  • Overview of generative AI application components
  • Applications and use cases
  • Foundation models and the FM interface
  • Working with datasets and embeddings
  • Demonstration: Word embeddings (see the sketch after this list)
  • Additional application components
  • Retrieval Augmented Generation (RAG)
  • Model fine-tuning
  • Securing generative AI applications
  • Generative AI application architecture
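
To make the embeddings topic concrete, the sketch below requests an embedding vector from Amazon Bedrock; the Titan embeddings model ID is an assumption and may differ by region or account access.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    """Return an embedding vector for the given text."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed model ID
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Embedding vectors like this underpin RAG: documents are embedded, stored in
# a vector index, and retrieved by similarity to the embedded user query.
vector = embed("Amazon Bedrock exposes foundation models through an API.")
print(len(vector))
```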

Module 6: Amazon Bedrock Foundation Models

  • Introduction to Amazon Bedrock foundation models
  • Using Amazon Bedrock FMs for inference
  • Amazon Bedrock methods
  • Data protection and auditability
  • Lab: Invoke an Amazon Bedrock model for text generation using a zero-shot prompt (see the sketch after this list)
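
As a rough idea of what the lab covers, the sketch below sends a zero-shot prompt to a text-generation model through the InvokeModel API. The model ID and the Anthropic-style request body are assumptions; adapt them to whichever model you have access to.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Zero-shot prompt: the task is stated directly, with no examples provided.
prompt = "Explain in two sentences what a foundation model is."

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```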

Module 7: LangChain

  • Optimizing LLM performance
  • Integrating AWS and LangChain
  • Using models with LangChain
  • Constructing prompts
  • Structuring documents with indexes
  • Storing and retrieving data with memory
  • Using chains to sequence components (see the sketch after this list)
  • Managing external resources with LangChain agents
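
A minimal sketch of the chaining idea, assuming the langchain-aws and langchain-core packages are installed; the ChatBedrock class name and the model ID are assumptions, so check the package documentation for the current interface.

```python
from langchain_aws import ChatBedrock  # assumed package/class name
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Wrap a Bedrock-hosted model so LangChain can call it
llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    region_name="us-east-1",
)

# Sequence a prompt template, the model, and an output parser into a chain
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "Amazon Bedrock provides API access to foundation models."}))
```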

Module 8: Architecture Patterns

  • Introduction to architecture patterns
  • Text summarization
  • Question answering
  • Demonstration: Using Amazon Bedrock for question answering
  • Chatbot
  • Lab: Build a chatbot
  • Code generation
  • Demonstration: Using Amazon Bedrock models for code generation
  • LangChain and agents for Amazon Bedrock
  • Lab: Building conversational applications with the Converse API (see the sketch after this list)
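
As a rough sketch of the chatbot pattern and the Converse API lab, the code below keeps the conversation history in a list and passes it back on every turn; the model ID is an assumption.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model ID

messages = []  # conversation history, resent on every turn

def chat(user_text: str) -> str:
    messages.append({"role": "user", "content": [{"text": user_text}]})
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=messages,
        system=[{"text": "You are a concise, helpful assistant."}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.5},
    )
    reply = response["output"]["message"]["content"][0]["text"]
    messages.append({"role": "assistant", "content": [{"text": reply}]})
    return reply

print(chat("What is Amazon Bedrock?"))
print(chat("How does LangChain fit in?"))
```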

Gen AI Solutions on AWS Training FAQs

What is a large language model (LLM)?

A large language model (LLM) is a type of artificial intelligence model trained on massive amounts of text data to understand and generate human-like language.

These models are built on deep learning architectures, most commonly the transformer, although earlier language models used recurrent neural networks (RNNs).

Not at this time.

What is prompt engineering?

Prompt engineering is the process of designing and crafting prompts, or instructions, for language models, particularly large language models (LLMs) such as the GPT (Generative Pre-trained Transformer) family.

The goal is to guide the model's behavior and output toward a desired task or objective.
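
For example, two techniques covered in Module 4 are zero-shot and few-shot prompting; the sketch below contrasts them using made-up prompt strings.

```python
# Zero-shot: the task is described, but no worked examples are given.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died after a day.'"
)

# Few-shot: a handful of labeled examples steer the model toward the
# desired behavior and output format.
few_shot = """Classify the sentiment of each review as positive or negative.

Review: "Setup took five minutes and everything just worked."
Sentiment: positive

Review: "Support never answered my ticket."
Sentiment: negative

Review: "The battery died after a day."
Sentiment:"""
```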
