AWS News Blog
https://aws.amazon.com/blogs/aws/
Announcements, Updates, and Launches

Now Available – Second-Generation FPGA-Powered Amazon EC2 instances (F2)
https://aws.amazon.com/blogs/aws/now-available-second-generation-fpga-powered-amazon-ec2-instances-f2/
Jeff Barr | Wed, 11 Dec 2024 23:09:48 +0000 | Amazon EC2, Announcements, Launch, News

Accelerate genomics, multimedia, big data, networking, and more with up to 192 vCPUs, 8 FPGAs, 2 TiB memory, and 100 Gbps networking – outpacing CPUs by up to 95x.

<p>Equipped with up to eight AMD Field-Programmable Gate Arrays (FPGAs), AMD EPYC (Milan) processors with up to 192 cores, High Bandwidth Memory (HBM), up to 8 TiB of SSD-based instance storage, and up to 2 TiB of memory, the new F2 instances are available in two sizes, and are ready to accelerate your genomics, multimedia processing, big data, satellite communication, networking, silicon simulation, and live video workloads.</p>
<p><span style="text-decoration: underline"><strong>A Quick FPGA Recap</strong></span><br> Here’s how I explained the FPGA model when we <a href="https://aws.amazon.com/blogs/aws/developer-preview-ec2-instances-f1-with-programmable-hardware/">previewed</a> the first generation of FPGA-powered <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (Amazon EC2)</a> instances:</p>
<blockquote>
<p>One of the more interesting routes to a custom, hardware-based solution is known as a Field Programmable Gate Array, or FPGA. In contrast to a purpose-built chip which is designed with a single function in mind and then hard-wired to implement it, an FPGA is more flexible. It can be programmed in the field, after it has been plugged in to a socket on a PC board. Each FPGA includes a fixed, finite number of simple logic gates. Programming an FPGA is “simply” a matter of connecting them up to create the desired logical functions (AND, OR, XOR, and so forth) or storage elements (flip-flops and shift registers). Unlike a CPU which is essentially serial (with a few parallel elements) and has fixed-size instructions and data paths (typically 32 or 64 bit), the FPGA can be programmed to perform many operations in parallel, and the operations themselves can be of almost any width, large or small.</p>
</blockquote>
<p>Since that launch, AWS customers have used F1 instances to host many different types of applications and services. With a newer FPGA, more processing power, and more memory bandwidth, the new F2 instances are an even better host for highly parallelizable, compute-intensive workloads.</p>
<p>Each of the <a href="https://www.amd.com/en/products/adaptive-socs-and-fpgas/fpga/virtex-ultrascale-plus-hbm.html">AMD Virtex UltraScale+</a> HBM VU47P FPGAs has 2.85 million system logic cells and 9,024 DSP slices (up to 28 TOPS of DSP compute performance when processing INT8 values). The FPGA Accelerator Card associated with each F2 instance provides 16 GiB of High Bandwidth Memory and 64 GiB of DDR4 memory per FPGA.</p>
<p><strong><span style="text-decoration: underline">Inside the F2</span><br> </strong>F2 instances are powered by 3rd generation <a href="https://www.amd.com/en/products/processors/server/epyc/7003-series.html">AMD EPYC</a> (Milan) processors. In comparison to F1 instances, they offer up to 3x as many processor cores, up to twice as much system memory and NVMe storage, and up to 4x the network bandwidth. Each FPGA comes with 16 GiB High Bandwidth Memory (HBM) with up to 460 GiB/s bandwidth. Here are the instance sizes and specs:</p>
<table style="margin-left: auto;margin-right: auto;border: 1px solid black;border-collapse: collapse" cellpadding="8">
<tbody>
<tr style="background-color: #e0e0e0;vertical-align: bottom">
<td style="border-bottom: 1px solid black;text-align: center"><strong>Instance Name</strong></td>
<td style="border-bottom: 1px solid black;text-align: center"><strong>vCPUs<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>FPGAs<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>FPGA Memory<br> HBM / DDR4<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>Instance Memory<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>NVMe Storage<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>EBS Bandwidth<br> </strong></td>
<td style="border-bottom: 1px solid black;text-align: center" align="center"><strong>Network Bandwidth<br> </strong></td>
</tr>
<tr style="border-bottom: 1px solid #ddd">
<td align="left"><strong>f2.12xlarge</strong></td>
<td align="center">48</td>
<td align="center">2</td>
<td align="center">32 GiB /<br> 128 GiB</td>
<td align="center">512 GiB</td>
<td align="center">1900 GiB<br> (2x 950 GiB)</td>
<td align="center">15 Gbps</td>
<td align="center">25 Gbps</td>
</tr>
<tr>
<td align="left"><strong>f2.48xlarge</strong></td>
<td align="center">192</td>
<td align="center">8</td>
<td align="center">128 GiB /<br> 512 GiB</td>
<td align="center">2,048 GiB</td>
<td align="center">7600 GiB<br> (8x 950 GiB)</td>
<td align="center">60 Gbps</td>
<td align="center">100 Gbps</td>
</tr>
</tbody>
</table>
<p>The high-end <strong>f2.48xlarge</strong> instance supports the <a href="https://aws.amazon.com/media-services/resources/cdi/">AWS Cloud Digital Interface</a> (CDI) to reliably transport uncompressed live video between applications, with instance-to-instance latency as low as 8 milliseconds.</p>
<p><span style="text-decoration: underline"><strong>Building FPGA Applications</strong></span><br> The <a href="https://github.com/aws/aws-fpga">AWS EC2 FPGA Development Kit</a> contains the tools that you will use to develop, simulate, debug, compile, and run your hardware-accelerated FPGA applications. You can launch the kit’s <a href="https://aws.amazon.com/marketplace/pp/prodview-f5kjsenkfkz5u">FPGA Developer AMI</a> on a memory-optimized or compute-optimized instance for development and simulation, then use an F2 instance for final debugging and testing.</p>
<p>The tools included in the developer kit support a variety of development paradigms, tools, accelerator languages, and debugging options. Regardless of your choice, you will ultimately create an Amazon FPGA Image (AFI) which contains your custom acceleration logic and the <a href="https://github.com/aws/aws-fpga/blob/f2/hdk/docs/AWS_Shell_Interface_Specification.md">AWS Shell</a> which implements access to the FPGA memory, PCIe bus, interrupts, and external peripherals. You can deploy AFIs to as many F2 instances as desired, share them with other AWS accounts, or publish them on AWS Marketplace.</p>
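<p>For example, after your design checkpoint (DCP) has been uploaded to Amazon S3, you can register it as an AFI through the EC2 API. Here is a minimal sketch using boto3; the bucket, keys, and AFI name are illustrative placeholders:</p>
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2")

# Register a design checkpoint stored in S3 as an Amazon FPGA Image (AFI).
# The bucket, keys, and name below are placeholders for your own values.
response = ec2.create_fpga_image(
    Name="my-f2-accelerator",
    Description="Custom acceleration logic for F2",
    InputStorageLocation={"Bucket": "my-dcp-bucket", "Key": "dcp/my-design.tar"},
    LogsStorageLocation={"Bucket": "my-dcp-bucket", "Key": "logs/"},
)

# The global AFI ID (agfi-...) is what you load onto the FPGA from an F2 instance.
print(response["FpgaImageId"], response["FpgaImageGlobalId"])</code></pre>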
<p>If you have already created an application that runs on F1 instances, you will need to update your development environment to use the latest AMD tools, then rebuild and validate before upgrading to F2 instances.</p>
<p><span style="text-decoration: underline"><strong>FPGA Instances in Action</strong></span><br> Here are some cool examples of how F1 and F2 instances can support unique and highly demanding workloads:</p>
<p><strong>Genomics</strong> – Multinational pharmaceutical and biotechnology company AstraZeneca used thousands of F1 instances to build the world’s fastest genomics pipeline, able to process over 400K whole genome samples in under two months. They will adopt <a href="https://www.illumina.com/products/by-type/informatics-products/dragen-secondary-analysis.html">Illumina DRAGEN</a> for F2 to realize better performance at a lower cost, while accelerating disease discovery, diagnosis, and treatment.</p>
<p><strong>Satellite Communication</strong> – Satellite operators are moving from inflexible and expensive physical infrastructure (modulators, demodulators, combiners, splitters, and so forth) toward agile, software-defined, FPGA-powered solutions. Using the digital signal processor (DSP) elements on the FPGA, these solutions can be reconfigured in the field to support new waveforms and to meet changing requirements. Key F2 features, including up to 8 FPGAs per instance, generous amounts of network bandwidth, and support for the <a href="https://www.dpdk.org/">Data Plane Development Kit</a> (DPDK) using <a href="https://github.com/aws/aws-fpga/tree/f2/sdk/apps/virtual-ethernet">Virtual Ethernet</a>, make it possible to process multiple, complex waveforms in parallel.</p>
<p><strong>Analytics</strong> – <a href="https://www.neuroblade.com/">NeuroBlade</a>‘s SQL Processing Unit (SPU) integrates with Presto, Apache Spark, and other open source query engines, delivering faster query processing and market-leading query throughput efficiency when run on F2 instances.</p>
<p><span style="text-decoration: underline"><strong>Things to Know</strong></span><br> Here are a couple of final things that you should know about the F2 instances:</p>
<p><strong>Regions</strong> – F2 instances are available today in the US East (N. Virginia) and Europe (London) AWS Regions, with plans to extend availability to additional regions over time.</p>
<p><strong>Operating Systems</strong> – F2 instances are Linux-only.</p>
<p><strong>Purchasing Options</strong> – F2 instances are available in <a href="https://aws.amazon.com/ec2/pricing/on-demand/">On-Demand</a>, <a href="https://aws.amazon.com/ec2/spot/">Spot</a>, <a href="https://aws.amazon.com/savingsplans/compute-pricing/">Savings Plan</a>, <a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/">Dedicated Instance</a>, and <a href="https://aws.amazon.com/ec2/dedicated-hosts/">Dedicated Host</a> form.</p>
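<p>For a rough sense of what launching an F2 instance looks like, here is a hedged boto3 sketch; the AMI ID and key pair name are placeholders, and any current Linux AMI (such as the FPGA Developer AMI) could be used:</p>
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single On-Demand f2.12xlarge instance.
# The AMI ID and key name are illustrative placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="f2.12xlarge",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])</code></pre>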
<p>— <a href="https://twitter.com/jeffbarr">Jeff</a>;</p>Introducing Buy with AWS: an accelerated procurement experience on AWS Partner sites, powered by AWS Marketplace
https://aws.amazon.com/blogs/aws/introducing-buy-with-aws-an-accelerated-procurement-experience-on-aws-partner-sites-powered-by-aws-marketplace/
Prasad Rao | Wed, 04 Dec 2024 23:30:08 +0000 | Announcements, AWS Marketplace, AWS re:Invent, Featured, Launch, News

Buy with AWS enables you to seamlessly discover and purchase products available in AWS Marketplace from AWS Partner websites using your AWS account.

<p>Today, we are announcing <a href="https://aws.amazon.com/marketplace/features/buy-with-aws">Buy with AWS</a>, a new way to discover and purchase solutions available in <a href="https://aws.amazon.com/mp/marketplace-service/overview/">AWS Marketplace</a> from <a href="https://aws.amazon.com/partners/">AWS Partner</a> sites. You can use Buy with AWS to accelerate and streamline your product procurement process on websites outside of <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a>. This feature provides you the ability to find, try, and buy solutions from Partner websites using your AWS account.</p>
<p>AWS Marketplace is a curated digital store for you to find, buy, deploy, and manage cloud solutions from Partners. Buy with AWS is another step towards AWS Marketplace making it easy for you to find and procure the right Partner solutions, when and where you need them. You can conveniently find and procure solutions in AWS Marketplace, through integrated AWS service consoles, and now on Partner websites.</p>
<p><span style="text-decoration: underline;"><strong>Accelerate cloud solution discovery and evaluation</strong></span></p>
<p>You can now discover solutions from Partners available for purchase through AWS Marketplace as you explore solutions on the web beyond AWS.</p>
<p>Look for products that are “Available in AWS Marketplace” when browsing on Partner sites, then accelerate your evaluation process with fast access to free trials, demo requests, and inquiries for custom pricing.</p>
<p>For example, I want to evaluate <a href="https://www.wiz.io/">Wiz</a> to see how it can help with my cloud security requirements. While browsing the Wiz website, I come across a <a href="https://www.wiz.io/partners/aws">page where I see “Connect Wiz with Amazon Web Services (AWS)”</a>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Wiz-1-2.png"><img class="aligncenter wp-image-91568 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Wiz-1-2.png" alt="Wiz webpage featuring Buy With AWS" width="1307" height="785"></a></p>
<p>I choose <strong>Try with AWS</strong>. It asks me to sign in to my AWS account if I’m not signed in already. I’m then presented with a Wiz and AWS co-branded page for me to sign up for the free trial.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Wiz-2.png"><img loading="lazy" class="aligncenter wp-image-91556 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Wiz-2.png" alt="Wiz and AWS co-branded page to sign up for free trial using Buy with AWS through AWS Marketplace" width="904" height="458"></a></p>
<p>The discovery experience that you see will vary depending on the type of Partner website you’re shopping from. Wiz is an example of how Buy with AWS can be implemented by an independent software vendor (ISV). Now, let’s look at an example of an AWS Marketplace Channel Partner, or reseller, who operates a storefront of their own.</p>
<p>I browse to the <a href="https://marketplace-aws.bytes.co.uk/products">Bytes storefront</a> with product listings from AWS Marketplace. I have the option to filter and search from the curated product listings, which are available in AWS Marketplace, on the Bytes site.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Bytes-1.png"><img loading="lazy" class="aligncenter wp-image-91558 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Bytes-1.png" alt="Bytes storefront with product listings from AWS Marketplace" width="904" height="634"></a></p>
<p>I choose <strong>View Details</strong> for Fortinet and see an option to <strong>Request Private Offer</strong> from AWS.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Bytes-2.png"><img loading="lazy" class="aligncenter wp-image-91559 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/Bytes-2.png" alt="Bytes storefront with option to Request Private Offer for Fortinet from AWS Marketplace" width="904" height="404"></a></p>
<p>As you can tell, on a Channel Partner site, you can browse curated product listings available in AWS Marketplace, filter products, and request custom pricing directly from their website.</p>
<p><span style="text-decoration: underline;"><strong>Streamline product procurement on AWS Partner sites</strong></span><br> I had a seamless experience using Buy with AWS to access a free trial for Wiz and browse through the Bytes storefront to request a private offer.</p>
<p>Now I want to try <a href="https://www.databricks.com/">Databricks</a> for one of the applications I’m building. I sign up for a <a href="http://signup.databricks.com/">Databricks trial</a> through their website.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-1.png"><img loading="lazy" class="aligncenter wp-image-91560 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-1.png" alt="Database homepage after login with option to Upgrade" width="1443" height="500"></a></p>
<p>I choose <strong>Upgrade</strong> and see that Databricks is available in AWS Marketplace, which gives me the option to <strong>Buy with AWS</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-2.png"><img loading="lazy" class="aligncenter wp-image-91561 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-2.png" alt="Option to upgrade to Databricks premium using Buy with AWS feature of AWS marketplace" width="1426" height="882"></a></p>
<p>I choose <strong>Buy with AWS</strong>, and after I sign in to my AWS account, I land on a Databricks and AWS Marketplace co-branded procurement page.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-3.png"><img loading="lazy" class="aligncenter wp-image-91562 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-3.png" alt="Databricks and AWS co-branded page to subscribe using Buy with AWS" width="1433" height="888"></a></p>
<p>I complete the purchase on the co-branded procurement page and continue to set up my Databricks account.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-4.png"><img loading="lazy" class="aligncenter wp-image-91563 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/DB-4.png" alt="Databricks and AWS co-branded page after subscribing using Buy with AWS" width="1431" height="892"></a></p>
<p>As you can tell, I didn’t have to navigate the challenge of managing procurement processes for multiple vendors. I also didn’t have to speak with a sales representative or onboard a new vendor in my billing system, which would have required multiple approvals and delayed the overall process.</p>
<p><span style="text-decoration: underline;"><strong>Access centralized billing and benefits through AWS Marketplace</strong></span><br> Because Buy with AWS purchases are transacted through and managed in AWS Marketplace, you also benefit from the post-purchase experience of AWS Marketplace, including consolidated AWS billing, centralized subscription management, and access to cost optimization tools.</p>
<p>For example, through the <a href="https://aws.amazon.com/aws-cost-management/aws-billing/">AWS Billing and Cost Management console</a>, I can centrally manage all my AWS purchases, including Buy with AWS purchases, from one dashboard. I can easily access and process invoices for all of my organization’s AWS purchases. I also need to have valid <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-iam-users-groups-policies.html">AWS Identity and Access Management (IAM) permissions</a> to manage subscriptions and make a purchase through AWS Marketplace.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/billing-1-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91571" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/24/billing-1-1.png" alt="" width="1453" height="703"></a></p>
<p>AWS Marketplace not only simplifies my billing but also helps me maintain governance over spending, with centralized visibility and controls for managing purchasing authority and subscription access across my organization. I can manage my budget with pricing flexibility, cost transparency, and AWS cost management tools.</p>
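<p>As an illustration, spend billed through AWS Marketplace can also be broken out programmatically with the Cost Explorer API. This is a minimal sketch, assuming Cost Explorer is enabled for the account; the date range is arbitrary:</p>
<pre><code class="lang-python">import boto3

ce = boto3.client("ce")

# Summarize one month of charges billed through AWS Marketplace.
# The BILLING_ENTITY dimension separates Marketplace charges from
# AWS service charges; the dates are placeholders.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-11-01", "End": "2024-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "BILLING_ENTITY", "Values": ["AWS Marketplace"]}},
)
print(response["ResultsByTime"])</code></pre>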
<p><span style="text-decoration: underline;"><strong>Buy with AWS for Partners</strong></span><br> Buy with AWS enables Partners who sell or resell products in AWS Marketplace to create new solution discovery and buying experiences for customers on their own websites. By adding call to action (CTA) buttons to their websites such as “Buy with AWS”, “Try free with AWS”, “Request private offer”, and “Request demo”, Partners can help accelerate product evaluation and the path-to-purchase for customers.</p>
<p>By integrating with <a href="https://docs.aws.amazon.com/marketplace/latest/APIReference/welcome.html">AWS Marketplace APIs</a>, Partners can display products from the AWS Marketplace catalog, allow customers to sort and filter products, and streamline private offers. Partners implementing Buy with AWS can access AWS Marketplace creative and messaging resources for guidance on building their own web experiences, as well as metrics that provide insights into engagement and conversion performance.</p>
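<p>For instance, the AWS Marketplace Catalog API can enumerate product listings. Here is a minimal boto3 sketch; the entity type shown is one of several supported:</p>
<pre><code class="lang-python">import boto3

catalog = boto3.client("marketplace-catalog")

# List SaaS products in the AWS Marketplace catalog visible to this account.
# "SaaSProduct" is one of several entity types (AmiProduct, ContainerProduct, ...).
response = catalog.list_entities(
    Catalog="AWSMarketplace",
    EntityType="SaaSProduct",
)
for entity in response["EntitySummaryList"]:
    print(entity["EntityId"], entity.get("Name"))</code></pre>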
<p>The <a href="https://aws.amazon.com/marketplace/management/homepage?pageType=awsmpmp%3Acustomer">Buy with AWS onboarding guide in the AWS Marketplace Management Portal</a> details how Partners can get started.</p>
<p><span style="text-decoration: underline;"><strong>Learn more</strong></span><br> Visit the <a href="https://aws.amazon.com/marketplace/features/buy-with-aws">Buy with AWS page</a> to learn more and explore Partner sites that offer Buy with AWS.</p>
<p>To learn more about selling or reselling products using Buy with AWS on your website, visit:</p>
<ul>
<li><a href="https://aws.amazon.com/partners/marketplace/buy-with-aws/">Buy with AWS seller page</a></li>
<li><a href="https://aws.amazon.com/marketplace/management/homepage?pageType=awsmpmp%3Acustomer">Buy with AWS onboarding guide in the AWS Marketplace Management Portal</a></li>
</ul>
<p>– <a href="https://www.linkedin.com/in/kprasadrao/">Prasad</a></p>Accelerate foundation model training and fine-tuning with new Amazon SageMaker HyperPod recipes
https://aws.amazon.com/blogs/aws/accelerate-foundation-model-training-and-fine-tuning-with-new-amazon-sagemaker-hyperpod-recipes/
Channy Yun (윤석찬) | Wed, 04 Dec 2024 18:21:16 +0000 | Amazon SageMaker HyperPod, Announcements, AWS re:Invent, Featured, Launch, News

Amazon SageMaker HyperPod recipes help customers get started with training and fine-tuning popular publicly available foundation models, like Llama 3.1 405B, in just minutes with state-of-the-art performance.

<p>Today, we’re announcing the general availability of <a href="https://github.com/aws/sagemaker-hyperpod-recipes">Amazon SageMaker HyperPod recipes</a> to help data scientists and developers of all skill sets get started training and fine-tuning <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models</a> (FMs) in minutes with state-of-the-art performance. They can now access optimized recipes for training and fine-tuning popular publicly available FMs such as <a href="https://github.com/aws/sagemaker-hyperpod-recipes/blob/main/recipes_collection/recipes/fine-tuning/llama/hf_llama3_405b_seq128k_gpu_qlora.yaml">Llama 3.1 405B</a>, <a href="https://github.com/aws/sagemaker-hyperpod-recipes/blob/main/recipes_collection/recipes/training/llama/hf_llama3_2_90b_seq8k_gpu_p5x32_pretrain.yaml">Llama 3.2 90B</a>, or <a href="https://github.com/aws/sagemaker-hyperpod-recipes/blob/main/recipes_collection/recipes/training/mixtral/hf_mixtral_8x22b_seq8k_gpu_p5x32_pretrain.yaml">Mixtral 8x22B</a>.</p>
<p>At AWS re:Invent 2023, we <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-sagemaker-hyperpod-a-purpose-built-infrastructure-for-distributed-training-at-scale/">introduced SageMaker HyperPod</a> to reduce time to train FMs by up to 40 percent and scale across more than a thousand compute resources in parallel with preconfigured distributed training libraries. With SageMaker HyperPod, you can find the required accelerated compute resources for training, create the most optimal training plans, and run training workloads across different blocks of capacity based on the availability of compute resources.</p>
<p>SageMaker HyperPod recipes include a training stack tested by AWS, removing the tedious work of experimenting with different model configurations and eliminating weeks of iterative evaluation and testing. The recipes automate several critical steps, such as loading training datasets, applying distributed training techniques, automating checkpoints for faster recovery from faults, and managing the end-to-end training loop.</p>
<p>With a simple recipe change, you can seamlessly switch between GPU- or Trainium-based instances to further optimize training performance and reduce costs. You can easily run workloads in production on SageMaker HyperPod or SageMaker training jobs.</p>
<p><u><strong>SageMaker HyperPod recipes in action</strong><br> </u>To get started, visit the <a href="https://github.com/aws/sagemaker-hyperpod-recipes">SageMaker HyperPod recipes GitHub repository</a> to browse training recipes for popular publicly available FMs.</p>
<p><img loading="lazy" class="aligncenter wp-image-92923 size-full" style="width: 90%; border: solid 1px #ccc;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/03/2024-sagemaker-hyperpod-recipes-github-1.png" alt="" width="1206" height="1332"></p>
<p>You only need to edit straightforward recipe parameters to specify an instance type and the location of your dataset in the cluster configuration, then run the recipe with a single-line command to achieve state-of-the-art performance.</p>
<p>After cloning the repository, edit the recipe’s config.yaml file to specify the model and cluster type.</p>
<pre><code class="lang-bash">$ git clone --recursive https://github.com/aws/sagemaker-hyperpod-recipes.git
$ cd sagemaker-hyperpod-recipes
$ pip3 install -r requirements.txt.
$ cd ./recipes_collections
$ vim config.yaml</code></pre>
<p>The recipes support <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-slurm.html">SageMaker HyperPod with Slurm</a>, <a href="https://aws.amazon.com/blogs/aws/amazon-sagemaker-hyperpod-introduces-amazon-eks-support/">SageMaker HyperPod with Amazon Elastic Kubernetes Service (Amazon EKS)</a>, and <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html">SageMaker training jobs</a>. For example, you can set up a cluster type (Slurm orchestrator), a model name (Meta Llama 3.1 405B language model), an instance type (<code>ml.p5.48xlarge</code>), and your data locations, such as storing the training data, results, logs, and so on.</p>
<pre><code class="lang-yaml">defaults:
- <strong>cluster: slurm</strong> # support: slurm / k8s / sm_jobs
- <strong>recipes: fine-tuning/llama/hf_llama3_405b_seq8k_gpu_qlora</strong> # name of model to be trained
debug: False # set to True to debug the launcher configuration
<strong>instance_type: ml.p5.48xlarge</strong> # or other supported cluster instances
base_results_dir: # Location(s) to store the results, checkpoints, logs etc.</code></pre>
<p>You can optionally adjust model-specific training parameters in this YAML file, which outlines the optimal configuration, including the number of accelerator devices, instance type, training precision, parallelization and sharding techniques, the optimizer, and logging to monitor experiments through <a href="https://www.tensorflow.org/tensorboard">TensorBoard</a>.</p>
<pre><code class="lang-yaml">run:
name: llama-405b
results_dir: ${base_results_dir}/${.name}
time_limit: "6-00:00:00"
restore_from_path: null
trainer:
devices: 8
num_nodes: 2
accelerator: gpu
precision: bf16
max_steps: 50
log_every_n_steps: 10
...
exp_manager:
exp_dir: # location for TensorBoard logging
name: helloworld
create_tensorboard_logger: True
create_checkpoint_callback: True
checkpoint_callback_params:
...
auto_checkpoint: True # for automated checkpointing
use_smp: True
distributed_backend: smddp # optimized collectives
# Start training from pretrained model
model:
model_type: llama_v3
train_batch_size: 4
tensor_model_parallel_degree: 1
expert_model_parallel_degree: 1
# other model-specific params</code></pre>
<p>To run this recipe in SageMaker HyperPod with Slurm, you must prepare the SageMaker HyperPod cluster following the <a href="https://catalog.workshops.aws/sagemaker-hyperpod/en-US/01-cluster">cluster setup instructions</a>.</p>
<p>Then, connect to the SageMaker HyperPod head node, access the Slurm controller, and copy the edited recipe. Next, run a helper file to generate a Slurm submission script for the job, which you can use for a dry run to inspect the content before starting the training job.</p>
<pre><code class="lang-bash">$ python3 main.py --config-path recipes_collection --config-name=config</code></pre>
<p>After training completion, the trained model is automatically saved to your assigned data location.</p>
<p>To run this recipe on SageMaker HyperPod with Amazon EKS, clone the recipe from the GitHub repository, install the requirements, and edit the recipe (<code>cluster: k8s</code>) on your laptop. Then, connect your laptop to the running EKS cluster and use the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/smcluster-getting-started-cli.html">HyperPod Command Line Interface (CLI)</a> to run the recipe.</p>
<pre><code class="lang-bash">$ hyperpod start-job –recipe fine-tuning/llama/hf_llama3_405b_seq8k_gpu_qlora \
--persistent-volume-claims fsx-claim:data \
--override-parameters \
'{
"recipes.run.name": "hf-llama3-405b-seq8k-gpu-qlora",
"recipes.exp_manager.exp_dir": "/data/<your_exp_dir>",
"cluster": "k8s",
"cluster_type": "k8s",
"container": "658645717510.dkr.ecr.<region>.amazonaws.com/smdistributed-modelparallel:2.4.1-gpu-py311-cu121",
"recipes.model.data.train_dir": "<your_train_data_dir>",
"recipes.model.data.val_dir": "<your_val_data_dir>",
}'</code></pre>
<p>You can also run recipes on SageMaker training jobs using the <a href="https://sagemaker.readthedocs.io/en/stable/">SageMaker Python SDK</a>. The following example runs PyTorch training scripts on SageMaker training jobs, overriding the training recipe.</p>
<pre><code class="lang-python">...
recipe_overrides = {
"run": {
"results_dir": "/opt/ml/model",
},
"exp_manager": {
"exp_dir": "",
"explicit_log_dir": "/opt/ml/output/tensorboard",
"checkpoint_dir": "/opt/ml/checkpoints",
},
"model": {
"data": {
"train_dir": "/opt/ml/input/data/train",
"val_dir": "/opt/ml/input/data/val",
},
},
}
pytorch_estimator = PyTorch(
output_path=<output_path>,
base_job_name=f"llama-recipe",
role=<role>,
instance_type="p5.48xlarge",
training_recipe="fine-tuning/llama/hf_llama3_405b_seq8k_gpu_qlora",
recipe_overrides=recipe_overrides,
sagemaker_session=sagemaker_session,
tensorboard_output_config=tensorboard_output_config,
)
...</code></pre>
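<p>Assuming the estimator above is configured, starting the job is the standard SageMaker Python SDK call; the S3 URIs and channel names here are illustrative placeholders that should match the recipe’s data directories:</p>
<pre><code class="lang-python"># Start the training job; each channel is mounted at /opt/ml/input/data/&lt;channel&gt;.
# The S3 URIs below are placeholders.
pytorch_estimator.fit(
    inputs={
        "train": "s3://my-bucket/datasets/train",
        "val": "s3://my-bucket/datasets/val",
    },
    wait=True,
)</code></pre>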
<p>As training progresses, the model checkpoints are stored on <a href="https://aws.amazon.com/s3">Amazon Simple Storage Service (Amazon S3)</a> with the fully automated checkpointing capability, enabling faster recovery from training faults and instance restarts.</p>
<p><strong><u>Now available</u></strong><br> Amazon SageMaker HyperPod recipes are now available in the <a href="https://github.com/aws/sagemaker-hyperpod-recipes">SageMaker HyperPod recipes GitHub repository</a>. To learn more, visit the <a href="https://aws.amazon.com/sagemaker-ai/hyperpod/">SageMaker HyperPod product page</a> and the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod.html">Amazon SageMaker AI Developer Guide</a>.</p>
<p>Give SageMaker HyperPod recipes a try and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker">AWS re:Post for SageMaker</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://twitter.com/channyun">Channy</a></p>AWS Education Equity Initiative: Applying generative AI to educate the next wave of innovators
https://aws.amazon.com/blogs/aws/aws-education-equity-initiative-applying-generative-ai-to-educate-the-next-wave-of-innovators/
Jeff Barr | Wed, 04 Dec 2024 18:13:13 +0000 | Announcements, AWS re:Invent, Education, Featured, Launch, News

Amazon commits $100M to empower education equity initiatives, enabling socially-minded organizations to create AI-powered digital learning solutions. This aims to reach underserved students globally through innovative platforms, apps, and assistants.

<p>Building on the work that we and our partners have been doing for many years, Amazon is committing up to $100 million in cloud technology and technical resources to help existing, dedicated learning organizations reach more learners by creating new and innovative digital learning solutions, all as part of the <a href="https://aws.amazon.com/about-aws/our-impact/education-equity-initiative/">AWS Education Equity Initiative</a>.</p>
<p><span style="text-decoration: underline;"><strong>The Work So Far</strong></span><br> AWS and Amazon have a long-standing commitment to learning and education. Here’s a sampling of what we have already done:</p>
<p><a href="https://aws.amazon.com/machine-learning/scholarship/"><strong>AWS AI & ML Scholarship Program</strong></a> – This program has awarded $28 million in scholarships to approximately 6000 students.</p>
<p><a href="https://aws.amazon.com/ai/machine-learning/educators/"><strong>Machine Learning University</strong></a> – MLU offers a free program helping community colleges and Historically Black Colleges and Universities (HBCUs) teach data management, artificial intelligence, and machine learning concepts. The program is designed to address opportunity gaps by supporting students who are historically underserved and underrepresented in technology disciplines.</p>
<p><a href="https://www.amazonfutureengineer.com/"><strong>Amazon Future Engineer</strong></a> – Since 2021, up to $46 million in scholarships has been awarded to 1150 students through this program. In the past year, more than 2.1 million students received over 17 million hours of STEM education, literacy, and career exploration courses through this and other Amazon philanthropic education programs in the United States. I was able to speak to one such session last year and it was an amazing experience:</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91813" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/afe_jeff_2024_1.png" alt="" width="892" height="670"></p>
<p><a href="https://www.aboutamazon.com/news/workplace/amazon-to-help-29-million-people-around-the-world-grow-their-tech-skills-with-free-cloud-computing-skills-training-by-2025"><strong>Free Cloud Training</strong></a> – In late 2020 we set a goal of helping 29 million people grow their tech skills with free cloud computing training by 2025. We worked hard and met that target a year ahead of time!</p>
<p><span style="text-decoration: underline;"><strong>There’s More To Do</strong></span><br> Despite all of this work and progress, there’s still more to be done. The future is definitely not evenly distributed: over half a billion students cannot be reached by digital learning today.</p>
<p>We believe that generative AI can amplify the good work that socially-minded edtech organizations, non-profits, and governments are already doing. Our goal is to empower them to build new and innovative digital learning systems that extend their reach to a bigger audience.</p>
<p>With the launch of the AWS Education Equity Initiative, we want to help pave the way for the next generation of technology pioneers as they build powerful tools, train foundation models at scale, and create AI-powered teaching assistants.</p>
<p>We are committing up to $100 million in cloud technology and comprehensive technical advising over the next five years. The awardees will have access to the portfolio of AWS services and technical expertise so that they can build and scale learning management systems, mobile apps, chatbots, and other digital learning tools. As part of the application process, applicants will be asked to demonstrate how their proposed solution will benefit students from underserved and underrepresented communities.</p>
<p>As I mentioned earlier, our partners are already doing a lot of great work in this area. For example:</p>
<p><a href="https://code.org/"><strong> Code.org</strong></a> has already used AWS to scale their free computer science curriculum to millions of students in more than 100 countries. With this initiative, they will expand their use of <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> to provide an automated assessment of student projects, freeing up educator time that can be use for individual instruction and tailored learning.</p>
<p><a href="https://rocketlearning.org/"><strong>Rocket Learning</strong></a> focuses on early childhood education in India. They will use Amazon Q in QuickSight to enhance learning outcomes for more than three million children.</p>
<p>I’m super excited about this initiative and look forward to seeing how it will help to create and educate the next generation of technology pioneers!</p>
<p>— <a href="https://twitter.com/jeffbarr">Jeff</a>;</p>Solve complex problems with new scenario analysis capability in Amazon Q in QuickSight
https://aws.amazon.com/blogs/aws/solve-complex-problems-with-new-scenario-analysis-capability-in-amazon-q-in-quicksight/
Veliswa Boya | Wed, 04 Dec 2024 17:57:51 +0000 | Amazon Q, Amazon QuickSight, Analytics, Announcements, AWS re:Invent, Business Intelligence, Featured, Generative BI, Launch, News

Find solutions to your most critical business challenges with ease. Amazon Q in QuickSight enables business users to perform complex scenario analysis up to 10x faster than spreadsheets.

<p>Today, we announced a new capability of <a href="https://aws.amazon.com/quicksight/q/">Amazon Q in QuickSight</a> that helps users perform scenario analyses to find answers to complex problems quickly. This AI-assisted experience guides business users step-by-step through in-depth data analysis—suggesting analytical approaches, automatically analyzing data, and summarizing findings with suggested actions—using natural language prompts. It eliminates hours of tedious and error-prone manual work traditionally required to perform analyses using spreadsheets or other alternatives. In fact, Amazon Q in QuickSight enables business users to perform complex scenario analysis up to 10x faster than spreadsheets. This capability expands upon the existing data Q&A capabilities of Amazon QuickSight so business professionals can start their analysis by simply asking a question.</p>
<p><span style="text-decoration: underline;"><strong>How it works</strong></span><br> Business users are often faced with complex questions that have traditionally required specialized training and days or weeks of time analyzing data in spreadsheets or other tools to address. For example, let’s say you’re a franchisee with multiple locations to manage. You might use this new capability in Amazon Q in QuickSight to ask, “<em>How can I help our new Chicago store perform as well as the flagship store in New York?</em>” Using an agentic approach, Amazon Q would then suggest analytical approaches needed to address the underlying business goal, automatically analyze data, and present results complete with visualizations and suggested actions. You can conduct this multistep analysis in an expansive analysis canvas, giving you the flexibility to make changes, explore multiple analysis paths simultaneously, and adapt to situations over time.</p>
<p>This new analysis experience is part of Amazon QuickSight, meaning it can read from QuickSight dashboards, which connect to sources such as <a href="https://aws.amazon.com/athena/">Amazon Athena</a>, <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/pm/redshift/">Amazon Redshift</a>, <a href="https://aws.amazon.com/pm/serv-s3/">Amazon Simple Storage Service (Amazon S3)</a>, and <a href="https://aws.amazon.com/opensearch-service/">Amazon OpenSearch Service</a>. Specifically, this new experience is part of Amazon Q in QuickSight, which allows it to seamlessly integrate with other generative business intelligence (BI) capabilities such as data Q&A. You can also upload either a .csv file or a single-table, single-sheet .xlsx file to incorporate into your analysis.</p>
<p>Here’s a visual walkthrough of this new analysis experience in Amazon Q in QuickSight.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-23-2.png"><img loading="lazy" class="aligncenter size-large wp-image-91314" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-23-2-1024x350.png" alt="" width="1024" height="350"></a></p>
<p>I’m planning a customer event, and I’ve received an Excel spreadsheet of all who’ve registered to attend the event. I want to learn more about the attendees, so I analyze the spreadsheet and ask a few questions. I start by describing what I want to explore.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-24.png"><img loading="lazy" class="aligncenter size-large wp-image-91315" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-24-1024x481.png" alt="" width="1024" height="481"></a></p>
<p>I upload the spreadsheet to start my analysis. Firstly, I want to understand how many people have registered for the event.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-25.png"><img loading="lazy" class="aligncenter size-large wp-image-91316" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/image-25-1024x450.png" alt="" width="1024" height="450"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen5-2.png"><img loading="lazy" class="aligncenter size-large wp-image-91317" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen5-2-1024x376.png" alt="" width="1024" height="376"></a></p>
<p>To design an agenda that’s suitable for the audience, I want to understand the various roles that will be attending. I select the <strong>+ icon</strong> to add a new block and ask a question that follows the thread from the previous block.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen6-3.png"><img loading="lazy" class="aligncenter size-large wp-image-91318" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen6-3-1024x448.png" alt="" width="1024" height="448"></a></p>
<p>I can continue to ask more questions. However, there are suggested questions for analyzing my data even further, and I now select one of them. I want to increase marketing efforts at companies that don’t currently have a lot of attendees: in this case, companies with fewer than two attendees.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen8-2.png"><img loading="lazy" class="aligncenter size-large wp-image-91319" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen8-2-1024x382.png" alt="" width="1024" height="382"></a></p>
<p>Amazon Q executes the required analysis and keeps me updated on the progress. <strong>Step 1</strong> of the process identifies companies that have fewer than two attendees and lists them.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen9-1.png"><img loading="lazy" class="aligncenter size-large wp-image-91320" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/screen9-1-1024x435.png" alt="" width="1024" height="435"></a></p>
<p><strong>Step 2</strong> gives an estimate of how many more attendees I might get from each company if marketing efforts are increased.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen10.png"><img loading="lazy" class="aligncenter size-large wp-image-89396" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen10-1024x301.png" alt="" width="1024" height="301"></a></p>
<p>In <strong>Step 3</strong> I can see the potential increase in total attendees (including the percentage increase) in line with the increase in marketing efforts.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen11.png"><img loading="lazy" class="aligncenter size-large wp-image-89397" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen11-1024x321.png" alt="" width="1024" height="321"></a></p>
<p>Lastly, <strong>Step 4</strong> goes even further to highlight companies I should prioritize for these increased marketing efforts.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen12.png"><img loading="lazy" class="aligncenter size-large wp-image-89398" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen12-1024x244.png" alt="" width="1024" height="244"></a></p>
<p>To increase the potential number of attendees even more, I want to change the analysis to identify companies with fewer than three attendees instead of two. I choose the <strong>AI sparkle icon</strong> in the upper right to launch a modal that I then use to provide more context and make specific changes to the previous result.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen14.png"><img loading="lazy" class="aligncenter size-large wp-image-89400" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen14-1024x227.png" alt="" width="1024" height="227"></a><br> This change resulted in new projections, and I can choose to consider them for my marketing efforts or keep to the previous projections.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen15.png"><img loading="lazy" class="aligncenter size-large wp-image-89401" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/08/screen15-1024x123.png" alt="" width="1024" height="123"></a><br> <span style="text-decoration: underline;"><strong>Now available</strong></span><br> Amazon Q in QuickSight Pro users can use this new capability in preview in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a> at launch: US East (N. Virginia) and US West (Oregon). Get started with a <a href="https://aws.amazon.com/quicksight/pricing/">free 30-day trial</a> of QuickSight today. To learn more, visit the <a href="https://docs.aws.amazon.com/quicksight/latest/user/working-with-scenarios.html">Amazon QuickSight User Guide</a>. You can submit your questions to <a href="https://repost.aws/questions/QUBt6GS7a3TA6sTrsM-iE-hw/amazon-quicksight">AWS re:Post for Amazon QuickSight</a>, or through your usual AWS Support contacts.</p>
<p>– <a href="https://www.linkedin.com/in/veliswa-boya/">Veliswa</a>.</p>Use Amazon Q Developer to build ML models in Amazon SageMaker Canvas
https://aws.amazon.com/blogs/aws/use-amazon-q-developer-to-build-ml-models-in-amazon-sagemaker-canvas/
Elizabeth Fuentes | Wed, 04 Dec 2024 17:56:37 +0000 | Amazon Machine Learning, Amazon Q, Amazon SageMaker, Amazon SageMaker Canvas, Announcements, Artificial Intelligence, AWS re:Invent, Data Science & Analytics for Media, Featured, Launch, News

Q Developer empowers non-ML experts to build ML models using natural language, enabling organizations to innovate faster with reduced time to market.

<p>As a data scientist, I’ve experienced firsthand the challenges of making machine learning (ML) accessible to business analysts, marketing analysts, data analysts, and data engineers who are experts in their domains without ML experience. That’s why I’m particularly excited about today’s <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> announcement that <a href="https://aws.amazon.com/q/developer/">Amazon Q Developer</a> is now available in <a href="https://aws.amazon.com/sagemaker/canvas/">Amazon SageMaker Canvas</a>. What catches my attention is how Amazon Q Developer helps connect ML expertise with business needs, making ML more accessible across organizations.</p>
<p><a href="https://aws.amazon.com/q/developer/">Amazon Q Developer</a> helps domain experts build accurate, production-quality ML models through natural language interactions, even if they don’t have ML expertise. Amazon Q Developer guides these users by breaking down their business problems and analyzing their data to recommend step-by-step guidance for building custom ML models. It transforms users’ data to remove anomalies, and builds and evaluates custom ML models to recommend the best one, while providing users control and visibility into every step of the guided ML workflow. This empowers organizations to innovate faster with reduced time to market. It also reduces their reliance on ML experts so their specialists can focus on more complex technical challenges.</p>
<p>For example, a marketing analyst can state, “I want to predict home sales prices using home characteristics and past sales data”, and Amazon Q Developer will translate this into a set of ML steps, analyzing relevant customer data, building multiple models, and recommending the best approach.</p>
<p><span style="text-decoration: underline;"><strong>Let’s see it in action</strong></span><br> To start using Amazon Q Developer, I follow the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-getting-started.html">Getting started with using Amazon SageMaker Canvas</a> guide to launch the Canvas application. In this demo, I use natural language instructions to create a model to predict house prices for marketing and finance teams. From the SageMaker Canvas page, I select <strong>Amazon Q</strong> and then choose <strong>Start a new conversation.</strong></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/21/Screenshot-2024-11-21-at-2.02.00 PM.png"><img loading="lazy" class="aligncenter wp-image-91203 size-full" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/21/Screenshot-2024-11-21-at-2.02.00 PM.png" alt="" width="688" height="332"></a></p>
<p>In the new conversation I write:</p>
<p><strong><em>I am an analyst and need to predict house prices for my marketing and finance teams.</em></strong></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/AmazonQ-CanvasPDP-1127.jpg"><img loading="lazy" class="aligncenter size-full wp-image-92369" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/AmazonQ-CanvasPDP-1127.jpg" alt="" width="824" height="813"></a></p>
<p>Next, Amazon Q Developer explains the problem and recommends the appropriate ML model type. It also outlines the solution requirements, including the necessary dataset characteristics. Amazon Q Developer then asks whether <strong>I want to upload my dataset</strong> or <strong>I want to choose a target column</strong>. I choose to upload my dataset.</p>
<p>In the next step, Amazon Q Developer lists the dataset requirements, which include relevant information about houses, current house prices, and the target variable for the regression model. It then recommends next steps: <strong>I want to upload my dataset</strong>, <strong>Select an existing dataset</strong>, <strong>Create a new dataset</strong>, or <strong>I want to choose a target column</strong>. For this demo, I’ll use the <strong>canvas-sample-housing.csv</strong> <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-sample-datasets.html">sample dataset</a> as my existing dataset.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/select_an_existing_dataset.png"><img loading="lazy" class="aligncenter size-large wp-image-90045" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/select_an_existing_dataset-922x1024.png" alt="select_an_existing_dataset" width="922" height="1024"></a></p>
<p>After selecting and loading the dataset, Amazon Q Developer analyzes it and suggests <strong>median_house_value</strong> as the target column for the regression model. I accept by selecting <strong>I would like to predict the “median_house_value” column.</strong> Moving on to the next step, Amazon Q Developer details which dataset features (such as “location”, “housing_median_age”, and “total_rooms”) it will use to predict the median_house_value.</p>
<p><img loading="lazy" class="size-large aligncenter" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/artifacts/AWSNews/2024/AWSNEWS-1199-upload-dataset.gif" width="864" height="864"></p>
<p>Before moving forward with model training, I ask about the data quality, because without good data we can’t build a reliable model. Amazon Q Developer responds with quality insights for my entire dataset.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/quality.png"><img loading="lazy" class="aligncenter size-large wp-image-90051" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/quality-1024x967.png" alt="" width="1024" height="967"></a></p>
<p>I can ask specific questions about individual features and their distributions to better understand the data quality.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-7.01.49 PM.png"><img loading="lazy" class="aligncenter size-large wp-image-90066" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-7.01.49 PM-866x1024.png" alt="columns in dataset" width="866" height="1024"></a></p>
<p>To my surprise, through the previous question, I discovered that the “households” column has a wide variation between extreme values, which could affect the model’s prediction accuracy. Therefore, I ask Amazon Q Developer to fix this outlier problem.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-6.39.42 PM.png"><img loading="lazy" class="aligncenter wp-image-90055 size-large" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-6.39.42 PM-885x1024.png" alt="" width="885" height="1024"></a></p>
<p>After the transformation is done, I can ask what steps Amazon Q Developer followed to make this change. Behind the scenes, Amazon Q Developer applies advanced data preparation steps using <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-data-export.html">SageMaker Canvas data preparation capabilities</a>, which I can review, visualize, and replicate to get the final, prepared dataset for training the model.</p>
<p><img loading="lazy" class="alignnone size-large" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/artifacts/AWSNews/2024/AWSNEWS-1199-data.gif" width="1090" height="652"></p>
<p>After reviewing the data preparation steps, I select <strong>Launch my training job</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-7.05.21 PM.png"><img loading="lazy" class="aligncenter size-large wp-image-90068" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/13/Screenshot-2024-11-13-at-7.05.21 PM-1024x765.png" alt="launch training job" width="1024" height="765"></a></p>
<p>After the training job is launched, I can see its progress in the conversation, along with the datasets that were created.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/21/Screenshot-2024-11-21-at-11.30.20 AM.png"><img loading="lazy" class="aligncenter wp-image-91170 size-full" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/21/Screenshot-2024-11-21-at-11.30.20 AM.png" alt="" width="706" height="822"></a></p>
<p>As a data scientist, I particularly appreciate that, with Amazon Q Developer, I can see detailed metrics such as the confusion matrix and precision-recall scores for classification models and root mean square error (RMSE) for regression models. These are crucial elements I always look for when evaluating model performance and making data-driven decisions. It’s refreshing to see them presented in a way that’s accessible to nontechnical users, building trust and enabling proper governance while maintaining the depth that technical teams need.</p>
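<p>For readers less familiar with the regression metric: RMSE is the square root of the average squared difference between predicted and actual values, expressed in the same units as the target (here, dollars of median house value). This snippet is plain Python for illustration with made-up numbers, not code generated by SageMaker Canvas:</p>
<pre><code class="lang-python">import math

def rmse(y_true, y_pred):
    # Root mean square error: large errors are penalized quadratically
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical predictions for three houses (values in USD)
actual = [252100, 341300, 179700]
predicted = [247500, 355900, 171200]
print(f"RMSE: {rmse(actual, predicted):,.0f}")  # ≈ 10,109</code></pre>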
<p>You can access these metrics by selecting the new model from <strong>My Models</strong> or from the <strong>Amazon Q </strong>conversation menu:</p>
<ul>
<li><strong>Overview – </strong>This tab shows the <strong>Column impact</strong> analysis. In this case, <em><strong>median_income</strong></em> emerges as the primary factor influencing my model.</li>
<li><strong>Scoring – </strong>This tab provides model accuracy insights, including RMSE metrics.</li>
<li><strong>Advanced metrics – </strong>This tab displays the detailed <strong>Metrics table</strong>, <strong>Residuals</strong>, and <strong>Error density</strong> for in-depth model evaluation.</li>
</ul>
<p><img loading="lazy" class="aligncenter size-medium" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/artifacts/AWSNews/2024/AWSNEWS-1199-analyze.gif" alt="Analyze My Model" width="864" height="864"></p>
<p>After reviewing these metrics and validating the model’s performance, I can move to the final stages of the ML workflow:</p>
<ul>
<li><strong>Predictions –</strong> I can test my model using the <strong>Predictions</strong> tab to validate its real-world performance.</li>
<li><strong>Deployment </strong>– I can create an endpoint deployment to make my model available for production use.</li>
</ul>
<p>This turns the deployment process, a step that traditionally requires significant DevOps knowledge, into a straightforward operation that business analysts can handle confidently.</p>
<p><img loading="lazy" class="aligncenter size-medium" style="border: 1px black solid;" src="https://d2908q01vomqb2.cloudfront.net/artifacts/AWSNews/2024/AWSNEWS-1199-end.gif" alt="predictions and deploy" width="864" height="864"></p>
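<p>Once an endpoint is deployed, any application can invoke it with the AWS SDKs. Here’s a minimal boto3 sketch; the endpoint name is hypothetical, and I’m assuming the endpoint accepts a CSV row of features in the training-column order, which is typical for SageMaker real-time endpoints:</p>
<pre><code class="lang-python">import boto3

# Hypothetical name; use the endpoint name shown in SageMaker Canvas after deployment
ENDPOINT_NAME = "canvas-housing-regression-endpoint"

runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

# One example row of features (longitude, latitude, housing_median_age, total_rooms, ...)
payload = "-122.23,37.88,41,880,129,322,126,8.3252,NEAR BAY"

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="text/csv",
    Body=payload,
)
print(response["Body"].read().decode())  # predicted median_house_value</code></pre>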
<p><span style="text-decoration: underline;"><strong>Things to know</strong></span><br> Amazon Q Developer democratizes ML across organizations:</p>
<p><strong>Empowering all skill levels with ML</strong> – Amazon Q Developer is now available in SageMaker Canvas, helping business analysts, marketing analysts, and data professionals who don’t have ML experience create solutions for business problems through a guided ML workflow. From data analysis and model selection to deployment, users can solve business problems using natural language, reducing dependence on ML experts such as data scientists and enabling organizations to innovate faster with reduced time to market.</p>
<p><strong>Streamlining the ML workflow </strong>– With Amazon Q Developer available in SageMaker Canvas, users can prepare data and build, analyze, and deploy ML models through a guided, transparent workflow. Amazon Q Developer provides advanced data preparation and AutoML capabilities that democratize ML and allow non-ML experts to produce highly accurate ML models.</p>
<p><strong>Providing full visibility into the ML workflow</strong> – Amazon Q Developer provides full transparency by generating the underlying code and technical artifacts such as data transformation steps, model explainability, and accuracy measures. This allows cross-functional teams, including ML experts, to review, validate, and update the models as needed, facilitating collaboration in a secure environment.</p>
<p><strong>Availability</strong> – Amazon Q Developer is now in preview release in Amazon SageMaker Canvas.</p>
<p><strong>Pricing</strong> – Amazon Q Developer is now available in SageMaker Canvas at no additional cost to both <a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-pro-tier.html">Amazon Q Developer Pro Tier</a> and <a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-free-tier.html">Amazon Q Developer Free Tier</a> users. However, standard charges apply for resources such as <a href="https://aws.amazon.com/sagemaker/canvas/pricing/">SageMaker Canvas workspace</a> instances and any resources used for building or deploying models. For detailed pricing information, visit the <a href="https://aws.amazon.com/sagemaker/canvas/pricing/">Amazon SageMaker Canvas Pricing</a> page.</p>
<p>To learn more about getting started, visit the <a href="https://aws.amazon.com/q/developer/">Amazon Q Developer product web page</a>.</p>
<p>— <a href="https://www.linkedin.com/in/lizfue/">Eli</a></p>Amazon Bedrock Guardrails now supports multimodal toxicity detection with image support (preview)
https://aws.amazon.com/blogs/aws/amazon-bedrock-guardrails-now-supports-multimodal-toxicity-detection-with-image-support/
<![CDATA[Antje Barth]]>Wed, 04 Dec 2024 17:38:16 +0000<![CDATA[Amazon Bedrock]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]><![CDATA[Responsible AI]]>c70235c3e3185931cb16a48df9d852ef0b2af318Build responsible AI applications - Safeguard them against harmful text and image content with configurable filters and thresholds.<p>Today, we’re announcing the preview of multimodal toxicity detection with image support in <a href="https://aws.amazon.com/bedrock/guardrails/">Amazon Bedrock Guardrails</a>. This new capability detects and filters out undesirable image content in addition to text, helping you improve user experiences and manage model outputs in your <a href="https://aws.amazon.com/ai/generative-ai/">generative AI</a> applications.</p>
<p>Amazon Bedrock Guardrails helps you implement safeguards for generative AI applications by filtering undesirable content, redacting personally identifiable information (PII), and enhancing content safety and privacy. You can configure policies for denied topics, content filters, word filters, PII redaction, contextual grounding checks, and Automated Reasoning checks (preview), to tailor safeguards to your specific use cases and responsible AI policies.</p>
<p>With this launch, you can now use the existing content filter policy in Amazon Bedrock Guardrails to detect and block harmful image content across categories such as hate, insults, sexual, and violence. You can configure thresholds from low to high to match your application’s needs.</p>
<p>This new image support works with all <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> in Amazon Bedrock that support image data, as well as any custom fine-tuned models you bring. It provides a consistent layer of protection across text and image modalities, making it easier to build responsible AI applications.</p>
<p><a href="https://www.linkedin.com/in/terohottinen/">Tero Hottinen</a>, VP, Head of Strategic Partnerships at <a href="http://www.kone.com">KONE</a>, envisions the following use case:</p>
<blockquote>
<p>In its ongoing evaluation, KONE recognizes the potential of Amazon Bedrock Guardrails as a key component in protecting gen AI applications, particularly for relevance and contextual grounding checks, as well as the multimodal safeguards. The company envisions integrating product design diagrams and manuals into its applications, with Amazon Bedrock Guardrails playing a crucial role in enabling more accurate diagnosis and analysis of multimodal content.</p>
</blockquote>
<p>Here’s how it works.</p>
<p><strong><u>Multimodal toxicity detection in action<br> </u></strong>To get started, create a guardrail in the <a href="https://aws.amazon.com/console/">AWS Management Console</a> and configure the content filters for either text or image data or both. You can also use <a href="https://aws.amazon.com/developer/tools/">AWS SDKs</a> to integrate this capability into your applications.</p>
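<p>For example, here’s a minimal boto3 sketch of creating a guardrail with image filtering enabled. The per-filter <code>inputModalities</code> and <code>outputModalities</code> fields reflect my reading of the API at launch — treat them as assumptions and check the <code>CreateGuardrail</code> API reference for the authoritative request shape:</p>
<pre><code class="lang-python">import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="multimodal-content-guardrail",
    blockedInputMessaging="Sorry, I can't process this request.",
    blockedOutputsMessaging="Sorry, I can't provide this response.",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                # Assumed fields: apply this filter to both text and image content
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
            {
                # Prompt attacks can be filtered for text input only
                "type": "PROMPT_ATTACK",
                "inputStrength": "HIGH",
                "outputStrength": "NONE",
            },
        ]
    },
)
print(response["guardrailId"], response["version"])</code></pre>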
<p><strong>Create guardrail<br> </strong>On the <a href="https://console.aws.amazon.com/console/home">console</a>, navigate to<strong> Amazon Bedrock</strong> and select <strong>Guardrails</strong>. From there, you can create a new guardrail and use the existing content filters to detect and block image data in addition to text data. The categories for <strong>Hate</strong>, <strong>Insults</strong>, <strong>Sexual</strong>, and <strong>Violence</strong> under <strong>Configure content filters</strong> can be configured for either text or image content or both. The <strong>Misconduct</strong> and <strong>Prompt attacks</strong> categories can be configured for text content only.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/23/2024-guardrails-toxicity-7.png"><img loading="lazy" class="aligncenter wp-image-91497 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/23/2024-guardrails-toxicity-7.png" alt="Amazon Bedrock Guardrails Multimodal Support" width="1442" height="1138"></a></p>
<p>After you’ve selected and configured the content filters you want to use, you can save the guardrail and start using it to build safe and responsible generative AI applications.</p>
<p>To test the new guardrail in the console, select the guardrail and choose <strong>Test</strong>. You have two options: test the guardrail by choosing and invoking a model, or test the guardrail without invoking a model by using the independent Amazon Bedrock Guardrails <code>ApplyGuardrail</code> API.</p>
<p>With the <code>ApplyGuardrail</code> API, you can validate content at any point in your application flow before processing or serving results to the user. You can also use the API to evaluate inputs and outputs for any self-managed (custom) or third-party FMs, regardless of the underlying infrastructure. For example, you could use the API to evaluate a <a href="https://www.llama.com/">Meta Llama 3.2</a> model hosted on <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> or a <a href="https://mistral.ai/news/mistral-nemo/">Mistral NeMo</a> model running on your laptop.</p>
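<p>Here’s a sketch of calling <code>ApplyGuardrail</code> with boto3 to validate an input that mixes text and an image; the guardrail ID and file name are placeholders:</p>
<pre><code class="lang-python">import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("user-upload.png", "rb") as f:  # placeholder file
    image_bytes = f.read()

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="1",
    source="INPUT",  # validate a prompt; use "OUTPUT" for model responses
    content=[
        {"text": {"text": "What do you think of this picture?"}},
        {"image": {"format": "png", "source": {"bytes": image_bytes}}},
    ],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or masked the content
print(response["action"])</code></pre>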
<p><strong>Test guardrail by choosing and invoking a model<br> </strong>Select a model that supports image inputs or outputs, for example, Anthropic’s Claude 3.5 Sonnet. Verify that the prompt and response filters are enabled for image content. Next, provide a prompt, upload an image file, and choose <strong>Run</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-2-1.png"><img loading="lazy" class="aligncenter wp-image-90493 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-2-1.png" alt="Amazon Bedrock Guardrails Multimodal Support" width="1165" height="803"></a></p>
<p>In my example, Amazon Bedrock Guardrails intervened. Choose <strong>View trace</strong> for more details.</p>
<p>The guardrail trace provides a record of how safety measures were applied during an interaction. It shows whether Amazon Bedrock Guardrails intervened or not and what assessments were made on both the input (prompt) and the output (model response). In my example, the content filters blocked the input prompt because they detected insults in the image with high confidence.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-3.png"><img loading="lazy" class="aligncenter wp-image-90489 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-3.png" alt="Amazon Bedrock Guardrails Multimodal Support" width="1172" height="890"></a></p>
<p><strong>Test guardrail without invoking a model<br> </strong>In the console, choose <strong>Use Guardrails independent API</strong> to test the guardrail without invoking a model. Choose whether you want to validate an input prompt or an example of a model generated output. Then, repeat the steps from before. Verify that the prompt and response filters are enabled for image content, provide the content to validate, and choose <strong>Run</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-5.png"><img loading="lazy" class="aligncenter size-full wp-image-90486" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-5.png" alt="Amazon Bedrock Guardrails Multimodal Support" width="1163" height="799"></a></p>
<p>I reused the same image and input prompt for my demo, and Amazon Bedrock Guardrails intervened again. Choose <strong>View trace</strong> again for more details.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-6.png"><img loading="lazy" class="aligncenter wp-image-90487 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/16/2024-guardrails-toxicity-6.png" alt="Amazon Bedrock Guardrails Multimodal Support" width="1176" height="840"></a></p>
<p><strong><u>Join the preview<br> </u></strong>Multimodal toxicity detection with image support is available today in preview in Amazon Bedrock Guardrails in the US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Tokyo), Europe (Frankfurt, Ireland, London), and AWS GovCloud (US-West) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>. To learn more, visit <a href="https://aws.amazon.com/bedrock/guardrails/">Amazon Bedrock Guardrails</a>.</p>
<p>Give the multimodal toxicity detection content filter a try today in the <a href="https://console.aws.amazon.com/bedrock/home#/guardrails">Amazon Bedrock console</a> and let us know what you think! Send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag">AWS re:Post for Amazon Bedrock</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://www.linkedin.com/in/antje-barth/" target="_blank" rel="noopener noreferrer">Antje</a></p>New Amazon Bedrock capabilities enhance data processing and retrieval
https://aws.amazon.com/blogs/aws/new-amazon-bedrock-capabilities-enhance-data-processing-and-retrieval/
<![CDATA[Danilo Poccia]]>Wed, 04 Dec 2024 17:35:20 +0000<![CDATA[Amazon Bedrock]]><![CDATA[Amazon Machine Learning]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]><![CDATA[Launch]]><![CDATA[News]]>6c66c1933fafb068e6c428f028a30fdcb6ca46cfAmazon Bedrock enhances generative AI data analysis with multimodal processing, graph modeling, and structured querying, accelerating AI application development.<p>Today, <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> introduces four enhancements that streamline how you can analyze data with <a href="https://aws.amazon.com/ai/generative-ai/">generative AI</a>:</p>
<p><strong>Amazon Bedrock Data Automation (preview) </strong>– A fully managed capability of Amazon Bedrock that streamlines the generation of valuable insights from unstructured, multimodal content such as documents, images, audio, and videos. With Amazon Bedrock Data Automation, you can build automated <a href="https://aws.amazon.com/ai/generative-ai/use-cases/document-processing/">intelligent document processing (IDP)</a>, media analysis, and <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">Retrieval-Augmented Generation (RAG)</a> workflows quickly and cost-effectively. Insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. You can customize outputs to tailor insights into your specific business needs. Amazon Bedrock Data Automation can be used as a standalone feature or as a parser when setting up a knowledge base for RAG workflows.</p>
<p><strong>Amazon Bedrock Knowledge Bases now processes multimodal data</strong> – To help build applications that process both text and visual elements in documents and images, you can configure a knowledge base to parse documents using either Amazon Bedrock Data Automation or a <a href="https://aws.amazon.com/what-is/foundation-models/">foundation model (FM)</a> as the parser. Multimodal data processing can improve the accuracy and relevancy of the responses you get from a knowledge base that includes information embedded in both images and text.</p>
<p><strong>Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)</strong> – We now offer one of the first fully managed GraphRAG capabilities. GraphRAG enhances generative AI applications by combining RAG techniques with graphs to provide more accurate and comprehensive responses to end users.</p>
<p><strong>Amazon Bedrock Knowledge Bases now supports structured data retrieval</strong> – This capability extends a knowledge base to support natural language querying of data warehouses and data lakes so that applications can access business intelligence (BI) through conversational interfaces and improve the accuracy of the responses by including critical enterprise data. Amazon Bedrock Knowledge Bases provides one of the first fully-managed out-of-the-box RAG solutions that can natively query structured data from where it resides. This capability helps break data silos across data sources and accelerates building generative AI applications from over a month to just a few days.</p>
<p>These new capabilities make it easier to build comprehensive AI applications that can process, understand, and retrieve information from structured and unstructured data sources. For example, a car insurance company can use Amazon Bedrock Data Automation to automate their claims adjudication workflow to reduce the time taken to process automobile claims, improving the productivity of their claims department.</p>
<p>Similarly, a media company can analyze TV shows and extract insights needed for smart advertisement placement such as scene summaries, industry standard advertising taxonomies (IAB), and company logos. A media production company can generate scene-by-scene summaries and capture key moments in their video assets. A financial services company can process complex financial documents containing charts and tables and use GraphRAG to understand relationships between different financial entities. All these companies can use structured data retrieval to query their data warehouse while retrieving information from their knowledge base.</p>
<p>Let’s take a closer look at these features.</p>
<p><span style="text-decoration: underline;"><strong>Introducing Amazon Bedrock Data Automation<br> </strong></span>Amazon Bedrock Data Automation is a capability of Amazon Bedrock that simplifies the process of extracting valuable insights from multimodal, unstructured content, such as documents, images, videos, and audio files.</p>
<p>Amazon Bedrock Data Automation provides a unified, API-driven experience that developers can use to process multimodal content through a single interface, eliminating the need to manage and orchestrate multiple AI models and services. With built-in safeguards, such as visual grounding and confidence scores, Amazon Bedrock Data Automation helps promote the accuracy and trustworthiness of the extracted insights, making it easier to integrate into enterprise workflows.</p>
<p>Amazon Bedrock Data Automation supports four modalities (documents, images, video, and audio). When used in an application, all modalities use the same asynchronous inference API, and results are written to an <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> bucket.</p>
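<p>To give an idea of the shape of that API, here’s a boto3 sketch. The client and operation names match the preview SDK as I understand it, the ARNs and S3 URIs are placeholders, and the parameter names should be treated as assumptions — check the Amazon Bedrock Data Automation API reference:</p>
<pre><code class="lang-python">import boto3

bda_runtime = boto3.client("bedrock-data-automation-runtime", region_name="us-west-2")

# Start an asynchronous job; results are written to the S3 output location
response = bda_runtime.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://amzn-s3-demo-bucket/input/birth-certificate.pdf"},
    outputConfiguration={"s3Uri": "s3://amzn-s3-demo-bucket/output/"},
    dataAutomationConfiguration={
        # Project holding the standard and custom output configurations (placeholder ARN)
        "dataAutomationArn": "arn:aws:bedrock:us-west-2:123412341234:data-automation-project/my-project"
    },
)

# Poll until the job completes and the results land in Amazon S3
status = bda_runtime.get_data_automation_status(invocationArn=response["invocationArn"])
print(status["status"])</code></pre>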
<p>For each modality, you can configure the output based on your processing needs and generate two types of outputs:</p>
<p><strong>Standard output</strong> – With standard output, you get predefined default insights that are relevant to the input data type. Examples include semantic representation of documents, summaries of videos by scene, audio transcripts and more. You can configure which insights you want to extract with just a few steps.</p>
<p><strong>Custom output</strong> – With custom output, you have the flexibility to define and specify your extraction needs using artifacts called “blueprints” to generate insights tailored to your business needs. You can also transform the generated output into a specific format or schema that is compatible with your downstream systems such as databases or other applications.</p>
<p>Standard output can be used with all formats (audio, documents, images, and videos). During the preview, custom output can only be used with documents and images.</p>
<p>Both standard and custom output configurations can be saved in a project to reference in the Amazon Bedrock Data Automation inference API. A project can be configured to generate both standard output and custom output for each processed file.</p>
<p>Let’s look at an example of processing a document for both standard and custom outputs.</p>
<p><span style="text-decoration: underline;"><strong>Using Amazon Bedrock Data Automation</strong></span><br> On the <a href="https://console.aws.amazon.com/bedrock">Amazon Bedrock console</a>, I choose <strong>Data Automation</strong> in the navigation pane. Here, I can review how this capability works with a few sample use cases.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-bda-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91923" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-bda-1.png" alt="Console screenshot." width="1563" height="935"></a></p>
<p>Then, I choose <strong>Demo</strong> in the <strong>Data Automation</strong> section of the navigation pane. I can try this capability using one of the provided sample documents or by uploading my own. For example, let’s say I am working on an application that needs to process birth certificates.</p>
<p>I start by uploading a birth certificate to see the standard output results. The first time I upload a document, I’m asked to confirm the creation of an S3 bucket to store the assets. When I look at the standard output, I can tailor the result with a few quick settings.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/bedrock-bda-demo-standard.png"><img loading="lazy" class="aligncenter size-full wp-image-92797" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/bedrock-bda-demo-standard.png" alt="Console screenshot." width="1603" height="1530"></a></p>
<p>I choose the <strong>Custom output</strong> tab. The document is recognized by one of the sample blueprints and information is extracted across multiple fields.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-custom-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92499" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-custom-1.png" alt="Console screenshot." width="1401" height="1421"></a></p>
<p>Most of the data for my application is there, but I need a few customizations. For example, the date the birth certificate was issued (<code>JUNE 10, 2022</code>) is in a different format than the other dates in the document. I also need the state that issued the certificate and a couple of flags that tell me whether the child’s last name matches the mother’s or the father’s.</p>
<p>Most of the fields in the previous blueprint use the <strong>Explicit</strong> extraction type. That means they’re extracted as they are from the document.</p>
<p>If I want a date in a specific format, I can create a new field using the <strong>Inferred</strong> extraction type and add instructions on how to format the result starting from the content of the document. Inferred extractions can be used to perform transformations, such as date or Social Security number (SSN) format, or validations, for example, to check if a person is over 21 based on today’s date.</p>
<p>Sample blueprints cannot be edited. I choose <strong>Duplicate blueprint</strong> to create a new blueprint that I can edit, and then I choose <strong>Add field</strong> from the <strong>Fields</strong> dropdown.</p>
<p>I add four fields with extraction type <strong>Inferred</strong> and these instructions:</p>
<ol>
<li><code>The date the birth certificate was issued in MM/DD/YYYY format</code></li>
<li><code>The state that issued the birth certificate </code></li>
<li><code>Is ChildLastName equal to FatherLastName</code></li>
<li><code>Is ChildLastName equal to MotherLastName</code></li>
</ol>
<p>The first two fields are strings, and the last two are Booleans.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-add-fields.png"><img loading="lazy" class="aligncenter size-full wp-image-92500" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-add-fields.png" alt="Console screenshot." width="1484" height="1375"></a></p>
<p>After I create the new fields, I can apply the new blueprint to the document I previously uploaded.</p>
<p>I choose <strong>Get result</strong> and look for the new fields in the results. I see the date formatted as I need, the two flags, and the state.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-results.png"><img loading="lazy" class="aligncenter size-full wp-image-92501" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-bda-demo-results.png" alt="Console screenshot." width="1617" height="668"></a></p>
<p>Now that I have created this custom blueprint tailored to the needs of my application, I can add it to a project. I can associate multiple blueprints with a project for the different document types I want to process, such as a blueprint for passports, a blueprint for birth certificates, a blueprint for invoices, and so on. When processing documents, Amazon Bedrock Data Automation matches each document to a blueprint within the project to extract relevant information.</p>
<p>I can also create a new blueprint from scratch. In that case, I can start with a prompt where I declare any fields I expect to find in the uploaded document and perform normalizations or validations.</p>
<p>Amazon Bedrock Data Automation can also process audio and video files. For example, here’s the standard output when uploading a video from a keynote presentation by <a href="https://www.linkedin.com/in/swaminathansivasubramanian/">Swami Sivasubramanian, VP of AI and Data at AWS</a>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/03/bedrock-bda-demo-video-output.png"><img loading="lazy" class="aligncenter size-full wp-image-92889" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/03/bedrock-bda-demo-video-output.png" alt="Console screenshot." width="1565" height="1658"></a></p>
<p>It takes a few minutes to get the output. The results include a summarization of the overall video, a summary scene by scene, and the text that appears during the video. From here, I can toggle the options to have a full audio transcript, content moderation, or <a href="https://www.iab.com/">Interactive Advertising Bureau (IAB)</a> taxonomy.</p>
<p>I can also use Amazon Bedrock Data Automation as a parser when creating a knowledge base to extract insights from visually rich documents and images, for retrieval and response generation. Let’s see that in the next section.</p>
<p><span style="text-decoration: underline;"><strong>Using multimodal data processing in Amazon Bedrock Knowledge Bases</strong></span><br> Multimodal data processing support enables applications to understand both text and visual elements in documents.</p>
<p>With multimodal data processing, applications can use a knowledge base to:</p>
<ul>
<li>Retrieve answers from visual elements in addition to existing support of text.</li>
<li>Generate responses based on the context that includes both text and visual data.</li>
<li>Provide source attribution that references visual elements from the original documents.</li>
</ul>
<p>When creating a knowledge base in the Amazon Bedrock console, I now have the option to select <strong>Amazon Bedrock Data Automation</strong> as <strong>Parsing strategy</strong>.</p>
<p>When I select <strong>Amazon Bedrock Data Automation as parser</strong>, Amazon Bedrock Data Automation handles the extraction, transformation, and generation of insights from visually rich content, while Amazon Bedrock Knowledge Bases manages ingestion, retrieval, model response generation, and source attribution.</p>
<p>Alternatively, I can use the existing <strong>Foundation models as a parser</strong> option. With this option, there’s now support for Anthropic’s Claude 3.5 Sonnet as parser, and I can use the default prompt or modify it to suit a specific use case.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-bda-3.png"><img loading="lazy" class="aligncenter size-full wp-image-91892" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-bda-3.png" alt="Console screenshot." width="1241" height="1094"></a></p>
<p>In the next step, I specify the <strong>Multimodal storage destination</strong> on Amazon S3 that will be used by Amazon Bedrock Knowledge Bases to store images extracted from my documents in the knowledge base data source. These images can be retrieved based on a user query, used to generate the response, and cited in the response.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-bda-storage-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91886" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-bda-storage-1.png" alt="Console screenshot." width="1246" height="1389"></a></p>
<p>When using the knowledge base, the information extracted by Amazon Bedrock Data Automation or FMs as parser is used to retrieve information about visual elements, understand charts and diagrams, and provide responses that reference both textual and visual content.</p>
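<p>Querying such a knowledge base uses the same APIs as a text-only one. Here’s a minimal sketch with the <code>Retrieve</code> API (the knowledge base ID is a placeholder); my assumption is that chunks parsed from visual elements carry location metadata pointing back to the images saved in the multimodal storage destination:</p>
<pre><code class="lang-python">import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

response = agent_runtime.retrieve(
    knowledgeBaseId="KBID123456",  # placeholder
    retrievalQuery={"text": "What does the architecture diagram show about data flow?"},
)

for result in response["retrievalResults"]:
    print(result["content"].get("text", ""))
    # Location metadata can be used for source attribution, including images
    print(result.get("location", {}))</code></pre>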
<p><span style="text-decoration: underline;"><strong>Using GraphRAG in Amazon Bedrock Knowledge Bases<br> </strong></span>Extracting insights from scattered data sources presents significant challenges for RAG applications, requiring multi-step reasoning across these data sources to generate relevant responses. For example, a customer might ask a generative AI-powered travel application to identify family-friendly beach destinations with direct flights from their home location that also offer good seafood restaurants. This requires a connected workflow to identify suitable beaches that other families have enjoyed, match these to flight routes, and select highly-rated local restaurants. A traditional RAG system may struggle to synthesize all these pieces into a cohesive recommendation because the information lives in disparate sources and is not interlinked.</p>
<p>Knowledge graphs can address this challenge by modeling complex relationships between entities in a structured way. However, building and integrating graphs into an application requires significant expertise and effort.</p>
<p>Amazon Bedrock Knowledge Bases now offers one of the first fully managed GraphRAG capabilities, which enhances generative AI applications by combining RAG techniques with graphs to provide more accurate and comprehensive responses to end users.</p>
<p>When creating a knowledge base, I can now enable GraphRAG in just a few steps by choosing <a href="https://docs.aws.amazon.com/neptune-analytics/latest/userguide/what-is-neptune-analytics.html">Amazon Neptune Analytics</a> as the database, automatically generating vector and graph representations of the underlying data, entities, and their relationships, and reducing development effort from several weeks to just a few hours.</p>
<p>I start the creation of a new knowledge base. In the <strong>Vector database</strong> section, when creating a new vector store, I select <strong>Amazon Neptune Analytics (GraphRAG)</strong>. If I don’t want to create a new graph, I can provide an existing vector store and select a Neptune Analytics graph from the list. GraphRAG uses <a href="https://aws.amazon.com/bedrock/claude/">Anthropic’s Claude 3 Haiku</a> to automatically build graphs for a knowledge base.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-graph-rag-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91887" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-kb-graph-rag-1.png" alt="Console screenshot." width="1246" height="1389"></a></p>
<p>After I complete the creation of the knowledge base, Amazon Bedrock automatically builds a graph, linking related concepts and documents. When retrieving information from the knowledge base, GraphRAG traverses these relationships to provide more comprehensive and accurate responses.</p>
<p><span style="text-decoration: underline;"><strong>Using structured data retrieval in Amazon Bedrock Knowledge Bases<br> </strong></span>Structured data retrieval allows natural language querying of databases and data warehouses. For example, a business analyst might ask, “What were our top-selling products last quarter?” and the system automatically generates and runs the appropriate SQL query for a data warehouse stored in an <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a> database.</p>
<p>When creating a knowledge base, I now have the option to use a <strong>structured data store</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create.png"><img loading="lazy" class="aligncenter size-full wp-image-91907" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create.png" alt="Console screenshot." width="1561" height="177"></a></p>
<p>I enter a name and description for the knowledge base. In <strong>Data source details</strong>, I use <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a> as <strong>Query engine</strong>. I create a new <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> service role to manage the knowledge base resources and choose <strong>Next</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create-details.png"><img loading="lazy" class="aligncenter size-full wp-image-91908" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create-details.png" alt="Console screenshot." width="1241" height="1252"></a></p>
<p>I choose <strong>Redshift serverless</strong> in <strong>Connection options</strong> and the <strong>Workgroup</strong> to use. Amazon Redshift provisioned clusters are also supported. I use the previously created IAM role for <strong>Authentication</strong>. Storage metadata can be managed with <strong>AWS Glue Data Catalog</strong> or directly within an Amazon Redshift database. I select a database from the list.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create-query-engine.png"><img loading="lazy" class="aligncenter size-full wp-image-91909" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/kb-structured-create-query-engine.png" alt="Console screenshot." width="1241" height="1569"></a></p>
<p>In the configuration of the knowledge base, I can define the maximum duration for a query and include or exclude access to tables or columns. To improve the accuracy of query generation from natural language, I can optionally add a description for tables and columns and a list of curated queries that provide practical examples of how to translate a question into a SQL query for my database. I choose <strong>Next</strong>, review the settings, and complete the creation of the knowledge base.</p>
<p>After a few minutes, the knowledge base is ready. Once synced, Amazon Bedrock Knowledge Bases handles generating, running, and formatting the result of the query, making it easy to build natural language interfaces to structured data. When invoking a knowledge base using structured data, I can ask to only generate SQL, retrieve data, or summarize the data in natural language.</p>
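<p>Here’s a sketch of querying the knowledge base programmatically with the <code>RetrieveAndGenerate</code> API (the knowledge base ID and model ARN are placeholders); I’m assuming a structured knowledge base is invoked the same way as any other, with Amazon Bedrock generating and running the SQL behind the scenes:</p>
<pre><code class="lang-python">import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What were our top-selling products last quarter?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID123456",  # placeholder
            "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(response["output"]["text"])</code></pre>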
<p><span style="text-decoration: underline;"><strong>Things to know<br> </strong></span>These new capabilities are available today in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>:</p>
<ul>
<li>Amazon Bedrock Data Automation is available in preview in US West (Oregon).</li>
<li>Multimodal data processing support in Amazon Bedrock Knowledge Bases using Amazon Bedrock Data Automation as parser is available in preview in US West (Oregon). FM as a parser is available in all Regions where Amazon Bedrock Knowledge Bases is offered.</li>
<li>GraphRAG in Amazon Bedrock Knowledge Bases is available in preview in all commercial Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are offered.</li>
<li>Structured data retrieval is available in Amazon Bedrock Knowledge Bases in all commercial Regions where Amazon Bedrock Knowledge Bases is offered.</li>
</ul>
<p>As usual with Amazon Bedrock, pricing is based on usage:</p>
<ul>
<li>Amazon Bedrock Data Automation charges per image, per page for documents, and per minute for audio or video.</li>
<li>Multimodal data processing in Amazon Bedrock Knowledge Bases is charged based on the use of either Amazon Bedrock Data Automation or the FM as parser.</li>
<li>There is no additional cost for using GraphRAG in Amazon Bedrock Knowledge Bases but you pay for using <a href="https://docs.aws.amazon.com/neptune-analytics/latest/userguide/what-is-neptune-analytics.html">Amazon Neptune Analytics</a> as the vector store. For more information, visit <a href="https://aws.amazon.com/neptune/pricing/">Amazon Neptune pricing</a>.</li>
<li>There is an additional cost when using structured data retrieval in Amazon Bedrock Knowledge Bases.</li>
</ul>
<p>For detailed pricing information, see <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing</a>.</p>
<p>Each capability can be used independently or in combination. Together, they make it easier and faster to build applications that use AI to process data. To get started, visit the <a href="https://console.aws.amazon.com/bedrock">Amazon Bedrock console</a>. To learn more, you can access the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html">Amazon Bedrock documentation</a> and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a>. You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at <a href="https://community.aws/">community.aws</a>. Let us know what you build with these new capabilities!</p>
<p>— <a href="https://twitter.com/danilop">Danilo</a></p>Reduce costs and latency with Amazon Bedrock Intelligent Prompt Routing and prompt caching (preview)
https://aws.amazon.com/blogs/aws/reduce-costs-and-latency-with-amazon-bedrock-intelligent-prompt-routing-and-prompt-caching-preview/
<![CDATA[Danilo Poccia]]>Wed, 04 Dec 2024 17:22:27 +0000<![CDATA[Amazon Bedrock]]><![CDATA[Amazon Machine Learning]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]><![CDATA[Launch]]><![CDATA[News]]>4778d78f6e2e6f9f5978d0b34961edebc2bac614Route requests and cache frequently used context in prompts to reduce latency and balance performance with cost efficiency.<p><em><strong>December 5, 2024</strong>: Added instructions to request access to the Amazon Bedrock prompt caching preview. </em></p>
<p>Today, <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> has introduced in preview two capabilities that help reduce costs and latency for <a href="https://aws.amazon.com/ai/generative-ai">generative AI</a> applications:</p>
<p><strong>Amazon Bedrock Intelligent Prompt Routing</strong> – When invoking a model, you can now use a combination of <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> from the same model family to help optimize for quality and cost. For example, with the <a href="https://aws.amazon.com/bedrock/claude/">Anthropic’s Claude</a> model family, Amazon Bedrock can intelligently route requests between Claude 3.5 Sonnet and Claude 3 Haiku depending on the complexity of the prompt. Similarly, Amazon Bedrock can route requests between <a href="https://aws.amazon.com/bedrock/llama/">Meta Llama</a> 3.1 70B and 8B. The prompt router predicts which model will provide the best performance for each request while optimizing the quality of response and cost. This is particularly useful for applications such as customer service assistants, where uncomplicated queries can be handled by smaller, faster, and more cost-effective models, and complex queries are routed to more capable models. Intelligent Prompt Routing can reduce costs by up to 30 percent without compromising on accuracy.</p>
<p><strong>Amazon Bedrock now supports prompt caching</strong> – You can now cache frequently used context in prompts across multiple model invocations. This is especially valuable for applications that repeatedly use the same context, such as document Q&A systems where users ask multiple questions about the same document or coding assistants that need to maintain context about code files. The cached context remains available for up to 5 minutes after each access. Prompt caching in Amazon Bedrock can reduce costs by up to 90% and latency by up to 85% for supported models.</p>
<p>These features make it easier to reduce latency and balance performance with cost efficiency. Let’s look at how you can use them in your applications.</p>
<p><span style="text-decoration: underline"><strong>Using Amazon Bedrock Intelligent Prompt Routing in the console<br> </strong></span>Amazon Bedrock Intelligent Prompt Routing uses advanced prompt matching and model understanding techniques to predict the performance of each model for every request, optimizing for quality of responses and cost. During the preview, you can use the default prompt routers for <a href="https://aws.amazon.com/bedrock/claude/">Anthropic’s Claude</a> and <a href="https://aws.amazon.com/bedrock/llama/">Meta Llama</a> model families.</p>
<p>Intelligent prompt routing can be accessed through the <a href="https://console.aws.amazon.com">AWS Management Console</a>, the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, and the <a href="https://aws.amazon.com/tools/">AWS SDKs</a>. In the <a href="https://console.aws.amazon.com/bedrock">Amazon Bedrock console</a>, I choose <strong>Prompt routers</strong> in the <strong>Foundation models</strong> section of the navigation pane.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/bedrock-prompt-routers.png"><img loading="lazy" class="aligncenter size-full wp-image-92808" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/bedrock-prompt-routers.png" alt="Console screenshot." width="1465" height="676"></a></p>
<p>I choose the <strong>Anthropic Prompt Router</strong> default router to get more information.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-anthropic-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91863" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-anthropic-1.png" alt="Console screenshot." width="1481" height="496"></a></p>
<p>From the configuration of the prompt router, I see that it’s routing requests between Claude 3.5 Sonnet and Claude 3 Haiku using <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html">cross-Region inference profiles</a>. The routing criteria defines the quality difference between the response of the largest model and the smallest model for each prompt, as predicted by the router’s internal model at runtime. The fallback model, used when none of the chosen models meet the desired performance criteria, is Anthropic’s Claude 3.5 Sonnet.</p>
<p>I choose <strong>Open in Playground</strong> to chat using the prompt router and enter this prompt:</p>
<p><code>Alice has N brothers and she also has M sisters. How many sisters does Alice’s brothers have?</code></p>
<p>The result is quickly provided. I choose the new <strong>Router metrics</strong> icon on the right to see which model was selected by the prompt router. In this case, because the question is rather complex, Anthropic’s Claude 3.5 Sonnet was used.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/15/bedrock-prompt-routers-anthropic-chat.png"><img loading="lazy" class="aligncenter size-full wp-image-90288" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/15/bedrock-prompt-routers-anthropic-chat.png" alt="Console screenshot." width="1476" height="447"></a></p>
<p>Now I ask a straightforward question to the same prompt router:</p>
<p><code>Describe the purpose of a 'hello world' program in one line.</code></p>
<p>This time, Anthropic’s Claude 3 Haiku has been selected by the prompt router.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-anthropic-chat-simple.png"><img loading="lazy" class="aligncenter size-full wp-image-91980" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-anthropic-chat-simple.png" alt="Console screenshot." width="1615" height="365"></a></p>
<p>I select the <strong>Meta Prompt Router</strong> to check its configuration. It’s using the cross-Region inference profiles for Llama 3.1 70B and 8B with the 70B model as fallback.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-meta-1.png"><img loading="lazy" class="aligncenter size-full wp-image-91864" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-meta-1.png" alt="Console screenshot." width="1481" height="491"></a></p>
<p>Prompt routers are integrated with other Amazon Bedrock capabilities, such as <a href="https://aws.amazon.com/bedrock/knowledge-bases/">Amazon Bedrock Knowledge Bases</a> and <a href="https://aws.amazon.com/bedrock/agents/">Amazon Bedrock Agents</a>, or when <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/evaluation.html">performing evaluations</a>. For example, here I create a model evaluation to help me compare, for my use case, a prompt router to another model or prompt router.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-evaluation.png"><img loading="lazy" class="aligncenter size-full wp-image-91856" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/bedrock-prompt-routers-evaluation.png" alt="Console screenshot." width="1304" height="777"></a></p>
<p>To use a prompt router in an application, I need to set the prompt router <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a> as model ID in the Amazon Bedrock API. Let’s see how this works with the AWS CLI and an AWS SDK.</p>
<p><span style="text-decoration: underline"><strong>Using Amazon Bedrock Intelligent Prompt Routing with the AWS CLI<br> </strong></span>The Amazon Bedrock API has been extended to handle prompt routers. For example, I can list the existing prompt routers in an AWS Region using <strong>ListPromptRouters</strong>:</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-bash">aws bedrock list-prompt-routers</code></pre>
</div>
<p>In output, I receive a summary of the existing prompt routers, similar to what I saw in the console.</p>
<p>Here’s the full output of the previous command:</p>
<pre><code class="lang-json">{
"promptRouterSummaries": [
{
"promptRouterName": "Anthropic Prompt Router",
"routingCriteria": {
"responseQualityDifference": 0.26
},
"description": "Routes requests among models in the Claude family",
"createdAt": "2024-11-20T00:00:00+00:00",
"updatedAt": "2024-11-20T00:00:00+00:00",
"promptRouterArn": "arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/anthropic.claude:1",
"models": [
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.anthropic.claude-3-haiku-20240307-v1:0"
},
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.anthropic.claude-3-5-sonnet-20240620-v1:0"
}
],
"fallbackModel": {
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.anthropic.claude-3-5-sonnet-20240620-v1:0"
},
"status": "AVAILABLE",
"type": "default"
},
{
"promptRouterName": "Meta Prompt Router",
"routingCriteria": {
"responseQualityDifference": 0.0
},
"description": "Routes requests among models in the LLaMA family",
"createdAt": "2024-11-20T00:00:00+00:00",
"updatedAt": "2024-11-20T00:00:00+00:00",
"promptRouterArn": "arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/meta.llama:1",
"models": [
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-8b-instruct-v1:0"
},
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-70b-instruct-v1:0"
}
],
"fallbackModel": {
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-70b-instruct-v1:0"
},
"status": "AVAILABLE",
"type": "default"
}
]
}</code></pre>
<p>I can get information about a specific prompt router using <strong>GetPromptRouter</strong> with a prompt router ARN. For example, for the Meta Llama model family:</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-bash">aws bedrock get-prompt-router --prompt-router-arn arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/meta.llama:1</code></pre>
</div>
<pre><code class="lang-json">{
"promptRouterName": "Meta Prompt Router",
"routingCriteria": {
"responseQualityDifference": 0.0
},
"description": "Routes requests among models in the LLaMA family",
"createdAt": "2024-11-20T00:00:00+00:00",
"updatedAt": "2024-11-20T00:00:00+00:00",
"promptRouterArn": "arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/meta.llama:1",
"models": [
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-8b-instruct-v1:0"
},
{
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-70b-instruct-v1:0"
}
],
"fallbackModel": {
"modelArn": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-70b-instruct-v1:0"
},
"status": "AVAILABLE",
"type": "default"
}
</code></pre>
<p>To use a prompt router with Amazon Bedrock, I set the prompt router ARN as model ID when making API calls. For example, here I use the Anthropic Prompt Router with the AWS CLI and the Amazon Bedrock Converse API:</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-bash">aws bedrock-runtime converse \
--model-id arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/anthropic.claude:1 \
--messages '[{ "role": "user", "content": [ { "text": "Alice has N brothers and she also has M sisters. How many sisters does Alice’s brothers have?" } ] }]' \</code></pre>
</div>
<p>In output, invocations using a prompt router include a new <code>trace</code> section that tells you which model was actually used. In this case, it’s Anthropic’s Claude 3.5 Sonnet:</p>
<pre><code class="lang-json">{
"output": {
"message": {
"role": "assistant",
"content": [
{
"text": "To solve this problem, let's think it through step-by-step:\n\n1) First, we need to understand the relationships:\n - Alice has N brothers\n - Alice has M sisters\n\n2) Now, we need to consider who Alice's brothers' sisters are:\n - Alice herself is a sister to all her brothers\n - All of Alice's sisters are also sisters to Alice's brothers\n\n3) So, the total number of sisters that Alice's brothers have is:\n - The number of Alice's sisters (M)\n - Plus Alice herself (+1)\n\n4) Therefore, the answer can be expressed as: M + 1\n\nThus, Alice's brothers have M + 1 sisters."
}
]
}
},
. . .
"trace": {
"promptRouter": {
"invokedModelId": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.anthropic.claude-3-5-sonnet-20240620-v1:0"
}
}
}</code></pre>
<p><span style="text-decoration: underline"><strong>Using Amazon Bedrock Intelligent Prompt Routing with an AWS SDK<br> </strong></span>Using an AWS SDK with a prompt router is similar to the previous command line experience. When invoking a model, I set the model ID to the prompt router ARN. For example, in this Python code I’m using the Meta Llama router with the <strong>ConverseStream</strong> API:</p>
<pre><code class="lang-python">import json
import boto3
bedrock_runtime = boto3.client(
"bedrock-runtime",
region_name="us-east-1",
)
MODEL_ID = "arn:aws:bedrock:us-east-1:123412341234:default-prompt-router/meta.llama:1"
user_message = "Describe the purpose of a 'hello world' program in one line."
messages = [
{
"role": "user",
"content": [{"text": user_message}],
}
]
streaming_response = bedrock_runtime.converse_stream(
modelId=MODEL_ID,
messages=messages,
)
for chunk in streaming_response["stream"]:
if "contentBlockDelta" in chunk:
text = chunk["contentBlockDelta"]["delta"]["text"]
print(text, end="")
if "messageStop" in chunk:
print()
if "metadata" in chunk:
if "trace" in chunk["metadata"]:
print(json.dumps(chunk['metadata']['trace'], indent=2))
</code></pre>
<p>This script prints the response text and the content of the trace in response metadata. For this uncomplicated request, the faster and more affordable model has been selected by the prompt router:</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-json">A "Hello World" program is a simple, introductory program that serves as a basic example to demonstrate the fundamental syntax and functionality of a programming language, typically used to verify that a development environment is set up correctly.
{
"promptRouter": {
"invokedModelId": "arn:aws:bedrock:us-east-1:123412341234:inference-profile/us.meta.llama3-1-8b-instruct-v1:0"
}
}</code></pre>
</div>
<p><span style="text-decoration: underline"><strong>Using prompt caching with an AWS SDK<br> </strong></span>You can use prompt caching with the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html">Amazon Bedrock Converse API</a>. When you tag content for caching and send it to the model for the first time, the model processes the input and saves the intermediate results in a cache. For subsequent requests containing the same content, the model loads the preprocessed results from the cache, significantly reducing both costs and latency.</p>
<p>You can implement prompt caching in your applications with a few steps:</p>
<ol>
<li>Identify the portions of your prompts that are frequently reused.</li>
<li>Tag these sections for caching in the list of messages using the new <code>cachePoint</code> block.</li>
<li>Monitor cache usage and latency improvements in the response metadata <code>usage</code> section.</li>
</ol>
<p>Here’s an example of implementing prompt caching when working with documents.</p>
<p>First, I download <a href="https://aws.amazon.com/getting-started/decision-guides/">three decision guides in PDF format from the AWS website</a>. These guides help choose the AWS services that fit your use case.</p>
<p>Then, I use a Python script to ask three questions about the documents. In the code, I create a <code>converse()</code> function to handle the conversation with the model. The first time I call the function, I include a list of documents and a flag to add a <code>cachePoint</code> block.</p>
<pre><code class="lang-python">import json
import boto3
MODEL_ID = "us.anthropic.claude-3-5-sonnet-20241022-v2:0"
AWS_REGION = "us-west-2"
bedrock_runtime = boto3.client(
"bedrock-runtime",
region_name=AWS_REGION,
)
DOCS = [
"bedrock-or-sagemaker.pdf",
"generative-ai-on-aws-how-to-choose.pdf",
"machine-learning-on-aws-how-to-choose.pdf",
]
messages = []
def converse(new_message, docs=[], cache=False):
if len(messages) == 0 or messages[-1]["role"] != "user":
messages.append({"role": "user", "content": []})
for doc in docs:
print(f"Adding document: {doc}")
name, format = doc.rsplit('.', maxsplit=1)
with open(doc, "rb") as f:
bytes = f.read()
messages[-1]["content"].append({
"document": {
"name": name,
"format": format,
"source": {"bytes": bytes},
}
})
messages[-1]["content"].append({"text": new_message})
if cache:
messages[-1]["content"].append({"cachePoint": {"type": "default"}})
response = bedrock_runtime.converse(
modelId=MODEL_ID,
messages=messages,
)
output_message = response["output"]["message"]
response_text = output_message["content"][0]["text"]
print("Response text:")
print(response_text)
print("Usage:")
print(json.dumps(response["usage"], indent=2))
messages.append(output_message)
converse("Compare AWS Trainium and AWS Inferentia in 20 words or less.", docs=DOCS, cache=True)
converse("Compare Amazon Textract and Amazon Transcribe in 20 words or less.")
converse("Compare Amazon Q Business and Amazon Q Developer in 20 words or less.")</code></pre>
<p>For each invocation, the script prints the response and the <code>usage</code> counters.</p>
<div class="hide-language">
<pre><code class="lang-bash">Adding document: bedrock-or-sagemaker.pdf
Adding document: generative-ai-on-aws-how-to-choose.pdf
Adding document: machine-learning-on-aws-how-to-choose.pdf
Response text:
AWS Trainium is optimized for machine learning training, while AWS Inferentia is designed for low-cost, high-performance machine learning inference.
Usage:
{
"inputTokens": 4,
"outputTokens": 34,
"totalTokens": 29879,
"cacheReadInputTokenCount": 0,
"cacheWriteInputTokenCount": 29841
}
Response text:
Amazon Textract extracts text and data from documents, while Amazon Transcribe converts speech to text from audio or video files.
Usage:
{
"inputTokens": 59,
"outputTokens": 30,
"totalTokens": 29930,
"cacheReadInputTokenCount": 29841,
"cacheWriteInputTokenCount": 0
}
Response text:
Amazon Q Business answers questions using enterprise data, while Amazon Q Developer assists with building and operating AWS applications and services.
Usage:
{
"inputTokens": 108,
"outputTokens": 26,
"totalTokens": 29975,
"cacheReadInputTokenCount": 29841,
"cacheWriteInputTokenCount": 0
}</code></pre>
</div>
<p>The <code>usage</code> section of the response contains two new counters: <code>cacheReadInputTokenCount</code> and <code>cacheWriteInputTokenCount</code>. The total number of tokens for an invocation is the sum of the input and output tokens plus the tokens read and written into the cache.</p>
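<p>As a quick check, here’s a minimal sketch (not part of the script above) that verifies this relationship using the <code>usage</code> counters returned by the first invocation:</p>
<pre><code class="lang-python"># Usage counters copied from the first invocation above.
usage = {
    "inputTokens": 4,
    "outputTokens": 34,
    "totalTokens": 29879,
    "cacheReadInputTokenCount": 0,
    "cacheWriteInputTokenCount": 29841,
}

computed_total = (
    usage["inputTokens"]
    + usage["outputTokens"]
    + usage["cacheReadInputTokenCount"]
    + usage["cacheWriteInputTokenCount"]
)

# 4 + 34 + 0 + 29841 = 29879
assert computed_total == usage["totalTokens"]</code></pre>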
<p>Each invocation processes a list of messages. The messages in the first invocation contain the documents, the first question, and the cache point. Because the messages preceding the cache point aren’t currently in the cache, they’re written to cache. According to the <code>usage</code> counters, 29,841 tokens have been written into the cache.</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-bash">"cacheWriteInputTokenCount": 29841</code></pre>
</div>
<p>For the next invocations, the previous response and the new question are appended to the list of messages. The messages before the <code>cachePoint</code> are unchanged and are found in the cache.</p>
<p>As expected, we can tell from the <code>usage</code> counters that the same number of tokens previously written is now read from the cache.</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-bash">"cacheReadInputTokenCount": 29841</code></pre>
</div>
<p>In my tests, the next invocations take 55 percent less time to complete compared to the first one. Depending on your use case (for example, with more cached content), prompt caching can improve latency up to 85 percent.</p>
<p>Depending on the model, you can set more than one cache point in a list of messages. To find the right cache points for your use case, try different configurations and look at the effect on the reported usage.</p>
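<p>For example, here’s a minimal sketch, building on the script above, of a user message with two cache points: one after the document and one after a block of reusable instructions. The file name and instruction text are illustrative, and whether both cache points are honored depends on the model:</p>
<pre><code class="lang-python">with open("bedrock-or-sagemaker.pdf", "rb") as f:
    doc_bytes = f.read()

messages = [
    {
        "role": "user",
        "content": [
            {
                "document": {
                    "name": "bedrock-or-sagemaker",
                    "format": "pdf",
                    "source": {"bytes": doc_bytes},
                }
            },
            # First cache point: covers the document above.
            {"cachePoint": {"type": "default"}},
            {"text": "Answer using only the attached document."},
            # Second cache point: also covers the reusable instructions.
            {"cachePoint": {"type": "default"}},
            {"text": "Compare Amazon Bedrock and Amazon SageMaker in 20 words or less."},
        ],
    }
]</code></pre>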
<p><span style="text-decoration: underline"><strong>Things to know<br> </strong></span>Amazon Bedrock Intelligent Prompt Routing is available in preview today in US East (N. Virginia) and US West (Oregon) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>. During the preview, you can use the default prompt routers, and there is no additional cost for using a prompt router. You pay the cost of the selected model. You can use prompt routers with other Amazon Bedrock capabilities such as <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/evaluation.html">performing evaluations</a>, <a href="https://aws.amazon.com/bedrock/knowledge-bases/">using knowledge bases</a>, and <a href="https://aws.amazon.com/bedrock/agents/">configuring agents</a>.</p>
<p>Because the internal model used by the prompt routers needs to understand the complexity of a prompt, intelligent prompt routing currently only supports English language prompts.</p>
<p>Amazon Bedrock support for prompt caching is available in preview in US West (Oregon) for Anthropic’s Claude 3.5 Sonnet V2 and Claude 3.5 Haiku. Prompt caching is also available in US East (N. Virginia) for Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro. You can <a href="https://pages.awscloud.com/promptcaching-Preview.html">request access to the Amazon Bedrock prompt caching preview here</a>.</p>
<p>With prompt caching, cache reads receive a 90 percent discount compared to noncached input tokens. There are no additional infrastructure charges for cache storage. When using Anthropic models, you pay an additional cost for tokens written in the cache. There are no additional costs for cache writes with Amazon Nova models. For more information, see <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing</a>.</p>
<p>When using prompt caching, content is cached for up to 5 minutes, with each cache hit resetting this countdown. Prompt caching has been implemented to transparently support <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html">cross-Region inference</a>. In this way, your applications can get the cost optimization and latency benefit of prompt caching with the flexibility of cross-Region inference.</p>
<p>These new capabilities make it easier to build cost-effective and high-performing generative AI applications. By intelligently routing requests and caching frequently used content, you can significantly reduce your costs while maintaining and even improving application performance.</p>
<p>To learn more and start using these new capabilities today, visit the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html">Amazon Bedrock documentation</a> and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a>. You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at <a href="https://community.aws/">community.aws</a>.</p>
<p>— <a href="https://twitter.com/danilop">Danilo</a></p>Amazon Bedrock Marketplace: Access over 100 foundation models in one place
https://aws.amazon.com/blogs/aws/amazon-bedrock-marketplace-access-over-100-foundation-models-in-one-place/
<![CDATA[Danilo Poccia]]>Wed, 04 Dec 2024 17:16:36 +0000<![CDATA[Amazon Bedrock]]><![CDATA[Amazon Machine Learning]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]>9c7ab21de127224740b9d90929760c36e9143cf5Discover, test, and use over 100 emerging and specialized foundation models with the tooling, security, and governance provided by Amazon Bedrock.<p>Today, we’re introducing <a href="https://aws.amazon.com/bedrock/marketplace/">Amazon Bedrock Marketplace</a>, a new capability that gives you access to over 100 popular, emerging, and specialized <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> through <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>. With this launch, you can now discover, test, and deploy new models from enterprise providers such as IBM and NVIDIA, specialized models such as Upstage’s Solar Pro for Korean language processing and EvolutionaryScale’s ESM3 for protein research, alongside Amazon Bedrock general-purpose FMs from providers such as <a href="https://aws.amazon.com/bedrock/claude/">Anthropic</a> and <a href="https://aws.amazon.com/bedrock/llama/">Meta</a>.</p>
<p>Models deployed with Amazon Bedrock Marketplace can be accessed through the same standard APIs as the serverless models and, for models that are compatible with the Converse API, can be used with tools such as <a href="https://aws.amazon.com/bedrock/agents/">Amazon Bedrock Agents</a> and <a href="https://aws.amazon.com/bedrock/knowledge-bases/">Amazon Bedrock Knowledge Bases</a>.</p>
<p>As <a href="https://aws.amazon.com/ai/generative-ai/">generative AI</a> continues to reshape how organizations work, the need for specialized models optimized for specific domains, languages, or tasks is growing. However, finding and evaluating these models can be challenging and costly. You need to discover them across different services, build abstractions to use them in your applications, and create complex security and governance layers. Amazon Bedrock Marketplace addresses these challenges by providing a single interface to access both specialized and general-purpose FMs.</p>
<p><span style="text-decoration: underline;"><strong>Using Amazon Bedrock Marketplace<br> </strong></span>To get started, in the Amazon Bedrock console, I choose <strong>Model catalog</strong> in the <strong>Foundation models</strong> section of the navigation pane. Here, I can search for models that help me with a specific use case or language. The results of the search include both serverless models and models available in Amazon Bedrock Marketplace. I can filter results by provider, modality (such as text, image, or audio), or task (such as classification or text summarization).</p>
<p>In the catalog, there are models from organizations like <a href="https://www.arcee.ai/">Arcee AI</a>, which builds context-adapted small language models (SLMs), and <a href="https://www.widn.ai/en/">Widn.AI</a>, which provides multilingual models.</p>
<p>For example, I am interested in the <a href="https://www.ibm.com/granite">IBM Granite</a> models and search for models from <strong>IBM Data and AI</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-search-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92473" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-search-1.png" alt="Console screenshot." width="3212" height="1464"></a></p>
<p>I select <strong>Granite 3.0 2B Instruct</strong>, a language model designed for enterprise applications. Choosing the model opens the model detail page where I can see more information from the model provider such as highlights about the model, pricing, and usage including sample API calls.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-model-details-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92474" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-model-details-1.png" alt="Console screenshot." width="2253" height="1609"></a></p>
<p>This specific model requires a subscription, and I choose <strong>View subscription options</strong>.</p>
<p>From the subscription dialog, I review pricing and legal notes. In <strong>Pricing details</strong>, I see the software price set by the provider. For this model, there are no additional costs on top of the deployed infrastructure. The <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a> infrastructure cost is charged separately and can be seen in <a href="https://aws.amazon.com/sagemaker/pricing/">Amazon SageMaker pricing</a>.</p>
<p>To proceed with this model, I choose <strong>Subscribe</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-subscribe-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92475" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-subscribe-1.png" alt="Console screenshot." width="2901" height="1686"></a></p>
<p>After the subscription has been completed, which usually takes a few minutes, I can deploy the model. For <strong>Deployment details</strong>, I use the default settings and the recommended instance type.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-deploy.png"><img loading="lazy" class="aligncenter size-full wp-image-92481" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-deploy.png" alt="Console screenshot." width="1594" height="648"></a></p>
<p>I expand the optional <strong>Advanced settings</strong>. Here, I can choose to deploy in a <a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html">virtual private cloud (VPC)</a> or specify the <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> service role used by the deployment. Amazon Bedrock Marketplace automatically creates a service role to access <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> buckets where the model weights are stored, but I can choose to use an existing role.</p>
<p>I keep the default values and complete the deployment.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-deploy-advanced.png"><img loading="lazy" class="aligncenter size-full wp-image-92476" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-deploy-advanced.png" alt="Console screenshot." width="2983" height="1717"></a></p>
<p>After a few minutes, the deployment is <strong>In Service</strong> and can be reviewed in the <strong>Marketplace deployments</strong> page from the navigation pane.</p>
<p>There, I can choose an endpoint to view details and edit the configuration such as the number of instances. To test the deployment, I choose <strong>Open in playground</strong> and ask for some poetry.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-playground-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92477" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-playground-1.png" alt="Console screenshot." width="2492" height="1586"></a></p>
<p>I can also select the model from the <strong>Chat/text</strong> page of the <strong>Playground</strong> using the new <strong>Marketplace</strong> category where the deployed endpoints are listed.</p>
<p>In a similar way, I can use the model with other tools such as <a href="https://aws.amazon.com/bedrock/agents/">Amazon Bedrock Agents</a>, <a href="https://aws.amazon.com/bedrock/knowledge-bases/">Amazon Bedrock Knowledge Bases</a>, <a href="https://aws.amazon.com/bedrock/prompt-management/">Amazon Bedrock Prompt Management</a>, <a href="https://aws.amazon.com/bedrock/guardrails/">Amazon Bedrock Guardrails</a>, and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/evaluation.html">model evaluations</a>, by choosing <strong>Select Model</strong> and selecting the <strong>Marketplace</strong> model endpoint.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-select-model-1.png"><img loading="lazy" class="aligncenter size-full wp-image-92478" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/bedrock-marketplace-select-model-1.png" alt="Console screenshot." width="2065" height="1448"></a></p>
<p>The model I used here is text-to-text, but I can use Amazon Bedrock Marketplace to deploy models with different modalities. For example, after I deploy <a href="https://stability.ai/">Stability AI</a> <a href="https://stability.ai/news/introducing-stable-diffusion-3-5">Stable Diffusion 3.5 Large</a>, I can run a quick test in the Amazon Bedrock <strong>Image playground</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/bedrock-marketplace-playground-image.png"><img loading="lazy" class="aligncenter size-full wp-image-91339" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/bedrock-marketplace-playground-image.png" alt="Console screenshot." width="2842" height="1424"></a></p>
<p>The models I deployed are now available through the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/inference-invoke.html">Amazon Bedrock InvokeModel API</a>. When a model is deployed, I can use it with the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a> and any <a href="https://aws.amazon.com/tools/">AWS SDKs</a> using the endpoint <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html">Amazon Resource Name (ARN)</a> as model ID.</p>
<p>For chat-tuned text-to-text models, I can also use the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html">Amazon Bedrock Converse API</a>, which abstracts model differences and enables model switching with a single parameter change.</p>
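<p>For example, here’s a minimal sketch of calling a deployed Marketplace model with the Converse API; the endpoint ARN below is a placeholder that you would replace with the ARN shown on your <strong>Marketplace deployments</strong> page:</p>
<pre><code class="lang-python">import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder: use the ARN of your own Marketplace model endpoint.
ENDPOINT_ARN = "arn:aws:sagemaker:us-east-1:123412341234:endpoint/my-marketplace-endpoint"

response = bedrock_runtime.converse(
    modelId=ENDPOINT_ARN,  # the endpoint ARN is used as the model ID
    messages=[
        {"role": "user", "content": [{"text": "Write a short poem about the cloud."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])</code></pre>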
<p><span style="text-decoration: underline;"><strong>Things to know<br> </strong></span><a href="https://aws.amazon.com/bedrock/marketplace/">Amazon Bedrock Marketplace</a> is available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo).</p>
<p>With Amazon Bedrock Marketplace, you pay a software fee to the third-party model provider (which can be zero, as in the previous example) and a hosting fee based on the type and number of instances you choose for your model endpoints.</p>
<p>Start browsing the new models using the <a href="https://console.aws.amazon.com/bedrock/home#/model-catalog">Model catalog in the Amazon Bedrock console</a>, visit the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/amazon-bedrock-marketplace.html">Amazon Bedrock Marketplace documentation</a>, and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a>. You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at <a href="https://community.aws/">community.aws</a>.</p>
<p>— <a href="https://twitter.com/danilop">Danilo</a></p>Meet your training timelines and budgets with new Amazon SageMaker HyperPod flexible training plans
https://aws.amazon.com/blogs/aws/meet-your-training-timelines-and-budgets-with-new-amazon-sagemaker-hyperpod-flexible-training-plans/
<![CDATA[Channy Yun (윤석찬)]]>Wed, 04 Dec 2024 16:57:34 +0000<![CDATA[Amazon SageMaker]]><![CDATA[Amazon SageMaker HyperPod]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]><![CDATA[Launch]]><![CDATA[News]]>64246154015b2aa9d5937fb44950cea719c40635Unlock efficient large model training with SageMaker HyperPod flexible training plans - find optimal compute resources and complete training within timelines and budgets.<p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/sagemaker/hyperpod/">Amazon SageMaker HyperPod</a> flexible training plans to help data scientists train large <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models</a> (FMs) within their timelines and budgets and save them weeks of effort in managing the training process based on compute availability.</p>
<p>At AWS re:Invent 2023, we <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-sagemaker-hyperpod-a-purpose-built-infrastructure-for-distributed-training-at-scale/">introduced SageMaker HyperPod</a> to reduce the time to train FMs by up to 40 percent and scale across thousands of compute resources in parallel with preconfigured distributed training libraries and built-in resiliency. Most generative AI model development tasks need accelerated compute resources in parallel. Our customers struggle to find timely access to compute resources to complete their training within their timeline and budget constraints.</p>
<p>With today’s announcement, you can find the required accelerated compute resources for training, create optimal training plans, and run training workloads across different blocks of capacity based on the availability of compute resources. Within a few steps, you can specify your training completion date, budget, and compute resource requirements, create an optimal training plan, and run fully managed training jobs without manual intervention.</p>
<p><u><strong>SageMaker HyperPod training plans in action</strong></u><br> To get started, go to the <a href="https://console.aws.amazon.com/sagemaker/">Amazon SageMaker AI console</a>, choose <strong>Training plans</strong> in the left navigation pane, and choose <strong>Create training plan</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-92818 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-training-plans-1-get-started-1.png" alt="" width="2344" height="1396"></p>
<p>For example, choose your preferred training time frame (10 days) and the instance type and count (16 <code>ml.p5.48xlarge</code>) for your SageMaker HyperPod cluster, and then choose <strong>Find training plan</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-92819 size-full" style="width: 90%;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-training-plans-2-find-plans-1.png" alt="" width="2214" height="1530"></p>
<p>SageMaker HyperPod suggests a training plan that is split into two five-day segments, along with the total upfront price for the plan.</p>
<p><img loading="lazy" class="aligncenter wp-image-92629 size-full" style="width: 80%;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/2024-sagemaker-hyperpod-training-plans-3-plan-segments-1.png" alt="" width="1194" height="1066"></p>
<p>If you accept this training plan, add your training details in the next step and choose <strong>Create</strong> your plan.</p>
<p>After creating your training plan, you can see it in the list of training plans. You have to pay upfront for a plan within 12 hours of creating it. In this example, one plan is in the <strong>Active</strong> state and has already started, with all its instances in use. The second plan is <strong>Scheduled</strong> to start later, but you can already submit jobs that start automatically when the plan begins.</p>
<p><img loading="lazy" class="aligncenter wp-image-92820 size-full" style="width: 80%;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-training-plans-4-training-plans-scheduled.png" alt="" width="1760" height="1180"></p>
<p>In the active state, the compute resources are available in SageMaker HyperPod, resume automatically after pauses in availability, and terminate at the end of the plan. The first segment is currently running, and another segment is queued to run after it.</p>
<p><img loading="lazy" class="aligncenter wp-image-92821 size-full" style="width: 80%;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-training-plans-5-experiment-training-1.png" alt="" width="1736" height="2584"></p>
<p>This is similar to the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html">Managed Spot training in SageMaker AI</a>, where SageMaker AI takes care of instance interruptions and continues the training with no manual intervention. To learn more, visit the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/reserve-capacity-with-training-plans.html">SageMaker HyperPod training plans</a> in the Amazon SageMaker AI Developer Guide.</p>
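<p>Training plans can also be searched and created programmatically. The following is a hedged sketch using the <code>SearchTrainingPlanOfferings</code> and <code>CreateTrainingPlan</code> APIs through boto3; treat the parameter names and values here as assumptions to verify against the SageMaker AI API reference:</p>
<pre><code class="lang-python">import boto3
from datetime import datetime, timedelta

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Assumed request: 16 ml.p5.48xlarge instances for a HyperPod cluster,
# for 10 days (240 hours), starting within the next two weeks.
offerings = sagemaker.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",
    InstanceCount=16,
    StartTimeAfter=datetime.now(),
    EndTimeBefore=datetime.now() + timedelta(days=14),
    DurationHours=240,
    TargetResources=["hyperpod-cluster"],
)

# Accept the first suggested offering (hypothetical choice).
offering_id = offerings["TrainingPlanOfferings"][0]["TrainingPlanOfferingId"]
plan = sagemaker.create_training_plan(
    TrainingPlanName="p5-10-day-plan",
    TrainingPlanOfferingId=offering_id,
)
print(plan["TrainingPlanArn"])</code></pre>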
<p><u><strong>Now available</strong></u><br> Amazon SageMaker HyperPod training plans are now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions and support <code>ml.p4d.48xlarge</code>, <code>ml.p5.48xlarge</code>, <code>ml.p5e.48xlarge</code>, <code>ml.p5en.48xlarge</code>, and <code>ml.trn2.48xlarge</code> instances. Trn2 and P5en instances are available only in the US East (Ohio) Region. To learn more, visit the <a href="https://aws.amazon.com/sagemaker/hyperpod">SageMaker HyperPod product page</a> and <a href="https://aws.amazon.com/sagemaker/pricing">SageMaker AI pricing page</a>.</p>
<p>Give HyperPod training plans a try in the <a href="https://console.aws.amazon.com/sagemaker">Amazon SageMaker AI console</a> and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker">AWS re:Post for SageMaker AI</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://twitter.com/channyun">Channy</a></p>Maximize accelerator utilization for model development with new Amazon SageMaker HyperPod task governance
https://aws.amazon.com/blogs/aws/maximize-accelerator-utilization-for-model-development-with-new-amazon-sagemaker-hyperpod-task-governance/
<![CDATA[Channy Yun (윤석찬)]]>Wed, 04 Dec 2024 16:57:30 +0000<![CDATA[Amazon SageMaker]]><![CDATA[Amazon SageMaker HyperPod]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]><![CDATA[Launch]]><![CDATA[News]]>a9a05b0a210bac498d6dd7af724ab63c2ca16c6aEnable priority-based resource allocation, fair-share utilization, and automated task preemption for optimal compute utilization across teams.<p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/sagemaker/hyperpod/">Amazon SageMaker HyperPod</a> task governance, a new innovation to easily and centrally manage and maximize GPU and Trainium utilization across <a href="https://aws.amazon.com/ai/generative-ai/">generative AI</a> model development tasks, such as training, fine-tuning, and inference.</p>
<p>Customers tell us that they’re rapidly increasing investment in generative AI projects, but they face challenges in efficiently allocating limited compute resources. The lack of dynamic, centralized governance for resource allocation leads to inefficiencies, with some projects underutilizing resources while others stall. This situation burdens administrators with constant replanning, causes delays for data scientists and developers, and results in untimely delivery of AI innovations and cost overruns due to inefficient use of resources.</p>
<p>With SageMaker HyperPod task governance, you can accelerate time to market for AI innovations while avoiding cost overruns due to underutilized compute resources. With a few steps, administrators can set up quotas governing compute resource allocation based on project budgets and task priorities. Data scientists or developers can create tasks such as model training, fine-tuning, or evaluation, which SageMaker HyperPod automatically schedules and executes within allocated quotas.</p>
<p>SageMaker HyperPod task governance manages resources, automatically freeing up compute from lower-priority tasks when high-priority tasks need immediate attention. It does this by pausing low-priority training tasks, saving checkpoints, and resuming them later when resources become available. Additionally, idle compute within a team’s quota can be automatically used to accelerate another team’s waiting tasks.</p>
<p>Data scientists and developers can continuously monitor their task queues, view pending tasks, and adjust priorities as needed. Administrators can also monitor and audit scheduled tasks and compute resource usage across teams and projects and, as a result, they can adjust allocations to optimize costs and improve resource availability across the organization. This approach promotes timely completion of critical projects while maximizing resource efficiency.</p>
<p><strong><u>Getting started with SageMaker HyperPod task governance</u></strong><br> Task governance is available for <a href="https://aws.amazon.com/blogs/machine-learning/introducing-amazon-eks-support-in-amazon-sagemaker-hyperpod/">Amazon EKS clusters in HyperPod</a>. Find <strong>Cluster Management</strong> under <strong>HyperPod Clusters</strong> in the <a href="https://console.aws.amazon.com/sagemaker/home?#/cluster-management">Amazon SageMaker AI console</a> for provisioning and managing clusters. As an administrator, you can streamline the operation and scaling of HyperPod clusters through this console.</p>
<p><img loading="lazy" class="aligncenter wp-image-92807 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-1-clusters-1.png" alt="" width="2380" height="1004"></p>
<p>When you choose a HyperPod cluster, you can see a new <strong>Dashboard</strong>, <strong>Tasks</strong>, and <strong>Policies</strong> tab in the cluster detail page.</p>
<p><strong>1. New dashboard</strong><br> In the new dashboard, you can see an overview of cluster utilization together with team-based and task-based metrics.</p>
<p>First, you can view both point-in-time and trend-based metrics for critical compute resources, including GPU, vCPU, and memory utilization, across all instance groups.</p>
<p><img loading="lazy" class="aligncenter wp-image-92810 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-2-dashboard-1-1.png" alt="" width="1974" height="2596"></p>
<p>Next, you can gain comprehensive insights into team-specific resource management, focusing on GPU utilization versus compute allocation across teams. You can use customizable filters for teams and cluster instance groups to analyze metrics such as allocated GPUs/CPUs for tasks, borrowed GPUs/CPUs, and GPU/CPU utilization.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-92117" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sagemaker-hyperpod-task-governance-2-dashboard-2.png" alt="" width="1972" height="3292"></p>
<p>You can also assess task performance and resource allocation efficiency using metrics such as counts of running, pending, and preempted tasks, as well as average task runtime and wait time. To gain comprehensive observability into your SageMaker HyperPod cluster resources and software components, you can integrate with <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html">Amazon CloudWatch Container Insights</a> or <a href="https://aws.amazon.com/grafana/">Amazon Managed Grafana</a>.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-92118" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sagemaker-hyperpod-task-governance-2-dashboard-3.png" alt="" width="1974" height="2098"></p>
<p><strong>2. Create and manage a cluster policy</strong><br> To enable task prioritization and fair-share resource allocation, you can configure a cluster policy that prioritizes critical workloads and distributes idle compute across teams defined in compute allocations.</p>
<p><img loading="lazy" class="aligncenter wp-image-92811 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-3-policy-2-1.png" alt="" width="1950" height="1976"></p>
<p>To configure priority classes and fair sharing of borrowed compute in cluster settings, choose <strong>Edit</strong> in the <strong>Cluster policy</strong> section.</p>
<p>You can define how tasks waiting in the queue are admitted for task prioritization: <strong>First-come-first-serve</strong> by default or <strong>Task ranking</strong>. When you choose task ranking, tasks waiting in the queue are admitted in the priority order defined in this cluster policy. Tasks of the same priority class are executed on a first-come-first-serve basis.</p>
<p>You can also configure how idle compute is allocated across teams: <strong>First-come-first-serve</strong> or <strong>Fair-share</strong> by default. The fair-share setting enables teams to borrow idle compute based on their assigned weights, which are configured in relative compute allocations. This enables every team to get a fair share of idle compute to accelerate their waiting tasks.</p>
<p><img loading="lazy" class="aligncenter wp-image-92812 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-3-edit-policy-1.png" alt="" width="1672" height="2058"></p>
<p>In the <strong>Compute allocation</strong> section of the <strong>Policies</strong> page, you can create and edit compute allocations to distribute compute resources among teams, enable settings that allow teams to lend and borrow idle compute, configure preemption of their own low-priority tasks, and assign fair-share weights to teams.</p>
<p>In the <strong>Team</strong> section, set a team name and a corresponding Kubernetes namespace will be created for your data science and machine learning (ML) teams to use. You can set a fair-share weight for a more equitable distribution of unused capacity across your teams and enable the preemption option based on task priority, allowing higher-priority tasks to preempt lower-priority ones.</p>
<p>In the <strong>Compute</strong> section, you can add and allocate instance type quotas to teams. Additionally, you can allocate quotas for instance types not yet available in the cluster, allowing for future expansion.</p>
<p><img loading="lazy" class="aligncenter wp-image-92813 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-3-edit-compute-allocation-1.png" alt="" width="1668" height="2998"></p>
<p>You can enable teams to share idle compute resources by allowing them to lend their unused capacity to other teams. This borrowing model is reciprocal: teams can only borrow idle compute if they are also willing to share their own unused resources with others. You can also specify the borrow limit that enables teams to borrow compute resources over their allocated quota.</p>
<p><strong>3. Run your training task in SageMaker HyperPod cluster</strong><br> As a data scientist, you can submit a training job and use the quota allocated for your team, using the <a href="https://github.com/aws/sagemaker-hyperpod-cli">HyperPod Command Line Interface (CLI)</a> command. With the HyperPod CLI, you can start a job and specify the corresponding namespace that has the allocation.</p>
<pre><code class="lang-bash">$ hyperpod start-job --name smpv2-llama2 --namespace hyperpod-ns-ml-engineers
Successfully created job smpv2-llama2
$ hyperpod list-jobs --all-namespaces
{
"jobs": [
{
"Name": "smpv2-llama2",
"Namespace": "hyperpod-ns-ml-engineers",
"CreationTime": "2024-09-26T07:13:06Z",
"State": "Running",
"Priority": "fine-tuning-priority"
},
...
]
}</code></pre>
<p>On the <strong>Tasks</strong> tab, you can see all tasks in your cluster. Each task has a different priority and capacity need according to its policy. If you run another task with a higher priority, the existing task is suspended so that the higher-priority task can run first.</p>
<p><img loading="lazy" class="aligncenter wp-image-92814 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/12/02/2024-sagemaker-hyperpod-task-governance-4-run-training-task-1.png" alt="" width="2066" height="894"></p>
<p>OK, now let’s check out a demo video showing what happens when a high-priority training task is added while running a low-priority task.</p>
<p><iframe loading="lazy" title="Get started with Amazon SageMaker HyperPod task governance" width="500" height="281" src="https://www.youtube-nocookie.com/embed/_AgG4gtWXV8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen sandbox="allow-scripts allow-same-origin"></iframe></p>
<p>To learn more, visit <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-operate-console-ui-governance.html">SageMaker HyperPod task governance</a> in the Amazon SageMaker AI Developer Guide.</p>
<p><u><strong>Now available</strong></u><br> Amazon SageMaker HyperPod task governance is now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. You can use HyperPod task governance at no additional cost. To learn more, visit the <a href="https://aws.amazon.com/sagemaker/hyperpod">SageMaker HyperPod product page</a>.</p>
<p>Give HyperPod task governance a try in the <a href="https://console.aws.amazon.com/sagemaker">Amazon SageMaker AI console</a> and send feedback to <a href="https://repost.aws/tags/TAT80swPyVRPKPcA0rsJYPuA/amazon-sagemaker">AWS re:Post for SageMaker</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://twitter.com/channyun">Channy</a></p>
<p><em>P.S. Special thanks to <a href="https://www.linkedin.com/in/nisha-nadkarni-317594124/">Nisha Nadkarni</a>, a senior generative AI specialist solutions architect at AWS for her contribution in creating a HyperPod testing environment.</em></p>Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries
https://aws.amazon.com/blogs/aws/amazon-sagemaker-lakehouse-integrated-access-controls-now-available-in-amazon-athena-federated-queries/
<![CDATA[Esra Kayabali]]>Tue, 03 Dec 2024 19:19:57 +0000<![CDATA[Amazon Athena]]><![CDATA[Announcements]]><![CDATA[AWS Lake Formation]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>1e6d01ff06ee67566a055a0e8806d3eec52d0fd6Connect, discover, and govern data across silos with Amazon SageMaker Lakehouse's new data catalog and permissions capabilities, enabling centralized access and fine-grained controls.<p>Today, we announced the next generation of <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>, which is a unified platform for data, analytics, and AI, bringing together widely-adopted AWS machine learning and analytics capabilities. At its core is <a href="https://aws.amazon.com/blogs/aws/introducing-the-next-generation-of-amazon-sagemaker-the-center-for-all-your-data-analytics-and-ai">SageMaker Unified Studio (preview)</a>, a single data and AI development environment for data exploration, preparation and integration, big data processing, fast SQL analytics, model development and training, and generative AI application development. This announcement includes Amazon SageMaker Lakehouse, a capability that unifies data across data lakes and data warehouses, helping you build powerful analytics and artificial intelligence and machine learning (AI/ML) applications on a single copy of data.</p>
<p>In addition to these launches, I’m happy to announce data catalog and permissions capabilities in Amazon SageMaker Lakehouse, helping you connect, discover, and manage permissions to data sources centrally.</p>
<p>Organizations today store data across various systems to optimize for specific use cases and scale requirements. This often results in data siloed across data lakes, data warehouses, databases, and streaming services. Analysts and data scientists face challenges when trying to connect to and analyze data from these diverse sources. They must set up specialized connectors for each data source, manage multiple access policies, and often resort to copying data, leading to increased costs and potential data inconsistencies.</p>
<p>The new capability addresses these challenges by simplifying the process of connecting to popular data sources, cataloging them, applying permissions, and making the data available for analysis through SageMaker Lakehouse and <a href="https://aws.amazon.com/athena">Amazon Athena</a>. You can use the <a href="https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html">AWS Glue Data Catalog</a> as a single metadata store for all data sources, regardless of location. This provides a centralized view of all available data.</p>
<p>Data source connections are created once and can be reused, so you don’t need to set up connections repeatedly. As you connect to the data sources, databases and tables are automatically cataloged and registered with <a href="https://console.aws.amazon.com/lakeformation/">AWS Lake Formation</a>. Once cataloged, you grant access to those databases and tables to data analysts, so they don’t have to go through separate steps of connecting to each data source and don’t need to know the underlying data source credentials. Lake Formation permissions can be used to define fine-grained access control (FGAC) policies across data lakes, data warehouses, and online transaction processing (OLTP) data sources, providing consistent enforcement when querying with Athena. Data remains in its original location, eliminating the need for costly and time-consuming data transfers or duplications. You can create or reuse existing data source connections in Data Catalog and configure <a href="https://docs.aws.amazon.com/athena/latest/ug/connectors-available.html">built-in connectors</a> to multiple data sources, including <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a>, <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> (preview), Google BigQuery, and more.</p>
<p><span style="text-decoration: underline;"><strong>Getting started with the integration between Athena and Lake Formation<br> </strong></span>To showcase this capability, I use a preconfigured environment that incorporates Amazon DynamoDB as a data source. The environment is set up with appropriate tables and data to effectively demonstrate the capability. I use the SageMaker Unified Studio (preview)<strong> </strong>interface for this demonstration.</p>
<p>To begin, I go to SageMaker Unified Studio (preview) through the Amazon SageMaker domain. This is where you can create and manage projects, which serve as shared workspaces. These projects allow team members to collaborate, work with data, and develop ML models together. Creating a project automatically sets up AWS Glue Data Catalog databases, establishes a catalog for Redshift Managed Storage (RMS) data, and provisions necessary permissions.</p>
<p>To manage projects, you can either view a comprehensive list of existing projects by selecting <strong>Browse all projects</strong>, or you can create a new project by choosing <strong>Create project</strong>. I use two existing projects: sales-group, where administrators have full access privileges to all data, and marketing-project, where analysts operate under restricted data access permissions. This setup effectively illustrates the contrast between administrative and limited user access levels.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/10-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-92360" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/10-LaunchMarketingIntake1280.png" alt="" width="1924" height="878"></a></p>
<p>In this step, I set up a federated catalog for the target data source, which is Amazon DynamoDB. I go to <strong>Data</strong> in the left navigation pane and choose the <strong>+</strong> (plus) sign to <strong>Add data</strong>. I choose <strong>Add connection</strong> and then I choose <strong>Next</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/07-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-91970" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/07-LaunchMarketingIntake1280.png" alt="" width="1612" height="852"></a></p>
<p>I choose <strong>Amazon DynamoDB </strong>and choose <strong>Next</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/11-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-92361" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/11-LaunchMarketingIntake1280.png" alt="" width="957" height="972"></a></p>
<p>I enter the details and choose <strong>Add data</strong>. Now, I have the Amazon DynamoDB federated catalog created in SageMaker Lakehouse. This is where your administrator gives you access using resource policies. I’ve already configured the resource policies in this environment. Now, I’ll show you how fine-grained access controls work in SageMaker Unified Studio (preview).</p>
<p>I begin by selecting the <strong>sales-group</strong> project, which is where administrators maintain and have full access to customer data. This dataset contains fields such as zip codes, customer IDs, and phone numbers. To analyze this data, I can execute queries using <strong>Query with Athena</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/12-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-92372" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/12-LaunchMarketingIntake1280.png" alt="" width="1924" height="971"></a></p>
<p>Upon selecting <strong>Query with Athena</strong>, the Query Editor launches automatically, providing a workspace where I can compose and execute SQL queries against the lakehouse. This integrated query environment offers a seamless experience for data exploration and analysis.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/13a-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-92375" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/13a-LaunchMarketingIntake1280.png" alt="" width="1924" height="970"></a></p>
<p>In the second part, I switch to the <strong>marketing-project</strong> environment to demonstrate the perspective of an analyst and verify that the fine-grained access control permissions are properly implemented. Through example queries, we can observe how analysts interact with the data while being subject to the established security controls.</p>
<p>Using the <strong>Query with Athena</strong> option, I execute a SELECT statement on the table to verify the access controls. The results confirm that, as expected, I can only view the <strong>zipcode</strong> and <strong>cust_id</strong> columns, while the <strong>phone</strong> column remains restricted based on the configured permissions.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/14-LaunchMarketingIntake1280.png"><img loading="lazy" class="alignnone size-full wp-image-92385" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/14-LaunchMarketingIntake1280.png" alt="" width="1924" height="973"></a></p>
<p>With these new data catalog and permissions capabilities in Amazon SageMaker Lakehouse, you can now streamline your data operations, enhance security governance, and accelerate AI/ML development while maintaining data integrity and compliance across your entire data ecosystem.</p>
<p><span style="text-decoration: underline;"><strong>Now available</strong><br> </span>Data catalog and permissions in Amazon SageMaker Lakehouse simplifies interactive analytics through federated query when connecting to a unified catalog and permissions with Data Catalog across multiple data sources, providing a single place to define and enforce fine-grained security policies across data lakes, data warehouses, and OLTP data sources for a high-performing query experience.</p>
<p>You can use this capability in US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo) AWS Regions.</p>
<p>To get started with this new capability, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/lakehouse.html">Amazon SageMaker Lakehouse</a> documentation.</p>
<a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a>Amazon SageMaker Lakehouse and Amazon Redshift supports zero-ETL integrations from applications
https://aws.amazon.com/blogs/aws/introducing-amazon-sagemaker-lakehouse-support-for-zero-etl-integrations-from-applications/
<![CDATA[Veliswa Boya]]>Tue, 03 Dec 2024 19:14:23 +0000<![CDATA[Amazon Redshift]]><![CDATA[Amazon SageMaker]]><![CDATA[Amazon Simple Storage Service (S3)]]><![CDATA[Analytics]]><![CDATA[Announcements]]><![CDATA[AWS Glue]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>4d7d26fb5b1d55256de9354799cf5117c141668fSimplify data replication and ingestion from applications such as Salesforce, SAP, ServiceNow, and Zendesk, to Amazon SageMaker Lakehouse and Amazon Redshift.<p>Today, we announced the general availability of <a href="https://aws.amazon.com/sagemaker/lakehouse">Amazon SageMaker Lakehouse</a> and <a href="https://aws.amazon.com/redshift/?nc2=h_ql_prod_an_rs">Amazon Redshift</a> support for zero-ETL integrations from applications. Amazon SageMaker Lakehouse unifies all your data across <a href="https://aws.amazon.com/s3/?nc2=h_ql_prod_st_s3">Amazon Simple Storage Service (Amazon S3)</a> data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in-place with all Apache Iceberg compatible tools and engines. Zero-ETL is a set of fully managed integrations by AWS that minimizes the need to build ETL data pipelines for common ingestion and replication use cases. With zero-ETL integrations from applications such as Salesforce, SAP, and Zendesk, you can reduce time spent building data pipelines and focus on running unified analytics on all your data in Amazon SageMaker Lakehouse and Amazon Redshift.</p>
<p>As organizations rely on an increasingly diverse array of digital systems, data fragmentation has become a significant challenge. Valuable information is often scattered across multiple repositories, including databases, applications, and other platforms. To harness the full potential of their data, businesses must enable access and consolidation from these varied sources. In response to this challenge, users build data pipelines to extract and load (EL) data from multiple applications into centralized data lakes and data warehouses. Using zero-ETL, you can efficiently replicate valuable data from your customer support, relationship management, and enterprise resource planning (ERP) applications to data lakes and data warehouses for analytics and AI/ML, saving you weeks of engineering effort needed to design, build, and test data pipelines.</p>
<p><span style="text-decoration: underline;"><strong>Prerequisites</strong></span></p>
<ul>
<li>An Amazon SageMaker Lakehouse catalog configured through <a href="https://aws.amazon.com/what-is/data-catalog/">AWS Glue Data Catalog</a> and <a href="https://aws.amazon.com/lake-formation/">AWS Lake Formation</a>.</li>
<li>An <a href="https://aws.amazon.com/glue/">AWS Glue</a> database that is configured for Amazon S3 where the data will be stored.</li>
<li>A <a href="https://aws.amazon.com/secrets-manager/">secret in AWS Secret Manager</a> to use for the connection to the data source. The credentials must contain the username and password that you use to sign in to your application.</li>
<li>An <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> role for the Amazon SageMaker Lakehouse or Amazon Redshift job to use. The role must grant access to all resources used by the job, including Amazon S3 and AWS Secrets Manager.</li>
<li>A valid AWS Glue connection to the desired application.</li>
</ul>
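<p>As a concrete example, here’s a minimal sketch of creating that secret with boto3. The secret name and credentials are placeholders, and the key names expected by the connection depend on the authentication method you choose:</p>
<pre><code class="lang-python">import json
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Placeholder name and credentials for the application you connect to.
secretsmanager.create_secret(
    Name="zero-etl/salesforce-credentials",
    SecretString=json.dumps({
        "USERNAME": "integration-user@example.com",
        "PASSWORD": "replace-with-your-password",
    }),
)</code></pre>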
<p><span style="text-decoration: underline;"><strong>How it works – creating a Glue connection prerequisite</strong></span><br> I start by creating a connection using the <a href="https://console.aws.amazon.com/gluestudio/home#/zero-etl-integrations">AWS Glue console</a>. I opt for a Salesforce integration as the data source.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/createconnect01.png"><img loading="lazy" class="aligncenter size-large wp-image-92620" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/createconnect01-1024x395.png" alt="" width="1024" height="395"></a></p>
<p>Next, I provide the location of the Salesforce instance to be used for the connection, together with the rest of the required information. Be sure to use the <code>.salesforce.com</code> domain instead of <code>.force.com</code>. You can choose between two authentication methods: JSON Web Token (JWT), which is obtained through Salesforce access tokens, or OAuth login through the browser.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/createconnect2-1.png"><img loading="lazy" class="aligncenter size-large wp-image-91625" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/createconnect2-1-1024x444.png" alt="" width="1024" height="444"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/createconnect3.png"><img loading="lazy" class="aligncenter size-large wp-image-89676" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/createconnect3-1024x293.png" alt="" width="1024" height="293"></a></p>
<p>I review all the information and then choose <strong>Create connection</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/createconnect4-1.png"><img loading="lazy" class="aligncenter size-large wp-image-91628" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/createconnect4-1-1024x452.png" alt="" width="1024" height="452"></a></p>
<p>After I sign in to the Salesforce instance through a popup (not shown here), the connection is successfully created.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/connection6.png"><img loading="lazy" class="aligncenter size-large wp-image-89679" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/connection6-1024x455.png" alt="" width="1024" height="455"></a></p>
<p><strong><span style="text-decoration: underline;">How it works – creating a zero-ETL integration</span></strong><br> Now that I have a connection, I choose <strong>zero-ETL integrations</strong> from the left navigation panel, then choose <strong>Create zero-ETL integration</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration1.png"><img loading="lazy" class="aligncenter size-large wp-image-89685" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration1-1024x384.png" alt="" width="1024" height="384"></a></p>
<p>First, I choose the source type for my integration – in this case Salesforce – so I can use my recently created connection.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration2.png"><img loading="lazy" class="aligncenter size-large wp-image-89688" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration2-1024x242.png" alt="" width="1024" height="242"></a></p>
<p>Next, I select objects from the data source that I want to replicate to the target database in AWS Glue.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration3-1.png"><img loading="lazy" class="aligncenter size-large wp-image-89689" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration3-1-1024x487.png" alt="" width="1024" height="487"></a></p>
<p>While in the process of adding objects, I can quickly preview both data and metadata to confirm that I am selecting the correct object.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration4.png"><img loading="lazy" class="aligncenter size-large wp-image-89691" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration4-1024x459.png" alt="" width="1024" height="459"></a></p>
<p>By default, the zero-ETL integration synchronizes data from the source to the target every 60 minutes. However, you can change this interval to reduce the cost of replication for cases that do not require frequent updates.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration5.png"><img loading="lazy" class="aligncenter size-large wp-image-89692" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration5-1024x362.png" alt="" width="1024" height="362"></a></p>
<p>I review and then choose <strong>Create and launch integration</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration6.png"><img loading="lazy" class="aligncenter size-large wp-image-89695" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration6-1024x409.png" alt="" width="1024" height="409"></a></p>
<p>The data in the source (the Salesforce instance) has now been replicated to the target database <code>salesforcezeroETL</code> in my AWS account. This integration has two phases. Phase 1, the initial load, ingests all the data for the selected objects and may take between 15 minutes and a few hours, depending on the size of the data in those objects. Phase 2, the incremental load, detects any changes (such as new, updated, or deleted records) and applies them to the target.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration7.png"><img loading="lazy" class="aligncenter size-large wp-image-89697" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration7-1024x247.png" alt="" width="1024" height="247"></a></p>
<p>Each of the objects that I selected earlier has been stored in its respective table within the database. From here I can view the <strong>Table data</strong> for each of the objects that have been replicated from the data source.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration8.png"><img loading="lazy" class="aligncenter size-large wp-image-89696" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration8-1024x252.png" alt="" width="1024" height="252"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration9.png"><img loading="lazy" class="aligncenter size-large wp-image-89699" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration9-1024x476.png" alt="" width="1024" height="476"></a></p>
<p>Lastly, here’s a view of the data in Salesforce. As new entities are created, or existing entities are updated or changed in Salesforce, the data changes will synchronize to the target in AWS Glue automatically.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration10.png"><img loading="lazy" class="aligncenter size-large wp-image-89700" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/11/integration10-1024x215.png" alt="" width="1024" height="215"></a></p>
<p><span style="text-decoration: underline;"><strong>Now available</strong></span><br> Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from applications is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>. For pricing information, visit the <a href="https://aws.amazon.com/glue/pricing/">AWS Glue pricing page</a>.</p>
<p>To learn more, visit our <a href="https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html">AWS Glue User Guide</a>. Send feedback to <a href="https://repost.aws/tags/questions/TABBptUaVOS1mAM7TXuTuxWQ">AWS re:Post for AWS Glue</a> or through your usual AWS Support contacts. Get started by creating a new <a href="https://console.aws.amazon.com/gluestudio/home#/zero-etl-integrations">zero-ETL integration</a> today.</p>
<p>– <a href="https://www.linkedin.com/in/veliswa-boya/">Veliswa</a></p>Simplify analytics and AI/ML with new Amazon SageMaker Lakehouse
https://aws.amazon.com/blogs/aws/simplify-analytics-and-aiml-with-new-amazon-sagemaker-lakehouse/
<![CDATA[Esra Kayabali]]>Tue, 03 Dec 2024 19:05:05 +0000<![CDATA[Amazon Machine Learning]]><![CDATA[Amazon Redshift]]><![CDATA[Amazon SageMaker]]><![CDATA[Amazon Simple Storage Service (S3)]]><![CDATA[Analytics]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>c23b1db3b4b92a9aab6ab70947c0b20ce61ad660Unifying data silos, Amazon SageMaker Lakehouse seamlessly integrates S3 data lakes and Redshift warehouses, enabling unified analytics and AI/ML on a single data copy through open Apache Iceberg APIs and fine-grained access controls.<p>Today, I’m very excited to announce the general availability of Amazon SageMaker Lakehouse, a capability that unifies data across <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> data lakes and <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a> data warehouses, helping you build powerful analytics and artificial intelligence and machine learning (AI/ML) applications on a single copy of data. SageMaker Lakehouse is a part of the next generation of <a href="https://aws.amazon.com/blogs/aws/introducing-the-next-generation-of-amazon-sagemaker-the-center-for-all-your-data-analytics-and-ai">Amazon SageMaker</a>, which is a unified platform for data, analytics, and AI that brings together widely adopted AWS machine learning and analytics capabilities and delivers an integrated experience for analytics and AI.</p>
<p>Customers want to do more with their data. To move faster on their analytics journey, they pick the right storage and databases to store their data. That data is spread across data lakes, data warehouses, and different applications, creating data silos that make it difficult to access and utilize. This fragmentation leads to duplicate data copies and complex data pipelines, which in turn increases costs for the organization. Furthermore, customers are constrained to specific query engines and tools, because how and where the data is stored limits their options. This restriction hinders their ability to work with the data as they would prefer. Lastly, inconsistent data access makes it challenging for customers to make informed business decisions.</p>
<p>SageMaker Lakehouse addresses these challenges by helping you to unify data across Amazon S3 data lakes and Amazon Redshift data warehouses. It offers you the flexibility to access and query data in-place with all engines and tools compatible with Apache Iceberg. With SageMaker Lakehouse, you can define fine-grained permissions centrally and enforce them across multiple AWS services, simplifying data sharing and collaboration. Bringing data into your SageMaker Lakehouse is easy. In addition to seamlessly accessing data from your existing data lakes and data warehouses, you can use zero-ETL from operational databases such as <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a>, <a href="https://aws.amazon.com/rds/mysql/">Amazon RDS for MySQL</a>, <a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a>, as well as applications such as Salesforce and SAP. SageMaker Lakehouse fits into your existing environments.</p>
<p><span style="text-decoration: underline;"><strong>Get started with SageMaker Lakehouse<br> </strong></span>For this demonstration, I use a preconfigured environment that has multiple AWS data sources. I go to the Amazon SageMaker Unified Studio (preview) console, which provides an integrated development experience for all your data and AI. Using Unified Studio, you can seamlessly access and query data from various sources through SageMaker Lakehouse, while using familiar AWS tools for analytics and AI/ML.</p>
<p>This is where you can create and manage projects, which serve as shared workspaces. These projects allow team members to collaborate, work with data, and develop AI models together. Creating a project automatically sets up AWS Glue Data Catalog databases, establishes a catalog for Redshift Managed Storage (RMS) data, and provisions necessary permissions. You can get started by creating a new project or continue with an existing project.</p>
<p>To create a new project, I choose <strong>Create project</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/07-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-91624" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/07-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>I have two project profile options to build a lakehouse and interact with it. The first is <strong>Data analytics and AI-ML model development</strong>, where you can analyze data and build ML and generative AI models powered by <a href="https://aws.amazon.com/emr">Amazon EMR</a>, <a href="https://aws.amazon.com/glue/">AWS Glue</a>, Amazon Athena, Amazon SageMaker AI, and SageMaker Lakehouse. The second is <strong>SQL analytics</strong>, where you can analyze your data in SageMaker Lakehouse using SQL. For this demo, I proceed with <strong>SQL analytics</strong>.</p>
<p>I enter a project name in the <strong>Project name</strong> field and choose <strong>SQL analytics</strong> under <strong>Project profile</strong>. I choose <strong>Continue</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/25-LaunchMarketingIntake1213-1.png"><img loading="lazy" class="alignnone size-full wp-image-92542" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/25-LaunchMarketingIntake1213-1.png" alt="" width="1924" height="1072"></a></p>
<p>Under <strong>Tooling</strong>, I enter values for all the parameters, including the values to create my <strong>Lakehouse</strong> databases and my <strong>Redshift Serverless</strong> resources. Finally, I enter a name for my catalog under <strong>Lakehouse Catalog</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/12-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92105" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/12-LaunchMarketingIntake1213.png" alt="" width="1924" height="1384"></a></p>
<p>In the next step, I review the resources and choose <strong>Create project</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/13-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92106" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/13-LaunchMarketingIntake1213.png" alt="" width="1924" height="782"></a></p>
<p>After the project is created, I observe the project details.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/14-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92108" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/14-LaunchMarketingIntake1213.png" alt="" width="1924" height="1200"></a></p>
<p>I go to <strong>Data</strong> in the navigation pane and choose the + (plus) sign to add data. I choose <strong>Create catalog</strong> to create a new catalog, then choose <strong>Add data</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/21-LaunchMarketingIntake1213-1.png"><img loading="lazy" class="alignnone size-full wp-image-92122" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/21-LaunchMarketingIntake1213-1.png" alt="" width="958" height="996"></a></p>
<p>After the RMS catalog is created, I choose <strong>Build</strong> from the navigation pane and then choose <strong>Query Editor</strong> under <strong>Data Analysis & Integration</strong> to create a schema under the RMS catalog, create a table, and then load the table with sample sales data.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/22-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92197" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/22-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>After entering the SQL queries into the designated cells, I choose <strong>Select data source</strong> from the right dropdown menu to establish a database connection to Amazon Redshift data warehouse. This connection allows me to execute the queries and retrieve the desired data from the database.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/23a-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92208" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/23a-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>Once the database connection is successfully established, I choose <strong>Run all</strong> to execute all queries and monitor the execution progress until all results are displayed.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/24a-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92209" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/24a-LaunchMarketingIntake1213.png" alt="" width="1920" height="1730"></a></p>
<p>For this demonstration, I use two additional pre-configured catalogs. A catalog is a container that organizes your lakehouse object definitions such as schema and tables. The first is an Amazon S3 data lake catalog (<strong>test-s3-catalog</strong>) that stores customer records, containing detailed transactional and demographic information. The second is a lakehouse catalog (<strong>churn_lakehouse</strong>) dedicated to storing and managing customer churn data. This integration creates a unified environment where I can analyze customer behavior alongside churn predictions.</p>
<p>From the navigation pane, I choose <strong>Data</strong> and locate my catalogs under the <strong>Lakehouse</strong> section. SageMaker Lakehouse offers multiple analysis options, including <strong>Query with Athena</strong>, <strong>Query with Redshift</strong>, and <strong>Open in Jupyter Lab notebook</strong>.</p>
<p>Note that you need to choose <strong>Data analytics and AI-ML model development</strong> profile when you create a project, if you want to use <strong>Open in Jupyter Lab notebook</strong> option. If you choose <strong>Open in Jupyter Lab notebook</strong>, you can interact with SageMaker Lakehouse using Apache Spark via EMR 7.5.0 or AWS Glue 5.0 by configuring the Iceberg REST catalog, enabling you to process data across your data lakes and data warehouses in a unified manner.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/20-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92123" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/20-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>Here’s what querying from a JupyterLab notebook looks like:</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/25-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92233" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/25-LaunchMarketingIntake1213.png" alt="" width="2888" height="1202"></a></p>
<p>I continue by choosing <strong>Query with Athena</strong>. With this option, I can use the serverless query capability of Amazon Athena to analyze the sales data directly within SageMaker Lakehouse. Upon selecting <strong>Query with Athena</strong>, the <strong>Query Editor</strong> launches automatically, providing a workspace where I can compose and execute SQL queries against the lakehouse. This integrated query environment offers a seamless experience for data exploration and analysis, complete with syntax highlighting and auto-completion features to enhance productivity.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/19-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92124" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/19-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>I can also use <strong>Query with Redshift</strong> option to run SQL queries against the lakehouse.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/18-LaunchMarketingIntake1213.png"><img loading="lazy" class="alignnone size-full wp-image-92125" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/18-LaunchMarketingIntake1213.png" alt="" width="1924" height="996"></a></p>
<p>SageMaker Lakehouse offers a comprehensive solution for modern data management and analytics. By unifying access to data across multiple sources, supporting a wide range of analytics and ML engines, and providing fine-grained access controls, SageMaker Lakehouse helps you make the most of your data assets. Whether you’re working with data lakes in Amazon S3, data warehouses in Amazon Redshift, or operational databases and applications, SageMaker Lakehouse provides the flexibility and security you need to drive innovation and make data-driven decisions. You can use hundreds of connectors to integrate data from various sources. Additionally, you can access and query data in-place with federated query capabilities across third-party data sources.</p>
<p><span style="text-decoration: underline;"><strong>Now available</strong></span><br> You can access SageMaker Lakehouse through the <a href="https://console.aws.amazon.com/lakeformation">AWS Management Console</a>, APIs, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, or <a href="https://aws.amazon.com/developer/tools/">AWS SDKs</a>. You can also access it through the <a href="https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html">AWS Glue Data Catalog</a> and <a href="https://console.aws.amazon.com/lakeformation/">AWS Lake Formation</a>. SageMaker Lakehouse is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (Sao Paulo) <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a>.</p>
<p>For pricing information, visit the <a href="https://aws.amazon.com/sagemaker/lakehouse/pricing/">Amazon SageMaker Lakehouse pricing</a> page.</p>
<p>For more information on Amazon SageMaker Lakehouse and how it can simplify your data analytics and AI/ML workflows, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/lakehouse.html">Amazon SageMaker Lakehouse</a> documentation.</p>
<a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a>
<p><em>12/6/2024: Updated Region list</em></p>New Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse
https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-zero-etl-integration-with-amazon-sagemaker-lakehouse/
<![CDATA[Donnie Prakoso]]>Tue, 03 Dec 2024 18:59:06 +0000<![CDATA[Amazon Redshift]]><![CDATA[Amazon SageMaker]]><![CDATA[Analytics]]><![CDATA[Announcements]]><![CDATA[AWS Glue]]><![CDATA[AWS re:Invent]]><![CDATA[Database]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>af2b461f857c4ce41da8face12d0558422500d98Effortlessly analyze operational data in Amazon SageMaker Lakehouse, freeing developers from building custom pipelines and enabling seamless insights extraction.<p><a href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a>, a serverless NoSQL database, has been a go-to solution for over one million customers to build low-latency and high-scale applications. As data grows, organizations are constantly seeking ways to extract valuable insights from operational data, which is often stored in DynamoDB. However, to make the most of this data in Amazon DynamoDB for analytics and machine learning (ML) use cases, customers often build custom data pipelines—a time-consuming infrastructure task that adds little unique value to their core business.</p>
<p>Starting today, you can use Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse to run analytics and ML workloads in just a few clicks without consuming your DynamoDB table capacity. Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.</p>
<p>Zero-ETL is a set of integrations that eliminates or minimizes the need to build ETL data pipelines. This zero-ETL integration reduces the complexity of engineering efforts required to build and maintain data pipelines, benefiting users running analytics and ML workloads on operational data in Amazon DynamoDB without impacting production workflows.</p>
<p><strong><span style="text-decoration: underline;">Let’s get started<br></span></strong>For the following demo, I set up a zero-ETL integration for my data in Amazon DynamoDB with an <a href="https://aws.amazon.com/s3/?nc2=type_a">Amazon Simple Storage Service (Amazon S3)</a> data lake managed by Amazon SageMaker Lakehouse. Before setting up the zero-ETL integration, there are prerequisites to complete. To learn more about how to set these up, refer to this <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/amazon-sagemaker-lakehouse-for-DynamoDB.html">Amazon DynamoDB documentation</a> page.</p>
<p>With all the prerequisites completed, I can get started with this integration. I navigate to the <a href="https://aws.amazon.com/glue/">AWS Glue</a> console and select <strong>Zero-ETL integrations</strong> under <strong>Data Integration and ETL</strong>. Then, I choose <strong>Create zero-ETL integration</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-91888 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-0.png" alt="" width="3840" height="1938"></p>
<p>Here, I have options to select my data source. I choose <strong>Amazon DynamoDB</strong> and choose <strong>Next</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-91889 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-1.png" alt="" width="3098" height="2615"></p>
<p>Next, I need to configure the source and target details. In the <strong>Source details</strong> section, I select my Amazon DynamoDB table. In the <strong>Target details</strong> section, I specify the S3 bucket that I’ve set up in the AWS Glue Data Catalog.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91894" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-2.png" alt="" width="3430" height="1804"></p>
<p>To set up this integration, I need an IAM role that grants AWS Glue the necessary permissions. For guidance on configuring IAM permissions, visit the <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/amazon-sagemaker-lakehouse-for-DynamoDB.html">Amazon DynamoDB documentation</a> page. Also, if I haven’t configured a resource policy for my AWS Glue Data Catalog, I can select <strong>Fix it for me</strong> to automatically add the required resource policies.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91895" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-3-1.png" alt="" width="3456" height="1817"></p>
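<p>For a rough idea of what such a role involves, here’s a sketch that creates a role AWS Glue can assume and attaches an inline policy covering the DynamoDB export path and the target S3 bucket. The role name, ARNs, and the exact list of actions are assumptions for illustration; the documentation page linked above is the authoritative reference.</p>
<pre><code class="lang-python">
import json
import boto3

iam = boto3.client("iam")

# Trust policy so AWS Glue can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "glue.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="GlueDynamoDBZeroETLRole",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Illustrative permissions for exporting from DynamoDB and writing to the
# target S3 bucket -- treat this action list as an assumption.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DescribeTable",
                "dynamodb:ExportTableToPointInTime",
                "dynamodb:DescribeExport",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/my-table*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*",
            ],
        },
    ],
}

iam.put_role_policy(
    RoleName="GlueDynamoDBZeroETLRole",
    PolicyName="zero-etl-permissions",
    PolicyDocument=json.dumps(permissions),
)
</code></pre>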
<p>Here, I have options to configure the output. Under <strong>Data partitioning</strong>, I can either use DynamoDB table keys for partitioning or specify custom partition keys. After completing the configuration, I choose <strong>Next</strong>.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91898" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-5-1.png" alt="" width="3456" height="1815"></p>
<p>Because I select the <strong>Fix it for me</strong> checkbox, I need to review the required changes and choose <strong>Continue</strong> before I can proceed to the next step.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91896" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-4.png" alt="" width="1781" height="939"></p>
<p>On the next page, I have the flexibility to configure data encryption. I can use <a href="https://aws.amazon.com/kms/">AWS Key Management Service (AWS KMS)</a> or a custom encryption key. Then, I assign a name to the integration and choose <strong>Next</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-92568 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/news-2024-riv-ddb-glue-lakehouse-6-2.png" alt="" width="3456" height="1815"></p>
<p>On the last step, I need to review the configurations. When I’m happy, I choose <strong>Next</strong> to create the zero-ETL integration.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91900" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-7.png" alt="" width="3456" height="1815"></p>
<p>After the initial data ingestion completes, my zero-ETL integration will be ready for use. The completion time varies depending on the size of my source DynamoDB table.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91901" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-8.png" alt="" width="3456" height="1815"></p>
<p>If I navigate to <strong>Tables</strong> under <strong>Data Catalog</strong> in the left navigation panel, I can observe more details, including the <strong>Schema</strong>. Under the hood, this zero-ETL integration uses <a href="https://iceberg.apache.org/">Apache Iceberg</a> to transform the format and structure of my DynamoDB data as it is replicated into Amazon S3.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91902" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-9-1.png" alt="" width="3456" height="1713"></p>
<p>Lastly, I can confirm that all my data is available in my S3 bucket.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91903" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/26/news-2024-riv-ddb-glue-lakehouse-10.png" alt="" width="3456" height="1713"></p>
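<p>You can verify this programmatically as well. Here’s a quick sketch that lists the files the integration wrote; the bucket name and prefix are hypothetical placeholders.</p>
<pre><code class="lang-python">
import boto3

s3 = boto3.client("s3")

# List the data files the integration wrote to the target location.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="amzn-s3-demo-bucket", Prefix="dynamodb-zero-etl/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
</code></pre>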
<p>This zero-ETL integration significantly reduces the complexity and operational burden of data movement, and I can therefore focus on extracting insights rather than managing pipelines.</p>
<p><strong><span style="text-decoration: underline;">Available now</span><br></strong>This new zero-ETL capability is available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Hong Kong, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, Stockholm).</p>
<p>Explore how to streamline your data analytics workflows using Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse. Learn more about how to get started on the <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/amazon-sagemaker-lakehouse-for-DynamoDB.html">Amazon DynamoDB documentation</a> page.</p>
<p>Happy building!<br>— <a href="https://linkedin.com/in/donnieprakoso">Donnie</a></p>Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Data and AI Governance
https://aws.amazon.com/blogs/aws/discover-govern-and-collaborate-on-data-and-ai-securely-with-amazon-sagemaker-data-and-ai-governance/
<![CDATA[Esra Kayabali]]>Tue, 03 Dec 2024 18:49:59 +0000<![CDATA[Amazon DataZone]]><![CDATA[Amazon SageMaker]]><![CDATA[Announcements]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>6230a6c811d64f5ec14f23d6ed6bafc60424961aManage data and AI assets through a unified catalog, granular access controls, and a consistent policy enforcement. Establish trust via automation - boost productivity and innovation for data teams.<p>Today, we announced the next generation of <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>, which is a unified platform for data, analytics, and AI, bringing together widely-adopted AWS machine learning and analytics capabilities. This announcement includes Amazon SageMaker Data and AI Governance, a set of capabilities that streamline the management of data and AI assets.</p>
<p>Data teams often face challenges when trying to locate, access, and collaborate on data and AI models across their organizations. The process of discovering relevant assets, understanding their context, and obtaining proper access can be time-consuming and complex, potentially hindering productivity and innovation.</p>
<p><a href="https://aws.amazon.com/sagemaker/data-ai-governance">SageMaker Data and AI Governance</a> offers a comprehensive set of features by providing a unified experience for cataloging, discovering, and governing data and AI assets. It’s centered around SageMaker Catalog built on <a href="https://aws.amazon.com/datazone/">Amazon DataZone</a>, providing a centralized repository that is accessible through Amazon SageMaker Unified Studio (preview). The catalog is built directly into the SageMaker platform, offering seamless integration with existing SageMaker workflows and tools, helping engineers, data scientists, and analysts to safely find and use authorized data and models through advanced search features. With the SageMaker platform, users can safeguard and protect their AI models using guardrails and implementing responsible AI policies.</p>
<p>Here are some of the key Data and AI governance features of SageMaker:</p>
<ol>
<li><strong>Enterprise-ready business catalog</strong> – To add business context and make data and AI assets discoverable by everyone in the organization, you can customize the catalog with automated metadata generation which uses machine learning (ML) to automatically generate business names of data assets and columns within those assets. We improved metadata curation functionality, helping you attach multiple business glossary terms to assets and glossary terms to individual columns in the asset.</li>
<li><strong>Self-service for data and AI workers</strong> – To provide data autonomy for users to publish and consume data, you can customize and bring any type of asset to the catalog using APIs. Data publishers can automate metadata discovery through data source runs or manually publish files from the supported data sources, and enrich metadata with generative AI–generated data descriptions automatically as datasets are brought into the catalog. Data consumers can then use faceted search to quickly find, understand, and request access to data.</li>
<li><strong>Simplified access to data and tools</strong> – To govern data and AI assets based on business purpose, projects serve as business use case–based logical containers. You can create a project and collaborate on specific business use case–based groupings of people, data, and analytics tools. Within the project, you can create an environment that provides the necessary infrastructure to project members such as analytics and AI tools and storage so that project members can easily produce new data or consume data they have access to. This helps you add multiple capabilities and analytics tools to the same project, depending on your needs.</li>
<li><strong>Governed data and model sharing</strong> – Data producers own and manage access to data with a subscription approval workflow that allows consumers to request access and data owners to approve. You can now set up subscription terms to be attached to assets when published and automate subscription grant fulfillment for AWS managed data lakes and Amazon Redshift with customizations using <a href="https://aws.amazon.com/eventbridge">Amazon EventBridge</a> events for other sources.</li>
<li><strong>Bring a consistent level of AI safety across all your applications</strong> – Amazon Bedrock Guardrails helps evaluate user inputs and foundation model (FM) responses based on use case-specific policies, and provides an additional layer of safeguards regardless of the underlying FM. The AWS AI portfolio provides hundreds of built-in algorithms with pre-trained models from model hubs, including TensorFlow Hub, PyTorch Hub, Hugging Face, and MXNet GluonCV. You can also access built-in algorithms using the SageMaker Python SDK. Built-in algorithms cover common ML tasks, such as data classification (image, text, tabular) and sentiment analysis.</li>
</ol>
<p>For seamless integration with existing processes, SageMaker Data and AI Governance provides API support, enabling programmatic access for setup and configuration.</p>
<p><span style="text-decoration: underline;"><strong>How to use Amazon SageMaker Data and AI Governance<br> </strong></span>For this demonstration, I use a preconfigured environment. I go to the Amazon SageMaker Unified Studio (preview) console, which provides an integrated development experience for all your data and AI use cases. This is where you can create and manage projects, which serve as shared workspaces. These projects allow team members to collaborate, work with data, and develop ML models together.</p>
<p>Let me start with the <strong>Govern</strong> menu in the navigation bar.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/08-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92459" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/08-LaunchMarketingIntake-1342.png" alt="" width="1924" height="868"></a></p>
<p>The <strong>Govern</strong> menu introduces new data governance capabilities called domain units and authorization policies, which help you create business unit- and team-level organization and manage policies according to your business needs. With domain units, you can organize, create, search, and find data assets and projects associated with business units or teams. With authorization policies, you can set access policies for creating projects and glossaries.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/09-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92460" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/09-LaunchMarketingIntake-1342.png" alt="" width="1924" height="1335"></a></p>
<p>Domain units also help you with self-service governance over critical actions such as publishing data assets and utilizing compute resources within Amazon SageMaker. I choose a project and navigate to the <strong>Data sources</strong> tab in the left navigation pane. You can use this section to add new or manage existing data sources for publishing data assets to the business data catalog, making them discoverable for all users.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/10-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92461" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/28/10-LaunchMarketingIntake-1342.png" alt="" width="1924" height="905"></a></p>
<p>I return to the homepage and continue exploring by choosing <strong>Data Catalog</strong>, which serves as a centralized hub where users can explore and discover all available data assets across multiple data sources within the organization. This catalog connects to various data sources, including <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a>, <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, and <a href="https://aws.amazon.com/glue/">AWS Glue</a>.</p>
<p>The semantic search feature helps you find relevant data assets quickly and efficiently using natural language queries, which makes data discovery more intuitive. I enter <strong>events</strong> in the <em>Search data</em> area.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/02-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92324" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/02-LaunchMarketingIntake-1342.png" alt="" width="1834" height="867"></a></p>
<p>You can apply filters based on asset type, such as AWS Glue table and Amazon Redshift.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/03-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92329" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/03-LaunchMarketingIntake-1342.png" alt="" width="1831" height="848"></a></p>
<p>Amazon Q Developer integration helps you interact with data using conversational language, making it easier for users to find and understand data assets. You can use example commands such as “Show me datasets that relate to events” and “Show me datasets that relate to revenue.” The detailed view provides comprehensive information about each dataset, including AI-generated descriptions, data quality metrics, and data lineage, helping you understand the content and origin of the data.</p>
<p>The subscription process implements a controlled access mechanism where users must justify their need for data access, providing proper data governance and security. I choose <strong>Subscribe</strong> to request access.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/04-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92330" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/04-LaunchMarketingIntake-1342.png" alt="" width="1831" height="865"></a></p>
<p>In the pop-up window, I select a <strong>Project</strong>, provide a <strong>Reason for request</strong> such as <em>need access</em>, and choose <strong>Request</strong>. The request is sent to the data owner.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/05-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92339" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/05-LaunchMarketingIntake-1342.png" alt="" width="789" height="521"></a></p>
<p>This final step makes sure that data access is properly governed through a structured approval workflow, maintaining data security and compliance requirements. During the owner approval process, the data owner receives a notification and can review the request details before choosing to approve or deny access, after which the requester can access the data table if approved.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/06-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92342" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/06-LaunchMarketingIntake-1342.png" alt="" width="1831" height="760"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/07-LaunchMarketingIntake-1342.png"><img loading="lazy" class="alignnone size-full wp-image-92343" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/07-LaunchMarketingIntake-1342.png" alt="" width="743" height="697"></a></p>
<p><span style="text-decoration: underline;"><strong>Now available<br> </strong></span>Amazon SageMaker Data and AI Governance offers significant benefits for organizations looking to improve their data and AI asset management. The solution helps data scientists, engineers, and analysts overcome challenges in discovering and accessing resources by offering comprehensive features for cataloging, discovering, and governing data and AI assets, while providing security and compliance through structured approval workflows.</p>
<p>For pricing information, visit <a href="https://aws.amazon.com/sagemaker/pricing/">Amazon SageMaker pricing</a>.</p>
<p>To get started with Amazon SageMaker Data and AI Governance, visit <a href="https://docs.aws.amazon.com/sagemaker/">Amazon SageMaker Documentation</a>.</p>
<p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a></p>Announcing the general availability of data lineage in the next generation of Amazon SageMaker and Amazon DataZone
https://aws.amazon.com/blogs/aws/announcing-the-general-availability-of-data-lineage-in-the-next-generation-of-amazon-sagemaker-and-amazon-datazone/
<![CDATA[Esra Kayabali]]>Tue, 03 Dec 2024 18:45:49 +0000<![CDATA[Amazon DataZone]]><![CDATA[Analytics]]><![CDATA[Announcements]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>b714d909a67c9c97828775b0ed6cde0427895e55Realize visual traceability of data origins, transformations, and usage - bolstering trust, governance, and discoverability for strategic data-driven decisions.<p>Today, I’m happy to announce the general availability of data lineage in <a href="https://aws.amazon.com/datazone/">Amazon DataZone</a>, following its <a href="https://aws.amazon.com/blogs/aws/introducing-end-to-end-data-lineage-preview-visualization-in-amazon-datazone/">preview release</a> in June 2024. This feature is also extended as part of the catalog capabilities in the next generation of <a href="https://aws.amazon.com/sagemaker/data-ai-governance">Amazon SageMaker</a>, a unified platform for data, analytics, and AI.</p>
<p>Traditionally, business analysts have relied on manual documentation or personal connections to validate data origins, leading to inconsistent and time-consuming processes. Data engineers have struggled to evaluate the impact of changes to data assets, especially as self-service analytics adoption increases. Additionally, data governance teams have faced difficulties in enforcing practices and responding to auditor queries about data movement.</p>
<p>Data lineage in Amazon DataZone addresses the challenges faced by organizations striving to remain competitive by using their data for strategic analysis. It enhances data trust and validation by providing a visual, traceable history of data assets, enabling business analysts to quickly understand data origins without manual research. For data engineers, it facilitates impact analysis and troubleshooting by clearly showing relationships between assets and allowing easy tracing of data flows.</p>
<p>The feature supports data governance and compliance efforts by offering a comprehensive view of data movement, helping governance teams to quickly respond to compliance queries and enforce data policies. It improves data discovery and understanding, helping consumers grasp the context and relevance of data assets more efficiently. Additionally, data lineage contributes to better change management, increased data literacy, reduced data duplication, and enhanced cross-team collaboration. By tackling these challenges, data lineage in Amazon DataZone helps organizations build a more trustworthy, efficient, and compliant data ecosystem, ultimately enabling more effective data-driven decision-making.</p>
<p>Automated lineage capture is a key feature of the data lineage in Amazon DataZone, which focuses on automatically collecting and mapping lineage information from <a href="https://aws.amazon.com/glue/">AWS Glue</a> and <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>. This automation significantly reduces the manual effort required to maintain accurate and up-to-date lineage information.</p>
<p><span style="text-decoration: underline;"><strong>Get started with data lineage in Amazon DataZone<br> </strong></span>Data producers and domain administrators get started by setting up data source run jobs for the <a href="https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html">AWS Glue Data Catalog</a> and Amazon Redshift sources in Amazon DataZone to periodically collect metadata from the source catalog. Additionally, data producers can hydrate the lineage information programmatically by creating custom lineage nodes using APIs that accept OpenLineage-compatible events from existing pipeline components—such as schedulers, warehouses, analysis tools, and SQL engines—to send data about datasets, jobs, and runs directly to the Amazon DataZone API endpoint. As this information is sent, Amazon DataZone starts populating the lineage model and maps the events to the assets already cataloged. As new lineage events are captured, Amazon DataZone maintains versions of events that were already captured, so users can navigate to previous versions if needed.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/02-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92136" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/02-LaunchMarketingIntake1282.png" alt="" width="2049" height="1218"></a></p>
<p>From the consumer’s perspective, lineage can help with three scenarios. First, a business analyst browsing an asset can go to the Amazon DataZone portal, search for an asset by name, and select one that interests them to dive into the details. Initially, they’re presented with details in the <strong>Business Metadata</strong> tab and can move through the neighboring tabs. To trace the origin of the data, the analyst can go to the <strong>Lineage</strong> tab, which presents a view of the asset’s lineage one level upstream and downstream. From there, the analyst can traverse upstream until they reach the source of the asset. When the analyst is sure that this is the correct asset, they can subscribe to it and continue with their work.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/03-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92138" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/03-LaunchMarketingIntake1282.png" alt="" width="2222" height="1217"></a></p>
<p>Second, if a data issue is reported—for instance, when a dashboard unexpectedly shows a significant increase in customer count—a data engineer can use the Amazon DataZone portal to locate and examine the relevant asset details. On the asset details page, the data engineer navigates to the <strong>Lineage</strong> tab to view the upstream nodes of the asset in question. The engineer can dive into the details of each node, its snapshots, the column mapping between table nodes, and the jobs that ran in between, and can view the query that was executed in each job run. Using this information, the data engineer can spot that a new input table, which wasn't part of the previous snapshots of the job runs, was added to the pipeline and introduced the uptick in customer count. This confirms that a new source was added and that the data shown in the dashboard is accurate.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/08-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92142" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/08-LaunchMarketingIntake1282.png" alt="" width="2218" height="1179"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/07-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92143" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/07-LaunchMarketingIntake1282.png" alt="" width="2219" height="1179"></a></p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/06-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92144" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/06-LaunchMarketingIntake1282.png" alt="" width="2219" height="1176"></a></p>
<p>Lastly, a steward looking to respond to questions from an auditor can go to the asset in question and navigate to its <strong>Lineage</strong> tab. The steward traverses the graph upstream to see where the data is coming from and notices that the data comes from two different teams—for instance, from two different on-premises databases—each with its own pipeline until the pipelines merge. While navigating through the lineage graph, the steward can expand the columns to make sure sensitive columns are dropped during the transformation processes, and respond to the auditor with details in a timely manner.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/04-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92140" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/04-LaunchMarketingIntake1282.png" alt="" width="2223" height="1181"></a></p>
<p><span style="text-decoration: underline;"><strong>How Amazon DataZone automates lineage collection<br> </strong></span>Amazon DataZone now enables automatic capture of lineage events, helping data producers and administrators streamline the tracking of data relationships and transformations across their AWS Glue and Amazon Redshift resources. Automatic capture of lineage events from AWS Glue and Amazon Redshift requires you to opt in, because some of your jobs or connections might be for testing, where you don’t need lineage to be captured. With the integrated experience, these services provide an option in your configuration settings to opt in to collecting and emitting lineage events directly to Amazon DataZone.</p>
<p>These events should capture the various data transformation operations you perform on tables and other objects, such as table creation with column definitions, schema changes, and transformation queries, including aggregations and filtering. By obtaining these lineage events directly from your processing engines, Amazon DataZone can build a foundation of accurate and consistent data lineage information. This will then help you, as a data producer, to further curate the lineage data as part of the broader business data catalog capabilities.</p>
<p>Administrators can enable lineage when setting up the built-in <strong>DefaultDataLake</strong> or the <strong>DefaultDataWarehouse</strong> blueprints.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/10-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92263" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/10-LaunchMarketingIntake1282.png" alt="" width="2794" height="1498"></a></p>
<p>Data producers can view the status of automated lineage while setting up the data source runs.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/09-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92262" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/09-LaunchMarketingIntake1282.png" alt="" width="2832" height="1676"></a></p>
<p>With the recent launch of the next generation of Amazon SageMaker, data lineage is available as one of the catalog capabilities in the <a href="https://aws.amazon.com/blogs/aws/introducing-the-next-generation-of-amazon-sagemaker-the-center-for-all-your-data-analytics-and-ai">Amazon SageMaker Unified Studio (preview)</a>. Data users can set up lineage using connections, and that configuration automates the capture of lineage in the platform for all users to browse and understand the data. Here’s how data lineage looks in the next generation of Amazon SageMaker.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/01-LaunchMarketingIntake1282.png"><img loading="lazy" class="alignnone size-full wp-image-92133" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/01-LaunchMarketingIntake1282.png" alt="" width="2904" height="1762"></a></p>
<p><span style="text-decoration: underline;"><strong>Now available<br> </strong></span>You can begin using this capability to gain deeper insights into your data ecosystem and drive more informed, data-driven decision-making.</p>
<p>Data lineage is generally available in all <a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region">AWS Regions</a> where Amazon DataZone is available. For a list of Regions where Amazon DataZone domains can be provisioned, visit <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Services by Region</a>.</p>
<p>Data lineage costs depend on storage usage and API requests, both of which are already part of the Amazon DataZone pricing model. For more details, visit <a href="https://aws.amazon.com/datazone/pricing/">Amazon DataZone pricing</a>.</p>
<p>To get started with data lineage in Amazon DataZone, visit the <a href="https://docs.aws.amazon.com/datazone/latest/userguide/datazone-data-lineage.html">Amazon DataZone User Guide</a>.</p>
<p><a href="https://www.linkedin.com/in/esrakayabali/">— Esra</a></p>Introducing the next generation of Amazon SageMaker: The center for all your data, analytics, and AI
https://aws.amazon.com/blogs/aws/introducing-the-next-generation-of-amazon-sagemaker-the-center-for-all-your-data-analytics-and-ai/
<![CDATA[Antje Barth]]>Tue, 03 Dec 2024 18:45:43 +0000<![CDATA[Amazon Bedrock]]><![CDATA[Amazon DataZone]]><![CDATA[Amazon Q]]><![CDATA[Amazon SageMaker]]><![CDATA[Analytics]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Launch]]><![CDATA[News]]>71c5126443a291633884fd02dccd22548c4ee182Unify data engineering, analytics, and generative AI in a streamlined studio with enhanced capabilities of Amazon SageMaker.<p>Today, we’re announcing the next generation of <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>, a unified platform for data, analytics, and AI. The all-new SageMaker includes virtually all of the components you need for data exploration, preparation and integration, big data processing, fast SQL analytics, <a href="https://aws.amazon.com/ai/machine-learning/">machine learning (ML)</a> model development and training, and <a href="https://aws.amazon.com/ai/generative-ai/">generative AI</a> application development.</p>
<p>The current Amazon SageMaker has been renamed to <a href="https://aws.amazon.com/sagemaker-ai/">Amazon SageMaker AI</a>. SageMaker AI is integrated within the next generation of SageMaker while also being available as a standalone service for those who wish to focus specifically on building, training, and deploying AI and ML models at scale.</p>
<p><strong><u>Highlights of the new Amazon SageMaker<br> </u></strong>At its core is <a href="https://aws.amazon.com/sagemaker/unified-studio">SageMaker Unified Studio</a> (preview), a single data and AI development environment. It brings together functionality and tools from the range of standalone “studios,” query editors, and visual tools that we have today in <a href="https://aws.amazon.com/athena/">Amazon Athena</a>, <a href="https://aws.amazon.com/emr/">Amazon EMR</a>, <a href="https://aws.amazon.com/glue/">AWS Glue</a>, <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, <a href="https://aws.amazon.com/managed-workflows-for-apache-airflow/">Amazon Managed Workflows for Apache Airflow (MWAA</a>), and the existing <a href="https://aws.amazon.com/sagemaker/studio/">SageMaker Studio</a>. We’ve also integrated <a href="https://aws.amazon.com/bedrock/ide">Amazon Bedrock IDE</a> (preview), an updated version of Amazon Bedrock Studio, to build and customize generative AI applications. In addition, <a href="https://aws.amazon.com/q">Amazon Q</a> provides AI assistance throughout your workflows in SageMaker.</p>
<p>Here’s a list of key capabilities:</p>
<ul>
<li><a href="https://aws.amazon.com/sagemaker/unified-studio"><strong>Amazon SageMaker Unified Studio</strong></a> (preview) – Build with all your data and tools for analytics and AI in a single environment.</li>
<li><a href="https://aws.amazon.com/sagemaker/lakehouse"><strong>Amazon SageMaker Lakehouse</strong></a> – Unify data across <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service (Amazon S3)</a> data lakes, Amazon Redshift data warehouses, and third-party and federated data sources with Amazon SageMaker Lakehouse.</li>
<li><a href="https://aws.amazon.com/sagemaker/data-ai-governance"><strong>Data and AI Governance</strong></a> – Securely discover, govern, and collaborate on data and AI with Amazon SageMaker Catalog, built on <a href="https://aws.amazon.com/datazone/">Amazon DataZone</a>.</li>
<li><a href="https://aws.amazon.com/sagemaker/data-processing"><strong>Data Processing</strong></a> – Analyze, prepare, and integrate data for analytics and AI using open source frameworks on Amazon Athena, Amazon EMR, and AWS Glue.</li>
<li><strong>Model development </strong>– Build, train, and deploy ML and <a href="https://aws.amazon.com/what-is/foundation-models/">foundation models (FMs)</a> with fully managed infrastructure, tools, and workflows with <a href="https://aws.amazon.com/sagemaker-ai/">Amazon SageMaker AI</a>.</li>
<li><strong>Generative AI app development</strong> – Build and scale generative AI applications with <a href="https://aws.amazon.com/bedrock/ide">Amazon Bedrock</a>.</li>
<li><strong>SQL analytics</strong> – Gain insights with <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, the most price-performant SQL engine.</li>
</ul>
<p>In this post, I give you a quick tour of the new SageMaker Unified Studio experience and how to get started with data processing, model development, and generative AI app development.</p>
<p><strong><u>Working with Amazon SageMaker Unified Studio (preview)<br> </u></strong>With SageMaker Unified Studio, you can discover your data and put it to work using familiar AWS tools to complete end-to-end development workflows, including data analysis, data processing, model training, and generative AI app building, in a single governed environment.</p>
<p>An integrated SQL editor lets you query data from multiple sources, and a visual extract, transform, and load (ETL) tool simplifies the creation of data integration and transformation workflows. New unified Jupyter notebooks enable seamless work across different compute services and clusters. With the new built-in data catalog functionality, you can find, access, and query data and AI assets across your organization. Amazon Q is integrated to streamline tasks across the development lifecycle.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-1-1.png"><img loading="lazy" class="aligncenter wp-image-92373 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-1-1.png" alt="Amazon SageMaker Unified Studio" width="1255" height="1110"></a></p>
<p>Let’s explore the individual capabilities in more detail.</p>
<p><strong>Data processing<br> </strong>SageMaker integrates with <a href="https://aws.amazon.com/sagemaker/lakehouse">SageMaker Lakehouse</a> and lets you analyze, prepare, integrate, and orchestrate your data in a unified experience. You can integrate and process data from various sources using the provided connectivity options.</p>
<p>Start by creating a project in SageMaker Unified Studio, choosing the <strong>SQL analytics</strong> or <strong>data analytics and AI-ML model development</strong> project profile. Projects are a place to collaborate with your colleagues, share data, and use tools to work with data in a secure way. Project profiles in SageMaker define the preconfigured set of resources and tools that are provisioned when you create a new project. In your project, choose <strong>Data</strong> in the left menu and start adding data sources.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-2-2.png"><img loading="lazy" class="aligncenter wp-image-92382 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-2-2.png" alt="Amazon SageMaker Unified Studio" width="1376" height="951"></a></p>
<p>The built-in SQL query editor lets you query your data stored in data lakes, data warehouses, databases, and applications directly within SageMaker Unified Studio. In the top menu of SageMaker Unified Studio, select <strong>Build</strong> and choose <strong>Query Editor</strong> to get started. Also, try creating SQL queries using natural language with Amazon Q while you’re at it.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-3-1.png"><img loading="lazy" class="aligncenter wp-image-92379 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-3-1.png" alt="Amazon SageMaker Unified Studio" width="1627" height="1234"></a></p>
<p>You should also explore the built-in visual ETL tool to create data integration and transformation workflows using a visual, drag-and-drop interface. In the top menu, select <strong>Build</strong> and choose <strong>Visual ETL flow</strong> to get started.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-8.png"><img loading="lazy" class="aligncenter wp-image-92054 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-8.png" alt="Amazon SageMaker Unified Studio" width="1392" height="1164"></a></p>
<p>If Amazon Q is enabled, you can also use generative AI to author flows. Visual ETL comes with a wide range of data connectors, pre-built transformations, and features such as scheduling, monitoring, and data previewing to streamline your data workflows.</p>
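<p>Because the visual ETL experience builds on AWS Glue, the jobs behind such flows can also be started and monitored programmatically. Here is a sketch using the AWS Glue API via boto3; the job name is a hypothetical placeholder for a flow you have saved:</p>
<pre><code class="lang-python">import time

import boto3

glue = boto3.client("glue")

# Start a run of an existing job (the job name is a placeholder).
run = glue.start_job_run(JobName="orders-visual-etl-flow")

# Poll the run state until it reaches a terminal status.
while True:
    state = glue.get_job_run(
        JobName="orders-visual-etl-flow", RunId=run["JobRunId"]
    )["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
        break
    time.sleep(15)

print("Job run finished with state:", state)
</code></pre>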
<p><strong>Model development<br> </strong>SageMaker Unified Studio includes capabilities from SageMaker AI, which provides infrastructure, tools, and workflows for the entire ML lifecycle. From the top menu, select <strong>Build</strong> to access tools for data preparation, model training, experiment tracking, pipeline creation, and orchestration. You can also use these tools for model deployment and inference, machine learning operations (MLOps) implementation, model monitoring and evaluation, as well as governance and compliance.</p>
<p>To start your model development, create a project in SageMaker Unified Studio using the <strong>data analytics and AI-ML model development</strong> project profile and explore the new unified <a href="https://jupyter.org/">Jupyter</a> notebooks. In the top menu, select <strong>Build</strong> and choose <strong>JupyterLab</strong>. The new unified notebooks let you work seamlessly across different compute services and clusters, switching between environments without leaving your workspace and streamlining your model development process.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/23/2024-sm-unified-5-1.png"><img loading="lazy" class="aligncenter wp-image-91468 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/23/2024-sm-unified-5-1.png" alt="Amazon SageMaker Unified Studio" width="1233" height="1122"></a></p>
<p>You can also use <a href="https://aws.amazon.com/q/developer">Amazon Q Developer</a> to assist with tasks such as code generation, debugging, and optimization throughout your model development process.</p>
<p><strong>Generative AI app development<br> </strong>Use the new Amazon Bedrock IDE to develop generative AI applications within Amazon SageMaker Unified Studio. The Amazon Bedrock IDE includes tools to build and customize generative AI applications using FMs and advanced capabilities such as <a href="https://aws.amazon.com/bedrock/knowledge-bases/">Amazon Bedrock Knowledge Bases</a>, <a href="https://aws.amazon.com/bedrock/guardrails/">Amazon Bedrock Guardrails</a>, <a href="https://aws.amazon.com/bedrock/agents/">Amazon Bedrock Agents</a>, and <a href="https://aws.amazon.com/bedrock/prompt-flows/">Amazon Bedrock Flows</a> to create tailored solutions aligned with your requirements and responsible AI guidelines.</p>
<p>Choose <strong>Discover</strong> in the top menu of SageMaker Unified Studio to browse Amazon Bedrock models or experiment with the model playgrounds.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-6-1.png"><img loading="lazy" class="aligncenter wp-image-92377 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-6-1.png" alt="Amazon Bedrock IDE" width="1264" height="1146"></a></p>
<p>Create a project using the <strong>GenAI Application Development</strong> profile to start building generative AI applications. Choose <strong>Build</strong> in the top menu of SageMaker Unified Studio and select <strong>Chat agent</strong>.</p>
<p><a href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-7-1.png"><img loading="lazy" class="aligncenter wp-image-92378 size-full" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/27/2024-sm-unified-7-1.png" alt="Amazon Bedrock IDE" width="1257" height="1219"></a></p>
<p>With the Amazon Bedrock IDE, you can build chat agents and create knowledge bases from your proprietary data sources with just a few clicks, enabling <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">Retrieval-Augmented Generation (RAG)</a>. You can add guardrails to promote safe AI interactions and create functions to integrate with any system. With built-in model evaluation features, you can test and optimize your AI applications’ performance while collaborating with your team. You can design flows for deterministic, generative AI–powered workflows and, when ready, share your applications or prompts within the domain or export them for deployment anywhere, all while maintaining control of your project and domain assets.</p>
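<p>Applications built in the Amazon Bedrock IDE call models through Amazon Bedrock, and you can exercise the same models directly with the Bedrock Converse API. Here is a brief sketch; the model ID is only an example and must be enabled in your account:</p>
<pre><code class="lang-python">import boto3

bedrock = boto3.client("bedrock-runtime")

# The model ID is an example; use any model enabled in your account.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the benefits of RAG in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
</code></pre>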
<p>For a detailed description of all Amazon SageMaker capabilities, check the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/what-is-sagemaker-unified-studio.html">SageMaker Unified Studio User Guide</a>.</p>
<p><strong><u>Getting started<br> </u></strong>To begin using SageMaker Unified Studio, administrators need to complete several setup steps. This includes setting up <a href="https://aws.amazon.com/iam/identity-center/">AWS IAM Identity Center</a>, configuring the necessary virtual private cloud (VPC) and <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> roles, creating a SageMaker domain, and enabling Amazon Q Developer Pro. Instead of IAM Identity Center, you can also configure SAML through IAM federation for user management.</p>
<p>After the environment is configured, users sign in through the provided SageMaker Unified Studio domain URL with single sign-on. You can create projects to collaborate with team members, choosing from pre-configured project profiles for different use cases. Each project connects to a Git repository for version control and includes an example unified Jupyter notebook to get you started.</p>
<p>For detailed setup instructions, check the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/what-is-service.html">SageMaker Unified Studio Administrator Guide</a>.</p>
<p><strong><u>Now available<br> </u></strong>The next generation of Amazon SageMaker is available today in the US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) AWS Regions. Amazon SageMaker Unified Studio and Amazon Bedrock IDE are available today in preview in these AWS Regions. Check the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">full Region list</a> for future updates.</p>
<p>For pricing information, visit <a href="https://aws.amazon.com/sagemaker/pricing/">Amazon SageMaker pricing</a> and <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing</a>. To learn more, visit <a href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>, <a href="https://aws.amazon.com/sagemaker/unified-studio">SageMaker Unified Studio</a>, and <a href="https://aws.amazon.com/bedrock/ide">Amazon Bedrock IDE</a>.</p>
<p>Existing Amazon Bedrock Studio preview domains will remain available until February 28, 2025, but you can no longer create new workspaces. To experience the advanced features of Bedrock IDE, create a new SageMaker domain following the instructions in the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/what-is-service.html">Administrator Guide</a>.</p>
<p>Give the new Amazon SageMaker a try in the <a href="https://console.aws.amazon.com/datazone">console</a> today and let us know what you think! Send feedback to <a href="https://repost.aws/tags/knowledge-center/TAT80swPyVRPKPcA0rsJYPuA">AWS re:Post for Amazon SageMaker</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://www.linkedin.com/in/antje-barth/" target="_blank" rel="noopener noreferrer">Antje</a></p>Amazon Q Business is adding new workflow automation capability and 50+ action integrations
https://aws.amazon.com/blogs/aws/amazon-q-business-is-adding-new-workflow-automation-capability-and-50-action-integrations/
<![CDATA[Donnie Prakoso]]>Tue, 03 Dec 2024 18:35:15 +0000<![CDATA[Amazon Q]]><![CDATA[Announcements]]><![CDATA[Artificial Intelligence]]><![CDATA[AWS re:Invent]]><![CDATA[Featured]]><![CDATA[Generative AI]]><![CDATA[Launch]]><![CDATA[News]]>53acf68346bcf1a49a229ac287d0780f93e785f3Amazon Q Business extends productivity with generative AI-powered workflow automation capability and 50+ actions for enterprise efficiency, enabling seamless task execution across tools like ServiceNow, PagerDuty, and Asana.<p><a href="https://aws.amazon.com/q/business/">Amazon Q Business</a>, a generative AI–powered assistant designed to enhance productivity across various business applications, became <a href="https://aws.amazon.com/blogs/aws/amazon-q-business-now-generally-available-helps-boost-workforce-productivity-with-generative-ai/">generally available</a> earlier this year. Since its launch, Amazon Q Business has been helping customers tackle the challenges of improving workforce productivity.</p>
<p>In this post, we have two announcements for Amazon Q Business:</p>
<ol>
<li><a href="#feat-1">AI-powered workflow automation in Amazon Q Business (coming soon)</a></li>
<li><a href="#feat-2">Supports for more than 50 action integrations (generally available)</a></li>
</ol>
<p>Let’s get started with these new announcements from Amazon Q Business:</p>
<p id="feat-1"><strong><span style="text-decoration: underline;">AI-powered workflow automation in Amazon Q Business (coming soon)</span><br></strong>Organizations handle hundreds, if not thousands, of complex workflows that demand precise, repeatable execution. Automating these workflows has been a time-consuming process, often taking months and requiring specialized expertise. As a result, many potentially valuable business processes remain manual, leading to inefficiencies and missed opportunities.</p>
<p>Available soon, Amazon Q Business will have a new capability to simplify the creation and maintenance of complex business workflows.</p>
<p>With this capability, you only need to describe your desired workflow in natural language, upload a standard operating procedure (SOP), or record a video of the process being performed. Amazon Q Business uses generative AI to automatically author a detailed workflow plan from your inputs in minutes. You can then review, test, modify, or approve the recommended workflow.</p>
<p><img loading="lazy" class="aligncenter wp-image-91631 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/news-2024-qbusiness-announcements-r0.png" alt="" width="1418" height="820"></p>
<p>Let’s consider an example of automotive claim processing. This process typically involves manually reading claim emails, reviewing attachments, and creating claims in the system. With the new capability in Amazon Q Business, I can create this workflow more efficiently, reducing the time and complexity typically associated with workflow creation.</p>
<p>First, I upload the relevant SOP.</p>
<p><img loading="lazy" class="aligncenter wp-image-91632 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/news-2024-qbusiness-announcements-r1.png" alt="" width="2127" height="1230"></p>
<p>During the workflow creation process, Amazon Q Business may ask questions to clarify and gather any additional information needed to complete the workflow design.</p>
<p><img loading="lazy" class="aligncenter wp-image-91633 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/news-2024-qbusiness-announcements-r2.jpg" alt="" width="2097" height="1200"></p>
<p>Based on the provided inputs, Amazon Q Business generates an initial workflow template. As an automation author, I can then customize this workflow using a visual drag-and-drop interface and integrate it with supported third-party applications for testing. The workflow can include API calls, automatic UI actions, execution logic, AI agents, and human-in-the-loop steps to cater to the unique needs of every business process across a wide range of industries and business functions.</p>
<p><img loading="lazy" class="aligncenter wp-image-91634 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/25/news-2024-qbusiness-announcements-r3.jpg" alt="" width="2127" height="1230"></p>
<p>When it’s finalized, I can publish the workflow and configure it to run either on a schedule or in response to specific triggers. Once published, I can actively track its performance using a feature-rich monitoring dashboard. This dashboard offers built-in analytics, providing detailed insights into the execution and efficiency of all published workflows.</p>
<p><img loading="lazy" class="aligncenter wp-image-92531 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/29/news-2024-qbusiness-announcements-r5.png" alt="" width="1224" height="708"></p>
<p>When executing the workflow, Amazon Q Business uses a UI agent trained on thousands of websites and desktop applications to seamlessly navigate changes to page layouts and unexpected pop-up windows in real time. Amazon Q Business includes UI automation, API integrations, and workflow orchestration in a single system, eliminating the need to integrate multiple products and services to create a complete enterprise workflow automation system.</p>
<p id="feat-2"><strong><span style="text-decoration: underline;">Supports for more than 50 action integrations</span><br></strong>With Amazon Q Business plugins, you have the flexibility to connect to third-party apps and perform specific tasks related to supported third-party services directly within your web experience chat. These plugins are accessible through Amazon Q Apps, a feature within Amazon Q Business that helps you create AI-powered apps that streamline tasks and boost productivity. Additionally, when workflow automation capabilities launch, you will be able to integrate these plugins directly into your workflows.</p>
<p>In this announcement, we’re introducing a ready-to-use library of more than 50 action integrations across 11 popular business applications. These business applications include Microsoft Teams, PagerDuty Advance, Salesforce, ServiceNow, and more.</p>
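<p>Plugins can also be registered programmatically with the Amazon Q Business <code>CreatePlugin</code> API. The following is a minimal sketch that attaches a Salesforce plugin to an existing application; the application ID, server URL, secret ARN, and role ARN are placeholders:</p>
<pre><code class="lang-python">import boto3

qbusiness = boto3.client("qbusiness")

# All identifiers below are placeholders for your own resources.
response = qbusiness.create_plugin(
    applicationId="a1b2c3d4-5678-90ab-cdef-example11111",
    displayName="salesforce-crm",
    type="SALESFORCE",
    serverUrl="https://my-org.my.salesforce.com",
    authConfiguration={
        "oAuth2ClientCredentialConfiguration": {
            "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:sf-oauth-example",
            "roleArn": "arn:aws:iam::123456789012:role/QBusinessPluginRole",
        }
    },
)

print("Plugin ID:", response["pluginId"])
</code></pre>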
<p>To get started with the new integrations, access Amazon Q Business through your existing account and explore the new plugins and action integrations.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-90088" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/14/news-2024-qbusiness-announcements-a0.png" alt="" width="3474" height="3892"></p>
<p>With these integrations, you can perform various tasks across multiple applications within the Amazon Q Business web application.</p>
<p>Let’s say I need to create a new opportunity with Salesforce. First, I open my Amazon Q Business web application.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-90089" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/14/news-2024-qbusiness-announcements-a0-1.png" alt="" width="1838" height="1034"></p>
<p>Then, I trigger Amazon Q Business plugins and select the <strong>Create Opportunity</strong> action.</p>
<p><img loading="lazy" class="aligncenter wp-image-91301 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/news-2024-qbusiness-announcements-s0.png" alt="" width="1920" height="1080"></p>
<p>Next, I ask Amazon Q Business to create an opportunity record.</p>
<p><img loading="lazy" class="aligncenter wp-image-91302 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/news-2024-qbusiness-announcements-s1.png" alt="" width="1920" height="1080"></p>
<p>If the action requires more information, the plugin prompts me to provide it.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91303" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/news-2024-qbusiness-announcements-s2.png" alt="" width="1920" height="1080"></p>
<p>Amazon Q Business then automatically creates the record for me using the Salesforce action plugin.</p>
<p><img loading="lazy" class="aligncenter wp-image-91304 size-full" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/news-2024-qbusiness-announcements-s4.png" alt="" width="1920" height="1080"></p>
<p>From here, I can complete additional tasks, such as associating the opportunity record with the account.</p>
<p><img loading="lazy" class="aligncenter size-full wp-image-91305" style="border: 1px solid black; padding: 3px;" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2024/11/22/news-2024-qbusiness-announcements-s6.png" alt="" width="1920" height="1080"></p>
<p><span style="text-decoration: underline;"><strong>Get started with Amazon Q Business today</strong></span> <br>The new Amazon Q Business plugins are available today in all AWS Regions where Amazon Q Business is available. The new capability to orchestrate workflows in Amazon Q Business will be available in preview soon.</p>
<p>Boost productivity and innovation in your organization with Amazon Q Business. Learn more about how to get started on the <a href="https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/getting-started.html">Amazon Q Business documentation</a> page.</p>
<p>Happy building, <br>— <a href="https://linkedin.com/in/donnieprakoso">Donnie</a></p>