Applying MACH Principles to Generative AI

My two favorite topics these days are Generative AI and MACH, two topics that don’t on the surface seem very related. But we can actually apply the lessons learned from a MACH approach to the implementation of Generative AI and reap some of the same benefits: agility, flexibility, and avoidance of vendor lock-in.

Before going much further, let’s back up to frame the discussion. MACH (Microservices-based, API-first, Cloud-native SaaS, Headless) is an approach to modularization and decoupling that facilitates assembling best-of-breed components into solutions. Amazon Web Services (AWS) joined the MACH Alliance a while back, and my colleagues have written about how Great MACH Runs on AWS. Using this approach, retailers can construct a solution from different vendors, picking and choosing what works best for them now, with the flexibility to swap components later as necessary. For example, The Very Group in the UK is assembling its new ecommerce platform from MACH members such as commercetools, Constructor, and Amplience, all running on AWS.

Switching gears to Generative AI, it’s important to first understand that Generative AI is powered by foundation models (FMs), which are very time-consuming and expensive to train. Training requires months of processing on specialized hardware, such as Nvidia GPUs running in the AWS cloud or AWS Trainium. Once trained, however, an FM can form the basis for many derivative solutions through fine-tuning. For example, the large language model (LLM) Claude from Anthropic is a foundation model capable of general conversation. A retailer could fine-tune it to handle typical ecommerce interactions and deploy it to their contact center, enabling retail-specific prompts for the operator during calls. The same model could just as easily power a chatbot on the retailer’s website. I’ve also written about the positive impact Generative AI could have on retail, including a list of other use cases.
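To give a flavor of that retail specialization, here is a minimal sketch. Real fine-tuning updates model weights by training on example dialogues; the code below approximates the idea with a retail-specific system prompt wrapped around the customer’s utterance, a common lightweight alternative. The context wording and helper names are illustrative assumptions, not a product API.

```python
# Sketch: specializing a general-purpose LLM for retail contact-center
# assistance. Fine-tuning proper happens at training time; this prompt
# wrapper shows the same intent at inference time (illustrative only).

RETAIL_CONTEXT = (
    "You assist a contact-center operator for an online retailer. "
    "Answer questions about orders, returns, and product availability. "
    "Suggest a concise response the operator can read to the customer."
)

def build_operator_prompt(customer_utterance: str) -> str:
    """Combine the retail context with what the customer just said."""
    return f"{RETAIL_CONTEXT}\n\nCustomer: {customer_utterance}\nSuggested reply:"
```

The same wrapper could sit behind a website chatbot; only the channel changes, not the model.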

Here's where the two ideas come together. Retailers would be wise not to lock themselves into a particular FM. There are many FMs available today, each with its own benefits and costs, and more emerge every week. Although AWS offers Amazon Titan, its own FM, we also provide easy access to FMs from other providers, since the right choice depends on the end goal. That's why we created Amazon Bedrock, a fully managed service that makes FMs from leading AI startups and Amazon available through a single API, so you can choose the model best suited to your use case. AWS provides the tools and environment to build new FMs or fine-tune existing ones, along with access to different training and inference hardware. This is a different approach from pushing one particular FM.
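To make the single-API point concrete, here is a minimal sketch of calling an FM through Bedrock with the AWS SDK for Python (boto3). The model ID and request body shape shown are illustrative assumptions; each provider defines its own request format, so check the Bedrock documentation for the model you choose.

```python
# Sketch: invoking a foundation model through Amazon Bedrock's InvokeModel
# API. Model IDs and request-body shapes are illustrative and vary by
# provider; credentials and model access are required to actually call it.
import json

def build_claude_request(user_prompt: str, max_tokens: int = 300) -> dict:
    """Build an illustrative request body for a Claude text model on Bedrock."""
    return {
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }

def invoke(bedrock_runtime, model_id: str, body: dict) -> dict:
    """Call Bedrock's InvokeModel API and decode the JSON response."""
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())

# Usage (requires AWS credentials and Bedrock model access):
#   import boto3
#   runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
#   result = invoke(runtime, "anthropic.claude-v2",
#                   build_claude_request("Describe hiking boots in one line."))
```

Because every model sits behind the same `invoke_model` call, switching providers is largely a matter of changing the model ID and request body, which is exactly the decoupling MACH advocates.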

A great example of this was given at the recent AWS London Summit [video 23:18-26:16]. Imagine a retailer wants to promote a new product. The product description is generated by Anthropic’s Claude, Stability AI’s Stable Diffusion creates the product image, social media copy comes from AI21’s Jurassic LLM, and Amazon Titan provides the SEO-optimized terms, all accessed through Amazon Bedrock. This gives the retailer the freedom to select the most cost-effective FM for each task and then harmonize the outputs. Of course, for simplicity’s sake, you might prefer a single FM, and you’re free to follow that path as well.
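The routing behind that demo can be sketched as a simple task-to-model map. The model IDs and prompt templates below are illustrative assumptions; the point is that swapping any model for a competitor is a one-line change, which is the MACH-style decoupling in action.

```python
# Sketch: routing each marketing task to a different foundation model
# behind one Bedrock API. Model IDs and templates are illustrative.
TASK_MODELS = {
    "product_description": ("anthropic.claude-v2",
                            "Write a product description for: {product}"),
    "product_image":       ("stability.stable-diffusion-xl-v1",
                            "Studio photo of {product} on a white background"),
    "social_copy":         ("ai21.j2-ultra-v1",
                            "Write a short social media post about {product}"),
    "seo_terms":           ("amazon.titan-text-express-v1",
                            "List SEO keywords for: {product}"),
}

def plan_campaign(product: str) -> dict:
    """Return, per task, which model to call and the prompt to send it."""
    return {
        task: {"model_id": model_id, "prompt": template.format(product=product)}
        for task, (model_id, template) in TASK_MODELS.items()
    }
```

Each planned task would then be sent to Bedrock’s `InvokeModel` API; collapsing to a single FM just means pointing every task at the same model ID.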

Adopting whatever appears to be the best FM of the day might get you to market first, but a more MACH-inspired approach that preserves freedom of choice delivers greater business agility and more long-term benefits.

Author: David Dorf, Global Head of Retail Industry Solutions, AWS