A Little History of Technology Leading to MACH

Technology in the ‘90s was very different from what we have today. The evolution between then and now enabled modern architectural approaches that simply weren’t possible 25 years ago. Let’s look at how and when those technologies evolved, and how we got to where we are today.

Before we get to the 1990s, though, it’s worth a quick jump back to the 1950s and 1960s. In the 1950s, mainframe computers emerged; they were enormously expensive and difficult to maintain. While the largest companies bought mainframes from the likes of IBM, Burroughs, and NCR, many companies didn’t need, or couldn’t afford, their own dedicated mainframes.

To serve that market, in the 1960s, IBM and a handful of other companies offered “Service Bureau” services, which included time-sharing on central mainframes. Some of this was remote access to pure computing and storage resources; however, databases and business applications were also offered in this model.

The Service Bureau would manage the hosting and running of business applications and databases, and client companies would connect remotely to access their services in the multi-tenant environment. Billing was often based on usage of connection time, compute (by the second), and storage, or on application-specific metrics.

This model is strikingly like the Cloud computing and SaaS services of today. As Peter Allen sang and Stephen King wrote: “Everything old is new again.”

The evolution from the 1990s to 2020 as it relates to modern eCommerce and web application architecture happened across four main paths: Infrastructure, Connectivity, Solution Delivery, and UI Architecture. Let’s have a quick look at each in turn.

Infrastructure

Computing and infrastructure started off in the 1990s primarily on big servers, typically owned and hosted in a company’s own data center. Eventually, a larger number of smaller servers became the preferred infrastructure for many companies. There was also a bit of a shift in the late '90s and early 2000s from internal, company-owned data centers, to more remote 3rd party data centers. Leasing servers also became more popular.

[Image: aws.png]

The next big evolutionary leap in infrastructure happened in 2006. In March of that year, Amazon launched the S3 service. AWS had existed since 2002 but had not had much impact and went through some restructuring. The relaunched AWS’s flagship S3 service was a REST-based storage API with usage-based billing. EC2 was released in August of that year, offering compute billed by the hour. Not only was S3 vended through REST APIs, but all the management functions for S3 and EC2 were only available via REST calls. The web console wouldn’t be released for three more years.
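
To make the idea of a REST-based storage API concrete, here is a minimal sketch of reading an object from S3 with nothing more than an HTTP GET. The bucket and key are hypothetical, and the object is assumed to be publicly readable; requests to private objects would also need AWS’s signed-request headers, which are omitted here.

```typescript
// Minimal sketch: reading a (hypothetically public) S3 object over plain HTTPS.
// The bucket and key are made-up examples; private objects would additionally
// require AWS Signature Version 4 headers, omitted here for brevity.
async function getS3Object(bucket: string, key: string): Promise<string> {
  const url = `https://${bucket}.s3.amazonaws.com/${key}`;
  const response = await fetch(url); // S3 vends objects via ordinary HTTP verbs
  if (!response.ok) {
    throw new Error(`S3 returned ${response.status} for ${url}`);
  }
  return response.text();
}

// Usage (hypothetical bucket/key):
getS3Object("example-public-bucket", "hello.txt").then(console.log);
```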

AWS was a huge success and marked a fundamental turning point in how application infrastructure worked. You no longer had to buy or lease dedicated servers against a five-year sizing and utilization plan; you could run your applications on virtual servers at AWS, spin whole environments up and down quickly and easily, and pay for only what you used. Mainstream Cloud Computing was born.

Google launched their App Engine solution in May 2008, with Storage coming two years later. Microsoft’s Azure Cloud offering was announced in October of 2008. The big three Cloud services have been growing at an incredible pace ever since. They have expanded their offerings to include more and more services and are gaining ever-increasing numbers of clients. Cloud Computing is the new normal.

[Image: gps.png]

Serverless Computing is, for some use cases, the next step beyond Cloud computing. It takes the idea of only paying for what you actually use and simplifies application hosting from complete virtual server and stack management down to running pure functions directly. Serverless can be a very cost-effective way to host and run code with greatly reduced infrastructure management requirements. AWS rolled out its Lambda service in 2014, with Google following in 2016 and Microsoft in 2017. Serverless is not ideal for all workloads, but for specific use cases it offers a compelling evolution from standard Cloud computing.
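
As a rough illustration of the “pure functions” model, here is what a minimal Lambda-style handler might look like in TypeScript. It assumes an API Gateway proxy-style event, and the greeting logic is purely illustrative.

```typescript
// Minimal sketch of a serverless function: AWS Lambda's Node.js runtime simply
// invokes an exported handler per event; there is no server or stack to manage.
// The event shape below assumes an API Gateway proxy integration.
interface ApiGatewayProxyEvent {
  queryStringParameters?: Record<string, string> | null;
}

export const handler = async (event: ApiGatewayProxyEvent) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```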

Take the concept of Serverless computing and imagine moving it from the central data center(s) to close to the end user, via a global CDN, for dramatically reduced latency. Now you have Edge Computing. Cloudflare released their Workers platform in early 2018 and recently rolled out an unconstrained offering called Workers Unbound for more complex applications. Akamai’s EdgeWorkers service is in beta now.
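
For a feel of the edge programming model, here is a minimal sketch of a Cloudflare Worker in the module syntax; the query-string handling is just illustrative filler.

```typescript
// Minimal sketch of an edge function using Cloudflare Workers' module syntax.
// The same handler is deployed to Cloudflare's global network, so it runs in
// the data center closest to whoever makes the request.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Tiny illustrative bit of logic running at the edge:
    const name = url.searchParams.get("name") ?? "world";
    return new Response(JSON.stringify({ message: `Hello, ${name}!` }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```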

Infrastructure and server-side computing have come a long way in 25 years: from large, expensive servers running in internal corporate data centers, to being able to run just your code, without worrying about the servers or stacks under it, in thousands of locations globally, automatically deployed and scaled as needed, paying only for what you use. Whatever comes next, we can be sure it will bring more flexibility, better performance, and lower costs.

Connectivity

Once you have your applications running on your infrastructure, how they communicate with other applications, and with end users, is important. As communication becomes more standardized, easier, and more efficient, more disparate applications can be involved in a solution, which in turn allows a smaller scope of functionality from any one application or service, and more flexibility in constructing, or modifying, the overall solution.

In the early days of the web, HTTP was the primary communications protocol, especially for communication with end clients. App-to-app communication was done with highly custom direct TCP connections, or with HTML scraping over HTTP. There were very few standards for app-to-app integration at this time, and most work in this area was difficult and not reusable. Microsoft offered DCOM, and the OMG offered CORBA, but these binary protocols were difficult to implement if you were not using the corresponding vendor software.

While SOAP was created in 1998, due to a lot of inter-corporate politics it wasn’t really released until the end of 1999. XML-RPC, which grew out of the same effort, was released first. Either way, there was now a defined way to codify an XML-based API.

In February of 2000, Salesforce rolled out a new sales automation tool, in many ways one of the first mainstream SaaS offerings, driven by an XML-over-HTTP API. This may be considered the original web service. To be fair, it was amazingly complex and not especially well documented.

[Image: api.png]

Later that year, on November 20th, eBay released their own API, which was much simpler and easier for developers to use. Other companies soon followed suit. SOAP, using XML and defined by WSDLs, became the primary technology for web services. Many standardized client libraries and tools were developed to make it easy to generate a SOAP client for any given published WSDL, which made integrating your application with a 3rd party application a lot quicker and easier.
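
To give a sense of what that SOAP traffic looked like on the wire, here is an illustrative sketch of a SOAP 1.1 call made by hand; the endpoint, namespace, and GetPrice operation are entirely hypothetical.

```typescript
// Illustrative sketch only: the endpoint, namespace, and GetPrice operation
// are hypothetical, but the envelope shape is what a typical SOAP 1.1 call
// over HTTP looked like.
const soapEnvelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/catalog">
      <Sku>ABC-123</Sku>
    </GetPrice>
  </soap:Body>
</soap:Envelope>`;

const response = await fetch("https://example.com/catalog/soap", {
  method: "POST",
  headers: {
    "Content-Type": "text/xml; charset=utf-8",
    SOAPAction: "http://example.com/catalog/GetPrice",
  },
  body: soapEnvelope,
});
const xml = await response.text(); // the reply comes back as another XML envelope
```

In practice, developers rarely wrote these envelopes by hand; they were generated by the WSDL-driven client libraries mentioned above.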

REST was proposed in 2000 and focused more on resources than on an overall protocol the way SOAP did; as such, it had some advantages, especially for the smaller, lightweight clients we would soon find on mobile devices. After JSON emerged in the early 2000s, REST primarily used JSON instead of XML, although technically REST supports XML and plain text as well as JSON. JSON is simple to parse and human readable, which was attractive to developers used to dealing with complex XML. Still, it took a while before REST became commonly used.

SOAP and SOA (Service Oriented Architecture) dominated the inter-application communication space throughout the early and mid-2000s. Web browser communication to the backend was still typically plain HTML.

REST and JSON grew in popularity over time, being simpler and requiring fewer resources, especially on the client end. From 2010 onward, REST eclipsed SOAP for most APIs. SOAP is still around and provides some advantages over REST, but the majority of web services are REST-based now.
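
For contrast with the SOAP envelope above, the same hypothetical price lookup as a REST/JSON call is little more than a GET and a JSON parse; the URL and response shape are again made up for illustration.

```typescript
// The same hypothetical price lookup as a REST/JSON call: one GET, one JSON parse.
interface PriceResponse {
  sku: string;
  price: number;
  currency: string;
}

const res = await fetch("https://example.com/api/products/ABC-123/price", {
  headers: { Accept: "application/json" },
});
const price: PriceResponse = await res.json();
console.log(`${price.sku}: ${price.price} ${price.currency}`);
```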

Early web services, usually SOAP, were typically large services with many functions mapped to many requests and responses. It was common for a single web service to vend dozens (or even hundreds) of methods. In 2005, Peter Rodgers proposed the concept of micro-services (then called REST-services or Micro-Web-Services) as an improvement on the complexity of most SOA architectures. The approach essentially calls for a larger number of smaller-scope, often single-function, web services instead of one giant one, and at the same time pushes REST and JSON instead of SOAP and XML. The concept of micro-services really came into its own around 2012-2013. Netflix Cloud Systems Director Adrian Cockcroft called it “fine-grained SOA”.
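
As a sketch of how small such a service can be, here is a hypothetical single-purpose price-lookup micro-service using only Node’s built-in HTTP module; the SKUs, prices, and port are illustrative.

```typescript
// Sketch of a single-purpose micro-service: one tiny process, one concern
// (price lookup), speaking JSON over HTTP. Uses only Node's built-in http module.
import { createServer } from "node:http";

const prices: Record<string, number> = { "ABC-123": 19.99, "XYZ-789": 4.5 };

createServer((req, res) => {
  // Expect paths like /prices/ABC-123
  const match = req.url?.match(/^\/prices\/([^/]+)$/);
  const price = match ? prices[match[1]] : undefined;
  if (price === undefined) {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ sku: match![1], price, currency: "EUR" }));
}).listen(3000);
```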

[Image: microser.png]

Now REST/JSON micro-services are the standard for web services, not only for app-to-app communication but also for driving website UIs via AJAX.

GraphQL is an increasingly popular approach for web services. It was developed within Facebook in 2012 and released to the public in 2015. GraphQL moved out of Facebook to the GraphQL Foundation in November of 2018. GraphQL allows the client to specify which data points and relationships it would like returned from a query. This can reduce the chattiness of many REST APIs and reduce the bandwidth required for responses. While this streamlining improves performance, there are also disadvantages to GraphQL compared to REST, so it really comes down to using the right technology for the job. Some web services will be better served with REST, while others will benefit more from GraphQL.
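
To illustrate the “ask for exactly what you need” idea, here is a sketch of the same hypothetical product lookup over GraphQL; the endpoint and schema are invented for the example.

```typescript
// Sketch of the same lookup via GraphQL: the client names exactly the fields
// and relationships it wants in a single request. Endpoint and schema are hypothetical.
const query = `
  query ProductWithPrice($sku: String!) {
    product(sku: $sku) {
      name
      price { amount currency }
      relatedProducts { sku name }
    }
  }
`;

const gqlRes = await fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query, variables: { sku: "ABC-123" } }),
});
const { data } = await gqlRes.json();
console.log(data.product.name, data.product.price.amount);
```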

Solution Delivery

By solution delivery, I really mean how an application or service is managed and vended to consumers. The default mechanism, and what was done most of the time in the ’90s, was for software to run on the company’s own dedicated hardware, on a software stack set up and maintained by the company. Think of running an Oracle Database cluster or an ATG Oracle Commerce cluster in your data center.

The ’90s and early 2000s also had popular ASPs, or Application Service Providers. These companies would run and maintain enterprise software for a company. The ASP would manage the infrastructure, usually owned by the ASP and housed in its own data center, though occasionally in a 3rd party data center. The ASP would bill its clients based on usage metrics or a monthly fee. In many cases, the ASP would own the software licenses and lease or resell them to the client company. These software installs were typically single tenant, with dedicated hardware and environments per client. Historically most of this software was client-server, not necessarily web or REST based.

SaaS is a newer and more popular approach to the same concept. The key differences are that SaaS providers are vending their own software, rather than running software from Oracle or PeopleSoft or IBM. SaaS providers have a multi-tenant architecture instead of the single tenant environments of ASPs. SaaS providers almost exclusively offer REST based APIs for their software, and their solutions are typically focused on websites and web applications.

Most SaaS offerings run on Cloud infrastructure and provide auto-scaling and auto-healing for their multi-tenant environments. Cloud-native SaaS products are built in the Cloud and are engineered from the ground up to take advantage of the Cloud provider’s features and benefits (such as auto-scaling, API gateways, and effectively unlimited storage), rather than being traditionally developed products simply running on VMs in the Cloud. For many use cases, Cloud-native SaaS is the current best-of-breed approach for vending services and solutions.

UI Architecture

The front end of web applications in the ’90s was usually pure server-side rendered HTML. Limited dynamic features were introduced through JavaScript. As web services became more prevalent, more JavaScript was used for loading in dynamic content, making form submissions without reloading pages, and generally evolving how web UIs worked.

In the latter half of the 2000s, a series of JavaScript libraries emerged which made it easier to build dynamic UIs driven by back-end web services rather than server-side rendered HTML. jQuery was released in August 2006, allowing for simple AJAX calls to web services and easy programmatic replacement of DOM elements. Node.js came out in 2009 and AngularJS in 2010, followed by React in 2013. These tools, along with too many others to list, laid the groundwork for advanced front-end frameworks, high-throughput server-side JavaScript engines, and micro-service-centric UI development.
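
As a small example of the kind of thing jQuery made easy, the sketch below loads JSON from a hypothetical endpoint and swaps the result into the page without a reload; it assumes jQuery is already loaded on the page.

```typescript
// Assumes jQuery is loaded on the page; the /api/cart endpoint is hypothetical.
declare const $: any;

// Fetch JSON over AJAX and replace a DOM element without reloading the page.
$.getJSON("/api/cart").done((cart: { itemCount: number }) => {
  $("#cart-badge").text(`${cart.itemCount} items`);
});
```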

Headless has become the preferred approach to building web application front ends. Fundamentally, Headless means de-coupling the front-end UI from the backend logic. Unlike the ’90s, and frankly much of the 2000s, when the backend application rendered HTML for the front end, in a typical Headless architecture the backend application(s) provide an array of micro-services. The web UI can be built in any framework and any language, completely separate from any backend development. Those same micro-services can be used to drive not only the dynamic web front end, but also mobile applications, in-store kiosks, in-car touchscreens, or any other type of client you could imagine, making it easy to provide a unified multichannel customer experience.

Conclusion

Through the technological advancements described above, we’ve gone from monolithic applications, rendering the front-end HTML, running on dedicated owned hardware, to an independent Headless UI making calls to Microservices vended by Cloud-native SaaS solutions running on Cloud, Serverless, and Edge computing infrastructures.

While there are always tradeoffs between technologies and architectures, this modern approach allows for a great degree of flexibility, performance, and cost savings.

It’s a brave new world!

[Image: mach_timeline.jpg]
