Today's enterprise architect has to contend with a modern generation of challenges. Greater demands are being placed on our infrastructure, both technical and non-technical. Technical challenges include maximising the scalability and robustness of our systems for minimum cost. Non-technical challenges include meeting the functional and integration requirements of modern business quickly and sustainably.
The techniques of Object-Orientation and the classical Relational Database Management System have served us well in developing software and architecture for small systems that can be upgraded and understood as a single unit. However, Object-Oriented interfaces, and even interfaces between systems defined in terms of database schemas, have proven brittle and need careful consideration before upgrade. Interfaces must be thought and rethought as new versions of vendor and proprietary products are deployed. Often binding code or legacy database views are required to ensure the system as a whole continues to function after the upgrade of a subsystem.
The modern enterprise is no longer an island. Attempts to integrate systems owned by different business units or by different companies within the supply chain have led to tighter than expected coupling. Solving the problems of a modern business environment will require a new approach to dealing with the boundaries between humans, languages, and machines. This approach has roots both in the Web and in wider systems engineering disciplines.
The new enterprise architect faces significant challenges in delivering interoperability and related "serendipitous reuse" network effects, evolvability of interfaces within the architecture, and in maintaining or improving the efficiency of the architecture. Luckily, the path of the interconnected network has been trodden for us. The World-Wide Web is an architectural beachhead that we can learn valuable lessons from, and acts as a staging point for the development of our own architectures.
This book is designed to provide practical guidance and experience with web-style architecture, with a focus on how enterprise architects can apply this experience. It contains practical recommendations on how to modify the interfaces within your systems and out to other systems to meet modern demands. The rest of this section is dedicated to background material. Skip ahead if this seems boring.
The principle of building software components that can be reused in many different contexts is a founding ideal of Object-Orientation, but is rarely achieved in practice. The overarching vision is to develop software in the same way as we deal with many other engineering disciplines. Parts are manufactured or licensed en masse by one or more suppliers. The next link in the supply chain combines these parts along with custom parts into an engineered assembly. The assembly is either put to work directly by an end user, or itself treated as a part by another link in the chain.
Any identified part needs to have a defined fit, form, and function. It must correspond to one or more standard interfaces, use resources in a well-understood way, and behave functionally in a well-understood way. The top-down "fit" constraint is at first glance consistent with an Object-Oriented world view.
Object-Orientation routinely reduces coupling between one part of a program and another by employing information hiding techniques. It constrains the programmer not to expose the internal data structures or workings of one class to another class. Instead, it makes only the interface available.
The interface of a class generally consists of a set of public methods, each with a parameter list. Classes either have a defined type (strong typing) or are able to respond even to method invocations they do not understand (duck typing). Strong typing enables compile-time checks to be applied, while duck typing defers checking to runtime, indicating an error when a method is not understood.
The information hiding technique of Object-Orientation is obvious, and obviously useful. However, true components have some special characteristics that separate them from simply parts of a larger program.
Version mismatch is an Achilles heel of both traditional Object-Orientation and the Relational Database model.
Changes to the set of methods on a strongly-typed interface introduce new versions of that interface. Each new version consumes a separate type identifier. The first weakness of this approach is that old clients do not have the new identifier. Without the interface type identifier they can't make calls on the new component. This is easily dealt with by having the new type inherit from the old. The new component version then becomes accessible by either identifier.
The second problem is harder to solve: how does a new client component work with an old version of the server component? If the client only looks up the new identifier, the old component won't be found. It must search backwards through the chain of versions until it finds the latest version that the server component supports.
This searching can be effective if the number of versions is small, however we must also consider the longer term. Well-established standards in other industries have lasted tens or even hundreds of years. The number of versions that accumulate over this time could be problematic for a client component. Worse still, forking of the interface may occur. A splintering of the language may mean that finding the right identifier by which to invoke the component becomes too great a burden on client components.
Duck typing goes a long way towards resolving this issue by not using a type identifier for interfaces. A client component can take any object passed in and attempt to interact with it. An error is indicated to the client component whenever a particular method is not understood. This may feel to some like taking the guards off a piece of heavy machinery, but it neatly deals with many of the problems that strongly-typed languages face in this area.
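As a rough sketch of the difference in Python (the class and method names here are purely illustrative), a duck-typed client simply calls whatever object it is handed and treats a missing method as a runtime condition to recover from, rather than looking up a versioned interface type beforehand:

# Duck typing sketch: the client never names an interface version.
class LampV1:
    def switch_on(self):
        print("lamp on")

class LampV2(LampV1):
    def dim(self, level):
        print(f"lamp dimmed to {level}")

def client(device):
    device.switch_on()          # understood by both versions
    try:
        device.dim(0.5)         # only newer components understand this
    except AttributeError:
        pass                    # error indicated at runtime; client falls back

client(LampV1())
client(LampV2())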
A database has a single fixed schema at any point in time. This structure consists of tables, views, triggers, and typically some code in the form of stored procedures or similar. Modern databases have some in-built mechanisms to deal with differences between the expected version of the schema and the actual version. A program that nominates specific columns in its SELECT and UPDATE statements will automatically be immune to extra columns being added, and should fare well with INSERT so long as default values are supplied for the new columns. New tables can also be safely ignored by old applications, except where referential integrity is at stake.
Database clients behave much like duck-typed object-oriented clients, because the exact version of the schema is never nominated. If a client issues an SQL statement that no longer makes sense, an error will be returned. Views and triggers can also be constructed to ensure that an obsolete client is still able to issue its SQL statements or invoke its stored procedures without coupling other clients to its own version of the schema.
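A small sketch, using sqlite3 and hypothetical table and column names, shows how a client that nominates its columns keeps working after the schema grows:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

def old_client_insert(item):
    # Old client: nominates only the columns it knows about.
    con.execute("INSERT INTO orders (item) VALUES (?)", (item,))

def old_client_select():
    # Old client: a SELECT of named columns is immune to new columns.
    return con.execute("SELECT id, item FROM orders").fetchall()

old_client_insert("widget")

# The schema evolves: a new column with a default value is added.
con.execute("ALTER TABLE orders ADD COLUMN priority INTEGER DEFAULT 0")

old_client_insert("sprocket")   # still works: the default supplies the new column
print(old_client_select())      # still works: the extra column is simply not selected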
Where even duck typing and its related database capabilities fail is in the assumption that defining new interfaces is something that we should be doing on a day-to-day basis. Of course this will always be going on to some extent to meet special challenges. However, it would be much better if the hard work of defining new interfaces had already been dealt with by an appropriate standards body.
Defining a new standard is hard work, and a risky business. If your industry or the software industry as a whole heads in a different direction, you may be forced to unwind any investment in your own proprietary interface. It would be helpful if we could move forward confident that the interface we choose is going to be used widely across our industry and possibly others. Whenever we are unable to make use of standard interfaces, defining our own shouldn't be as hard as inventing a whole new protocol, especially when the distributed computing problems of unreliable communications and scalability are thrown into the mix.
The Web's solution to this problem is to break it up into smaller parts. A transport protocol is used to move data from one place to another, and the format of the data itself is defined separately. The main problem facing an interface designer in this model is not to define mechanisms for moving data around or even to define formats for data. Both should usually be defined already. The main job of an interface designer in this context is to define the set of shared concepts between client and server components called resources, and to give them appropriate identifiers.
Consider the case of a light switch that wants to convey to its light whether to be "on" or "off". The job of the interface designer is to define a shared concept between the light and its switch that uses standard transport protocol methods and a standard data type. The shared concept in this case is whether the light should be on or off. Since this is a neat boolean concept, we can transfer a simple plain-text value from the switch to this resource using the standard PUT method. The request to turn the lightbulb on is:
PUT "true" to https://lighbulb.example.com/on
A corresponding request to turn the lightbulb off is:
PUT "false" to https://lighbulb.example.com/on
Finally, the shared concept may support sampling via the converse transfer request:
GET https://lightbulb.example.com/on -> returns "true" or "false"
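As a sketch of the switch's side of these interactions, here is roughly what they look like using the Python standard library. The host is the hypothetical one above, so the calls are shown but not executed:

import http.client

def set_light(on):
    # PUT a plain-text boolean to the shared "on" resource.
    conn = http.client.HTTPSConnection("lightbulb.example.com")
    conn.request("PUT", "/on", body="true" if on else "false",
                 headers={"Content-Type": "text/plain"})
    return conn.getresponse().status

def sample_light():
    # GET the current state of the shared concept.
    conn = http.client.HTTPSConnection("lightbulb.example.com")
    conn.request("GET", "/on", headers={"Accept": "text/plain"})
    return conn.getresponse().read().decode()

# set_light(True)       # PUT "true" to /on
# set_light(False)      # PUT "false" to /on
# print(sample_light()) # GET /on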
If no standard method is capable of conveying the intent of an interaction one is invented. If no standard data format is capable of encoding the information in an interaction one is invented. Invention of new data formats is fairly common, but is much less common than the definition of new resources. Invention of new methods is rare.
With a standard transport protocol and data format in use we have a basis for building components that are truly and serendipitously reusable. Any given interaction between a client and server component that transfers similar data schema and similar intent is likely to be understood, and therefore successful. The final meaning of the interaction is determined by context: The interpretation of the request by the server, and the interpretation of the response by the client. This context is nearly completely captured by the resource identifier.
The Hypertext Transfer Protocol (HTTP) is used to transfer documents to and from identified resources. This one interface is capable of scalably and reliably moving information between components, so long as the format of that data is agreed between the client and server component and the identity of the server component is known to the client. An impressive array of data pumping and storing tools can even be developed without specific knowledge of any data formats. These tools do not need to be modified as new data formats or resources emerge in the overall architecture. Those that do understand particular data formats can be used with any resource that can exchange those formats.
HTTP introduces a number of features such as content negotiation, layering, and redirection. It uses a duck typing approach, where requests can be made without knowing in advance the version of HTTP the server understands. Requests that are not understood return errors to the client component.
By leaving the data format problem unsolved, HTTP allows a range of possible solutions. Some data formats are centrally controlled and widely used; these include the Hypertext Markup Language (HTML), Atom, and others. Some are industry-specific, such as railML and many others of the *ML language set. Still others are invented on the spot and used as needed by individual organisations.
The moving parts of a HTTP interaction are all capable of evolving separately. The protocol itself can develop using its duck typing approach. The set of data formats can evolve separately from the contexts in which they are used. Finally, the set of resources is able to change and develop without introducing new methods or document types. More than this, the Web is designed with specific mechanisms for evolution built in.
The most important design for evolution on today's Web is the use of must-ignore semantics. Must-ignore is the principle that parts of a document that cannot be understood must be ignored by parsers. This allows new structure to be added to the schema of a data format without affecting old consumers of that format (either client or server components). Presuming that the data format consists of information only, the new information will mean nothing to old consumers and can be safely ignored.
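A short sketch of must-ignore parsing against a hypothetical document type: the old consumer reads only the elements it was written against and never examines the newer ones:

import xml.etree.ElementTree as ET

old_document = "<light><on>true</on></light>"
new_document = "<light><on>true</on><brightness>80</brightness></light>"

def old_consumer(xml_text):
    root = ET.fromstring(xml_text)
    # Look up only the element this consumer was written against;
    # unknown siblings such as <brightness> are never examined.
    return root.findtext("on") == "true"

print(old_consumer(old_document))   # True
print(old_consumer(new_document))   # True: the new element is ignored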
Data format evolution is a tricky business that requires careful attention by standards bodies. It isn't possible to upgrade all consumers of a widely-deployed document type at once, or even all producers. Changes to the document type must respect the fact that mandatory "must-understand" extensions cannot be effectively implemented, except as part of a negotiation exercise with a fall-back interaction in mind.
The redirection semantics of HTTP allow the set of resources to evolve over time. Resources can be moved to new identifiers, or requests can be redirected via a third party. Redirections can be temporary, allowing for short-term alterations during system upgrade and maintenance. They may also be permanent, allowing for organisational changes that result in changes to identification schemes over time.
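A minimal sketch of how a client might treat the two kinds of redirect, assuming the usual HTTP status codes: a permanent redirect updates the identifier the client has stored, while a temporary one is followed without forgetting the original URL:

def handle_redirect(stored_url, status, location):
    if status == 301:                  # moved permanently
        return location, location      # (url to fetch now, url to remember)
    if status in (302, 307):           # temporary redirection
        return location, stored_url    # follow it, but keep the old identifier
    return stored_url, stored_url      # not a redirect we recognise

print(handle_redirect("https://service.example.com/old-reports", 301,
                      "https://service.example.com/reports"))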
The final evolution mechanism is the commonly-misunderstood content negotiation. Sometimes seen as a way of delivering different content to different clients, its most useful application is in dealing with legacy data formats. Data formats tend to compete with each other during their early days of development. For example, RSS and Atom can be seen as competing data formats that capture essentially the same information. As one type supersedes another, old clients can be left behind. Content negotiation allows a single resource to serve the needs of clients both old and new. Old clients can request and receive the legacy format, while new clients are dealt the latest and greatest.
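A sketch of such a negotiating resource, with the selection logic reduced to simple substring checks on the Accept header rather than full quality-value parsing:

def negotiate_feed(accept_header):
    # Serve new clients Atom and legacy clients RSS from the same resource.
    if "application/atom+xml" in accept_header:
        return "application/atom+xml", "<feed>...</feed>"   # latest format
    if "application/rss+xml" in accept_header:
        return "application/rss+xml", "<rss>...</rss>"      # legacy format
    return "application/atom+xml", "<feed>...</feed>"       # sensible default

print(negotiate_feed("application/atom+xml")[0])   # new client
print(negotiate_feed("application/rss+xml")[0])    # legacy client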
These evolution mechanisms are increasingly important as we move from the human-driven Web to one that is driven more by machines.
One of the most touted benefits of the Web and its underlying REST architectural style is the ability to optimise bandwidth and response times through caching. This primarily affects GET requests to resources, which can be intercepted by intermediaries who understand the interaction they are seeing pass through them. A common transport protocol that conforms to REST's constraints is essential to the ease of implementing this performance optimisation system.
REST's official constraints are as follows: client/server interaction, stateless communication, explicit caching, a uniform interface, layering, and the optional code on demand.
Many of these constraints are second nature to users of HTTP, leading to strange conversations about how to make a particular interaction "more RESTful" when the constraints of REST are already met.
The client/server constraint requires that each component act as either a client or a server, but not both. Roy Fielding has since talked publicly about how relaxing the client/server constraint is not particularly harmful to a REST architecture.
Stateless communication requires that the meaning of any request made by a client is not dependent on the sequence of prior requests. This is a difficult concept with a number of facets, but one that is central to highly scalable systems. The principle is that each request comes through to a server as an independent entity that can be processed by any server in the cluster or clusters that might handle the request. HTTP achieves this by placing all authentication and other contextual information into each individual request, much of it being communicated in the request's resource identifier.
To push the scalability concept beyond what REST requires, we might conceive of two servers on different sides of the globe that answer subsequent requests from the same client. The ideal is that any communication required between these two servers to answer the second request is either minimised or completely eliminated. This certainly suggests a minimisation or elimination of sessions being used to track particular clients as they navigate through a series of resources.
Where state is required to be stored on the server side, REST suggests that it be addressable as a specific resource. For example, a particular user's shopping cart should be something the client can navigate to. A stateless design would see the entire shopping cart stored on the client side, and resubmitted with every request that made use of the shopping cart information. However, there are benefits to maintaining the cart on the server side. If a server-side cart is used, clearly some communication needs to occur between the server that handles a request to update the cart and the server that handles a subsequent request to view the cart.
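As a sketch of the stateless style with a server-side cart (host, path, and credentials are all hypothetical), every request carries its own context, so any server in the cluster can answer it:

import base64
import http.client

def view_cart(host, cart_path, user, password):
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", cart_path, headers={
        "Authorization": f"Basic {credentials}",   # context travels with the request
        "Accept": "application/xml",
    })
    return conn.getresponse().read()

# The cart is an addressable resource the client can navigate to:
# view_cart("shop.example.com", "/customers/alice/cart", "alice", "secret")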
The next constraint of REST is that of explicit caching. A large client-intensive system such as the Web involves a great number of data fetching operations. These operations can be optimised by providing explicit cache guidance. Many Internet Service Providers (ISPs) no longer provide caches for data retrieved from resources; as the cost of data transfer falls, such caches are often becoming uneconomic. However, caching is on the rise in other areas. Client caches are particularly important in improving interactivity as users navigate through web sites. Likewise, caching is becoming more important in the clusters behind large web sites. Edge networks are also springing up to efficiently move data from a web site out closer to geographically distributed clients. Despite the apparent inefficiencies of a text-based protocol like HTTP, its caching model achieves significant improvements to both bandwidth and latency.
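A sketch of what explicit cache guidance might look like on a response, using standard HTTP headers and an arbitrary five-minute freshness lifetime:

from email.utils import formatdate

def response_headers(max_age_seconds=300):
    # Any cache along the way (client, cluster, or edge) can honour these.
    return {
        "Date": formatdate(usegmt=True),
        "Cache-Control": f"max-age={max_age_seconds}",  # fresh for five minutes
        "ETag": '"v42"',                                # validator for revalidation
    }

print(response_headers())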
The Uniform Interface constraint of REST is both central to its success and somewhat overstated. The principle is that every message sent throughout the overall system should be understandable to components that may handle the message along the way. The first of these components is typically a HTTP library within the client program. Subsequently, the request can flow through a series of proxies and other intermediaries before reaching the origin server. The origin server may itself be constructed as a HTTP parsing or handling library plus forwarding rules, until the request finally arrives at a piece of code that fulfils the request. A similar path is taken for the returned response.
I say the Uniform Interface constraint is central because it supports the use of caching proxies, authentication servers, and a wide range of other components along the way. Having a basic level of understanding of the message is what makes all of this possible. This is in contrast to the use of bespoke interfaces across a Corba or Web-Services software stack, where intermediaries don't typically know what a particular message means, and therefore can't do anything very clever with it.
Applied to its fullest extent, the uniform interface gives us a system that has both standard methods for moving data around and a standard set of data formats that all components understand. Consider the human Web: almost any browser can access almost any site without considering the version of the interface it is trying to access, which method it should invoke, or what type of response it will receive. This is achieved by using a data format that has fairly low value for machine consumers, but can be understood by human users.
At the same time, the concept that every message is understood in its entirety by every component in the system is certainly not correct for a machine-to-machine or "semantic" web. It isn't always going to be appropriate for a banking client to access a search engine resource, nor is the client likely to understand what to do with any response it might get back.
In enterprise practice, each resource will understand a subset of the total set of data formats in use across the system and a subset of the methods. At the same time, the duck typing approach used by HTTP means that error responses can be generated for requests that are not understood.
Another of REST's constraints is layering. Again, this is pretty obvious to anyone who has set up a proxy server. However, it is often forgotten in bespoke interfaces. The central requirement of layering is that the whole request is capable of being passed from one component to another without changing its meaning or having its identifiers truncated. For example, the domain name is not removed from a URL just because the message has already been sent to a server responsible for the domain.
Code on demand is described as an optional constraint. Essentially, it means that servers should be able to deploy additional capabilities into clients on the fly. For example, the server could deploy some javascript on a web page that describes how to include parts of the page that might be missing. A newer client that supports the inclusion mechanism natively may safely ignore the javascript.
The final and somewhat implicit constraint of REST is hyperlinking. It is contained in Roy's esoteric statement of "Hypermedia as the engine of application state". Once a uniform interface is established it stops making sense to differentiate between different types of objects. It doesn't make sense to have separate identifier schemes. Instead, the single scheme of the Uniform Resource Locator (URL) is used. The URL can be used to manage and contain some of the complexity associated with other identification schemes. Parsers typically do not interpret URLs as data, or construct them from parts. A URL is typically treated as a definite whole.
You can read more about these constraints in Roy Fielding's doctoral dissertation, the document that defines the REST architectural style. Some of these constraints have also made their way across the wall into Web-Services-based SOA systems.
In the end it is difficult to tease performance optimisation, network effects, and evolvability apart to prioritise them in your architecture. They are all crucial as the scale of your architecture increases. These issues have all been faced before by the Web and in REST we have a valuable roadmap to applying the features of the Web to our own architectures.
Metcalfe's Law suggests that the value of a telecommunications network increases faster than the growth in size of that network. Metcalfe suggests that the value is proportional to the square of the number of participants, and reasons this out by drawing direct lines from each participant to every other participant. The law suggests that every connection that can be made directly between two participants is able to be assigned a fixed amortised value.
Variations of Metcalfe's Law exist, suggesting that Metcalfe both underestimated and overestimated the network's value. The basic argument stacks up, however. The World-Wide Web becomes more valuable each time a Web site is added, and again each time a client is added.
Metcalfe's Law needs some reexamination when it comes to network effects between pure software components. The law makes sense on the Web, where a human can interpret the content of any Web site that is written in a natural language the user understands. Sites written in different languages don't necessarily carry the same network effects. One might suggest that there is an English-speaking Web and a Japanese-speaking Web, as well as Webs in many other languages. If not for multilingual individuals and machine translation of Web sites these two Webs would not share network effects at all.
The same fault in Metcalfe's Law applies to pure machine-to-machine communications. We cut our network effects back to zero each time we introduce a new type of message to an architecture. Components that understand the message type share network effects with each other, but not with components that communicate using different messages. This suggests that a traditional Service-Oriented Architecture (SOA), or a database Extract/Transform/Load approach based on point-to-point integration between components, does not exhibit significant network effects. Network effects are not achieved between different interfaces even when we use a common messaging structure like SOAP.
One possible way of applying Metcalfe's Law to client-server systems is to draw connections between clients through the server. This reasoning is used to justify the network effects of social networking sites on the Web. It suggests that clients who use the same site can communicate through the site, and are therefore connected. If a single service has a large number of clients in an SOA, this may introduce network effects. However, any such service must offer a means for clients to connect to and communicate with each other for this line of reasoning to hold up.
Ultimately, what we are looking for in order to add significant value to our networks is for new client code not to have to be written when a new service is added to the network. Likewise, new server code should not have to be written when a new client is added to the network.
Any messages not understood by all components are part of a different, possibly overlapping, network. Uniform messaging is the outcome of a social process of agreement between architecture participants. This outcome can only emerge from a shared economic incentive to converge on a uniform set of messages.
This convergence on a uniform set of messages will typically only occur to solve a specific problem set. The next problem set will likely require its own special vocabulary and its own separate set of uniform messages. The goal in uniform messaging is to reduce the number of unique message types in use to a state of natural complexity. As software architecture evolves through the twenty-first century we should expect to see dozens of important problems being solved with uniform messaging schemes on the wider Web. Within your organisation you will likely have a similar number emerge to deal with special needs and vocabulary.
Standards become more important as your architecture becomes more connected with other architectures. They reflect consensus by architecture professionals on various aspects of the uniform interface, and are often substantiated by a large installed base. REST is an architectural style that can at an abstract level be applied to a range of technologies. However, the uniform messaging protocol of the Web is hard to ignore if you anticipate your architecture ever interacting with other architectures outside of your control.
If you already have a messaging system internal to your architecture there is probably little value up-front in switching to HTTP. The sensible approach initially is to build bridges so that you can easily interact with HTTP-based systems. There may be several ways to approach this depending on which protocols you use internally. The simplest is to define an interface within your own protocols that reflects the message structure of HTTP. An alternative employed by WS-Transfer is to define a fairly high-level view of HTTP and depend on compatible lower-level message semantics being translated correctly by any bridging service.
How far your architecture extends the uniform message set of external architectures will depend on how focused it is on internal or external interactions. A purely introspective architecture can exercise a fair degree of novelty and independence. An architecture that consists purely of Web servers for browser access is likely to conform solely to the uniform interface of the Web. An architecture that sits somewhere between these extremes is likely to be based on or accommodate actual HTTP messages, but introduce new content types not found on the Web and potentially even new kinds of client/server interaction not found in HTTP.
Principle #: A REST architecture should be controlled through a centralised registry. This REST Registry may explicitly relate itself to other registries your architecture may be in contact with. The registry should specify at least the set of special content types and special interactions your architecture supports. This registry is distinct from and master to any particular service's interface control documentation.
The picture is complicated again when architectures somewhere between the scale of a single enterprise and the scale of the Web are introduced. It is likely that a particular component in your architecture will need to exchange messages with components from several other architectures. In this kind of multi-party environment it is important that the set of uniform message sets across the architectures are as consistent as possible to avoid duplication of effort.
Earlier in this section I described the Web as a beachhead for our own software architectures. Beachhead is a military term associated with invasion by sea. The idea is that a small elite force mark out and defend an area along the enemy coastline while a larger force masses behind them and prepares for the main push inland.
The Web is our beachhead. It clearly demonstrates the possibilities open to software architects and defends that position against other forces that demonstrably cannot achieve the same scales or efficiencies. Most importantly, it provides us with a uniform messaging technology that can be built upon without controversy. No matter which architecture we interact with, we should be able to agree that the HTTP "GET" interaction is the right way to perform one-off server-to-client state transfers. Likewise, we can point to the successes of the Web to separate the wheat from the chaff in terms of architectural advice. If it can be seen working on the Web, it is probably a good idea. Advice should be questioned when it does not reflect the way the Web works, even when it appears to be in line with REST thinking or REST constraints.
Hyperlinking is a key concept of the Web. It means that a URL for any service can be provided to a client, and the client should be able to interact with the associated resource. Hyperlinks should be found in the configuration of automated clients and in content retrieved from services.
Hyperlinking with structured content (including structured configuration files) frees service providers in a number of ways. It frees them from having to offer all of the functionality of the architecture from a single service. It also allows different services that offer similar functionality independence in how they structure their resources.
Consider a client that always adds "/atom" to the end of a URL to get a news feed for the resource. While this might work for the service the client was originally designed for, the client won't be able to assume the existence of this extension in other services. One particular service might completely delegate the handling of feeds to an external service. Another might not support the concept at all. Yet another might have a different concept of what should be supplied at the atom URL and return the wrong content to the client.
Principle #: Prefer hyperlinking with structured documents over URL construction by clients.
The structured document in the atom case might be some xhtml with a link tag:
<link rel="alternate" type="application/atom+xml" href="atom"/>
This allows us to deal with the alternate URL schemes described earlier: a resource that delegates its feed to an external service can provide an absolute URL to the appropriate resource. Resources that do not support feeds, or that have a different use for the /atom URL, can provide either no link element or a correct one pointing elsewhere.
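A sketch of a client following that link element rather than guessing at "/atom", using the Python standard library and a made-up page URL:

import xml.etree.ElementTree as ET
from urllib.parse import urljoin

page_url = "https://service.example.com/reports/weekly"
page = ('<html xmlns="http://www.w3.org/1999/xhtml"><head>'
        '<link rel="alternate" type="application/atom+xml" href="atom"/>'
        '</head><body/></html>')

ns = {"x": "http://www.w3.org/1999/xhtml"}
for link in ET.fromstring(page).findall(".//x:link", ns):
    if (link.get("rel") == "alternate"
            and link.get("type") == "application/atom+xml"):
        # The href may be relative or absolute; either way the client just follows it.
        print(urljoin(page_url, link.get("href")))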
All structured documents of this kind should be standard, and controlled in the REST Registry. For example, the rel="alternate" and type="application/atom+xml" combination should be documented as meaning that an atom feed can be found at the link target. There is no point introducing structured document types that only specific clients can understand, or that are only standard across a single service. Use of special-case structured documents limits network effects.
A simple GET request to a known URL is not a complicated interaction. However, many interactions require data to be submitted to the server. On the Web these interactions are usually realised with forms.
There are two common kinds of forms on the Web: A GET form, and a POST form. Both begin with the client downloading a form document and rendering it to a user. The form contains fields that can be populated by a human, and human-readable text or iconography is placed alongside these fields in the form to instruct the user as to the correct way to submit their data.
Form population is a concept that is difficult to carry over from the human Web to the machine Web. Forms assume a human in the loop who can read the form and instruct the client how to fill it out correctly. In a machine-only Web the client must know ahead of time what data to submit and in what format.
Let's examine the two main kinds of Web form:
A GET form is a query that constructs a URL from a base URL and a set of parameters. This is a form of hyperlinking, where the service owner mass-publishes URLs that are valid to issue GET requests to.
In the machine-only Web the form cannot be understood, so the client must construct the URL from the base URL and parameters in a standard way. If we draw from Principle 2, that new client code should not be written when new services are added to the network, we see that the query part of a URL should follow this standard construction across all similar base URLs in the network. This standard construction is in direct contrast with the freedom that more direct forms of hyperlinking afford the service owner in the structure of their URLs.
Principle #: URL construction by clients should be limited to filling out the query part of a URL in standard ways. The structure of these query parts should be controlled in the REST Registry.
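A sketch of this limited kind of construction, with hypothetical parameter names, where the client only fills out the query part against a base URL published by the service:

from urllib.parse import urlencode

def search_url(base_url, terms, page=1):
    # Only the query part is constructed, and only in a standard way.
    query = urlencode({"q": terms, "page": page})
    return f"{base_url}?{query}"

print(search_url("https://catalogue.example.com/search", "signal lamps", page=2))
# https://catalogue.example.com/search?q=signal+lamps&page=2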
POST forms carry similar implications for the machine-only network. Clients cannot interpret forms, so must submit standard content. POST itself has problems on the Web. You will often see sites that are adamant that clients should not submit a POST form twice, lest they accidentally submit duplicate purchase orders. In a machine-only world we require robust recovery mechanisms from failure. One particular failure mode is where a request times out from client to server. For a client to correctly recover, it must be able to reliably retry its request.
To solve this problem we step outside of the Web envelope slightly, but remain firmly within the HTTP messaging envelope. We use an alternative method called PUT. While POST can carry a wide range of semantics to a server, PUT narrows the semantics down by guaranteeing idempotent behaviour to clients.
Principle #: Prefer the PUT interaction over the POST interaction. Make sure your PUT is always idempotent.
Idempotency is the ability to submit a request twice, and only have the effect of submitting it once. This is exactly what we want when we don't know whether the first request succeeded or not. We can think of PUT as directly symmetrical with GET. GET transfers data from server to client, and we expect that repeating the GET will return the same data as only doing it once. PUT transfers data from client to server, and we expect that repeating the PUT will set the server to the same state whether the operation is enacted once or multiple times. For example, we can PUT the lightbulb state to "on" as often as we like. At the end of the series of PUT requests it will always be on, regardless of how many requests we issue.
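A sketch of the recovery behaviour this buys us: if a PUT times out, the client can simply send the identical request again (the host here is the hypothetical lightbulb from earlier):

import http.client
import socket

def put_state(host, path, value, attempts=3):
    for _ in range(attempts):
        try:
            conn = http.client.HTTPSConnection(host, timeout=5)
            conn.request("PUT", path, body=value,
                         headers={"Content-Type": "text/plain"})
            response = conn.getresponse()
            response.read()
            return response.status
        except (socket.timeout, OSError):
            continue    # safe to retry: repeating the PUT cannot double its effect
    raise RuntimeError("gave up after repeated timeouts")

# put_state("lightbulb.example.com", "/on", "true")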
REST's approach to capturing semantics is different from the Remote Procedure Call (RPC) model common in SOA today. RPC places almost all semantics in the method of an operation, which can have preconditions and postconditions, service level agreements, and so forth attached to it. REST splits the semantics of each interaction into three components. Most of REST theory can be boiled down to using URLs for identification, using the right transport protocol, and transporting the right data formats over it.
REST's interaction semantics can still be seen in its methods. We know from seeing a HTTP PUT request that information is being transferred from client to server. We can infer information such as cacheability from a completed HTTP GET operation. However, this is clearly not the whole interaction.
The next level of semantics is held in the document being transferred. Perhaps it is a purchase order, which can be interpreted in the same way regardless of the interaction context. Was the purchase order PUT to my service to indicate that the client wants to make a purchase, or did I GET the purchase order from a storage area? Either way, I can process the purchase order and make use of it in my software.
This leads to the final place where semantics are captured in REST's uniform interface: The context of the interaction, in particular the URL used. I could PUT (text/plain, "1") to my mortgage calculator program, and it might adjust my repayments based on a one year honeymoon rate. I could issue the same PUT (text/plain "1") to the defence condition URL at NORAD and trigger world war three.
This variability in possible impact is a good thing. It is a sign that by choosing standard interactions and content types we can make the protocol get out of the way of communication. Humans can make the decisions about which URL a client will access with its operations. Humans can make the decision about how to implement both the client and server in the interaction. Some shared understanding of the interaction must exist between the implementors of client and server in order for the communication to make sense, but the technology does not get in the way.
When you build your greenhouse abatement scheduling application, it won't just be able to turn the lights off at night. It will be able to turn off the air conditioning as well. When you build your stock market trending application it will be able to obtain information from multiple markets, and also handle commodity prices. Chances are, you'll be able to overlay seasonal weather forecast reports that use the same content types as well.
Moving the semantics out of the method might feel to some like jumping out of a plane without a parachute, but it is more like using a bungee rope. The loose coupling of REST means that applications work together without you having to plan for it quite as much, and that the overall architecture is able to evolve as changes are required.
The introduction of a transport protocol is a key innovation of the Web over earlier telemetry-style protocols. It defines a set of messages that can be used to transfer data from one place to another. These messages identify the format the data is in, but the definition of each data format is part of a separate specification.
The importance of this distinction can be seen when we look back at telemetry protocols of the past. They defined the whole protocol in one specification. When changes to data formats were needed, whole protocols were superseded. Using a transport protocol allows us to alter the data formats in use over time without throwing away existing code that is agnostic to data format.
The essential operations of a transport protocol are a request that transfers data from an identified resource to the client (GET, in HTTP terms), and a request that transfers data from the client to an identified resource (PUT).
These operations should be defined in a way that ensures they are safe to repeat if they time out. In other words, they should be "idempotent". Operations may be conditional, and various forms of these operations may exist to improve the performance or efficiency over a naive specification.
Reducing all communication down to variations on two essential operations may seem like a big change to how you are doing things now. However, if you look through the list of methods on your existing IDL-defined interfaces I think you'll find most match well to this scheme. In some cases you'll have to redefine your methods around idempotent equivalents in order to make use of these standard methods.
For many developers, the closest precedent for expressing functionality in this way will be Java Beans. Java Beans use a naming convention to achieve something similar to a transport protocol. The methods obj.setName("foo") and name = obj.getName() could be automatically translated to their REST equivalents: PUT obj/Name text/plain "foo", and GET obj/Name Accept: text/plain.
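As a sketch of that mapping in Python rather than Java (the host, object path, and property name are all hypothetical), a single small class can expose any Bean-style property as GET and PUT of text/plain:

import http.client

class RemoteProperty:
    """Exposes one property of a remote object via GET and PUT of text/plain."""

    def __init__(self, host, object_path, name):
        self.host = host
        self.path = f"{object_path}/{name}"

    def get(self):
        conn = http.client.HTTPSConnection(self.host)
        conn.request("GET", self.path, headers={"Accept": "text/plain"})
        return conn.getresponse().read().decode()

    def put(self, value):
        conn = http.client.HTTPSConnection(self.host)
        conn.request("PUT", self.path, body=value,
                     headers={"Content-Type": "text/plain"})
        return conn.getresponse().status

# name = RemoteProperty("objects.example.com", "/obj", "Name")
# name.put("foo")        # the REST equivalent of obj.setName("foo")
# print(name.get())      # the REST equivalent of name = obj.getName()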
Reducing the set of interactions to GET and PUT is a way of reducing the complexity of network communications back to a natural level. Your existing IDLs may contain hundreds of methods that effectively amount to GET or PUT.
In some cases you might have to bend over backwards to make this concept stick. For these cases, transport protocols may allow you an escape valve back to ad hoc messaging. However, architects should be aware of the coupling that ad hoc messaging creates between components that understand the special messages before they dive headlong into it.
We can talk in abstract terms about identifier schemes in REST, but the hard work has already been done in this area. Little controversy remains around the use of URLs for identification in architectures that use the Internet Protocol. If any does, it is in whether additional context information needs to be transported along with the URL.
You are a software architect, or perhaps a systems engineer looking to get your hands dirty with REST. REST architecture isn't fundamentally different to the Service-Oriented Architecture you may be used to. The tools you will use to manage the architecture may differ slightly, and this section will discuss these tools in detail. However, you shouldn't be looking to create a completely different set of applications just because you are doing REST. REST is an interfacing technology. It does not alter the set of applications you intend to produce, except to allow some bespoke tools to be replaced by non-proprietary equivalents.
In this section I will describe an approach to using UML with REST architecture, will discuss the use of a REST Registry, and will walk through how to apply REST in your day to day software experience.
Perhaps the key conclusion of IEEE-Std-1471-2000 "Recommended Practice for Architectural Description of Software-Intensive Systems" is that architectural descriptions should be broken into views. These views should conform to the viewpoints of relevant stakeholders. Attempting to meet the expectations of all stakeholders in a single diagram or a single type of diagram is likely to result in unreasonable complexity for any particular stakeholder.
Philippe Kruchten's "4+1 View" Architecture is an example of a set of views that meet a set of common expectations. Philippe breaks his model into four main views, plus an additional view to tie things together and represent the customer's viewpoint. He includes:
Various attempts have been made to map these views to UML diagrams or to specific architectural description techniques. I will skip Philippe's process view, but attempt to apply the other views both to UML diagramming and to a distributed REST software architecture.
Philippe's Logical View is intended to describe the design's object model. This makes it an easy fit to the UML Class Diagram. I like to align the UML Class concept in the logical view to two main kinds of component: services and URL templates.
The purpose of this view from my perspective is to ensure that the components of an architecture will be able to talk to each other. All communication between REST services occurs via URLs. This means that URLs should feature prominently in any architectural description designed to constrain and manage these interactions. Services and clients are drawn as either realising a specific URL Template, or depending on it. All URL templates should have at least one dependency relationship and usually one realisation relationship.
It is useful for the Logical View of the system to incorporate not only the set of URLs exposed in the system, but also the exposed methods and the content types being exchanged. Including these facets in the Logical View provides the basic level of detail needed to ensure that messages exchanged between client and server are understood. More detailed explanations of special content types and of the meaning of each interaction with a specific resource should also be made clear somewhere in the architectural description. Being explicit about the set of URLs enables configuration to be developed for clients to interact with those URLs. Being explicit about the set of methods ensures that the possible directions of data flow between client and server will be understood. Being explicit about the content types ensures that the schema of information transferred between client and server is dealt with early in the development cycle.
I use UML methods to model the set of methods and content types supported by each URL Template class. I list a GET method with a return type specifier for each content type that can be returned from the URL. I list a PUT method with a single type-specified parameter for each content type that can be stored at the URL. I list a DELETE method with no parameters if DELETE is supported on the URL.
It should be easy to extract a list of content types from the model, and to compare them for opportunities to rationalise the list. It is likely that most content types will be XML-based. A specification should be attached to the model for each content type, XML-based or otherwise. Schemas may be used as part of the specification; however, the specification should make clear any must-ignore requirements you are relying on to build an evolution path for each content type. It is usually inappropriate to validate input directly against the schema, as this may cause messages conforming to future versions of the specification to be rejected inappropriately.
The Logical view can optionally include both REST and non-REST services. Services with opaque protocol connections are drawn with UML2-style ball-and-socket connectors. Services with more of an SOA style can identify network-exposed objects in the same way as REST services identify their own resources. The main difference between SOA-style objects and REST-style resources in the logical view is that the SOA-style objects will have a greater diversity of methods and parameters to those methods.
The function of a development view is to describe how software appears in the factory as it is being developed. It should identify components that relate both to a specific set of source code files, and a final deliverable such as a software package. This concept relates closely to what is known in the open source software world as a "project".
Components can be classified into Parts and Configuration Items. Parts are sourced from internal or external suppliers, and carry with them their own versioning regime. A Configuration Item fulfils a function for the customer. Traces should be present from modules in the logical view to components in the development view. Functional requirements can also be traced to Configuration Items in the development view.
The function of a deployment view is to describe how components are deployed on target hardware. This view butts up against the hardware design, and may contain some redundant information. A trace should appear from the development view to this deployment view.