I was just watching a video of Stefan Tilkov speaking at the BeJUG SOA conference. I have seen most of this material before, but this time I wanted to comment on slide 31.
The original slide compares REST to "Technical" SOA ((T)SOA) by placing two SOA-style interface definitions beside five URLs conforming to a uniform interface. One implication that could be drawn from this diagram is that REST fundamentally changes the structure of the architecture. My view is that the change isn't fundamental. I see REST as simply tweaking the interface to achieve a specific set of properties.
Following is my diagram. Apologies for its crudeness; I don't have my regular tools at hand:
Some differences from Stefan's model:
- I include a client and the software modules that implement the two services. In my REST model, the services are retained: Only their interface to the client changes.
- I keep the URLs of the two services under separate authorities.
- I use only safe and idempotent methods.
- I use the query part of a URL whenever I expect an automated client to insert parameters as part of a URL.
- I don't provide all valid methods on every URL.
- I name some media types in my model, and would in any real architectural description. In this particular case the comparison isn't entirely fair, because neither Stefan nor I included an equivalent level of detail in the SOA interfaces.
Separate domain names
In the business I am in we might use the word "subsystem" instead of "service", taking a military-style systems engineering approach. The client would also be, or be part of, a subsystem. It is useful to be able to define and control the interface between subsystems separately from the definition and control of interfaces within each subsystem. Stefan puts the URLs for the two services under one authority, but I use a separate authority for each service/subsystem (orders.example.com and customers.example.com). The definition of these URL-spaces would be controlled and evolve separately over time.
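A minimal sketch of what separate authorities buy you: each subsystem owns its own base URL, so each URL-space can be versioned and evolved by its own team. The specific paths (`/order/...`, `/customer/...`) are my own hypothetical examples, not taken from Stefan's slide.

```python
# Each service/subsystem controls its own authority, so its URL-space
# evolves independently of the others.
ORDERS_BASE = "https://orders.example.com"        # owned by the orders subsystem
CUSTOMERS_BASE = "https://customers.example.com"  # owned by the customers subsystem

def order_url(order_id: str) -> str:
    """Build a URL inside the orders subsystem's authority (hypothetical path)."""
    return f"{ORDERS_BASE}/order/{order_id}"

def customer_url(customer_id: str) -> str:
    """Build a URL inside the customers subsystem's authority (hypothetical path)."""
    return f"{CUSTOMERS_BASE}/customer/{customer_id}"
```

Changing how orders are laid out under orders.example.com never forces a change under customers.example.com, which is exactly the separate control this section argues for.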
Safe and Idempotent methods
I use only safe and idempotent methods, meaning that I have reliable messaging built in: a client simply retries requests that time out. Reliable messaging is critical to automated software agents, and idempotency provides the simplest, most reliable, and most scalable way to get it. Note that for automated clients this may mean IDs have to be chosen on the client side. This has some obvious and non-obvious "cons".
HTTP introduces some special difficulties when it comes to reliable ordering of messages, so automated HTTP clients should ensure they don't have different PUT requests outstanding to the same URL at the same time.
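The retry discipline described above can be sketched as follows. This is an illustrative sketch, not a real HTTP client: `send` is a placeholder transport callable, and the URL and status code are invented for the example. Because PUT is idempotent, replaying the identical request after a timeout is safe, and the loop structure itself guarantees only one PUT is outstanding per URL at a time.

```python
import uuid

def reliable_put(send, url, body, max_attempts=3):
    """Retry an idempotent PUT until it succeeds or attempts run out.

    `send` is a hypothetical transport callable that raises TimeoutError
    when a request times out. Replaying the identical PUT is safe
    because a second delivery has the same effect as the first.
    """
    for attempt in range(max_attempts):
        try:
            # Each attempt completes (or times out) before the next is sent,
            # so there is never more than one PUT outstanding to this URL.
            return send("PUT", url, body)
        except TimeoutError:
            continue  # identical request, identical URL: safe to replay
    raise TimeoutError(f"PUT {url} failed after {max_attempts} attempts")

# The client chooses the resource ID up front, so every retry targets
# the same URL instead of creating a new resource each time.
order_url = f"https://orders.example.com/order/{uuid.uuid4()}"

# Demonstration with a transport that times out once, then succeeds.
calls = []
def flaky_send(method, url, body):
    calls.append((method, url))
    if len(calls) == 1:
        raise TimeoutError
    return 200  # hypothetical success status

status = reliable_put(flaky_send, order_url, b"{}")
```

The client-side ID choice is the non-obvious cost mentioned above: the client must generate the ID before the first request so that the retry is genuinely the same request.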
Query part of the URL
I use the query part of a URL whenever I expect an automated client to insert parameters as part of a URL. I know that there is a move to do this with URI templates, but I personally view the query part of the URL and its use as a good feature. It helps highlight the part of the URL that needs special knowledge somewhere in client code. Opaque URLs can be passed around without special knowledge, but where a client constructs a URL it first needs to know how. This is especially important for automated clients, which don't have a user to help them supply data to a form.
Don't supply every method
I don't provide all valid methods on every URL. Obviously, every method still gets a response in practice: if the client issues a DELETE to a URL that doesn't allow it, the request is rejected with an appropriate error. However, I don't want to complicate the architectural description with these additional no-op methods. Nor do I want developers or architects to feel that they have to provide functions that are not required. It should always be easy to describe what you would expect a GET to the /allorders URL to mean, but that doesn't mean we actually need to provide it when we don't expect any client to issue the request.
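A sketch of how "not providing" a method looks in practice: a routing table lists only the methods we actually expect clients to use, and anything else is rejected uniformly with 405 Method Not Allowed plus an Allow header advertising what is supported. The routes and method sets are hypothetical.

```python
# Hypothetical routing table: only the methods we expect clients to use.
ALLOWED = {
    "/order/{id}": {"GET", "PUT"},
    "/customer/{id}": {"GET"},
}

def dispatch(method: str, route: str):
    """Reject unsupported methods uniformly instead of implementing no-ops."""
    allowed = ALLOWED.get(route, set())
    if method not in allowed:
        # 405 Method Not Allowed, with the Allow header HTTP requires
        # so the client learns what the resource does support.
        return 405, {"Allow": ", ".join(sorted(allowed))}
    return 200, {}
```

The architectural description only needs to mention the rows of the table; the uniform 405 handling covers every other method without any per-resource code.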
Conclusion
REST doesn't have to redraw the boundaries of your services or your subsystems. It is a technology that improves interoperability and evolvability over time. It is worth doing because of the short-term and long-term cost savings and synergies. It provides a shorter path to translating your high-level data-flow diagrams into working code, and should ultimately reduce your time to market and improve your business agility. That said, it needn't erode your existing investments, and from the high level isn't really a big change. In the end, the same business logic will be invoked within the same clients and services.
Benjamin