What is the “composite computing model,” you ask? The most straightforward definition we’ve found is:
An architecture that uses a distributed, discovery-based execution environment to expose and manage a collection of service-oriented software assets.
A software asset is nothing more than a piece of business logic; it can be a component, a queue, or a single method that performs a useful function and that you decide to expose to the outside world. Like the client-server and n-tier computing models, the composite computing model represents the architectural principles governing the roles and responsibilities of its constituents. It was designed to solve a specialized group of business problems that have the following requirements:
Dynamic discovery of the business logic’s capabilities
Separation between the description of the business logic’s capabilities and its implementation
The ability to quickly assemble impromptu computing communities with minimal coordinated planning efforts, installation procedures, or human intervention
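The three requirements above can be sketched in miniature. The sketch below is purely illustrative (all names, such as `Registry` and `ServiceDescription`, are hypothetical, not part of any particular platform): a service advertises a description of its capability separately from its implementation, and a client discovers and invokes it at runtime with no prior coordination.

```python
class ServiceDescription:
    """What the asset can do -- kept separate from how it does it."""
    def __init__(self, name, capability):
        self.name = name
        self.capability = capability  # e.g. "currency-conversion"


class Registry:
    """A discovery-based execution environment in miniature."""
    def __init__(self):
        self._services = []

    def publish(self, description, implementation):
        """Expose a software asset to the outside world."""
        self._services.append((description, implementation))

    def discover(self, capability):
        """Dynamic discovery: match on the described capability,
        not on a compile-time type the client must already know."""
        return [impl for desc, impl in self._services
                if desc.capability == capability]


# An impromptu computing community: publish an asset, then find and use it.
registry = Registry()
registry.publish(ServiceDescription("fx", "currency-conversion"),
                 lambda usd: usd * 0.92)  # implementation is opaque to clients

for convert in registry.discover("currency-conversion"):
    result = convert(100.0)  # the client learned of this asset only at runtime
```

Because the client queries by described capability rather than by a known class or address, new assets can join the community, and be used in ways their authors never anticipated, without any coordinated planning or installation step.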
The computing industry has been moving toward this model for some time now; much of the last decade has been devoted to defining and refining distributed-computing technologies that allow you to look up components on the fly, discover a component’s interface at runtime, and build applications from components on an ad-hoc basis, often using components in ways that weren’t anticipated when they were developed. ...