Enterprise Architecture Management in Business Networks

In my last blog post, I wrote about multi-cloud scenarios for enterprise applications, focusing on the applications of a single company distributed over several different cloud providers. This blog post is about enterprise applications connecting the data, processes and organizations of different companies within business networks. Particularly complex scenarios with high competition and margin pressure, such as third-party logistics (3PL), require a sophisticated approach to secure and extend competitive advantages. We will see the challenges that arise when applying reference models, such as EDIFACT, ASC X12 or SCOR. Nevertheless, I see reference models – or more precisely their combination – as key success factors for business networks, since they represent best practices and a common understanding, and they can significantly improve the on-boarding as well as the continuous education of new business network members. Hence, I will discuss how enterprise architecture and portfolio management can support the application and combination of different reference models in business networks. Finally, I present how the emerging concept of virtual software containers can support this approach from a technology perspective.

Types of Business Networks

One interesting question is what constitutes a business network [1]. Of course, it can be predefined and agreed upon, but in many business networks there are undefined and informal relations between two companies that also have (in-)dependent relations with other companies. The whole network of relations is called a business network. This is very similar to social networks, where two related human beings have independent relations to other human beings. However, all types of business networks have some form of implicit or explicit governance, i.e. decision-making structures. Implicit governance means that the chosen governance model has not been defined or agreed upon by all parties involved in the business network. Explicit governance means that all parties in the business network are aware of the governance arrangements and have defined them.

The following generic modes of governance can exist in a business network (see next figure):

  • One organization holds most or all types of decision-making roles and the others merely have an execution role

  • A group holds most or all types of decision-making roles and the majority has only an execution role

  • Several large groups hold decision-making roles related to different aspects and the majority has only an execution role

  • Everybody has every role

[Figure: modes of governance in business networks]

Additionally, business networks may exhibit different degrees of awareness and intensity of relations. On the one hand you may have a very structured business network, such as a supply chain; on the other hand there is the free market, where two parties interact directly without considering other parties in their interaction. Both extremes are unlikely, and we will find companies across the whole spectrum. For instance, within a larger supply chain, one company may know only the direct predecessor and the direct successor company. It may merely agree on the specification of the product to be delivered, but may not share any data or impose any processes on how the product should be manufactured. This means the degree of awareness is limited and the intensity of the relation is lower, because the companies do not really know how something is achieved by the other organizations in the business network.

[Figure: contract logistics / third-party logistics scenario]

It can be observed that business networks are becoming more complex, because new types of business networks emerge, such as contract logistics or third-party logistics, where your business partners integrate dynamically into your manufacturing plant or point of sale as well as the corresponding business processes. Hence, you need to work out best practices to stay ahead of the competition. An example can be seen in the previous figure, where the third-party logistics provider has a packaging business process deployed at “Manufacturing Plant A”. This business process leverages applications and other resources within the sphere of “Manufacturing Plant A”. Besides delivery, the third-party logistics provider integrates similarly into “Manufacturing Plant B”, where it pre-assembles the parts delivered from “Manufacturing Plant A”.

Applying Reference Models for Business Networks

Reference models have existed for several decades in the area of business information systems, management and software engineering. Some are driven by academia and others by industry. Usually both kinds have been validated scientifically and in practice.

Reference models represent industry best practices for business processes, derived from experts and organizations. They can cover the process, organization/governance, product, data and/or IT application perspective within a given business domain. Hence, they can also be viewed as standards. Examples of reference models are EDIFACT, SCOR, PRINCE2 or TOGAF. These are rather generic models, but there are also industry-specific ones, such as those for humanitarian supply chain operations [2] or retailing [3].

The main benefits of reference models with respect to business networks are:

  • Support your Enterprise Architecture Management (e.g. by reduced modeling efforts, transparency or common language)

  • Benchmark against industry

  • Evaluation of applications for enabling business networks

  • Business network integration by integrating available applications in a business network

There are, however, some issues involved when using reference models:

  • They are “just” models. Having them is like having a book on a shelf – pretty useless

  • Some of them are very generic, applying to any business case or network, while others are very specific

  • Some focus on business processes (e.g. SCOR), some on business data (e.g. EDIFACT, ASC X12), nearly none on organizational/governance aspects, others on material or money flows and others combine only some of the aspects (e.g. ARIS)

  • Some do provide key performance indicators for benchmarking your performance against the reference model, but most do not

  • It is unclear how different reference models can be combined and tailored to enable business networks

  • Tools supporting definition, viewing, visualizing, expertise provisioning, publishing or adaptation of these models are not standardized and a wide variety exists

  • Tools that support monitoring how reference models are implemented in information systems consisting of technology and humans do not really exist

Reference models for business networks, such as EDIFACT, already exist and are used successfully in practice. However, in order to benefit from reference models in a business network, you will need an integrated approach addressing the aforementioned issues, as I will present in the next section.

Enterprise Architecture Management in Business Networks

Reference models are needed to achieve superior business performance and to deal with the increasing complexity of business networks. You will never have a perfect world by using only one reference model. Hence, you will need an enterprise architecture management approach for business networks that efficiently and effectively addresses the shortcomings of any single reference model by combining several of them (see next figure). Traditionally, enterprise architecture management has focused only on the single enterprise and not on business networks, but given the growing complexity of business networks and disruptive societal changes, it is mandatory to consider the business network dimension.

[Figure: combining reference models like puzzle pieces]

Establishing an enterprise architecture management approach depends on the type of business network, as explained before. For example, you may have one organization selecting and managing the reference model portfolio and application landscape for the whole business network. You may also have no single responsible organization, in which case the members need to align and be aware of each other's portfolios; a steering board can be created for this purpose. Additionally, you will need to establish key performance indicators and benchmarking processes with respect to the business network's reference model portfolio.
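
To make this a little more concrete, here is a minimal, purely illustrative Python sketch of how a business network's reference model portfolio and its benchmarking KPIs could be represented in a tool; the class names, covered perspectives and KPI targets are assumptions for illustration, not taken from any actual standard or product.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceModel:
    name: str                                  # e.g. "SCOR", "EDIFACT"
    covers: list                               # covered perspectives, e.g. ["process"], ["data"]
    kpis: dict = field(default_factory=dict)   # target KPI values for benchmarking

@dataclass
class NetworkPortfolio:
    """Reference model portfolio managed for the whole business network."""
    owner: str                                 # e.g. a steering board or a lead organization
    models: list = field(default_factory=list)

    def benchmark(self, measured: dict) -> dict:
        """Gap between measured KPIs and the targets defined in the portfolio."""
        targets = {k: v for m in self.models for k, v in m.kpis.items()}
        return {k: measured[k] - target for k, target in targets.items() if k in measured}

portfolio = NetworkPortfolio(
    owner="network steering board",
    models=[
        ReferenceModel("SCOR", ["process"], {"perfect_order_fulfillment": 0.95}),
        ReferenceModel("EDIFACT", ["data"]),
    ],
)
print(portfolio.benchmark({"perfect_order_fulfillment": 0.91}))   # roughly 0.04 below target
```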

Once you have an enterprise architecture management approach leveraging combined and tailored reference models, you will have to address the aforementioned dynamics as well as the tight integration between business partners in the business network's information systems. Traditional ERP, CRM and SCM software packages will face difficulties here, because even if all partners used the same systems, there would be a huge variety of configurations reflecting the different internal business processes of the members of a business network. Additionally, you will have to manage access and provisioning over the Internet.

Cloud-based solutions already address these challenges partially. They help you understand how to manage access and governance and provide clearly defined interfaces via the emerging concept of API Management. However, these approaches do not reach far enough. You cannot dynamically move business processes and the corresponding applications and data between organizations as a package and integrate them at your business partner's premises. Furthermore, business processes may change quickly, and you want to reuse such changes and propagate them across many different organizations and their corresponding applications. This would facilitate scenarios such as “bring your own digital business process” in third-party logistics. Hence, there is still a need for further technology innovation and research.

Conclusion: Software Containers for “Bring your own Digital Business Process”

We have seen that new complex scenarios in business networks, such as third-party logistics, together with high competition, tight network integration and dynamics, impose new challenges. Instant business network adaptation as well as tight integration between business partners will be a key differentiator between business networks and will ultimately decide their success. Reference models representing industry best practice need to be combined and tailored at the business network level to achieve the network's goals. However, no silver bullet exists, so you will also need to enable enterprise architecture management at the business network level. Finally, you need tools that enable the dynamic movement of business processes as well as applications between different organizations in a business network, based on a coherent and reusable approach.

Unfortunately, such tools do not exist at the moment, but there are some first approaches which you should investigate in this context. Docker can create containers consisting of digital business process artifacts, applications, databases and more. These containers can be sent over the business network and easily be integrated with containers existing in other organizations. Hence, the vision of instant dynamic business network adaptation might not be as far-fetched as we think. The next figure illustrates this idea: the third-party logistics provider sends the containers “Packaging” and “Pre-Assembly” to its business partners. These containers consist of applications supporting the corresponding business processes. They are executed in the business partners' clouds and integrate with the existing business processes and applications there (e.g. the ERP system). Employees of the third-party logistics provider use them on the business partner's site. The containers are executed on the business partner's side because the physical business process takes place there anyway; letting the digital part happen there as well avoids sending large amounts of data back and forth over the network and reduces gaps in application integration.

[Figure: business process containers shipped across the business network]
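
As a rough sketch of how this “bring your own digital business process” idea could be implemented with today's tooling, the following Python snippet uses the Docker SDK for Python to push a hypothetical “Packaging” process image to a registry and run it in the business partner's cloud; the registry address, image name and ERP endpoint are made up for illustration, not part of any existing product.

```python
import docker

client = docker.from_env()

# Third-party logistics provider side: build the "Packaging" process container
# and push it to a registry reachable by the business partner (hypothetical names).
client.images.build(path="./packaging-process", tag="registry.3pl.example/packaging:1.0")
client.images.push("registry.3pl.example/packaging:1.0")

# Business partner side ("Manufacturing Plant A"): pull the container and run it
# in the local cloud, pointing it at the plant's existing ERP system.
client.images.pull("registry.3pl.example/packaging:1.0")
client.containers.run(
    "registry.3pl.example/packaging:1.0",
    detach=True,
    name="packaging-process",
    environment={"ERP_ENDPOINT": "https://erp.plant-a.example/api"},  # hypothetical integration point
)
```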

References

[1] Harland, C.M.: Supply Chain Management: Relationships, Chains and Networks, British Journal of Management, Volume 7, Issue Supplement s1, pp. 63-80, 1996.

[2] Franke, Jörn; Widera, Adam; Charoy, François; Hellingrath, Bernd; Ulmer, Cédric: Reference Process Models and Systems for Inter-Organizational Ad-Hoc Coordination – Supply Chain Management in Humanitarian Operations, 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM’2011), Lisbon, Portugal, 8-11 May, 2011.

[3] Becker, Jörg; Schütte, Reinhard: A Reference Model for Retail Enterprise, Reference Modeling for Business Systems Analyses, (eds.) Fettke, Peter; Loos, Peter, pp. 182-205, 2007.

[4] Verwijmeren, Martin: Software component architecture in supply chain management, Computers in Industry, 53, p. 165-178, 2004.

[5] Themistocleous, Marinos; Irani, Zahir; Love, Peter E.D.: Evaluating the integration of supply chain information systems: a case study, European Journal of Operational Research, 159, p. 393-405, 2004.


Scenarios for Inter-Cloud Enterprise Architecture

The unstoppable cloud trend has reached end users and companies. End users in particular openly embrace the cloud, for instance by using services provided by Google or Facebook. Companies are more cautious, fearing vendor lock-in or the exposure of confidential business data, such as customer records. Nevertheless, for many scenarios the risk can be managed and is accepted, because the benefits, such as scalability, new business models and cost savings, outweigh the risks. In this blog entry, I will investigate in more detail the opportunities and challenges of inter-cloud enterprise applications. Finally, we will have a look at technology supporting inter-cloud enterprise applications via cloud-bursting, i.e. enabling them to be extended dynamically over several cloud platforms.

What is an inter-cloud enterprise application?

Cloud computing encompasses all means to produce and consume computing resources, such as processing units, networks and storage, existing in your company (on-premise) or on the Internet. Particularly the latter enables dynamic scaling of your enterprise applications, e.g. when you suddenly get a lot of new customers but do not have the necessary resources to serve them all using your own computing infrastructure.

Cloud computing comes in different flavors and combinations of them:

  • Infrastructure-as-a-Service (IaaS): Provides hardware and basic software infrastructure on which an enterprise application can be deployed and executed. It offers computing, storage and network resources. Example: Amazon EC2 or Google Compute.
  • Platform-as-a-Service (PaaS): Provides on top of an IaaS a predefined development environment, such as Java, ABAP or PHP, with various additional services (e.g. database, analytics or authentication). Example: Google App Engine or Agito BPM PaaS.
  • Software-as-a-Service (SaaS): Provides on top of an IaaS or PaaS a specific application over the Internet, such as a CRM application. Example: SalesForce.com or Netsuite.com.

When designing and implementing or buying your enterprise application, e.g. a customer relationship management (CRM) system, you need to decide where to put it in the cloud. For instance, you can run it fully on-premise or put it on a cloud in the Internet. However, different cloud vendors exist, such as Amazon, Microsoft, Google or Rackspace, and they offer different flavors of cloud computing. Depending on the design of your CRM, you can put it on an IaaS, PaaS or SaaS cloud or a mixture of them. Furthermore, you may only put selected modules of the CRM in the Internet cloud, e.g. a module for doing anonymized customer analytics. You will also need to think about how this CRM system is integrated with your other enterprise applications.
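
As a small, purely illustrative sketch of how such a placement decision could be written down, the following Python dictionary maps hypothetical CRM modules to a cloud layer and deployment target; the module names and targets are assumptions, not a recommendation.

```python
# Hypothetical placement decision: which cloud layer and target hosts each CRM module.
crm_placement = {
    "customer_master_data": {"layer": "IaaS", "target": "on-premise private cloud"},
    "anonymized_analytics": {"layer": "IaaS", "target": "public IaaS provider"},
    "analytics_ui":         {"layer": "PaaS", "target": "public PaaS provider"},
    "payment_processing":   {"layer": "SaaS", "target": "external SaaS service"},
}

def modules_leaving_premises(placement: dict) -> list:
    """Modules that leave your own infrastructure and therefore need extra
    risk, security and integration considerations."""
    return [m for m, p in placement.items() if "on-premise" not in p["target"]]

print(modules_leaving_premises(crm_placement))  # everything except the master data
```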

Inter-Cloud Scenario and Challenges

Basically, the exemplary CRM application runs partially in the private cloud and partially in different public clouds. The CRM database is stored in the private cloud (IaaS); some (anonymized) data is sent to public clouds on Amazon EC2 (IaaS) and Microsoft Azure (IaaS) for number-crunching analysis. Paypal.com is used for payment processing. Besides customer data and buying history, the database contains sensor information from different points of sale, such as how long a customer stood in front of an advertisement. Additionally, the sensor data can be used to trigger actuators, such as posting on the shop's Facebook page what is currently trending, using the cloud service IFTTT. Furthermore, the graphical user interface presenting the analysis is hosted on Google App Engine (PaaS). The CRM is integrated with Facebook and Twitter to enhance the data with social network analysis. This is not an unrealistic scenario: many (grown) startups already deploy a similar setting, and established corporations experiment with it. Clearly, this scenario lends itself to cloud-bursting, because the cloud is used heavily.

I present in the next figure the aforementioned scenario of an inter-cloud enterprise application leveraging various cloud providers.

[Figure: inter-cloud enterprise application architecture across private and public clouds]
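
The scenario states that only (anonymized) data leaves the private cloud for the number-crunching jobs. Below is a minimal sketch of one possible pseudonymization step before upload, assuming hypothetical field names and a salted hash as the anonymization technique; it is not the method used in any particular product.

```python
import hashlib
import json

def pseudonymize(record: dict, secret_salt: str) -> dict:
    """Replace directly identifying fields with a salted hash before the record
    leaves the private cloud; keep only the attributes needed for the analysis."""
    token = hashlib.sha256((secret_salt + record["customer_id"]).encode()).hexdigest()
    return {
        "customer_token": token,                            # stable pseudonym, salt stays on-premise
        "buying_history": record["buying_history"],         # needed for the number crunching
        "dwell_time_s": record["sensor"]["dwell_time_s"],   # time spent in front of the advertisement
    }

record = {
    "customer_id": "C-1001",
    "name": "Jane Doe",                       # dropped before upload
    "buying_history": ["sku-17", "sku-93"],
    "sensor": {"dwell_time_s": 42},
}
payload = json.dumps(pseudonymize(record, secret_salt="keep-me-on-premise"))
# `payload` is what would be sent to the analytics jobs on the public IaaS clouds.
```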

There are several challenges involved when you distribute your business application over your private and several public clouds.

  • API Management: How do you describe different types of business and cloud resources so you can make efficient and cost-effective decisions about where to run the analytics at a given point in time? Furthermore, how do you represent different storage capabilities (e.g. in-memory, on-disk) in different clouds? This goes further up to the level of the business application, where you need to harmonize or standardize business concepts, such as “customer” or “product”. For instance, a customer described in “Twitter” terms is different from a customer described in “Facebook” or “Salesforce.com” terms (see the sketch after this list). You should also keep in mind that semantic definitions change over time, because a cloud provider changes its capabilities, such as new computing resources, or its focus. Additionally, you may want to change your cloud provider dynamically without disrupting the operation of the enterprise application.
  • Privacy, Risk and Security: How do you articulate your privacy, risk and security concerns? How do you enforce them? While technology and standards for this already exist, the cloud setting imposes new problems. For example, if you regularly update encrypted data, the cloud provider may be able to infer parts or all of your data from the differences. Furthermore, it may maliciously change it. Finally, the market is fragmented without an integrated solution.
  • Social Network Challenge: Similar to the semantic challenge, there is the problem of semantically describing social data and doing efficient analysis over several different social networks. Users may also arbitrarily change their privacy preferences, making reliable analytics difficult. Additionally, your whole company organizational structure and the (in-)official networks within your company are already exposed in social business networks, such as LinkedIn or Xing. This further blurs the borders of your enterprise, which has to adapt by integrating social networks into its business applications. For instance, your organizational hierarchy, informal networks or your company's address book probably already exist partly in social networks.
  • Internet of Things: The Internet of Things consists of sensors and actuators delivering data or executing actions in the real world, supported by your business applications and processes. Different platforms exist to source real-world data or schedule actions in the real world using actuators. The API Management challenge exists here as well, but it goes even further: you create dynamic semantic concepts and relate your Internet of Things data to them. For example, you have attached an RFID tag and a temperature sensor to your parcels. Their data needs to be added to the information about your parcel in the ERP system. Besides the semantic concept “parcel” you also have that of a “truck” transporting your “parcel” to a destination, i.e. you have additional location information. Furthermore, the parcel may be stored temporarily in a “warehouse”. Different business applications and processes may need to know where the parcel is. They do not query the sensor data directly (e.g. “give me data from tempsen084nl_98484”), but rather formulate a query such as “list all parcels in warehouses with a temperature above 0 °C” or “list all parcels in transit”. Hence, Internet of Things data needs to be dynamically linked with business concepts used in different clouds. This is particularly challenging for SaaS applications, which may have different conceptualizations of the same thing.
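
To illustrate the harmonization problem mentioned under API Management, here is a minimal Python sketch that maps provider-specific “customer” representations onto one canonical business concept and enriches it across sources; the field names are simplified stand-ins, not the actual Twitter or Salesforce.com schemas.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Canonical business concept used across the inter-cloud application."""
    display_name: str
    handles: dict            # provider -> provider-specific identifier

# The field names below are simplified stand-ins, not the real provider schemas.
def from_twitter(obj: dict) -> Customer:
    return Customer(obj["name"], {"twitter": obj["screen_name"]})

def from_salesforce(obj: dict) -> Customer:
    return Customer(obj["Name"], {"salesforce": obj["Id"]})

def merge(a: Customer, b: Customer) -> Customer:
    """Enrich one representation with the identifiers known to the other."""
    return Customer(a.display_name, {**a.handles, **b.handles})

customer = merge(from_twitter({"name": "Jane Doe", "screen_name": "jdoe"}),
                 from_salesforce({"Name": "Jane Doe", "Id": "0015000000XyZAB"}))
print(customer)   # one canonical customer with both provider identifiers attached
```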

Enterprise Architecture for Inter-Cloud Applications

You may wonder how you can integrate the above scenario into your application landscape at all, and why you should do it in the first place. The basic promise of cloud computing is that it scales according to your needs and that you can outsource infrastructure to people who have the knowledge and capabilities to run it. Particularly small and medium-sized enterprises benefit from this and from the cost advantage. It is not uncommon that modern startups start their IT in the cloud (e.g. FourSquare).

However, large corporations can also benefit from the cloud, e.g. as “neutral” ground for a complex supply chain with many partners, or to ramp up new innovative business models where the outcome is uncertain.

Be aware that in order to offer a cloud-based solution, you first need a solid level of maturity in your enterprise architecture. Without it you are doomed to fail, because you cannot perform proper risk and security analysis, scale appropriately, or benefit from cost reductions and innovation.

I propose in the following figure an updated model of the enterprise architecture with new components for managing cloud-based applications. The underlying assumption is that you already have an enterprise architecture, more particularly a semantic model of business objects and concepts.

[Figure: enterprise architecture with new components for inter-cloud applications]

  • Public/Private Border Gateway: This gateway is responsible for managing the transition between your private cloud and different public clouds. It may also deploy agents on each cloud to enable secure direct communication between different cloud platforms without the need to go through your own infrastructure. You might have more fine-grained gateways, such as private, closest supplier and public. A similar idea came to me a few years ago when I was working on inter-organizational crisis response information systems. The gateway does not only work on the lower network level, but also on the level of business processes and objects. It is business-driven: depending on business processes and rules, it decides dynamically where the borders should be set. This may also mean that different business processes have access to different things in the Internet of Things.
  • Semantic Matcher: The semantic matcher is responsible for translating business concepts from and to the different technical representations of business objects in different cloud platforms. This can mean simple transformations of mismatched data types, but also the enrichment of business objects from different sources. This goes well beyond current technical standards, such as EDI or ebXML, which I see as a starting point. Semantic matching is done automatically – there is no need to create time-consuming manual mappings. Furthermore, the semantic matcher enhances business objects with Internet of Things information, so that business applications can query or trigger them on the business level as described before (see the sketch after this list). The question here is how you can keep people in control of this (see Monitor) and leverage semantic information.
  • API Manager: Cloud API management is the topic of the coming years. Besides the semantic challenge, this component provides all the functionality needed to bill, secure and publish your APIs. It keeps track of who is using your APIs and what impact changes to them may have. Furthermore, it supports you in composing new business software distributed over several cloud platforms using different APIs that are subject to continuous change. The API Manager will also have a registry of APIs with reputation and quality-of-service measures. We now see a huge variety of different APIs by different service providers (cf. ProgrammableWeb). However, the scientific community and companies have not yet picked up the inherent challenges, such as the aforementioned semantic matching, monitoring of APIs, API change management and alternative API compositions. While some work exists in the web service community, it has not yet been extended to the full Internet dimension as described in the scenario here. Additionally, it is unclear how these approaches integrate the Internet of Things paradigm.
  • Monitor: Monitoring is of key importance in this inter-cloud setting. Different cloud platforms offer different and possibly very limited means for monitoring. A key challenge here will be to consolidate the monitoring data and provide an adequate visual representation so you can do risk analysis and select alternative deployment strategies on the aggregated business process level. For instance, by leveraging semantic integration we can schedule requests to semantically similar cloud and business resources. Particularly in the Internet of Things setting, we may observe unpredictable delays, which lead to the delayed execution of real-world activities, e.g. a robot is notified only after 15 minutes that a parcel has fallen off the shelf.
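
As a toy example of the business-level querying the Semantic Matcher should enable, the following sketch assumes that parcels have already been enriched with sensor readings and location concepts, so the query from the earlier Internet of Things challenge (“list all parcels in warehouses with a temperature above 0 °C”) can be expressed over business objects instead of raw sensor identifiers; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Parcel:
    """Business object enriched by the semantic matcher with IoT readings."""
    parcel_id: str
    location_kind: str        # "warehouse", "truck", ...
    temperature_c: float      # linked from a raw sensor such as "tempsen084nl_98484"

def parcels_in_warm_warehouses(parcels: list) -> list:
    """Business-level query: 'list all parcels in warehouses above 0 °C',
    formulated over business concepts rather than raw sensor identifiers."""
    return [p.parcel_id for p in parcels
            if p.location_kind == "warehouse" and p.temperature_c > 0.0]

inventory = [
    Parcel("P-1", "warehouse", 4.2),
    Parcel("P-2", "warehouse", -1.5),
    Parcel("P-3", "truck", 7.0),
]
print(parcels_in_warm_warehouses(inventory))   # -> ['P-1']
```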

Developing and Managing Inter-Cloud Business Applications

Based on your enterprise architecture you should ideally employ a model-driven engineering approach. This approach enables you to automate parts of the software development process. Be aware that this is not easy to do and has often failed in practice – however, I have also seen successful approaches. It is important that you select the right modeling languages, and you may need to implement your own translation tools.

Once you have all this infrastructure, you should think about software factories, which are ideal for developing and deploying standardized services for selected platforms. I imagine that in the future we will see small emerging software factories focusing on specific aspects of a cloud platform. For example, you will have a software factory for designing graphical user interfaces using map applications enhanced with selected OData services (e.g. warehouse or plant locations). In fact, I expect a market for software factories to emerge soon, extending the idea of very basic crowdsourcing platforms, such as the Amazon Mechanical Turk.
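
A tiny sketch of the kind of building block such a software factory could generate: fetching warehouse locations from an OData service to feed a map-based user interface. The endpoint URL, entity set and property names are hypothetical, and real OData v2 and v4 services wrap their results slightly differently, as the fallback below hints at.

```python
import requests

# Hypothetical OData service exposing warehouse locations; URL, entity set and
# property names are illustrative only.
ODATA_URL = "https://erp.example.com/odata/Warehouses"

resp = requests.get(ODATA_URL,
                    params={"$select": "Name,Latitude,Longitude", "$format": "json"},
                    timeout=10)
resp.raise_for_status()

# OData v4 wraps results in "value"; OData v2 wraps them in "d"/"results".
body = resp.json()
entries = body.get("value") or body.get("d", {}).get("results", [])
markers = [(e["Name"], e["Latitude"], e["Longitude"]) for e in entries]
# `markers` would feed the map-based user interface generated by the software factory.
```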

Of course, as more and more business applications shift towards private and public clouds, you will introduce new roles in your company, such as the Chief Cloud Officer (CCO). This role is responsible for managing the cloud suppliers, integrating them into your enterprise architecture, and proper controlling as well as risk management.

Technology

The cloud already exists today! More and more tools emerge to manage it. However, they do not take the complete picture into account. I described several components for which no technologies exist yet. However, some go in the right direction, as I will briefly outline.

First of all, you need technology to manage your APIs and provide a single point of management towards your cloud applications. For instance, Apache Deltacloud allows managing different IaaS providers, such as Amazon EC2, IBM SmartCloud or OpenStack.

IBM Research also provides a single point of management API for cloud storage. This goes beyond simple storage and enables fault tolerance and security.

Other providers, such as Software AG, Tibco, IBM or Oracle, provide “API Management” software, which is only a special case of API Management as described here. In fact, they provide software to publish, manage the lifecycle of, monitor, secure and bill your own APIs for the public web. Unfortunately, they do not describe the business processes necessary to enable their technology in your company. Besides that, they do not support B2B interaction very well, but focus on business-to-developer aspects only. Additionally, you find registries for public web APIs, such as ProgrammableWeb or APIHub, which are a first starting point for finding APIs. Unfortunately, they do not feature semantic descriptions and thus no semantic matching towards your business objects, which means a lot of laborious manual work for doing the matching towards your application.

There is not much software for managing the borders between private and public clouds, or even for allowing more fine-grained borders, such as private, closest partner and public. There is software for visualizing and monitoring these borders, such as the eCloudManager by Fluid Operations, which features semantic integration of different cloud resources. However, it is unclear how you can enforce these borders, how you control them and how you can manage the different borders. Dome9 goes in this direction, but focuses only on security policies for IaaS applications. It only understands data and low-level security, not security and privacy over business objects. Deployment configuration software, such as Puppet or Chef, is only a first step, since it focuses on deployment, but not on operation.

On the monitoring side you will find a lot of software, such as Apache Flume or Tibco HAWK. While these operate more on the lower level of software development, IFTTT enables the execution of business rules over data on several cloud providers that offer public APIs. Surprisingly, it considers itself at the moment more as an end-user-facing company. Additionally, you find in the academic community approaches for monitoring distributed business processes.

Unfortunately, we find little ready-to-use software in the area of the Internet of Things. I have myself worked with several R&D prototypes enabling cloud connectivity and gateways, but they are not ready for the market. Products have emerged, but they only serve special niches, e.g. Internet-of-Things-enabled point-of-sale shops. They particularly lack a vision of how they can be used in an enterprise-wide application landscape or within a B2B enterprise architecture.

Conclusion

I described in this blog post the challenges of inter-cloud business applications. I think in the near future (3-5 years) all organizations will have some of them. Technically they are already possible and exist to some extent. For many companies, the risk and costs will be lower than managing everything on their own. Nevertheless, a key requirement is that you have a working enterprise architecture management strategy. Without it you won't see any benefits. More particularly, from the business side you will need adequate governance strategies for the different clouds and APIs.

We have already seen key technologies emerging, but there is still a lot to do. Despite decades of research on semantic technologies, there exists today no software that can perform automated semantic matching of cloud and business concepts existing in different components of an inter-cloud business application. Furthermore, there are no criteria for selecting a semantic description language for business purposes as broad as described here. Enterprise Architecture Management tools in this area are only slowly emerging. Monitoring is still fragmented, with many low-level tools but only few high-level business monitoring tools. They cannot answer simple questions such as “if cloud provider A goes down, how fast can I recover my operations and what are the limitations?”. API Management is another evolving area that will have a significant impact in the coming years. However, current tools only consider low-level technical aspects and not high-level business concepts.

Finally, you see that many of the challenges mentioned in the beginning, such as the social network challenge or the Internet of Things challenge, are simply not yet solved, but large-scale research efforts are under way. This means further investigation is needed to clarify the relationships between the aforementioned components. Unfortunately, many of the established middleware vendors lack a clear vision of cloud computing and the Internet of Things. Hence, I expect this gap will be filled by startups in this area.