Enterprise Architecture Management in Business Networks

In my last blog post, I wrote about multi-cloud scenarios for enterprise applications, focusing on the applications of a single company distributed over several different cloud providers. This blog post is about enterprise applications connecting the data, processes and organizations of different companies within business networks. Particularly complex scenarios with intense competition and tight margins, such as third-party logistics (3PL), require a sophisticated approach to ensure and extend competitive advantages. We will look at the challenges of applying reference models, such as EDIFACT, ASC X12 or SCOR. Nevertheless, I see reference models – or more particularly their combination – as key success factors for business networks, since they represent best practices and a common understanding, and they can significantly improve the on-boarding as well as the continuous education of new business network members. Hence, I will discuss how enterprise architecture and portfolio management can support the application and combination of different reference models in business networks. Finally, I present how the emerging concept of virtual software containers can support this approach from a technology perspective.

Types of Business Networks

One interesting question is what constitutes a business network [1]. Of course, it can be predefined and agreed upon, but in many business networks there are undefined and informal relations between two companies that also have (in-)dependent relations with other companies. The whole web of such relations is called a business network. This is very similar to social networks, where two related human beings have independent relations to other human beings. However, all types of business networks have some form of implicit or explicit governance, i.e. decision-making structures. Implicit governance means that the chosen governance model has not been defined or agreed on by all involved parties in a business network. Explicit governance means that all parties in a business network are aware of the governance arrangements and have defined them.

The following generic modes of governance can exist in a business network (see next figure):

  • One party holds most or all decision-making roles, while the others have merely an execution role

  • A group holds most or all decision-making roles, while the majority has only an execution role

  • Several large groups hold decision-making roles for different aspects, while the majority has only an execution role

  • Everybody has every role

[Figure: business network governance models]

Additionally, business networks may exhibit different degrees of awareness and intensity of relations. On the one hand, you may have a very structured business network, such as a supply chain; on the other hand, there is the free market, where two parties interact directly without considering other parties in their interaction. Both extremes are rare, and we will find companies across the whole spectrum. For instance, within a larger supply chain, one company may know only the direct predecessor and the direct successor company. It may just agree on the specification of the product to be delivered, but may not exchange any data or impose any processes regarding how the product should be manufactured. This means the degree of awareness is limited and the intensity of the relation is weaker, because the companies do not really know how something is achieved by the other organizations in the business network.

[Figure: contract logistics example]

Business networks are becoming more complex, because new types of business networks emerge, such as contract logistics or third-party logistics, where your business partners integrate dynamically and directly into your manufacturing plant or point of sale as well as into the corresponding business processes. Hence, you need to work out best practices to stay ahead of the competition. An example can be seen in the previous figure, where the third-party logistics provider has a packaging business process deployed at “Manufacturing Plant A”. This business process leverages applications and other resources within the sphere of “Manufacturing Plant A”. Besides handling delivery, the third-party logistics provider integrates similarly into “Manufacturing Plant B”, where it does pre-assembly of the parts delivered from “Manufacturing Plant A”.

Applying Reference Models for Business Networks

Reference models have existed for several decades in the areas of business information systems, management and software engineering. Some are driven by academia and others by industry. Usually, both kinds have been validated scientifically as well as in practice.

Reference models represent best industry practices for business processes derived from experts and organizations. They can cover the process, organization/governance, product, data and/or IT application perspective within a given business domain. Hence, they can also be viewed as standards. Examples of reference models are EDIFACT, SCOR, PRINCE2 or TOGAF. These are rather generic models, but there are also industry-specific ones, such as those for humanitarian supply chain operations [2] or retailing [3].

The main benefits of reference models with respect to business networks are:

  • Support for your Enterprise Architecture Management (e.g. through reduced modeling effort, transparency and a common language)

  • Benchmarking against the industry

  • Evaluation of applications for enabling business networks

  • Business network integration by integrating available applications in a business network

There are, however, some issues involved when using reference models:

  • They are “just” models. Having them is like having a book on a shelf – pretty useless

  • Some of them are very generic, applying to any business case/network, while others are very specific

  • Some focus on business processes (e.g. SCOR), some on business data (e.g. EDIFACT, ASC X12), nearly none on organizational/governance aspects, others on material or money flows and others combine only some of the aspects (e.g. ARIS)

  • Some do provide key performance indicators for benchmarking your performance against the reference model, but most do not

  • It is unclear how different reference models can be combined and tailored to enable business networks

  • Tools supporting definition, viewing, visualizing, expertise provisioning, publishing or adaptation of these models are not standardized and a wide variety exists

  • Tools for monitoring the implementation of reference models in information systems consisting of technology and humans do not really exist

Reference models for business networks, such as EDIFACT, already exist and are used successfully in practice. However, in order to benefit from reference models in a business network, you need an integrated approach addressing the aforementioned issues, as I will present in the next section.

Enterprise Architecture Management in Business Networks

Reference models are needed for superior business performance when dealing with the increasing complexity of business networks. You will never have a perfect world by using only one reference model. Hence, you will need an enterprise architecture management approach for business networks that efficiently and effectively addresses the issues of any single reference model by combining several reference models (see next figure). Traditionally, enterprise architecture management focused only on the single enterprise and not on business networks, but given the growing complexity of business networks and disruptive societal changes, it is mandatory to consider the business network dimension.

[Figure: combining reference models like puzzle pieces]

Establishing an enterprise architecture management approach depends on the type of business network, as I explained before. For example, one organization may select and manage the reference model portfolio and application landscape for the whole business network. Alternatively, no single party may be responsible, in which case the members need to align their portfolios and stay aware of each other's; for instance, they can create a steering board for this. Additionally, you will need to establish key performance indicators and benchmarking processes with respect to the business network's reference model portfolio.

Once you have your enterprise architecture management approach leveraging combined and tailored reference models, you will have to address the aforementioned dynamics as well as the tight integration between business partners in the business network's information systems. Traditional ERP, CRM and SCM software packages will face difficulties here, because even if all partners used the same systems, there would be a huge variety of configurations reflecting the different internal business processes of the members of a business network. Additionally, you will have to manage access and provisioning over the Internet.

Cloud-based solutions already address these challenges partially. They help you to manage access and governance and to provide clearly defined interfaces via the emerging concept of API management. However, these approaches do not reach far enough. You cannot dynamically move business processes and the corresponding applications and data between organizations as a package and integrate it at your business partner's premises. Furthermore, business processes may change quickly, and you want to reuse as well as propagate such changes across many different organizations using the corresponding applications. This would facilitate a lot of scenarios, such as “bring your own digital business process” in third-party logistics. Hence, there is still a need for further technology innovation and research.

Conclusion: Software Containers for “Bring your own Digital Business Process”

We have seen that new complex scenarios in business networks, such as third-party logistics, as well as high competition, tight network integration and strong dynamics impose new challenges. Instant business network adaptation as well as tight integration between business partners will be a key differentiator between business networks and will ultimately decide their success. Reference models representing industry best practices need to be combined and tailored at the business network level to achieve the network's goals. However, no silver bullet exists, so you will also need to enable enterprise architecture management at the business network level. Finally, you need tools enabling the dynamic movement of business processes as well as applications between different organizations in a business network, based on a coherent and reusable approach.

Unfortunately, these tools do not exist at the moment, but there are some first approaches which you should investigate in this context. Docker can create containers consisting of digital business process artifacts, applications, databases and more. These containers can be sent over the business network and easily be integrated with containers existing in other organizations. Hence, the vision of instant dynamic business network adaptation might not be as far-fetched as we think. The next figure illustrates this idea: The third-party logistics provider sends the containers “Packaging” and “Pre-Assembly” to its business partners. These containers consist of applications supporting the corresponding business process. They are executed in the business partners' clouds, where they integrate with the existing business processes and applications (e.g. the ERP system). Employees of the third-party logistics provider use them on the business partner's side. The containers are executed at the business partner's side because the business process physically takes place there, so it makes sense to let it happen digitally there as well, instead of sending a lot of data back and forth over the network or suffering from a lack of application integration. A minimal sketch of how such a container could be pulled and started is shown after the figure.

[Figure: business network software components]
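To make this more concrete, here is a minimal sketch of how a business partner could pull and start the 3PL provider's “Packaging” container using the third-party docker-java client library. The registry URL, image name and container name are invented for illustration; this is a sketch of the mechanical pull-and-run step, not a complete integration.

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.core.DockerClientBuilder;
import com.github.dockerjava.core.command.PullImageResultCallback;

public class PackagingProcessDeployment {

    public static void main(String[] args) throws InterruptedException {
        // Connect to the local Docker daemon in the partner's cloud
        DockerClient docker = DockerClientBuilder.getInstance().build();

        // Pull the 3PL provider's packaging process image from its registry
        // (hypothetical registry and image name)
        docker.pullImageCmd("registry.3pl-provider.example/packaging")
              .withTag("1.0")
              .exec(new PullImageResultCallback())
              .awaitCompletion();

        // Create and start the container next to the plant's existing applications
        CreateContainerResponse container = docker
                .createContainerCmd("registry.3pl-provider.example/packaging:1.0")
                .withName("packaging-process")
                .exec();
        docker.startContainerCmd(container.getId()).exec();
    }
}
```

The remaining (and much harder) part is wiring the started container into the plant's ERP system and business processes, which is exactly where the enterprise architecture management approach described above comes in.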

References

[1] Harland, C.M.: Supply Chain Management: Relationships, Chains and Networks, British Journal of Management, Volume 7, Issue Supplement s1, p. S63-S80, 1996.

[2] Franke, Jörn; Widera, Adam; Charoy, François; Hellingrath, Bernd; Ulmer, Cédric: Reference Process Models and Systems for Inter-Organizational Ad-Hoc Coordination – Supply Chain Management in Humanitarian Operations, 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM’2011), Lisbon, Portugal, 8-11 May, 2011.

[3] Becker, Jörg; Schütte, Reinhard: A Reference Model for Retail Enterprise, Reference Modeling for Business Systems Analyses, (eds.) Fettke, Peter; Loos, Peter, pp. 182-205, 2007.

[4] Verwijmeren, Martin: Software component architecture in supply chain management, Computers in Industry, 53, p. 165-178, 2004.

[5] Themistocleous, Marinos; Irani, Zahir; Love, Peter E.D.: Evaluating the integration of supply chain information systems: a case study, European Journal of Operational Research, 159, p. 393-405, 2004.

Scenarios for Inter-Cloud Enterprise Architecture

The unstoppable cloud trend has reached end users and companies. End users in particular openly embrace the cloud, for instance by using services provided by Google or Facebook. Companies are more cautious, fearing vendor lock-in or exposure of confidential business data, such as customer records. Nevertheless, for many scenarios the risk can be managed and is accepted by companies, because the benefits, such as scalability, new business models and cost savings, outweigh the risks. In this blog entry, I will investigate in more detail the opportunities and challenges of inter-cloud enterprise applications. Finally, we will have a look at technology supporting inter-cloud enterprise applications via cloud-bursting, i.e. enabling them to be extended dynamically over several cloud platforms.

What is an inter-cloud enterprise application?

Cloud computing encompasses all means to produce and consume computing resources, such as processing units, networks and storage, existing in your company (on-premise) or on the Internet. Particularly the latter enables dynamic scaling of your enterprise applications, e.g. when you suddenly gain a lot of new customers and your own computing resources are not sufficient to serve them all.

Cloud computing comes in different flavors and combinations of them:

  • Infrastructure-as-a-Service (IaaS): Provides hardware and basic software infrastructure on which an enterprise application can be deployed and executed. It offers computing, storage and network resources. Example: Amazon EC2 or Google Compute.
  • Platform-as-a-Service (PaaS): Provides on top of an IaaS a predefined development environment, such as Java, ABAP or PHP, with various additional services (e.g. database, analytics or authentication). Example: Google App Engine or Agito BPM PaaS.
  • Software-as-a-Service (SaaS): Provides on top of an IaaS or PaaS a specific application over the Internet, such as a CRM application. Example: SalesForce.com or Netsuite.com.

When designing and implementing/buying your enterprise application, e.g. a customer relationship management (CRM) system, you need to decide where to put it in the cloud. For instance, you can keep it fully on-premise or you can put it on a cloud in the Internet. However, different cloud vendors exist, such as Amazon, Microsoft, Google or Rackspace, and they offer different flavors of cloud computing. Depending on the design of your CRM, you can put it either on an IaaS, PaaS or SaaS cloud or on a mixture of them. Furthermore, you may only put selected modules of the CRM on the cloud in the Internet, e.g. a module for doing anonymized customer analytics. You will also need to think about how this CRM system is integrated with your other enterprise applications.

Inter-Cloud Scenario and Challenges

Basically, the exemplary CRM application runs partially in the private cloud and partially in different public clouds. The CRM database is stored in the private cloud (IaaS), and some (anonymized) data is sent to different public clouds on Amazon EC2 (IaaS) and Microsoft Azure (IaaS) for number-crunching analysis. Paypal.com is used for payment processing. Besides customer data and buying history, the database contains sensor information from different points of sale, such as how long a customer was standing in front of an advertisement. Additionally, the sensor data can be used to trigger actuators, such as posting on the shop's Facebook page what is currently trending, using the cloud service IFTTT. Furthermore, the graphical user interface presenting the analysis is hosted on Google App Engine (PaaS). The CRM is integrated with Facebook and Twitter to enhance the data with social network analysis. This is not an unrealistic scenario: many (grown) startups already deploy a similar setting, and established corporations experiment with it. Clearly, this scenario calls for cloud-bursting, because the cloud is used heavily.

I present in the next figure the aforementioned scenario of an inter-cloud enterprise application leveraging various cloud providers.

[Figure: inter-cloud architecture of the CRM scenario]

There are several challenges involved when you distribute your business application over your private and several public clouds.

  • API Management: How do you describe different types of business and cloud resources, so that you can make efficient and cost-effective decisions about where to run the analytics at a given point in time? Furthermore, how do you represent different storage capabilities (e.g. in-memory, on-disk) in different clouds? This goes further up to the level of the business application, where you need to harmonize or standardize business concepts, such as “customer” or “product”. For instance, a customer described in “Twitter” terms is different from a customer described in “Facebook” or “Salesforce.com” terms. You should also keep in mind that semantic definitions change over time, because a cloud provider changes its capabilities, such as new computing resources, or its focus. Additionally, you may want to change your cloud provider dynamically without disrupting the operation of the enterprise application.
  • Privacy, Risk and Security: How do you articulate your privacy, risk and security concerns? How do you enforce them? While technology and standards for this already exist, the cloud setting imposes new problems. For example, if you update encrypted data regularly, the cloud provider may be able to infer parts or all of your data from the differences. Furthermore, it may maliciously change the data. Finally, the market is fragmented, without an integrated solution.
  • Social Network Challenge: Similarly to the semantic challenge, the problem exists of semantically describing social data and doing efficient analysis over several different social networks. Users may also change their privacy preferences arbitrarily, making reliable analytics difficult. Additionally, your whole company organizational structure and the (in-)official networks within your company are already exposed in social business networks, such as LinkedIn or Xing. This further blurs the borders of your enterprise, which has to adapt by integrating social networks into its business applications. For instance, your organizational hierarchy, informal networks or your company's address book probably already exist partly in social networks.
  • Internet of Things: The Internet of Things consists of sensors and actuators delivering data or executing actions in the real world, supported by your business applications and processes. Different platforms exist to source real-world data or schedule actions in the real world using actuators. The API Management challenge exists here too, but it goes even further: you create dynamic semantic concepts and relate your Internet of Things data to them. For example, you have attached an RFID tag and a temperature sensor to your parcels. Their data needs to be added to the information about your parcel in the ERP system. Besides the semantic concept “parcel”, you also have that of a “truck” transporting your “parcel” to a destination, i.e. you have additional location information. Furthermore, the parcel may be stored temporarily in a “warehouse”. Different business applications and processes may need to know where the parcel is. They do not query the sensor data directly (e.g. “give me data from tempsen084nl_98484”), but rather formulate queries such as “list all parcels in warehouses with a temperature above 0 °C” or “list all parcels in transit” (see the sketch after this list). Hence, Internet of Things data needs to be dynamically linked with business concepts used in different clouds. This is particularly challenging for SaaS applications, which may have different conceptualizations of the same thing.
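As an illustration of querying on the business level rather than the sensor level, consider the following minimal Java sketch. The Parcel record and its fields are hypothetical; the sketch assumes that a semantic layer has already linked raw sensor readings (such as tempsen084nl_98484) to parcels and their locations.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical business-level view of IoT data: a semantic layer has already
// resolved sensor readings and location tracking into parcel attributes.
record Parcel(String id, String location, String locationType, double temperatureCelsius) {}

class ParcelQueries {

    // "List all parcels in warehouses with a temperature above 0 °C"
    static List<Parcel> warmParcelsInWarehouses(List<Parcel> parcels) {
        return parcels.stream()
                .filter(p -> p.locationType().equals("warehouse"))
                .filter(p -> p.temperatureCelsius() > 0.0)
                .collect(Collectors.toList());
    }
}
```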

Enterprise Architecture for Inter-Cloud Applications

You may wonder how you can integrate the above scenario into your application landscape at all, and why you should do it in the first place. The basic promise of cloud computing is that it scales according to your needs and that you can outsource infrastructure to people who have the knowledge and capabilities to run it. Particularly small and medium-sized enterprises benefit from this and from the cost advantage. It is not uncommon that modern startups run their IT in the cloud from day one (e.g. FourSquare).

However, large corporations can also benefit from the cloud, e.g. as “neutral” ground for a complex supply chain with a lot of partners, or to ramp up new innovative business models where the outcome is uncertain.

Be aware that in order to offer a solution based on the cloud, you first need a solid maturity of your enterprise architecture. Without it you are doomed to fail, because you cannot perform proper risk and security analyses, scale appropriately, or benefit from cost reductions as well as innovation.

I propose in the following figure an updated model of the enterprise architecture with new components for managing cloud-based applications. The underlying assumption is that you have an enterprise architecture, more particularly a semantic model of business objects and concepts.

[Figure: enterprise architecture extended with inter-cloud components]

  • Public/Private Border Gateway: This gateway is responsible for managing the transition between your private cloud and different public clouds. It may also deploy agents on each cloud to enable secure direct communication between different cloud platforms without the necessity to go through your own infrastructure. You might have more fine-grained gateways, such as private, closest supplier and public. A similar idea came to me a few years ago when I was working on inter-organizational crisis response information systems. The gateway does not only work on the lower network level, but also on the level of business processes and objects. It is business-driven: depending on business processes as well as rules, it decides dynamically where the borders should be set. This may also mean that different business processes have access to different things in the Internet of Things.
  • Semantic Matcher: The semantic matcher is responsible for translating business concepts from and to different technical representations of business objects on different cloud platforms. This can mean simple transformations of non-matching data types, but also the enrichment of business objects from different sources (see the sketch after this list). This goes well beyond current technical standards, such as EDI or ebXML, which I see as a starting point. Semantic matching is done automatically – there is no need for creating time-consuming manual mappings. Furthermore, the semantic matcher enhances business objects with Internet of Things information, so that business applications can query or trigger them on the business level as described before. The question here is how you can keep people in control of this (see Monitor) and leverage semantic information.
  • API Manager: Cloud API management is the topic of the coming years. Besides addressing the semantic challenge, this component provides all the functionality necessary to bill, secure and publish your APIs. It keeps track of who is using your APIs and what impact changes to them may have. Furthermore, it supports you in composing new business software distributed over several cloud platforms using different APIs subject to continuous change. The API Manager will also have a registry of APIs with reputation and quality-of-service measures. We now see a huge variety of different APIs by different service providers (cf. ProgrammableWeb). However, the scientific community and companies have not yet picked up the inherent challenges, such as the aforementioned semantic matching, monitoring of APIs, API change management and alternative API compositions. While some work exists in the web service community, it has not yet been extended to the full Internet dimension as described in the scenario here. Additionally, it is unclear how these approaches integrate the Internet of Things paradigm.
  • Monitor: Monitoring is of key importance in this inter-cloud setting. Different cloud platforms offer different and possibly very limited means for monitoring. A key challenge here will be to consolidate the monitoring data and provide an adequate visual representation for doing risk analysis and selecting alternative deployment strategies on the aggregated business process level. For instance, by leveraging semantic integration we can schedule requests to semantically similar cloud and business resources. Particularly in the Internet of Things setting, we may observe unpredictable delays, which lead to delayed execution of real-world activities, e.g. a robot is notified only after 15 minutes that a parcel fell off the shelf.
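To give a flavor of what the semantic matcher might do, here is a minimal Java sketch that translates a customer from a Twitter-style representation into a CRM-style one. All field names are invented for illustration; a real matcher would operate on proper semantic models rather than hard-coded string mappings.

```java
import java.util.Map;

// Hypothetical semantic matcher: maps a Twitter-style user to a CRM-style customer.
class CustomerMatcher {

    static Map<String, String> twitterToCrm(Map<String, String> twitterUser) {
        return Map.of(
                "Name", twitterUser.getOrDefault("screen_name", "unknown"),
                "Description", twitterUser.getOrDefault("bio", ""),
                // Enrichment: a CRM customer needs an account status that the
                // Twitter representation does not know about, so we set a default.
                "AccountStatus", "prospect");
    }
}
```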

Developing and Managing Inter-Cloud Business Applications

Based on your enterprise architecture, you should ideally employ a model-driven engineering approach. Such an approach enables you to automate the software development process. Be aware that this is not easy to do and has often failed in practice – however, I have also seen successful approaches. It is important that you select the right modeling languages, and you may need to implement your own translation tools.

Once you have all this infrastructure, you should think about software factories, which are ideal for developing and deploying standardized services for selected platforms. I imagine that in the future we will see small emerging software factories focusing on specific aspects of a cloud platform. For example, you will have a software factory for designing graphical user interfaces using map applications enhanced with selected OData services (e.g. warehouse or plant locations). In fact, I expect soon a market for software factories which extends the idea of very basic crowdsourcing platforms, such as Amazon Mechanical Turk.

Of course, as more and more business applications shift towards private and public clouds, you will introduce new roles in your company, such as the Chief Cloud Officer (CCO). This role is responsible for managing the cloud suppliers, integrating them into your enterprise architecture, and ensuring proper controlling as well as risk management.

Technology

The cloud already exists today, and more and more tools emerge to manage it. However, they do not take into account the complete picture. I described several components above for which no technologies exist yet. Nevertheless, some tools go in the right direction, as I will briefly outline.

First of all, you need technology to manage your APIs and to provide a single point of management towards your cloud applications. For instance, Apache Deltacloud allows managing different IaaS providers, such as Amazon EC2, IBM SmartCloud or OpenStack.

IBM Research also provides a single point of management API for cloud storage. This goes beyond simple storage and enables fault tolerance and security.

Other providers, such as Software AG, Tibco, IBM or Oracle, provide “API Management” software, which covers only a special case of API Management. In fact, they provide software to publish, manage the lifecycle of, monitor, secure and bill your own APIs for the public on the web. Unfortunately, they do not describe the necessary business processes to enable their technology in your company. Besides that, they do not support B2B interaction very well, but focus on business-to-developer aspects only. Additionally, you find registries for public web APIs, such as ProgrammableWeb or APIHub, which are a first starting point for finding APIs. Unfortunately, they do not feature semantic descriptions and thus no semantic matching towards your business objects, which means a lot of laborious manual work for matching them to your application.

There is not much software for managing the borders between private and public clouds, let alone for allowing more fine-grained borders, such as private, closest partner and public. There is software for visualizing and monitoring these borders, such as the eCloudManager by Fluid Operations, which features semantic integration of different cloud resources. However, it is unclear how you can enforce these borders, how you control them and how you can manage the different borders. Dome9 goes in this direction, but focuses only on security policies for IaaS applications: it only understands data and low-level security, not security and privacy over business objects. Deployment configuration software, such as Puppet or Chef, is only a first step, since it focuses on deployment, but not on operation.

On the monitoring side you will find a lot of software, such as Apache Flume or Tibco HAWK. While these operate more on the lower level of software development, IFTTT enables the execution of business rules over data on several cloud providers offering public APIs. Surprisingly, it currently considers itself more as an end-user-facing company. Additionally, you find approaches in the academic community for monitoring distributed business processes.

Unfortunately, we find little ready-to-use software in the “Internet of Things” area. I have worked myself with several R&D prototypes enabling cloud-connected gateways, but they are not ready for the market. Products have emerged, but they only serve a special niche, e.g. Internet-of-Things-enabled point-of-sale shops. They particularly lack a vision of how they can be used in an enterprise-wide application landscape or within a B2B enterprise architecture.

Conclusion

I described in this blog the challenges of inter-cloud business applications. I think that in the near future (3-5 years) all organizations will have some of them. Technically, they are already possible and exist to some extent. For many companies the risk and costs will be lower than managing everything on their own. Nevertheless, a key requirement is that you have a working enterprise architecture management strategy; without it you won't see any benefits. More particularly, from the business side you will need adequate governance strategies for different clouds and APIs.

We have already seen key technologies emerging, but there is still a lot to do. Despite decades of research on semantic technologies, there exists today no software that can perform automated semantic matching of cloud and business concepts existing in different components of an inter-cloud business application. Furthermore, there are no criteria for how to select a semantic description language for business purposes as broad as described here. Enterprise Architecture Management tools in this area are only slowly emerging. Monitoring is still fragmented, with many low-level tools but only few high-level business monitoring tools. They cannot answer simple questions, such as “if cloud provider A goes down, how fast can I recover my operations and what are the limitations?”. API Management is another evolving area which will have a significant impact in the coming years. However, current tools only consider low-level technical aspects and not high-level business concepts.

Finally, you see that a lot of the challenges mentioned in the beginning, such as the social network challenge or the Internet of Things challenge, are simply not yet solved, but large-scale research efforts are under way. This means further investigation is needed to clarify the relationships between the aforementioned components. Unfortunately, many of the established middleware vendors lack a clear vision of cloud computing and the Internet of Things. Hence, I expect this gap to be filled by startups in this area.

Modularizing your Business and Software Component Design

In this blog, I will talk about modularizing your enterprise from a business and a software perspective. We start from the business perspective, where I provide some background on how today's businesses are modularized. Afterwards, we will investigate how we can support the modularized business with software components and how they can be designed. Finally, we will see some software technologies enabling component-based design for a modularized business, such as the Service Component Architecture (SCA) or OSGi.

Business perspective

You will find a lot of different definitions of what business modules are and how a business can be modularized. Most commonly, business modules are known as business functions, such as controlling, finance, marketing, sales or production. Of course, you can also view this at a more fine-grained level. Furthermore, we may have several instances of the same module. This is illustrated in the following figure. On the left-hand side, the business modules of a single enterprise are shown. On the right-hand side, you see the business modules of a decentralized organization: the enterprise is split up into several enterprises, one for each region. Business modules are replicated across regions, but adapted to local needs.

[Figure: business architecture of a centralized vs. a decentralized organization]

A module usually has clear interfaces to other modules. For instance, in earlier times you used paper forms to order something from the production department.

One of the most interesting questions is how one should design business modules. Well, there is no easy answer to this, but one goal is to reduce complexity between modules. This means there should not be many dependencies between modules, if any, while there can be a lot of dependencies within one module. For instance, people work very closely together in the production department, because they share common knowledge and resources, such as machines or financial resources.

On the other hand, production and sales have some very different business processes. Obviously, they are still dependent on each other, but this dependency should be handled through a clear interface between them. For example, there can be regular feedback from a sales person to the production engineer about what the customer needs.

Clearly, how you define business modules and the organization depends on the economic environment. However, this environment changes, which means business modules can be retired, and new interfaces or completely new business modules can be created.

Unfortunately, this is usually not very well documented and communicated in many businesses. In particular, the reasons why a business has been designed as a given set of modules and dependencies usually exist only in the heads of a few people. Additionally, the interfaces between business modules and their purpose are often not obvious. This can mean a significant loss of competitive advantage.

Linking Business and IT Perspective: Enterprise Architecture

Business and IT do not necessarily have the same goals. This means they need to be aligned, so that they are not in conflict. Hence, you need to map your business modules to your IT components. This process is called Enterprise Architecture Management. During this process, the enterprise architecture is constantly modified and adapted to the economic environment.

Of course, you cannot map your whole business and your whole IT, because this would be too costly. Instead, you need to choose the important parts that you want to map. Additionally, you will need to define governance processes and structures related to this, but this is not part of this blog entry.

One popular, but simple, illustration is an enterprise architecture composed of four architectures:

  • The Business Architecture describes the business functions/modules, their relations within business processes, people, the underlying strategy, business goals and the relevant economic environment.
  • The Information Architecture is about the business data, their relationships to business functions/modules and processes, the people, its value as well as security concerns.
  • The Software Architecture depicts different kinds of components according to IT goals, and their relations to business data and business functions/modules.
  • The Technology Architecture is about the technology foundation for enabling the other architectures. It describes the basic infrastructure in form of hardware and software components. This includes local environments as well as cloud environments, such as OpenStack, Google Compute or Amazon EC2.

Some people additionally advocate an IT security architecture. I propose not to model it as an additional architecture, but to include IT security concerns in each of the aforementioned architectures. This increases the awareness of IT security in your business. Appropriate tools can nevertheless generate a complete security view over all architectures from the models.

There are many popular tools, such as the ARIS toolset, for mapping your enterprise architecture.

Of course, you cannot define only top-down from business to IT how this architecture should be designed. You also need to take the IT perspective into account.

IT perspective

As mentioned, IT and business goals are not necessarily the same. IT focuses on three areas: storing information (storage), processing information (computation) and transporting information (network). The goal is to do this in an efficient manner: only the minimum of information should be stored, processing information should be as fast as possible, and transporting information should consume only minimal resources. Clearly, there are constraints, ranging from physical laws to business goals to IT security concerns. For instance, the three key IT security goals, namely confidentiality, integrity and availability, often have a negative impact on all three IT goals.

As I explained before, business modules are mapped to software components and vice versa. One big question here is the design of the software components, i.e. which software functionality (representing business functionality) should be part of one software component rather than another. Software components are usually different from business modules, because they have different design criteria.

In the past, people often used heuristics, e.g. they introduced “data components” and “functional components”. This makes sense, because you should not have 50 different databases, but only the right number of databases for your purpose (e.g. one NoSQL, one SQL and/or one probabilistic database). This reduces resource needs and avoids inconsistent data. However, there is no general approach for how these heuristics should be applied by different enterprise architects. Another problem is that communication patterns (e.g. via message brokers, such as RabbitMQ) are completely left out.

Hence, I think a more scientific or general approach should be taken towards the design of components, because these heuristics do not give you good guidelines for a sustainable architecture. Based on the three IT focus areas, I propose to have software components for storage (e.g. databases), computation (e.g. business logic) and network (e.g. message brokers). Once you have identified these components, you need to decide which functionality to put into which component. One key goal should be to reduce the dependencies between components: the more communication you have, the more dependencies exist between the different functions in components. Evaluating this manually can be costly and error-prone. Luckily, some approaches do this for you, and they can be integrated with business modeling as well as software component management tools (cf. here an approach that derives the design of software components (managed using the Service Component Architecture (SCA)) from the communication patterns in business processes (modeled using the Business Process Modeling Notation (BPMN))). A naive version of this idea is sketched below.
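The following is a deliberately naive Java sketch of that idea, not the cited approach itself: functions that exchange messages above a threshold (counts that could, for instance, be derived from BPMN models) are greedily placed into the same component, reducing inter-component communication. All names and the threshold are invented.

```java
import java.util.HashMap;
import java.util.Map;

class ComponentDesigner {

    // messageCounts.get(a).get(b) = number of messages exchanged between functions a and b
    static Map<String, Integer> assignComponents(
            Map<String, Map<String, Integer>> messageCounts, int threshold) {
        Map<String, Integer> componentOf = new HashMap<>();
        int nextComponent = 0;
        for (String function : messageCounts.keySet()) {
            Integer assigned = null;
            for (Map.Entry<String, Integer> peer : messageCounts.get(function).entrySet()) {
                // Join the component of a "chatty" peer if traffic exceeds the threshold
                if (peer.getValue() >= threshold && componentOf.containsKey(peer.getKey())) {
                    assigned = componentOf.get(peer.getKey());
                    break;
                }
            }
            componentOf.put(function, assigned != null ? assigned : nextComponent++);
        }
        return componentOf;
    }
}
```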

Another means of achieving coherent software component design is to have the enterprise architects responsible for mapping one part of the business (e.g. controlling) review the software architecture for another part of the business (e.g. marketing). I introduced such a process in an enterprise some time ago. Such an approach ensures that architecture decisions are made consistently across the enterprise architecture, and it also fosters learning from other areas.

Finally, a key problem that you need to consider is the lifecycle management of a software component. Similar to the lifecycle of business modules, software components are designed, implemented, deployed and eventually retired. You need tools to appropriately manage software components.

Tools for Managing Software Components

Previously, I elaborated on the requirements for managing software components:

  • Handle interfaces with other components

  • Support the lifecycle of software components

Two well-known information technologies for managing software components are OSGi and the Service Component Architecture (SCA).

OSGi

OSGi is a framework for managing software components and their dependencies/interfaces to other components. It is developed by the OSGi Alliance. It originates from the Java world and is mostly suitable for Java components, although it has limited support for non-Java platforms. It considers the lifecycle of components by ensuring that all needed components are started when a component is started, and by being able to stop components during runtime. Furthermore, other components and their interfaces can be discovered at runtime. However, no deployment method for software components is part of the standard. A minimal example of the lifecycle hooks is shown below.
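As a minimal illustration of these lifecycle hooks, here is a sketch of an OSGi bundle activator that publishes a service when the bundle starts and withdraws it when the bundle stops. The InvoiceService interface and its inline implementation are invented for this example.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service offered by this bundle to other bundles.
interface InvoiceService {
    void createInvoice(String orderId);
}

public class InvoiceActivator implements BundleActivator {

    private ServiceRegistration<InvoiceService> registration;

    @Override
    public void start(BundleContext context) {
        // Called by the framework when the bundle is started: publish the service.
        registration = context.registerService(InvoiceService.class,
                orderId -> System.out.println("Invoice created for " + orderId), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Called when the bundle is stopped: other bundles see the service disappear.
        registration.unregister();
    }
}
```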

Since Java runs on many different devices, OSGi is available for Android, iOS, embedded devices, personal computers and servers.

Unfortunately, tool support for linking OSGi with the business or information architecture is very limited. Furthermore, automatic generation and deployment of OSGi components from your enterprise architecture does not exist at the moment. This makes it difficult to understand software components and their relations within your enterprise architecture.

Many popular software projects are based on OSGi, such as the Eclipse project.

Service Component Architecture (SCA)

The Service Component Architecture is a specification for describing software components, their interfaces and their dependencies. It is developed by members of the Organization for the Advancement of Structured Information Standards (OASIS). It does not depend on a specific programming platform, e.g. it supports Java and C++. It supports policies that govern components, sets of components or their communication. However, SCA considers neither the software component lifecycle nor how components are deployed exactly.

It is supported by many middleware frameworks, such as TIBCO ActiveMatrix or Oracle Fusion Middleware.

Similarly to OSGi, there is little tool support for linking SCA components with the business or information architecture. However, the SCA specification has a graphical modeling guideline, and some recent work describes how SCA components can be linked to the business via business processes. Since standards bodies such as OASIS and the OMG (which maintains BPMN) are responsible for further enterprise-architecture-relevant modeling notations, it can be expected that enterprise architecture tools will be adapted to provide support for linking different parts of the enterprise architecture.

Conclusion

Modularizing your business and designing software components is a difficult task. Not many people understand the whole chain from business to software components. While enterprise architecture and modeling have become popular topics in research and practice, the whole tool chain from business to software components has not. There have been attempts to introduce model-driven architecture (MDA), but the supported models were mostly restricted to the Unified Modeling Language (UML), which is not very suitable for business modeling and can be very complex. Additionally, MDA does not take into account the software component lifecycle. Furthermore, the roles of the different stakeholders (e.g. business and IT) using these tools are unclear.

Nevertheless, new approaches based on the Business Process Modeling Notation and on frameworks for managing software components make me confident that this will change in the near future. Growing IT complexity in terms of communication and virtualization infrastructure will require software support for managing software components. Companies should embrace these new tools to gain competitive advantages in terms of agility and flexibility.

Bitcoin & Co: Perspectives for Cryptomoney in your Business

I will talk in this blog entry about Cryptomoney, with special emphasis on how you can offer payment with Cryptomoney in your business and what you might need to consider when doing so.

Cryptomoney is produced by individuals and used for transactions involving real as well as virtual goods. It does not require a central authority for managing its creation, distribution or transactions. In particular, it does not require a government to be in charge of this.

Virtual money, such as Amazon Coins, Facebook Credits or Microsoft Points, needs to be distinguished from Cryptomoney. Virtual money is under the control of one company, and you can only use it to buy goods from this company or from companies certified by the company issuing the virtual money. Hence, its distribution is limited.

A Cryptocurrency is similar to a “normal” currency, i.e. it is a “flavour” of Cryptomoney. In the following paragraphs, we will investigate the ecosystem of Cryptocurrencies, including exchanges for converting Cryptocurrencies into normal currencies. Afterwards, I will discuss potential business implications.

What functionality does Cryptomoney need to have?

Based on my observations, Cryptomoney has the following functionality (cf. also properties of “normal” money):

  • Creating Money
    • Money needs to be created in order to use it
    • It is difficult to create fake money
    • Money has value
    • Cryptomoney-specific
      • Everybody can create money – there is no monopoly on creating money (a counter-example is the European Central Bank (ECB), which has the monopoly on generating Euros)
  • Transactions
    • Cryptomoney can be used for transactions, such as selling or buying goods
    • Transactions can be done via a central third party (e.g. a bank) or decentrally (e.g. a direct transaction between two people)
    • Cryptomoney-specific
      • The correctness of a transaction can be verified by everyone (a counter-example is the Society for Worldwide Interbank Financial Telecommunication (SWIFT), where only selected banks can participate)
  • Ownership
    • It is clear who owns money
    • Users can store money themselves in their wallets or ask other providers to store money for them (e.g. banks)
    • Cryptomoney-specific
      • Cryptomoney is always stored virtually.

I listed these three aspects separately on purpose, because the three concepts are technically not dependent on each other. In a deployed currency, however, the whole package consists of specific selected technologies, so that different currencies cannot be combined. Nevertheless, it is up to you to use several of them in parallel.

Please note that “normal” money also requires cryptographic mechanisms for proper transactions. Hence, I added the Cryptomoney-specific points, which do not apply to “normal” money.

In the following, I will briefly describe how these properties can be achieved using Cryptomoney.

Creating Cryptomoney

The aforementioned property for creating Cryptomoney seems contradictory: if everybody should be able to create money, then it is somehow strange that it should also be difficult to fake money. The goal of making it difficult is to have a steady supply of new Cryptomoney in the system, where it is practically impossible for one single party to gain an advantage over the other parties by creating comparably more Cryptomoney than the others.

Approaches for Cryptomoney address this problem by requiring for each unit of Cryptomoney a proof-of-work/proof-of-stake, i.e. a solution to a computationally complex problem. Examples of these kinds of problems are calculating the SHA-2 hash function or the scrypt function. Both are also used in other contexts, such as ensuring data integrity or preventing attacks on computer systems. A proof-of-work sketch follows below.
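To make the idea of a computationally complex problem concrete, here is a minimal proof-of-work sketch in Java: it searches for a nonce such that the SHA-256 hash of the data plus the nonce starts with a given number of zero bytes. This is only an illustration; Bitcoin's actual scheme (double SHA-256 over block headers checked against a difficulty target) differs in the details.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ProofOfWork {

    // Search for a nonce whose hash together with the data has `difficulty` leading zero bytes.
    public static long findNonce(String data, int difficulty) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        for (long nonce = 0; ; nonce++) {
            byte[] hash = sha256.digest((data + nonce).getBytes(StandardCharsets.UTF_8));
            if (hasLeadingZeroBytes(hash, difficulty)) {
                return nonce; // a valid proof-of-work was found
            }
        }
    }

    private static boolean hasLeadingZeroBytes(byte[] hash, int n) {
        for (int i = 0; i < n; i++) {
            if (hash[i] != 0) return false;
        }
        return true;
    }
}
```

Verifying a given nonce is cheap (a single hash computation), while finding one requires many attempts; increasing the difficulty makes the search exponentially harder.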

Transactions using Cryptomoney

I assume now that Cryptomoney has been created as described before. The questions are (1) how can I use the money to buy something, (2) how can I receive money and (3) how can I claim ownership of Cryptomoney.

Basically, you need a transaction history to be able to check who has given money to whom. SWIFT is used internationally for this purpose when transacting with “normal” money, such as Euros or US Dollars.

In the Cryptomoney world, the transaction history can be kept and shared by everyone in a peer-to-peer fashion. Once a unit of Cryptomoney is created, the author can sign it and insert it into the transaction history by referring to the latest transaction in this history. Hence, the transactions build a chain. Everybody can now verify that the author is the owner of this unit of Cryptomoney.

Cryptomoney can be transferred by the originator to another party by creating a new transaction from the originator to that party. This transaction includes a reference to the transaction history (to be more precise, to the longest “known” transaction history), the identification of the other party (usually the public key of the other party – see below) and the signature of the originator.

Because the transactions in the history are linked, it is very difficult to modify a single transaction within the history: such a modification can easily be detected using hashing algorithms (see the sketch below). It is also not possible to spend more Cryptomoney than you have, because this as well can easily be verified by analysing the transaction history. One can also introduce further requirements, such as that a transaction is only valid if at least 20 other parties have verified its correctness.
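Here is a minimal Java sketch of such a hash-linked transaction history. The Transaction record and its fields are invented for illustration; real Cryptocurrencies store considerably more structure (blocks, Merkle trees, signatures), but the tamper-detection principle is the same.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Each transaction stores the hash of its predecessor, so modifying any
// transaction invalidates the references of all later ones.
record Transaction(String from, String to, long amount, byte[] previousHash) {

    byte[] hash() throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update((from + "->" + to + ":" + amount).getBytes());
        sha256.update(previousHash);
        return sha256.digest();
    }
}

class ChainVerifier {

    // Returns true if every transaction correctly references its predecessor's hash.
    static boolean verify(List<Transaction> chain) throws Exception {
        for (int i = 1; i < chain.size(); i++) {
            if (!Arrays.equals(chain.get(i).previousHash(), chain.get(i - 1).hash())) {
                return false; // the history was tampered with
            }
        }
        return true;
    }
}
```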

Transactions may involve a small transaction fee to reward people creating Cryptomoney, verifying transactions or maintaining the transaction history.

Excursus: public/private key cryptography schemes (also known as asymmetric cryptography). Without going into much detail: you keep your private key secret, while your public key can be known by everyone. If someone wants to send you, for instance, an encrypted text, they use your public key to encrypt it. You can decrypt this text only with your private key, which is known only to you. You can also use your private key to sign a text that you have written. The public key can then be used by everyone to verify that only you can have signed it, since only you know the private key belonging to the public key.
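The following self-contained Java example illustrates signing and verification with the standard java.security API. It uses ECDSA, the signature family Bitcoin also uses, although Bitcoin fixes the specific secp256k1 curve and its own message format; the message here is just an illustrative string.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {

    public static void main(String[] args) throws Exception {
        // Generate a public/private key pair (elliptic-curve keys).
        KeyPair keyPair = KeyPairGenerator.getInstance("EC").generateKeyPair();

        byte[] message = "transfer 1 unit from Alice to Bob".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key, which only the owner knows...
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // ...and let anyone verify the signature with the public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```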

Storing Cryptomoney

As mentioned before, the transaction history is kept and shared by everyone. Hence, there is no need for an individual to store transactions or even units of Cryptomoney locally. The ownership of Cryptomoney can be verified from the transaction history, which is publicly available.

However, a user still needs software for storing at least one public/private key pair that is used for transactions to and from the user (see above). If this key pair gets stolen, then all the Cryptomoney owned by the user is stolen as well. Hence, you need to take serious care of your wallet. A user can also have several wallets, i.e. several public/private key pairs, to reduce the risk of money getting stolen.

Advanced Cryptomoney Developments

Cryptocurrency Exchanges

As explained before, it is very difficult for individual users to create Cryptomoney, and they may need more Cryptomoney than they are able to generate in time. Hence, Cryptocurrency exchanges exist, where one can buy Cryptomoney using “normal” money, such as Euros or US Dollars. However, you need to trust these exchanges, and they have been subject to serious attacks.

Examples of these exchanges are BTC (Bitcoins, Litecoins, PPCoins and many more), Vircurex (Bitcoins, PPCoins, Litecoins and many more), Bitstamp (for Bitcoins) or MtGox (for Bitcoins).

Cryptocurrency Payment Processors

If you have a business, then you probably have a payment processor. This could be an internal one, or you may want to outsource services, such as transaction or currency management. In the context of Cryptocurrencies, you may want a payment processor that converts the Cryptomoney into “normal” money after each transaction, to avoid speculation with Cryptomoney.

Examples of Cryptocurrency payment processors are BitPay or BitInstant.

Derivatives on Cryptocurrencies

Cryptocurrencies are currently subject to speculation and have a highly volatile value in terms of “normal” money. “Normal” currencies can also be subject to high volatility – a recent example is the Japanese Yen. Usually you can buy derivatives to insure against volatility – if you do it right. Such derivatives now exist for Cryptocurrencies as well.

Get involved

If you want to get involved, you can try it yourself! Most of the existing software is open source. The advantage is that you can check for yourself that it is correct and does what it promises to do. In the following, I will present some software for Bitcoin related to the aforementioned aspects. Currently, Bitcoin is one of the most popular Cryptocurrencies and has existed for roughly 4½ years.

Alternatively, there are also other Cryptocurrencies, such as Litecoins or PPCoins. They roughly follow the principles described here, but there are differences in the details of how they are implemented, the money available for transactions, and the stakeholders in the ecosystem, such as money creators, transaction history maintainers/verifiers or users.

Create money

Bitcoin mining software can be used to generate Bitcoins. Since it is very difficult to create Bitcoins (see above), you may want to buy specialized hardware as well. Usually, you will spend more money on energy and hardware than you will be able to generate from Bitcoins and transaction fees. Other Cryptocurrencies may reduce the costs for hardware and/or energy. There is no need to create Bitcoins to be able to use them for transactions – you can buy them on the aforementioned exchanges.

Wallet & Maintaining Transaction History

Many clients are available for maintaining the transaction history and sharing it with others. These clients can also generate a public/private key pair, so that you can do your own transactions using Bitcoins. Hence, these clients are also called wallets. Furthermore, they verify the correctness of the Bitcoin transaction block chain (the transaction history).

I tried MultiBit, which offers you the possibility to have several wallets, which reduces the risk of getting Bitcoins stolen if you use it properly.

You can now use one of the Bitcoin exchanges to transfer some money to your wallet. You do not have to buy a full Bitcoin, but you can also buy fractions of a Bitcoin.

You can even use Bitcoins on your mobile and transfer Bitcoins from one mobile to another using NFC or QR codes. There is no need for a bank in between. You can use your smartphone as a payment terminal for offering Bitcoin payment in your offline store; the QR code typically encodes a payment URI, as sketched below.
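For illustration, here is how such a payment request could be assembled as a BIP-21 style “bitcoin:” URI in Java. The receiving address and the amount are made up; a real terminal would render the resulting string as a QR code for the customer to scan.

```java
public class PaymentUri {

    public static void main(String[] args) {
        // Hypothetical receiving address of the shop's wallet
        String address = "1ExampleShopAddressXXXXXXXXXXXXXXX";
        // BIP-21 style payment URI: amount in BTC plus a human-readable label
        String uri = "bitcoin:" + address + "?amount=0.25&label=Coffee";
        System.out.println(uri); // encode this string in a QR code
    }
}
```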

Stores accepting Bitcoins

There are some stores accepting Bitcoins. In Germany, for example, we find Miro's coffeeshop (an offline coffee bar) or Bitmip (an online auction house).

Advice for your business

Here is some advice for your business:

  • Trust your government and “normal” currencies
    • Without this, even a Cryptocurrency is worth nothing
    • Cryptocurrencies do not help you with legal disputes. In fact, revoking a transaction is not foreseen in current Cryptocurrencies – it is very difficult to get your money back
  • Use Cryptomoney at your own risk
    • Have a deep understanding of financials (currency conversion, taxation and derivatives)
    • Have an understanding of the technology, particularly about possible attacks to Cryptomoney
  • Understand your public/private key infrastructure and how to secure it
  • Observe the ecosystem of your Cryptocurrencies
    • Understand the players, such as money-creators, transaction verifier and other users using it.
    • You don’t want to use Cryptomoney that is used for crime and corruption
    • A general currently known rule is that if 51% of the ecosystem is owned by dishonest (collaborating) players then it is possible to fake money.
    • Use Cryptocurrencies that are well-researched (check on Arxiv or Google Scholar)
    • Use advanced analytics to analyze the distributed transaction history to understand the stakeholder system
  • Use it currently only for a small proportion of your transactions
  • Have several Cryptocurrencies to spread risk
  • Use it as a cool distinguishing feature for your shop if it fits to your customers
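
The 51% rule can be made concrete with the catch-up formula from the original Bitcoin paper: an attacker controlling a fraction q of the hashing power catches up from z blocks behind with probability (q/p)^z, where p = 1 − q, and with certainty once q reaches 50%. A small Python illustration:

def catch_up_probability(q: float, z: int) -> float:
    # Probability that an attacker with hash share q ever catches up
    # from z blocks behind (cf. the Bitcoin whitepaper).
    p = 1.0 - q  # honest share
    return 1.0 if q >= p else (q / p) ** z

for q in (0.10, 0.30, 0.45, 0.51):
    print(q, catch_up_probability(q, z=6))

With a 10% share the attacker almost never succeeds against six confirmations; at 51% success is guaranteed – hence the rule.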

Conclusion

For now, Cryptocurrencies seem to be an obscure niche or a marketing instrument. However, they could potentially be used in the future like any other “normal” currency, with which they share common characteristics. Still, you need to understand them even more deeply than “normal” money, and they are not without flaws. Cryptocurrencies may not be environmentally friendly, because a lot of energy is needed for generating and maintaining Cryptomoney – although this may hold to some extent for normal currencies as well. Finally, it seems that they are currently not much cheaper than centralized systems, such as EC cash. Use them with great care, deep knowledge and at your own risk.

The Next Generation HTTP Protocol (HTTP 2.0) for Enterprise Applications

In this blog post I will talk about the next generation HTTP protocol (HTTP 2.0) and put special emphasis on the implications for enterprise applications. Starting with the challenges and recent improvements to the HTTP protocol, such as WebSockets, I will describe the current state of the HTTP 2.0 specification. Finally, I will discuss implications for distributed web applications and for enterprise applications based on an enterprise service bus or complex event processing. The WebRTC protocol is seen as complementary to the HTTP 2.0 specification.

Introduction

The current version of the HTTP protocol is 1.1, and it is used by most web servers, proxies and browsers on the Internet. The main difference to HTTP 1.0 is that a connection can be reused, i.e. each request for a resource, such as an image or HTML file, uses the same connection without the overhead of creating a new connection each time. This already shows the need to reduce the number of connections to avoid overloading firewalls or the network stack.

Furthermore, new protocols have emerged based on HTTP. Their goal is to support real-time applications, such as collaborative editing (cf. Apache Wave). Other examples can be found in the area of adaptive streaming, such as Apple live streaming. Clearly, these new killer applications required adaptations to the existing HTTP protocol standard. I will briefly describe these applications and explain why these adaptations are still somewhat flawed and require a new standard: HTTP 2.0.

Applications

Real-Time

Real-time applications require a permanent connection to a web server to push events or data to the server. Contrary to the standard request-response approach, the connection is never terminated. The underlying assumption is that the application does not know exactly how much data needs to be transferred to the server, and when. However, data transfers occur frequently. One example is collaborative editing: there, we need to transfer text additions, changes and removals to the server and ultimately to other participants in the collaborative editing session. More advanced collaborative editors may also transfer other events, such as clicks or highlighting of text. Given the context of real-time applications, the WebSocket standard has been developed. This standard enables a permanent connection for the aforementioned purposes. Basically, it uses HTTP 1.1, but does not transfer a lot of header information (see also the example of a standard HTTP request below). Mostly JavaScript applications leverage this standard. For compatibility reasons, the JavaScript libraries sock.js or socket.io/engine.io support a similar approach that works for older browsers or proxies. This is based on various techniques, such as XHR polling.
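
The following sketch shows the pattern from the client side, assuming the third-party Python package “websockets” and a hypothetical collaborative-editing endpoint at ws://example.org/edit – the connection is opened once and then stays open in both directions:

import asyncio
import websockets

async def edit_session():
    async with websockets.connect("ws://example.org/edit") as ws:
        # Push a local change to the server.
        await ws.send('{"op": "insert", "pos": 42, "text": "hello"}')
        # Receive remote changes as the server pushes them.
        async for event in ws:
            print("remote change:", event)

asyncio.run(edit_session())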

Media Streaming

Many popular live video streaming protocols are based on HTTP, such as Apple live streaming or MPEG-DASH. Basically, they offer a list of links to chunks (short media blocks of a few seconds each) of the media stream. These chunks are then downloaded via HTTP.
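
A minimal sketch of this pattern, assuming the third-party Python package “requests” and a hypothetical playlist URL (real players additionally parse bitrates, codecs and timing information from the playlist):

import requests

playlist = requests.get("http://example.org/stream/playlist.m3u8").text
chunk_urls = [line for line in playlist.splitlines()
              if line and not line.startswith("#")]  # skip playlist tags

for url in chunk_urls:
    chunk = requests.get(url).content  # a few seconds of media per chunk
    print(f"fetched {url}: {len(chunk)} bytes")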

Challenges

Although we have already identified some improvements in HTTP 1.1, there are several issues with the current HTTP protocol:

  1. Communication is text-based

  2. No prioritization of data streams

Communication is Text-based

If we look at a standard request and response, we see that there is a lot of overhead due to the fact that the communication is human-readable.

This can be seen from the following requests (via Google Chrome):

HTTP Example Request (Assumption: connection to server www.wikipedia.org established)

GET / HTTP/1.1
Host: www.wikipedia.org
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.70 Safari/537.17
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*

HTTP Example Response

Age:428
Cache-Control:s-maxage=3600, must-revalidate, max-age=0
Connection:keep-alive
Content-Encoding:gzip
Content-Length:11603
Content-Type:text/html; charset=utf-8
Date:Tue, 12 Feb 2013 22:36:12 GMT
Last-Modified:Mon, 11 Feb 2013 01:58:47 GMT
Server:Apache
Vary:Accept-Encoding
X-Cache:HIT from amssq38.esams.wikimedia.org
X-Cache:HIT from knsq24.knams.wikimedia.org
X-Cache-Lookup:HIT from knsq24.knams.wikimedia.org:80
X-Cache-Lookup:HIT from amssq38.esams.wikimedia.org:3128
X-Content-Type-Options:nosniff

[..] (content of the html page)

All attributes and values are human-readable. Transferring them with every request and response thus adds a lot of overhead. Clearly this is a problem for real-time or media streaming applications. Furthermore, there is overhead when parsing them for further processing.
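
A rough way to quantify this, as a sketch: compress the request headers shown above and compare sizes. SPDY/HTTP 2.0 goes further by tokenizing well-known header names, but even plain compression shows how much redundancy the text format carries:

import zlib

headers = (
    "GET / HTTP/1.1\r\n"
    "Host: www.wikipedia.org\r\n"
    "Connection: keep-alive\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17\r\n"
    "Accept-Encoding: gzip,deflate,sdch\r\n"
    "Accept-Language: en-GB,en-US;q=0.8,en;q=0.6\r\n"
)
print(len(headers.encode()), "bytes as plain text")
print(len(zlib.compress(headers.encode())), "bytes compressed")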

No Prioritization of Data Streams

The underlying assumption of HTTP is that all data streams have the same priority. However, this is not true for all applications. Let us imagine you upload several large images for your collaborative editing application. At the same time you modify some text. This can mean that the other users in the collaborative editing session won’t see the updated text until the images are uploaded. Clearly this is an undesirable situation.
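
Conceptually, prioritization means that pending data frames are scheduled by priority instead of arrival order. A toy Python sketch (priorities and frame labels are made up):

import heapq

queue = []  # (priority, sequence, payload); lower number = higher priority
heapq.heappush(queue, (5, 0, "image upload chunk 1/200"))
heapq.heappush(queue, (5, 1, "image upload chunk 2/200"))
heapq.heappush(queue, (1, 2, "text edit: insert 'foo' at position 10"))

while queue:
    priority, _, frame = heapq.heappop(queue)
    print(f"send (priority {priority}): {frame}")

The text edit is sent first even though it was queued last – exactly the behavior the collaborative editing scenario needs.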

The Current State

If we imagine further applications, such as an enterprise service bus, which can be based on web services/SOAP services or REST services, then we can see a lot of room for improvement. Hence, some big vendors have developed proprietary HTTP extensions:

SPDY by Google has been used as a first draft for the new HTTP 2.0 protocol, which is in the process of being standardized by the IETF. Microsoft's approach seems to be based on SPDY, but makes some parts of it optional to take into account mobile devices, which have limited resources and should, for example, only deal with encrypted content if necessary. Microsoft's proposed extensions have been submitted to the IETF to be taken into account for HTTP 2.0.

The main improvements of SPDY compared to the HTTP 1.1 protocol are the following:

  • Reduced HTTP overhead by tokenizing headers, compressing data and removing unnecessary headers.
  • A single data channel for multiple requests (multiplexing). This has been to some extent already part of HTTP 1.1.
  • Prioritization of data (e.g. text editing events have a higher priority than image upload events).
  • The server can push data to the client or suggest data to the client to be requested.
  • Further security features.

Luckily, SPDY is designed to be backward-compatible. It only changes the way HTTP data is transmitted, so existing applications do not need to be modified. The underlying assumption is that there is a translator in the middle (e.g. a proxy, or one provided directly by the application/library). More advanced features/applications have to implement HTTP 2.0 natively to control and leverage all features.
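
From today's perspective, this backward compatibility can be seen in modern client libraries. A sketch using the third-party Python package “httpx” (installed with its “http2” extra): the application code is ordinary request-response, and the protocol version is negotiated transparently with a fallback to HTTP 1.1:

import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://www.wikipedia.org/")
    print(response.http_version)  # e.g. "HTTP/2" or "HTTP/1.1"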

At the moment the following browsers implement SPDY:

  • Google Chrome

  • Mozilla Firefox

  • Opera

  • Amazon Silk (Cloud Browser)

Some proxies, such as NGINX, support SPDY. Furthermore, the first popular web sites implement SPDY.

Implications

Clearly, the design of HTTP 2.0 has many interesting implications. Firstly, we note that HTTP 2.0 is not purely an application layer protocol as it is described in the OSI network layer architecture. It is partly a session, presentation and transport protocol.

Secondly, we can see that prioritization of data streams is an extremely powerful feature, especially if we consider not only one application, but several applications integrated via an enterprise service bus. The enterprise service bus can also be seen as a kind of advanced HTTP 2.0 proxy. Imagine premium customers connected via a web site. The web site is integrated with the CRM and production systems via the enterprise service bus. Interactions with these customers via the web site are prioritized over interactions with basic customers.

Thirdly, complex event processing applications can leverage the speed improvements of HTTP 2.0 over HTTP 1.1 and analyze streams as well as correlate events in different streams across different users.

HTTP 2.0 may have the potential to replace not only HTTP 1.1, but also other protocols, in the future. Support and implementations of major software vendors demonstrate the seriousness of HTTP 2.0.

At the same time, we see further protocols emerging, such as the WebRTC protocol. The WebRTC protocol is a browser-based peer-to-peer protocol for voice/video chat and real-time applications; it is thus not server-based. Hence, it can be seen as complementary to the HTTP 2.0 protocol.

In the future, WebRTC and HTTP 2.0 may even be combined into an HTTP 3.0 to fulfill the vision of a truly decentralized Internet architecture.

The Future of 3D Printing

3D printing has become a hot topic over the last years, although its roots go back 30 years. Analysts say it will become a billion dollar industry in the near future. This blog post investigates what exactly is new and what we can envision for the future. I start with an overview of 3D printing. Then I describe what companies exist and what can be printed. Afterwards, I continue with an analysis of the impact on the manufacturing supply chain and a differentiation from other supply chain management concepts. Finally, I conclude with a future vision.

What is it about?

As the name indicates, 3D printing is about printing 3D objects (width, height, length) made out of solid material. It is also called additive manufacturing, because a 3D object is created by adding layers on top of each other. This layer structure can be digitally designed using CAD (Computer Aided Design) software or by using 3D models derived from real objects with 3D scanners. Different materials can be used within the layers, but at the moment they cannot be arbitrarily combined. Nevertheless, a wide range of objects has been created using 3D printing technology. At the moment, it is used mostly for design prototypes, but also for implants. However, it is an open question whether 3D printing is less costly than other types of manufacturing.

3D printing is a young market with only few existing companies and open source/crowdsourcing organizations. It may be an opportunity for 2D companies to leverage existing Intellectual Property to extend their business to new markets beyond the shrinking 2D printing market. However, new legal problems are likely to appear in these new markets.

Other business opportunities can be seen around the whole 3D printing lifecycle from mining of raw material via printing of 3D objects to recycling as well as reuse of 3D objects.

What companies exist?

There is no single big company offering 3D printers or printing services. We find several smaller ones, but among those the most popular are probably:

  • Stratasys
    • Services: They offer various consulting services to optimize the use of their technology in different domains. Furthermore, you can send them digital 3D models and let them print the 3D objects.
    • Products: Mostly printers for prototypes. There seems to be no mass production facility for end user products. Besides printers, they offer 3D production systems, which create stronger products, i.e. they are more stable. Additionally, they can utilize more material as input for the printing process.
  • 3D Systems

We observe that they offer services similar to the 2D printing industry, for instance on-demand printing. This is useful if you (1) do not have a 3D printer, (2) need to print something using different materials or (3) have run out of material for printing.

Other services seem to be more unique to the 3D printing industry, such as the market place for 3D models.

However, there have also been some open source/crowdsourcing initiatives for 3D printers or 3D objects. This includes printers that can print themselves or print upgrades to their own hardware. The popular open source printers by RepRap start from $500. There are also open source marketplaces for 3D objects, such as Thingiverse.

What materials can be used?

In the beginning, most 3D printers were able to print objects made out of plastic, and this is still mostly the case today. This material is especially useful because it can be used in a wide range of products.

Other materials include, but are not limited to:

  • Metal
  • Copper
  • Steel
  • Silver-filled polymers
  • Organic material

Obviously, the different materials require different printing technologies. Furthermore, organic material needs to be cultured after the printing process. Hence, it is currently debated in science and practice what should be labeled as 3D printing and what not. As far as I know, the aforementioned 3D printing companies are not able to print objects consisting of organic material.

What can be printed?

The sky is the limit. However, you need to take into account which quality parameters you require – for example, what resolution you need and how stable the object should be. Another issue is that it is still tricky to print complex electronic parts or to assemble them afterwards into another complex object (e.g. a smartphone); apparently, using human labor is still less costly and yields better quality. Hence, you should not expect to just hit the print button in the future and print your own Airbus A380. Complex objects will always need to be modular, and parts of them may need to be replaced for maintenance reasons. This type of work may be done in the future by robots.

Examples of what can be printed – from implants to working weapon parts – have drawn attention in the press.

How does it impact the supply chain?

Until now I have described just one part of the supply chain – the production process. However, in order to create new business opportunities, one must understand the whole supply chain, which is at the moment not as well understood as the production process. Generally, one can distinguish the following phases (cf. also SCOR):

  • Sourcing: This is probably the process most similar to existing sourcing processes.
  • Making: This will be done by the 3D printers. You may design the 3D objects using the tools and technologies mentioned in the beginning.
  • Delivering: Here, we may want to think about new business models. Should the customer print, for example, his new iPhone at home? Should there be a 3D printing shop that has all the material available and that is able to finance printers to make industry-grade products?
  • Recycling: This is somewhat similar to existing recycling processes. However, there needs to be a big incentive for customers and manufacturers to recycle. The open questions are: Can we design components, such as smartphones, so that they can be printed for recycling? Do we need new technologies to separate the end product into its source materials again?
  • Upgrading: This is a new process that does not exist as such in the hardware industry, but it is well known in the software industry. The idea is that instead of throwing away or recycling your old hardware, you just print an upgrade! This is already possible today with certain products (cf. RepRap). Imagine you have a smartphone, e.g. the Samsung Galaxy S2, and want to upgrade it to a new version, e.g. the Samsung Galaxy S3. What if, instead of recycling it, you go to your Samsung store and they print the upgrade for you? Can we design systems that are able to do this? What are the limitations? How can we design for a managed evolution of systems?

Future Vision

Currently, I have not seen many big 2D printing companies going into the 3D market. An exception is HP, which uses the technology from Stratasys. Some may think this is an obvious step, because they could potentially reuse some of their intellectual property and it could help them find new markets besides the struggling 2D printing market. However, although 3D printing is already more than 20 years old (cf. for example the patents by Stratasys), it has not yet become as important as 2D printing (was). I think we need to develop new business models (cf. previous section) to make it more successful and to make alternative models less attractive (e.g. cheap human labor in developing countries).

Thinking further, we also need to consider threats to 3D printing. At the moment, it is mostly used for prototyping. I wonder if we could not just use holographic technologies together with simulation instead: we would not even need to waste material and could quickly change the prototype. The demonstration capabilities may become similar to real 3D objects in the future, which could also mean higher productivity in comparison to 3D printing. Obviously, I do not expect to fly in a holographic plane or drive a holographic car in the next decades. Nevertheless, 3D printing has not been used for these use cases beyond prototyping, i.e. it has not yet proven useful for mass production.

From a research point of view, we should investigate how we can design components that can be upgraded using 3D printing systems. For instance, how can we avoid being constrained too much by the initial design of a system while staying flexible enough to allow upgrades that will be commercially successful in a few years? How can we manage the evolution of systems that are able to replicate? When do we need to decide to recycle the whole system and start from scratch again? We may find some answers in the software industry, but I do not think they will be sufficient.

Last, but not least, we need to consider societal challenges. Production may not be outsourced anymore to developing countries. We will see more demand for highly skilled people designing 3D objects. How do we handle scarce resources, if everybody can print anything he or she wants to have? Finally, as described above, there is the possibility for everyone to print weapons – even using today’s technology. Politicians, researchers and society itself need to find answers to deal with this.

The Emergence of Rule Management Systems for End Users

Long envisioned in research and industry, they have silently emerged: rule management systems for end users to manage their personal (business) processes. End users use these systems to define rules that automate parts of their daily behavior so that they have more time for their lives. In this blog entry, I will explain new trends and challenges as well as business opportunities.

What is it about?

Our life is full of decisions – complex unique decisions, but also simple recurring ones, such as switching off the energy at home when we leave or sending a message to your family that you have left work. Another more recent example is taking a picture and uploading it automatically to Facebook and Twitter. Making and executing simple decisions takes time that could be used for something more important. Hence, we wish to automate them as much as possible.

The idea is not new. More technically interested people have used computers or home automation systems for years or even decades. However, these systems required a lot of time to learn, were complex to configure or required software engineering skills, were proprietary, and it was very uncertain whether the manufacturer would still support them in a few years.

This has changed. Technology is part of everybody's life, especially since the smartphone and cloud boom. We use apps on our smartphones or as part of the cloud that help us automate our lives, and in the following I will present two apps in this area. These apps can be used by anyone, not only by software engineers.

What products exist?

Ifttt

The first app that I am going to present is a cloud-based app called ifttt (if this then that), which has been created by a startup company. You can use a graphical tool to describe rules for integrating various cloud services, such as Facebook, Dropbox, Foursquare, Google Calendar, Stocks or Weather. These cloud services are called channels in ifttt.

A rule has the simple format “if this then that”. The “this” part refers to triggers that start the actions specified by the “that” part. The ifttt application polls the channels every 15 minutes to see whether they have any new data, and evaluates whether new data triggers the actions specified in the “that” part.

Defining rules is an easy task, but a user does not necessarily have to do this: a huge community has already created a lot of so-called recipes, which are predefined rules that any other user can apply without any effort.
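
To make the trigger-action model concrete, here is a minimal Python sketch of an ifttt-style rule engine with a polling loop. Channel and trigger names are made up, and the poll function is a hypothetical stand-in for calling a channel's API:

import time

rules = [
    {"this": ("weather", "rain_forecast"), "that": lambda e: print("Pack an umbrella!", e)},
    {"this": ("calendar", "meeting_soon"), "that": lambda e: print("Leave for:", e)},
]

def poll_channel(channel, trigger):
    # Stand-in for a channel API call returning new events since the last poll.
    return [f"{channel}/{trigger} event"] if channel == "weather" else []

while True:
    for rule in rules:
        channel, trigger = rule["this"]
        for event in poll_channel(channel, trigger):  # the "this" part
            rule["that"](event)                       # the "that" part
    time.sleep(15 * 60)  # ifttt-style polling interval of 15 minutes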

On(X)

On(X) is a mobile and cloud application created by Microsoft Research. It leverages sensor information of your mobile for evaluating rules and triggering actions. Contrary to the aforementioned app it is thus not limited to data from other cloud platforms.

Rules are described using a JavaScript-based language (find some examples here). This means end users cannot use a graphical tool and have to have some scripting knowledge. Luckily, the platform also offers some recipes.

We see that these recipes are more sophisticated, because they can leverage sensor information from the mobile device (location/geofencing or movement).

What is in it for business?

Using these personal rule engines for business purposes is an unexplored topic. I expect that they could lead to more efficiency and to new business models.

Furthermore, we can envision improved service quality by using personal rule engines in various domains:

  • Medicine: If the user is in the kitchen, then remind them to take their medicine, if it has not been taken yet that day.
  • Energy saving: If I leave my house, then shut down all energy consumers except the heater.
  • Food delivery: If I am within 10 km range of my destination, then start delivering the pizza I have ordered.
  • Car sharing: If I leave work, then send an SMS to all the colleagues I share my car with.
  • Team collaboration: We can evaluate whether team members or members of different teams want to perform the same actions or are waiting for similar triggers. They can be brought together based on their defined rules to improve their work or split it more efficiently.

Future research

The aforementioned applications are prototypes. They need to be enhanced, and business models need to be defined for them. First of all, we need to be clear about what we want to achieve by automating simple decisions, e.g.:

    • Cost savings
    • Categorizing for quicker finding and publishing information
    • Socializing

An important research direction is how we could mine the rules, e.g. for offering advertisements or bringing people together. Most mining algorithms today focus on mining unstructured or unrelated data, but how can we mine the rules of different users and make sense out of them?

Another technical problem is the time between rule evaluation and execution. For instance, ifttt polls its data sources only every 15 minutes to check whether actions in a rule should be triggered. This can be too late in time-critical situations or can lead to confusing actions.

From a business point of view, it would be interesting to investigate the integration of personal rule management into Enterprise Resource Planning (ERP) systems as well as to provide a social platform to optimize and share rules.

Finally, I think it is important to think about rules involving or affecting several persons. For example, let us assume that a user “A” defined the rule “when I leave work then inform my car sharing colleagues”. One of the car sharing colleagues has the rule “when a user informs me about car sharing then inform all others that I do not need a seat in a car anymore”. If user “A” now cannot bring the car sharing colleague home, then he or she has a problem.

An even simpler example: user “A” defines the rule “if user ‘B’ sends me a message then send him a message back” and user “B” defines the rule “if user ‘A’ sends me a message then send him a message back”. This would lead to an infinite message exchange between the two users.

Here we need to be able to identify and handle conflicts.
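
One way to catch at least the second kind of conflict is static analysis of the rule set, as a sketch: model each rule “if X triggers me then act on Y” as an edge in a directed graph and look for cycles. The rule definitions below are the illustrative ones from above:

rules = {
    "A": ["B"],  # if B messages A, then A replies to B
    "B": ["A"],  # if A messages B, then B replies to A
}

def has_cycle(graph):
    visited, on_stack = set(), set()
    def visit(node):
        if node in on_stack:
            return True       # found a back edge, i.e. a rule loop
        if node in visited:
            return False
        visited.add(node)
        on_stack.add(node)
        if any(visit(n) for n in graph.get(node, [])):
            return True
        on_stack.discard(node)
        return False
    return any(visit(node) for node in list(graph))

print(has_cycle(rules))  # True: these two rules would message forever

Cycle detection only covers loops; conflicts such as the car sharing example additionally require reasoning about the semantics of the actions.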