Lambda, Kappa, Microservice and Enterprise Architecture for Big Data

A few years after the emergence of the Lambda Architecture, several new architectures for Big Data have appeared. I will present them and illustrate their use case scenarios. These are IT architectures, but towards the end of this blog I will also describe the corresponding Enterprise Architecture artefacts, which are sometimes referred to as the Zeta architecture.

Lambda Architecture

I have blogged before about the Lambda-Architecture. Basically this architecture consists of three layers:

  • Batch-Layer: This layer executes long-running batch processes to do analyses on larger amounts of historical data. The scope is data from several hours to weeks up to years. Here, usually Hadoop MapReduce, Hive, Pig, Spark or Flink are used together with orchestration tools, such as Oozie or Falcon.

  • Speed-Layer/Stream Processing Layer: This layer executes small (“mini”) batch processes on data within a time window (e.g. one minute) to do analyses on larger amounts of current data. The scope is data from several seconds up to several hours. Here one may use, for example, Flink, Spark or Storm.

  • Serving Layer: This layer combines the results from the batch and stream processing layers to enable fast interactive analyses by users. This layer usually leverages relational databases, but also NoSQL databases, such as graph databases (e.g. TitanDB or Neo4J), document databases (e.g. MongoDB, CouchDB), column databases (e.g. HBase), key-value stores (e.g. Redis) or search technologies (e.g. Solr). For certain use cases, NoSQL databases provide more adequate and better-performing data structures, such as graphs/trees, hash maps or inverted indexes.

In addition, I proposed a long-term storage layer to provide even cheaper storage for data that is rarely accessed, but may eventually be needed. All layers are supported by a distributed file system, such as HDFS, to store and retrieve data. A core concept is that computation is brought to data (cf. here). On the analysis side, usually standard machine learning algorithms, but also online machine learning algorithms, are used.

As you can see, the Lambda-Architecture can be realized using many different software components and combinations thereof.

While the Lambda architecture is a viable approach to tackle Big Data challenges, several other architectures have emerged that focus only on certain aspects, such as data stream processing, or on integrating it with cloud concepts.

Kappa Architecture

The Kappa Architecture focuses solely on data stream processing or “real-time” processing of “live” discrete events. Examples are events emitted by devices from the Internet of Things (IoT), social networks, log files or transaction processing systems. The original motivation was that the Lambda Architecture is too complex if you only need to do event processing.

The following assumptions exist for this architecture:

  • You have a distributed ordered event log persisted to a distributed file system, where stream processing platforms can pick up the events

  • Stream processing platforms can (re-)request events from the event log at any position. This is needed in case of failures or upgrades to the stream processing platform.

  • The event log is potentially large (several Terabytes of data / hour)

  • Mostly online machine learning algorithms are applied due to the constant delivery of new data, which is more relevant than the old already processed data

Technically, the Kappa architecture can be realized using Apache Kafka for managing the data-streams, i.e. providing the distributed ordered event log. Apache Samza enables Kafka to store the event log on HDFS for fault-tolerance and scalability. Examples for stream processing platforms are Apache Flink, Apache Spark Streaming or Apache Storm. The serving layer can in principle use the same technologies as I described for the serving layer in the Lambda Architecture.
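
The following is a minimal sketch of the central Kappa idea – a stream processor that can (re-)read the ordered event log from any position – using the kafka-python client. The broker address, topic name and offset handling are illustrative assumptions, not part of any specific production setup.

```python
# Minimal sketch: a stream processor reading an ordered event log with kafka-python.
# Broker address and topic name are placeholders.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
                         enable_auto_commit=False,
                         value_deserializer=lambda b: b.decode("utf-8"))

# The stream processor can (re-)request events from any position in the log,
# e.g. to recover from a failure or to reprocess after an upgrade.
partition = TopicPartition("events", 0)
consumer.assign([partition])
consumer.seek(partition, 0)          # rewind to the beginning (or to any stored offset)

for record in consumer:
    # process the event; offsets would be committed only after successful processing
    print(record.offset, record.value)
```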

There are some pitfalls with the Kappa architecture that you need to be aware of:

  • End-to-end ordering of events: While technologies such as Kafka can provide the events in an ordered fashion, they rely on the source system to deliver these events in order in the first place. For instance, I had a case where a system in normal operations was sending the events in order, but in case of communication errors this was not the case, because it stored the events it could not send and retransmitted them at a later point. Meanwhile, once the communication was re-established, it sent the new events immediately. The source system had to be adapted to handle these situations correctly. Alternatively, you can only ensure a partial ordering using vector clocks or something similar implemented at the event log or stream processing level.

  • Delivery paradigms for how the events are delivered to (or fetched by) the stream processing platform

    • At least once: The event is guaranteed to be delivered, but the same event might be delivered twice or more due to processing errors or communication/operation errors within Kafka. For instance, the stream processing platform might crash before it can mark events as processed although it has already processed them. This might have undesired side effects, e.g. the same event that “user A liked website W” is counted several times.

    • At most once: The event will be delivered at most once (this is the default Kafka behavior). However, it might also get lost and not be delivered at all. This could have undesired side effects, e.g. the event “user A liked website W” is not taken into account.

    • Once and only once: The event is guaranteed to be delivered exactly once. This means it will not get lost or be delivered twice or more times. However, this is not simply a combination of the above scenarios. Technically, you need to make sure in a multi-threaded distributed environment that an event is processed exactly once. This means that (1) the same event is only processed by one sequential process in the stream processing platform and (2) all other processes related to the event are made aware that one of them is already processing it. Both features can be implemented using distributed system techniques, such as semaphores or monitors. They can be realized using distributed cache systems, such as Ignite, Redis or, to a limited extent, ZooKeeper (see the sketch after this list for a simple de-duplication approach). Another simple possibility would be a relational database, but this would quickly stop scaling with large volumes.
      • Needless to say: the source system must also make sure that it delivers the same event once and only once to the ordered event log.

  • Online machine learning algorithms constantly change the underlying model to adapt it to new data. This model is used by other applications to make predictions (e.g. predicting when a machine has to go into maintenance). This also means that in case of failure we may temporarily have an outdated or simply wrong model (e.g. in case of at-least-once or at-most-once delivery). Hence, the applications need to incorporate some business logic to handle this (e.g. do not register a machine twice for maintenance or avoid permanently registering/unregistering a machine for maintenance).
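
As referenced above, one common way to get exactly-once *effects* on top of at-least-once delivery is to de-duplicate events by a unique id in a shared distributed cache. The following is a minimal sketch with the redis-py client; the key names, event format and retention window are assumptions for illustration, and a production solution would additionally need to make the marker and the side effect atomic.

```python
# Sketch: de-duplicating events by a unique event id so that at-least-once delivery
# still produces exactly-once effects. Key names and the event format are hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def process_once(raw_event: bytes) -> None:
    event = json.loads(raw_event)
    event_id = event["id"]                      # assumed unique id assigned by the source system
    # SETNX acts as a distributed "already processed?" marker shared by all workers.
    if not r.setnx(f"processed:{event_id}", 1):
        return                                  # duplicate delivery -> skip the side effect
    r.expire(f"processed:{event_id}", 7 * 24 * 3600)   # keep markers only for a retention window
    # ... apply the side effect exactly once, e.g. increment the like counter
    r.incr(f"likes:{event['website']}")
```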

Although technologies such as Kafka can help you with this, it requires a lot of thinking as well as experienced developers and architects to implement such a solution. The batch-processing layer of the Lambda architecture can somewhat mitigate the aforementioned pitfalls, but it can also be affected by them.

Last but not least, although the Kappa Architecture seems intriguing due to its simplification in comparison to the Lambda architecture, not everything can be conceptualized as events. For example, company balance sheets, end-of-the-month reports, quarterly publications etc. should not be forced into an event representation.

Microservice Architecture for Big Data

The Microservice Architecture did not originate in the Big Data movement, but is slowly being picked up by it. It is not a precisely defined style, but several common aspects exist. Basically, it is driven by the following technology advancements:

  • The implementation of applications as services instead of big monoliths
  • The emergence of software containers to deploy each of those services in isolation from each other. Isolation means that they are put in virtual environments sharing the same operating system (i.e. they are NOT in different virtual machines), and that they are connected to each other via virtualized networks and virtualized storage. These containers make much better use of the available resources than virtual machines (a minimal sketch of starting such a container follows after this list).
    • Additionally, the emergence of repositories for software containers, such as the Docker registry, to quickly version, deploy and upgrade dependent containers as well as test upgraded containers.
  • The deployment of container operating systems and cluster managers, such as CoreOS, Kubernetes or Apache Mesos, to efficiently manage software containers, manage their resources, schedule them to physical hosts and dynamically scale applications according to needs.
  • The development of object stores, such as OpenStack Swift, Amazon S3 or Google Cloud Storage. These object stores are needed to store data beyond the lifecycle of a software container in a highly dynamic cloud or scaling on-premise environment.
  • The DevOps paradigm – especially the implementation of continuous integration and delivery processes with automated testing and static code analysis to improve software quality. This also includes quick deliveries of individual services at any time independently of each other into production.
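
As referenced in the container bullet above, here is a minimal sketch of deploying a single service as an isolated container using the Docker SDK for Python. The image name, port mapping and memory limit are illustrative assumptions.

```python
# Sketch: starting an isolated service container with the Docker SDK for Python.
# The image name, port mapping and memory limit are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    image="my-analytics-service:1.0",   # hypothetical service image pulled from a registry
    detach=True,
    name="analytics-service",
    ports={"8080/tcp": 8080},           # expose the service on the host
    mem_limit="512m",                   # per-container resource isolation
)
print(container.status)
```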

An example of the Microservice architecture is the AWS Lambda platform (not to be confused with the Lambda architecture) and related services provided by Amazon AWS.

Nevertheless, the Microservice Architecture poses some challenges for Big Data architectures:

  • What should be a service: For instance, you have Apache Spark or Apache Flink forming a cluster to run your application. Should you have a dedicated cluster of software containers for each application, or should you provide a shared cluster of software containers? It can make sense to choose the first option, i.e. a dedicated cluster per application, due to different scaling and performance needs of the applications.
  • The usage of object stores. Object stores are needed as a large-scale, dynamically scalable storage that is shared among containers. However, currently there are some issues, such as performance and consistency models (“eventually consistent”). Here, the paradigm of “Bring Computation to Data” (cf. here) is violated. Nevertheless, this can be mitigated either by using HDFS as a temporary file system in the containers and fetching the data beforehand from the object store, or by using an in-memory caching solution, such as provided by Apache Ignite or, to some extent, Apache Spark or Apache Flink. A sketch of the first mitigation follows below.
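
The following is a minimal sketch of that first mitigation: staging input data from an S3-compatible object store into the container's local/HDFS staging area before running the job. The endpoint, bucket, key and local path are hypothetical placeholders.

```python
# Sketch: fetch input data from an S3-compatible object store into local staging
# storage before processing, so computation runs close to the data.
# Endpoint, bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="http://objectstore:9000")  # e.g. a Swift/S3 gateway

def stage_input(bucket: str, key: str, local_path: str) -> str:
    """Copy one object to fast local storage for the subsequent Spark/Flink job."""
    s3.download_file(bucket, key, local_path)
    return local_path

stage_input("analytics-data", "events/2016/01/events.parquet", "/data/staging/events.parquet")
# ... afterwards the processing job running in the container reads /data/staging/
```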

I see that in these environments the role of software-defined networking (SDN) will become crucial, not only in cloud data centers but also in on-premise data centers. SDN (which should NOT be confused with virtualized networks) enables centrally controlled, intelligent routing of network flows, as needed in the dynamically scaling platforms required by the Microservice architecture. The old decentralized definition of the network, e.g. in the form of decentralized routing, simply does not scale here to enable optimal performance.

Conclusion

I presented here several architectures for Big Data that have emerged recently. Although they are based on technologies that are already several years old, I observe that many organizations are overwhelmed with these new technologies and struggle to adopt and fully leverage them. This has several reasons.

One tool to manage this could be proper Enterprise Architecture Management. While there are many benefits of Enterprise Architecture Management, I want to highlight the benefit of managed evolution. This paradigm enables aligning business and IT, although both change constantly and (in-)dependently of each other, with not necessarily aligned goals, as illustrated in the following picture.

[Figure: Enterprise architecture managed evolution]

As you can see from the picture, both are constantly diverging, and Enterprise Architecture Management is needed to unite them again.

However, reaching a managed evolution of Enterprise Architecture usually requires many years and business as well as IT commitment. Enterprise Architecture for Big Data is a relatively new concept, which is still subject to change. Nevertheless, some common concepts can be identified. Some people also refer to Enterprise Architecture for Big Data as the Zeta Architecture, and it does not only encompass Big Data processing, but in the context of the Microservice architecture also web servers providing the user interface for analytics (e.g. Apache Zeppelin) and further management workflows, such as backup or configuration, deployed in the form of containers.

This enterprise architecture for Big Data describes some integrated patterns for Big Data and Microservices, so that you can consistently document and implement your Lambda, Kappa or Microservice architecture, or a mixture of them. Examples of artefacts of such an enterprise architecture are (cf. also here):

  • Global Resource Management to manage the physical and virtualized resources as well as scaling them (e.g. Apache Mesos and Software Defined Networking)

  • Container Management to deploy and isolate containers (e.g. Apache Mesos)

  • Execution engines to manage different processing engines, such as Hadoop MapReduce, Apache Spark, Apache Flink or Oozie

  • Storage Management to provide Object Storage (e.g. OpenStack Swift), Cache Storage (e.g. Ignite HDFS Cache), a Distributed Filesystem (e.g. HDFS) and a Distributed Ordered Event Log (e.g. Kafka)

  • Solution architecture for one or more services that address one or more business problems. It should be separated from the enterprise architecture, which is focusing more on the strategic global picture. It can articulate a Lambda, Kappa, Microservice architecture or mixture of them.

  • Enterprise applications describe a set of services (including user interfaces)/containers that solve a business problem, including appropriate patterns for Polyglot Persistence to provide the right data structure (such as graph, columnar or hash map) for enabling interactive and fast analytics for the users, based on SQL and NoSQL databases (see above)

  • Continuous Delivery that describes how Enterprise applications are delivered to production while ensuring their quality (e.g. Jenkins, SonarQube, Gradle, Puppet etc.).

Big Data – What is next? OLTP, OLAP, Predictive Analytics, Sampling and Probabilistic Databases

Big Data has matured over the last years and is more and more becoming a standard technology used in various industries. Starting from established concepts, such as OLAP or OLTP, in the context of Big Data, I go beyond them in this blog post and describe what is needed for next-generation applications, such as autonomous cars, Industry 4.0 and smart cities. I will cover three new aspects: (1) making the underlying technology of predictive analytics transparent to the data scientist, (2) avoiding Big Data processing of one large-scale dataset by employing sampling and probabilistic data structures and (3) ensuring quality and consistency of predictive analytics using probabilistic databases. Finally, I will talk about how these aspects change the Big Data Lambda architecture and briefly address some technologies covering the three new aspects.

Big Data

Big Data has emerged over the last years as a concept to handle data that requires new data modeling concepts, data structures, algorithms and/or large-scale distributed clusters. This has several reasons, such as large data volumes, new analysis models, but also changing requirements in the light of new use cases, such as industry 4.0 and smart cities.

During investigations of these new use cases it quickly became apparent that current technologies, such as relational databases, would not be sufficient to address the new requirements. This was due to inefficient data structures and algorithms for certain analytics questions, but also due to the inherent limitations of scaling them.

Hence, Big Data technologies have been developed and are subject to continuous improvement for old and new use cases.

Big Online Transaction Processing (OLTP)

OLTP has been around for a long time and focuses on transaction processing. When the concept of OLTP emerged, it was usually a synonym for simply using relational databases to store various information related to an application – most people forgot that it was related to the processing of transactions. Additionally, it was not about technical database transactions, but business transactions, such as ordering products or receiving money. Nevertheless, most relational databases secure business transactions via technical transactions by adhering to the ACID criteria.

Today, OLTP is still relevant given its numerous implementations in enterprise systems, such as Enterprise Resource Planning systems, Customer Relationship Management systems or Supply Chain Management systems. Due to the growing complexity of international organisations these systems tend to hold more and more data and – from a data volume point of view – they tend to generate a lot of data. For instance, a large online vendor can have several exabytes of transaction data. Hence, Big Data also happens for OLTP, particularly if this data needs to be historized for analytical purposes (see next section).

However, one important difference from other systems is the access pattern: usually, there are a lot of concurrent users, each interested in a small part of the data. For instance, a customer relations agent adds some details about a conversation with a customer. Another example is that an order is updated. Hence, you need to be able to find/update a small data set within a much larger data set. Different mechanisms to handle a lot of data for OLTP usage have existed in relational database systems for a long time.

Big Online Analytical Processing (OLAP)

OLAP has been around nearly as long as OLTP, because most analyses have been done on historized transactional data. Due to the historization and different analysis needs, the amount of data is significantly higher than in OLTP systems. However, OLAP has a different access pattern: fewer concurrent users, but they are interested in the whole set of data, because they want to generate aggregated statistics over it. Hence, a lot of data is usually transferred into an OLAP system from different source systems and afterwards it is mostly read.

This led very early to the development of special OLAP databases for storing data for multidimensional analysis in cubes to match the aforementioned access pattern. They can be seen as very early NoSQL databases, although they were not described as such at the time, because the term NoSQL databases appeared only much later.

While data from OLTP systems was originally the primary source for OLAP systems, new sources of data have appeared, such as sensor data or social network graphs. This data goes beyond the capabilities of OLTP or special OLAP databases and requires new approaches.

Going beyond OLTP and OLAP

Aspect 1: Predictive Analytics

Data scientists employing predictive analytics use statistical and machine learning techniques to predict how a situation may evolve in the future. For example, they predict how sales will evolve given existing sales and patterns. Some of these techniques have existed for decades, but only recently have they become more useful, because more data can be processed with Big Data technologies.

However, current Big Data technologies, such as Hadoop, are not transparent to the end user. This is not really an issue with the Big Data technologies themselves, but with the tools used for accessing and processing the data, such as R, Matlab or SAS.

They require that the end user thinks about writing distributed analysis algorithms, e.g. via map/reduce programs in R or other languages, to do their analyses. The standard library functions for statistics can be included in such distributed programs, but the user still has to think about how to design the distributed program. This is undesirable, because these users are usually not skilled enough to design them optimally. Hence, frustration with respect to performance and effort is likely to occur.

Furthermore, organisations have to think about an integrated repository where they store these programs to enable reuse and consistent analytics. This is particularly difficult, because these programs are usually maintained by business users, who lack proper software management skills.

Unfortunately, it cannot be expected that the situation changes very soon.

Aspect 2: Sampling & Probabilistic Data Structures

Surprisingly often when we deal with Big Data, end users tend to execute queries over the whole set of data, independent of whether it has 1 million rows or 1 billion rows.

Sampling databases

While it is certainly possible to process a data set of nearly any size with modern Big Data technologies, one should carefully think if this is desired due to increased costs, time and efforts needed.

For instance, if I want to calculate the average value of all transactions, then I can calculate the average over all transactions. However, I could also take a random sample of 5 % of the transactions and know that the average of this sample is correct within an error of +/- 1 % in comparison to the total population. For most decision making this is perfectly fine. However, I only needed to process a fraction of the data and can now do further analyses with the saved time and resources. This may even lead to better informed decisions.
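
Here is a small worked sketch of this idea using only the Python standard library: estimate the average from a 5 % random sample and report the sampling error instead of scanning the full data set. The transaction values are synthetic and only serve as an illustration.

```python
# Worked sketch: estimate the average transaction value from a 5 % random sample
# and report the sampling error instead of scanning the full data set.
import math
import random
import statistics

population = [random.gauss(100.0, 20.0) for _ in range(1_000_000)]   # synthetic transactions

sample = random.sample(population, k=len(population) // 20)          # 5 % sample
mean = statistics.mean(sample)
stderr = statistics.stdev(sample) / math.sqrt(len(sample))

# 95 % confidence interval around the estimated average
print(f"estimated average: {mean:.2f} +/- {1.96 * stderr:.2f}")
```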

Luckily, there are already technologies allowing this. For example, BlinkDB, which allows – amongst others – the following SQL queries:

  • Calculate the average of the transaction values within 2 seconds with some error: SELECT avg(transactionValue) FROM table WITHIN 2 SECONDS
  • Calculate the average of the transaction values within given error boundaries: SELECT avg(transactionValue) FROM table ERROR 0.1 CONFIDENCE 95.0%

These queries over a large-scale dataset execute much faster than in any Big Data technology that does not employ sampling methods.

Particularly for predictive algorithms this makes sense, because they anyway have underlying assumptions about statistical errors, which a data scientist can easily combine with the errors from sampling databases.

Bloom filters

Probabilistic data structures, such as Bloom filters or HyperLogLog, aim in the same direction. They are increasingly implemented in traditional SQL databases and NoSQL databases.

Bloom filters can tell you if an element is part of a set of elements without browsing through the set, by employing a smart hash structure. This means you can skip trying to access elements on disk which do not exist anyway. For instance, if you want to join two large datasets, you only need to load the data for which there is a corresponding value in the other dataset. This dramatically improves performance, because you need to load less data from slow storage.

However, Bloom filters can only tell you with certainty that an element is definitely not in the set; whether a given element is in the set they can only tell you with a certain probability. For the given use cases of Bloom filters this is no problem.
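
To make this concrete, here is a minimal, self-contained Bloom filter sketch: no false negatives, but a configurable false-positive probability. The bit-array size and number of hash functions are illustrative choices, not tuned values.

```python
# Minimal Bloom filter sketch: membership test with no false negatives,
# but a small false-positive probability. Sizes are illustrative.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # derive k bit positions from salted SHA-256 hashes of the item
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter()
bf.add("customer-42")
print(bf.might_contain("customer-42"))   # True
print(bf.might_contain("customer-99"))   # almost certainly False ("definitely not in the set")
```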

HyperLogLog

HyperLogLog structures allow you to count the number of unique elements in a set without storing the whole set.

For example, let us assume you want to count the unique listeners of a song on the web.

If you use traditional SQL technologies, then you need to store 80 MB of data for one song to track 5 million unique listeners (not uncommon on Spotify). It also takes several seconds for each web site request just to count the unique listeners or to insert a new unique listener.

By using HyperLogLog you need to store at most a few kilobytes of information (usually much less) and can read/update the count of unique listeners instantaneously.

Usually these results are correct within a minor, configurable error margin, such as 0.12%.
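
The following is a simplified HyperLogLog sketch to illustrate the mechanism; the register count is an illustrative choice, and the bias corrections for very small or very large cardinalities are omitted for brevity.

```python
# Simplified HyperLogLog sketch: count distinct items with a small array of registers
# instead of storing every listener id. Small/large-range corrections are omitted.
import hashlib

P = 14
M = 1 << P                  # 16384 registers, a few kilobytes when packed
registers = [0] * M
ALPHA = 0.7213 / (1 + 1.079 / M)

def add(item: str) -> None:
    h = int(hashlib.sha256(item.encode()).hexdigest(), 16) & ((1 << 64) - 1)
    idx = h >> (64 - P)                      # first P bits select the register
    rest = h & ((1 << (64 - P)) - 1)         # remaining bits
    rank = (64 - P) - rest.bit_length() + 1  # position of the leftmost 1-bit
    registers[idx] = max(registers[idx], rank)

def estimate() -> float:
    return ALPHA * M * M / sum(2.0 ** -r for r in registers)

for i in range(1_000_000):                   # simulate one million unique listeners
    add(f"listener-{i}")
print(int(estimate()))                       # close to 1,000,000 within a small error margin
```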

Find some calculations in my lecture materials.

Aspect 3: Probabilistic Databases

Your company has deployed a Big Data technology and uses it productively. Data scientists generate new statistical models on a daily basis all over the world based on your enormous data reservoir. However, all these models have only a very selective view on the real world and different data scientists use different methods and assumptions to do the same analysis, i.e. they have a different statistical view on the same object in the real world.

This is a big issue: your organization has an inconsistent view on the market, because different data scientists may use different underlying data, assumptions and probabilities. This leads to a loss of revenue, because you may define contradictory strategies. For example, data scientist A may do a regression analysis on sales of ice cream based on surveys in northern France with a population of 1000. Data scientist B may independently do a regression analysis on sales of ice cream in southern Germany with a population of 100. Hence, both may come up with different predictions for ice cream sales in Central Europe, because they have different views on the world.

This is not only a collaboration issue. Current Big Data technologies do not support a consistent statistical view of the world on top of your data.

This is where probabilistic databases will play a key role. They provide an integrated statistical view on the world and can be queried efficiently by employing new techniques, while still supporting SQL-style queries. For example, one can query the location of a truck from a database. However, instead of just one location of one truck, you get several locations with different probabilities associated with them. Similarly, you may join all the trucks that are, with a certain probability, close to goods at a certain warehouse.
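
To illustrate the idea, here is a toy sketch of a probabilistic relation in plain Python: each truck has several possible locations with attached probabilities, and a selection returns tuples together with their probabilities instead of a single certain answer. All data and names are invented for illustration.

```python
# Toy sketch of a probabilistic relation: uncertain truck locations with probabilities.
truck_locations = [
    # (truck_id, location, probability)
    ("truck-1", "warehouse-A", 0.7),
    ("truck-1", "warehouse-B", 0.3),
    ("truck-2", "warehouse-A", 0.4),
    ("truck-2", "on-road",     0.6),
]

def near_warehouse(warehouse: str, min_prob: float = 0.0):
    """SQL-style selection over uncertain data, keeping the probabilities."""
    return [(t, p) for t, loc, p in truck_locations if loc == warehouse and p >= min_prob]

print(near_warehouse("warehouse-A"))
# [('truck-1', 0.7), ('truck-2', 0.4)] - two trucks are near warehouse A, with different confidence
```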

Current technologies are based on academic prototypes, such as MayBMS by Cornell University, BayesStore by the University of California, Berkeley, or Trio by Stanford University.

However, these technologies still lack commercial maturity and more research is needed.

Many organizations are not ready for this next big step in Big Data, and it can be expected that it will take at least 5-10 years until the first ones are ready to employ such technology.

Context of the Lambda architecture

You may wonder how this all fits into the Big Data Lambda architecture and I will briefly explain it to you here.

Aspect 1: Integration of analytics tools in the cluster

Aspect 1, the integration of analytics tools with your cluster, has not really been the focus of the Lambda Architecture. In fact, it is missing, although it has significant architectural consequences, since it affects the resources used, reusability (cf. also here) or security.

Aspect 2: Sampling databases and probabilistic data structures

Sampling databases and probabilistic data structures are most suitable for the speed and serving layers. They allow fast processing of data while only being as accurate as needed. If one is satisfied with their accuracy, which can be expected for most business cases after thoughtful reconsideration, then one won't even need a batch layer anymore.

Aspect 3: Probabilistic databases

Probabilistic databases will initially be part of the serving layer, because this is the layer the data scientists directly interact with in most cases. However, later they will be an integral part of all layers. Especially the deployment of quantum computing, as we already see it in big Internet and high-tech companies, will drive this.

Conclusion

I presented in this blog post important aspects for the future of Big Data technologies. This covered the near-term future, medium-term future and long-term future.

In the near-term future we will see a better and better integration of analytics tools into Big Data technology, enabling transparent computation of sophisticated analytics models over a Big Data cluster. In the medium-term future we will see more and more usage of sampling databases and probabilistic data structures to avoid unnecessary processing of large data to save costs and resources. In the long-term future we will see that companies will build up an integrated statistical view of the world to avoid inconsistent and duplicated analysis. This enables proper strategy definition and execution of 21st century information organizations in traditional as well as new industries.

The Lambda Architecture for Big Data in your Enterprise

I will present the Lambda architecture for Big Data in this blog post. This architecture is about integrating historical Big Data with “live” streaming Big Data. Afterwards, the concept of a large data lake in your enterprise or among enterprises in a B2B scenario is explained. This data lake – based on the lambda architecture – can replace a service-oriented architecture (SOA), because it is easier to implement and manage for large data volumes in a variety of formats. Hence, a plethora of use cases arises. Finally, I will discuss how this architecture can be implemented using various open source software technologies based on the Hadoop ecosystem.

The Lambda Architecture

Big Data has become an increasingly popular topic over the last years. Big Data is about processing large volumes of data in a variety of formats, taking into account live streaming or historical data. One large computing cluster is used to store and process all of one or more companies' data.

Internet companies, such as Google, Yahoo or Facebook, are driven by new business models for which existing technology was not suitable. This led to the development of new technologies known under the common umbrella of NoSQL. Furthermore, there has been the need to integrate them in a flexible, standardized architecture to enable Big Data. The lambda architecture is such an architecture and was coined recently by Nathan Marz and James Warren.

It has the following key features:

  • Standardized fault-tolerant distributed file system that spans the whole cluster – this file system is the base of the data lake that I will explain later.
  • A batch processing layer for processing large amounts of historical data stored in the computing cluster
  • A serving layer for providing fast access to results of batch processed data
  • A real-time processing layer (or “speed layer”) for “live” processing of data streams, such as sensor data or stock market data
  • A long-term storage layer optimized for extremely cheap storage of data that is rarely used (e.g. for legal reasons). Usually you do not find this in other articles describing the lambda architecture, but I think it is an important feature to highlight. Here you have very old data (more than multiple years) that you do not need in your day-to-day business – you can store it on very cheap hardware with a lot of disk space but much less computing power and memory capacity.

These features are not new and have been addressed partly also by other architectures known in other domains, such as Business Intelligence, Complex Event Processing, Data Warehouse or Master Data Management. However, the lambda architecture addresses them in context of huge data volumes, diversity of data formats (polyglot persistence) and integrates them all in one architecture.

The term “lambda” stems from the following function used for doing analytics in the context of Big Data:

query = λ(all data) = λ(live streaming data) * λ(historical data)

Basically, it says that any analytics function λ combining live streaming data and historical data can be computed on systems implementing the lambda architecture. I will later discuss the implications of this for the implementation of the architecture.
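
As a minimal sketch of this function, the answer to a query can be produced by merging a precomputed batch view over historical data with the incrementally updated speed-layer view. The page-view counters below are a toy example; the merge operation depends on the actual analytics function.

```python
# Sketch of the lambda function above: a query result merges the batch view
# (precomputed over historical data) with the speed-layer view (recent events only).
batch_view = {"page-A": 10_000, "page-B": 2_500}     # recomputed periodically by the batch layer
speed_view = {"page-A": 42, "page-C": 7}             # incrementally updated by the speed layer

def query(page: str) -> int:
    """query = lambda(historical data) combined with lambda(live streaming data)."""
    return batch_view.get(page, 0) + speed_view.get(page, 0)

print(query("page-A"))   # 10042 - the historical total plus the not-yet-batched recent events
```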

The lambda architecture is illustrated in the following figure.

[Figure: Lambda architecture]

The lambda architecture provides the data scientist with means and tools to analyze any data occurring in the company, whereby tools can be easily plugged into the architecture without requiring major implementation efforts later.

Machine learning components can autonomously leverage the lambda architecture to do prediction and automatically implement actions. This is known as predictive and prescriptive analytics.

Data Lake

One of the most interesting aspects of the lambda architecture is that you have a cluster of nearly unlimited storage and memory capacity. You can even have an in-memory database with a memory capacity on the terabyte to petabyte scale distributed over the whole cluster. Popular open source frameworks, such as Hadoop, allow you to use commodity hardware, so that deploying such an architecture can be relatively cheap, and they already have built-in fault-tolerance, so that developers do not need to mess around with it.

With such a large cluster you can create a big data lake in your company (see next figure). Basically, all your data ends up in this cluster and all applications, including the ones in the cloud, can share it via simple file system access mechanisms, and you can use the computing power of the whole cluster to do analyses. Needless to say, you save a lot of money, because you avoid a lot of redundant ETL processes, which all have to be made fault-proof and interact with different systems. Modern Big Data architectures take care of this for you.

[Figure: Data lake]

Finally, exchanging data becomes much easier than in a Service-Oriented Architecture (SOA), where you need to design interfaces and implement services – here every application simply accesses the distributed file system in the cluster.

Implementing a Lambda Architecture

There are several things to consider when you implement the lambda architecture. Firstly, you can choose from a variety of components to implement it. For instance, on the open source side Apache Hadoop / Apache Spark is very popular which is used by many companies including all popular Internet companies, such as Facebook or Google. You can also use other open source components, such as Apache Cassandra for batch processing and Twitter Storm for Stream processing. Additionally, you can also use commercial tools, such as SAP HANA Cloud platform. Finally, you can put your lambda architecture completely on-premise, completely in the cloud (see my example with Amazon Elastic Map Reduce, which partly implements a Lambda Architecture) or have some kind of hybrid model. In the following I will describe an implementation using Apache Hadoop and additional tools that can integrate with Apache Hadoop.

Software Components

You can use the following components for implementing the lambda architecture.

  • Standardized fault-tolerant distributed file system: Hadoop Distributed Filesystem (HDFS). You can use also other distributed file systems. The choice of the file system is transparent to the application, i.e. they won’t need to use different APIs for different file systems. Most of the time you will be fine with HDFS, but, for example, cloud providers, such as Amazon, may implement their own that fits to their infrastructure.
  • Batch Processing layer: Here you can use Hadoop YARN, which is responsible for distributing Big Data analytics jobs, such as map reduce jobs. YARN even allows you to “containerize” your jobs, i.e. define CPU, memory and network limitations across the big data cluster for a specific job. This allows you to do proper capacity management – one of the most important aspects of a lambda architecture. If you need in-memory batch processing then you should check out Apache Spark. If you want to have more generic job control, e.g. because you have other distributed applications around your cluster not based on the MapReduce paradigm, you can use Apache Mesos.
  • Serving layer: The serving layer provides fast access and advanced query mechanisms for the results of batch jobs. Here you can use typical Big Data databases and data warehouses, such as Apache HBase or Apache Shark (for in-memory access). You will probably have multiple different technologies here according to the polyglot persistence NoSQL paradigm. They offer typical interfaces, such as JDBC or ODBC, to integrate with any application.
  • Real-time processing layer: Although Hadoop can process streaming data, most of the time you will choose a software component supporting complex event processing of live streaming data across your cluster, such as Apache Spark Streaming or Twitter Storm.
  • The long term persistence layer is mostly a hardware choice: Here you need a lot of cheap hard disk space, e.g. by not using SSD flash drives, and little computing / memory power. It is usually a separate cluster connected to the other cluster and it leverages the fault tolerance features of HDFS, such as automated replication of data to several nodes and re-replication in case of node failures.

Furthermore, you can have a lot of other software components that automatically build on the aforementioned core technologies, such as Apache Hive or Apache Shark, a Data Warehouse for Hadoop, or Apache Oozie, which is a workflow tool for complex ETL processes distributed over your data lake.

As mentioned before, there is a wide variety of alternatives that you can use to implement the lambda architecture. The standardized fault-tolerant distributed file system is most of the time the base for everything and you can also gradually evolve your architecture and implement it using different components.

Delivery Pipeline

I briefly described before that capacity management is an important part of the lambda architecture. You need to define how big data jobs are programmed and tested as well as how they get into the cluster. I expect that in the future not only programmers, but also business people, such as data scientists, will need to load big data jobs into your cluster. This means you will need to (1) properly define your delivery pipeline, (2) implement and enforce proper capacity management and (3) have bullet-proof dependency management for different software versions in your cluster.

Luckily, by using Apache YARN or Apache Mesos together with cluster monitoring software, such as Ganglia, you can do proper capacity management.

Recently, more tools, such as Docker, using advanced virtualization features of the Linux kernel (cgroups), have emerged, making capacity management even easier and more flexible. These technologies also have built-in dependency management to avoid a library/versioning hell. Google developed an open source scheduling system for them, called Kubernetes.

Combining Stream-Processing and Batch Data

One core goal of the lambda architecture is to integrate live streaming and batch processing. In fact, most of the recent articles on lambda architecture are just about providing both as software components. However, you will also need to integrate this on the query level, because complex event processing queries are a little bit different from batch processing queries.

Spark Streaming demonstrates how you can join historical data with stream processed data at the same time.
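
As a minimal sketch of such a combination, the following uses the newer PySpark Structured Streaming API (an assumption on my part; the original text refers to Spark Streaming in general) to enrich a live event stream with a static, historical data set. The paths, topic name, broker address and the customer_id column are placeholders, and the Spark-Kafka integration package is assumed to be available on the cluster.

```python
# Sketch with PySpark Structured Streaming: enrich a live event stream with a static,
# historical data set from the data lake. Paths, topic name and schema are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lambda-join").getOrCreate()

historical = spark.read.parquet("hdfs:///datalake/customers")       # batch data, assumed to contain customer_id

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "transactions")
          .load()
          .selectExpr("CAST(key AS STRING) AS customer_id",
                      "CAST(value AS STRING) AS payload"))

# Stream-static join: every incoming event is matched against the historical records.
enriched = stream.join(historical, on="customer_id", how="left")

query = (enriched.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```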

Hardware Components

Hardware considerations for a lambda architecture have – if at all – only been briefly discussed in most publications. Hardware planning is important for your cluster – we have seen this already with the long-term storage. Furthermore, if you have a few very old machines in your big data cluster, then this will affect all jobs running on your cluster. You will need to have proper monitoring tools and rules deployed to automatically identify these kinds of bottlenecks.

Conclusion

Once you have implemented the lambda architecture you will need to teach everybody to use it. You will need to plan the migration of data storage for analytics from the individual systems to your data lake, i.e. your big data cluster. Keep in mind that the lambda architecture is about analytics. Although it is possible to include transactional systems in this (e.g. a MySQL Cluster), you will probably still use standard transactional databases for your individual ERP systems, CRM systems etc., from which you extract the data and put it into the cluster for analytics.

However, there are also other tools for doing distributed transactions, such as CloudTPS or even more advanced the Bitcoin transaction system. They may replace individual transactional databases in the future.

More and more companies are embarking on the journey of a standardized Big Data architecture each year. Most of them use open source technologies to gradually migrate towards one big data lake as it has been described here.

Big Data: Bring Computation to Data

Big Data is the topic of the coming years. Even today, large Internet companies store exabytes of data and their revenue model is based on selling products as well as services around this data. Consequently, they need to process data using advanced statistical methods, such as machine learning. Hence, they need to think about how to do this efficiently. Currently, especially in-memory technology is hyped to address this issue. However, this is only one aspect. A fundamentally more important aspect is where the data is processed in a distributed multi-node data environment.

A brief history on software architectures

In the beginning of software development, many applications were single monolithic applications. They were deployed on a single computer. This led to several problems: developers could hardly reuse code of monolithic applications, and the approach did not scale very well since it was limited to a single computer. The first problem has been addressed by introducing different layers into the architecture. The resulting architectures are usually based on three layers (see next figure): data layer, service layer and presentation layer. The data layer handles any functionality for managing data, such as querying or storing it. The service layer implements business logic, e.g. it implements business processes. The presentation layer allows the user to interact with the implemented business processes, e.g. entering new customer data. The layers communicate with each other using well-defined interfaces, implemented today in REST, OData, SOAP, WebSockets or HTTP/2.0.

[Figure: Three-layer architecture]

With the emergence of the Internet, these layers had to be put physically on different machines to provide larger scalability. However, they were never designed with this in mind. The network layer has only limited transport bandwidth and capacity. Indeed, for very large data it can be faster to store it on a large drive and transport it by truck to its destination than to transfer it over the network.

Additionally, during development the scalability of data computation is of less interest, because in the Internet world it is often not known how many people will access an application, and this may change over time. Hence, you need to be able to scale dynamically up and down. I observe that more and more of the development efforts in this area have moved to operations, who need to implement monitors, load balancers and other technology to scale applications. This is also the reason why DevOps is a popular and emerging paradigm for developing and operating Internet-scale web applications, such as Netflix.

Towards New Software Architectures: Bring Computation to Data

The multi-layer approach does make sense and you could even split it into more layers (“services”), but you have to carefully evaluate the complexity and reusability of your service design. More importantly, you will have to think about new interfaces, because if components are located on different machines or different memory instances, your application will spend a lot of time moving data between them. For instance, the application logic on the application server may request all customer transactions from the database and then correlate them to write the results back into the database. This requires a lot of data to be transferred from the database to the application server and potentially costs a lot of performance. Finally, it does not scale at all.

This problem first emerged when companies introduced the first Online Analytical Processing (OLAP) engines as part of business intelligence solutions for understanding their business. Database queries proved too simple and would have required transferring a lot of data to the application server first. Hence, the Structured Query Language (SQL) for databases was extended to cope with these new requirements (e.g. the CUBE operator). Moreover, you can define your own custom functions (e.g. SQL stored procedures), but they have to be implemented in a very vendor-specific way. For instance, distributed databases based on Apache Hadoop support custom functions. However, you can sometimes integrate other programming languages, such as Java. While stored procedures are already an improvement in terms of security (protection against SQL injection attacks), they have the problem that it is very difficult to write sophisticated programs to handle modern Big Data applications. For instance, many applications require machine learning, statistical correlation or other statistical methods. It is difficult to write them as stored procedures and to maintain support for different vendors. Furthermore, it leads again to monolithic applications. Finally, they are not dynamic – the application cannot decide to do any new computation on the fly without reimplementing it in the database layer (e.g. implementing a new machine learning algorithm). Hence, I suggest another way to address this issue.

A Standard for Bringing Computation to Data?

As mentioned, we want to support modern Big Data applications by providing suitable language support for machine learning and statistical methods on top of any database system (e.g. MySQL, Hadoop, HBase or IBM DB2). The next figure illustrates the new approach. The communication between the presentation and service layer works as usual. However, the services do not call functions on the data layer, but send any data-intensive computation they want to perform as an R script to the data layer, which executes it and only sends back the result.

[Figure: Bring computation to data architecture]

I have observed that the programming language R for statistical computing has recently been integrated in various data environments, such as transactional databases, Apache Hadoop clusters or in-memory databases, such as SAP HANA. Hence, I think R could be a suitable language for describing computation that operates on data. Additionally, R already has a lot of built-in packages for machine learning and statistical data processing. Finally, depending on the openness of the underlying data environment, you can integrate R tightly into it, so you may not have to do extensive in-memory transfers.
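
The following is a minimal sketch of the proposed pattern from the service layer's perspective: it ships an R script to the data layer and only receives the (small) aggregated result back. The HTTP endpoint is entirely hypothetical, and the R script is just an illustrative aggregation that would run where the data lives.

```python
# Sketch: the service layer ships an R script to the data layer for execution
# and only gets the result back. The execution endpoint is hypothetical.
import requests

r_script = """
transactions <- read.csv("/data/transactions.csv")
result <- aggregate(value ~ customer, data = transactions, FUN = sum)
write.csv(result, stdout(), row.names = FALSE)
"""

response = requests.post("http://data-layer.internal/execute-r",   # hypothetical execution service
                         data=r_script.encode("utf-8"),
                         headers={"Content-Type": "text/plain"},
                         timeout=300)
print(response.text)    # only the aggregated result crosses the network, not the raw data
```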

The advantages of the approach are:

  • Business logic stays in the service layer and does not move to the data layer
  • You can easily add new services without modifying the data layer – so you avoid a tight coupling, which makes it easier to change the data layer or to introduce new functionality
  • You can mine the R scripts generated by services to determine which computation the user is likely to do next and start executing it before the user requests it.
  • Caching and distribution of data processing can be based on a more sophisticated analysis of the R scripts using the R Profiler Rprof
  • R is already known by many business analysts or social scientists/psychologists

However, you will need some functionality for governing the execution of the R scripts in the data layer. This includes decisions on when to schedule computation or when to create new computing/data nodes (e.g. real-time vs. batch). This will require a company-wide enterprise architecture approach where you need to define which data should be processed in real time and which data should be batch-processed. Furthermore, you need to take into account security and separation of concerns.

In this context, Apache Hadoop might be an interesting solution from the technology perspective.

What is next

The aforementioned approach is only the beginning. By using this solution, you can think about true inter-cloud deployments of your application. Finally, you can enable inter-organizational data-processing business processes.

Enterprise Architecture Management in Business Networks

In my last blog post, I wrote about multi-cloud scenarios for enterprise applications focusing on enterprise applications of one company distributed over several different cloud providers. This blog post will be about enterprise applications connecting data, processes and the organization of different companies within business networks. Particularly complex scenarios with a high competition and margins, such as third party logistics (3PL) require a sophisticated approach ensuring and extending competitive advantages. We will see challenges when applying reference models, such as EDIFACT, ASC X12 or SCOR. Nevertheless, I see reference models – or more particularly their combination – as key success factors for business networks, since they represent best practices, common understanding and can significantly improve on-boarding as well as continuous education of new business network members. Hence, I will discuss how enterprise architecture and portfolio management can support the application and combination of different reference models in business networks. Finally, I present how the emerging concept of virtual software containers can support this approach from a technology perspective.

Types of Business Networks

One interesting question is what constitutes a business network [1]. Of course, it can be predefined and agreed upon, but there are a lot of business networks where there are undefined and informal relations between two companies that also have (in-)dependent relations with other companies. The whole network of relations is called a business network. This is very similar to social networks, where two related human beings have independent relations to other human beings. However, all types of business networks have different forms of implicit or explicit governance, i.e. decision-making structures. Implicit governance refers to the fact that the chosen governance model has not been defined or agreed on by all involved parties in a business network. Explicit governance refers to an awareness and definition of governance arrangements by all parties in a business network.

The following generic modes of governance can exist in a business network (see next figure):

  • One party holds most/all types of decision-making roles and the others have merely an execution role

  • A group holds most/all types of decision-making roles and the majority has only an execution role

  • Several large groups hold decision-making roles related to different aspects and the majority has only an execution role

  • Everybody has every role

[Figure: Business network governance]

Additionally, business networks may expose a different degree of awareness and intensity of relations. On the one hand you may have a very structured business network, such as a supply chain, and on the other hand there is the free market, where two parties directly interact without considering other parties in their interaction. Both extremes are unlikely and we will find companies across the whole spectrum. For instance, within a larger supply chain, one company may know only the direct predecessor and the direct successor company. It may just agree on the specification of the product to be delivered, but may not include any data or impose any processes on how the product should be manufactured. This means there is a limited degree of awareness, and the intensity is less strong, because they do not really know how something is achieved by the other organizations in the business network.

[Figure: Contract logistics]

It can be observed that business networks are becoming more complex, because new types of business networks emerge, such as contract logistics or third party logistics, where your business partners integrate directly and dynamically into your manufacturing plant or point of sale as well as the corresponding business processes. Hence, you need to work out best practices and stay ahead of the competition. An example can be seen in the previous figure, where the third party logistics provider has a packaging business process deployed at “Manufacturing Plant A”. This business process leverages applications and other resources within the sphere of “Manufacturing Plant A”. Besides delivery, the third party logistics provider integrates similarly into “Manufacturing Plant B”, where it does pre-assembly of the delivered parts from “Manufacturing Plant A”.

Applying Reference Models for Business Networks

Reference models have existed for several decades in the area of business information systems, management and software engineering. Some are driven by academia and others by industry. Usually both have been validated scientifically and in practice.

Reference models represent best industry practices for business processes derived from experts and organizations. They can cover the process, organization/governance, product, data and/or IT application perspective within a given business domain. Hence, they can also be viewed as standards. Examples for reference models are EDIFACT, SCOR, Prince2 or TOGAF. These are rather generic models, but there are also industry specific ones, such as the one existing for humanitarian supply chain operations [2] or retailing [3].

The main benefits of reference models with respect to business networks are:

  • Support your Enterprise Architecture Management (e.g. by reduced modeling efforts, transparency or common language)

  • Benchmark against industry

  • Evaluation of applications for enabling business networks

  • Business network integration by integrating available applications in a business network

There are some issues involved when using reference models:

  • They are “just” models. Having them is like having a book on a shelf – pretty useless

  • Some of them are very generic applying to any business case/network and others are very specific

  • Some focus on business processes (e.g. SCOR), some on business data (e.g. EDIFACT, ASC X12), nearly none on organizational/governance aspects, others on material or money flows and others combine only some of the aspects (e.g. ARIS)

  • Some do provide key performance indicators for benchmarking your performance against the reference model, but most do not

  • It is unclear how different reference models can be combined and tailored to enable business networks

  • Tools supporting definition, viewing, visualizing, expertise provisioning, publishing or adaptation of these models are not standardized and a wide variety exists

  • Tools supporting monitoring the implementation of reference models in information systems consisting of technology and humans do not really exist

Reference models for business networks, such as EDIFACT, already exist and are used successfully in practice. However, in order to benefit from reference models in a business network, you will need an integrated approach addressing the aforementioned issues, as I will present in the next section.

Enterprise Architecture Management in Business Networks

Reference models are needed for superior business performance to deal with the increasing complexity of business networks. You will never have a perfect world by using only one reference model. Hence, you will need an enterprise architecture management approach for business networks to efficiently and effectively address the issues of one single reference model by combining several reference models (see next figure). Traditionally, enterprise architecture management focused only on the single enterprise and not business networks, but given the growing complexity of business networks and disrupting societal changes, it is mandatory to consider the business network dimension.

[Figure: Reference model puzzle]

Establishing an enterprise architecture management approach depends on the type of business network as I have explained before. For example, you may have one organization selecting and managing your reference model portfolio and application landscape for the whole business network. You may have also no one responsible, but you need to align and be aware of each other’s portfolio. For instance, you can create a steering board for this. Additionally, you will need to establish key performance indicators and benchmarking processes with respect to the business network’s reference model portfolio.

Once you have your enterprise architecture management approach leveraging combined and tailored reference models, you will have to address the aforementioned dynamics as well as the tight integration between business partners in the business network’s information systems. Traditional ERP, CRM and SCM software packages will face difficulties here, because even if all partners used the same systems, there would be a huge variety of configurations reflecting the different internal business processes of the members of a business network. Additionally, you will have to manage access and provisioning over the Internet.

Cloud-based solutions already partially address these challenges. They help you manage access and governance and provide clearly defined interfaces via the emerging concept of API Management. However, these approaches do not reach far enough. You cannot dynamically move business processes and the corresponding applications and data between organizations as a package and integrate it at your business partners’ premises. Furthermore, business processes may change quickly, and you want to reuse and leverage such changes in many different organizations using the corresponding applications. This would facilitate a lot of scenarios, such as “bring your own digital business process” in third party logistics. Hence, there is still a need for further technology innovation and research.

Conclusion: Software Containers for “Bring your own Digital Business Process”

We have seen that new complex scenarios in business networks, such as third party logistics, as well as high competition, tight network integration and dynamics impose new challenges. Instant business network adaptation as well as tight integration between business partners will be a key differentiator between business networks and will ultimately decide their success. Reference models representing industry best practice need to be combined and tailored at the business network level to achieve the network’s goals. However, no silver bullet exists, so you will also need to enable enterprise architecture management at the business network level. Finally, you need tools that enable the dynamic movement of business processes as well as applications between different organizations in a business network, following a coherent and reusable approach.

Unfortunately, these tools do not exist at the moment, but there are some first approaches which you should investigate in this context. Docker can create containers consisting of digital business process artifacts, applications, databases and more. These containers can be sent over the business network and easily be integrated with containers existing in other organizations. Hence, the vision of instant dynamic business network adaptation might not be as far-fetched as we think. The next figure illustrates this idea: the third party logistics provider sends the containers “Packaging” and “Pre-Assembly” to its business partners. These containers consist of applications supporting the corresponding business process. They are executed in the business partners’ clouds and integrate with the existing business processes and applications there (e.g. the ERP system). Employees of the third party logistics provider use them on site at the business partner. The containers are executed at the business partner’s side because the business process physically takes place there, so it makes sense to let it also happen digitally there instead of sending large amounts of data back and forth over the network or suffering from a lack of application integration. A minimal sketch of shipping and running such a container follows below the figure.

(Figure: businessnetworksoftwarecomponent)
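
To make the container idea a bit more concrete, here is a minimal sketch using the Docker SDK for Python. The registry, image name, environment variable and port are hypothetical placeholders for a packaged digital business process, not part of any specific product; a real setting would add authentication, networking and integration logic.

```python
# Minimal sketch: a business partner pulls and runs a "digital business process"
# container shipped by the third party logistics provider.
# Assumptions: Docker runs locally; the registry, image name, ERP endpoint and
# port are hypothetical placeholders.
import docker

client = docker.from_env()

# Pull the packaged business process (applications, process artifacts, configuration)
image = client.images.pull("registry.3pl.example.com/packaging-process", tag="1.0")

# Run it in the business partner's cloud and point it at the local ERP system
container = client.containers.run(
    image,
    detach=True,
    environment={"ERP_ENDPOINT": "https://erp.partner.example.com/api"},  # placeholder
    ports={"8080/tcp": 8080},
)
print(container.id, container.status)
```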

References

[1] Harland, C.M.: Supply Chain Management: Relationships, Chains and Networks, British Journal of Management, Volume 7, Issue Supplement s1, pp. S63-S80, 1996.

[2] Franke, Jörn; Widera, Adam; Charoy, François; Hellingrath, Bernd; Ulmer, Cédric: Reference Process Models and Systems for Inter-Organizational Ad-Hoc Coordination – Supply Chain Management in Humanitarian Operations, 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM’2011), Lisbon, Portugal, 8-11 May, 2011.

[3] Becker, Jörg; Schütte, Reinhard: A Reference Model for Retail Enterprise, Reference Modeling for Business Systems Analyses, (eds.) Fettke, Peter; Loos, Peter, pp. 182-205, 2007.

[4] Verwijmeren, Martin: Software component architecture in supply chain management, Computers in Industry, 53, p. 165-178, 2004.

[5] Themistocleous, Marinos; Irani, Zahir; Love, Peter E.D.: Evaluating the integration of supply chain information systems: a case study, European Journal of Operational Research, 159, pp. 393-405, 2004.

Scenarios for Inter-Cloud Enterprise Architecture

The unstoppable cloud trend has reached end users and companies alike. Particularly the former openly embrace the cloud, for instance by using services provided by Google or Facebook. The latter are more cautious, fearing vendor lock-in or exposure of confidential business data, such as customer records. Nevertheless, for many scenarios the risk can be managed and is accepted by companies, because the benefits, such as scalability, new business models and cost savings, outweigh the risks. In this blog entry, I will investigate in more detail the opportunities and challenges of inter-cloud enterprise applications. Finally, we will have a look at technology supporting inter-cloud enterprise applications via cloud-bursting, i.e. enabling them to be extended dynamically over several cloud platforms.

What is an inter-cloud enterprise application?

Cloud computing encompasses all means to produce and consume computing resources, such as processing units, networks and storage, either existing in your company (on-premise) or in the Internet. Particularly the latter enables dynamic scaling of your enterprise applications, e.g. when you suddenly get a lot of new customers but do not have the necessary on-premise resources to serve them all.

Cloud computing comes in different flavors and combinations of them:

  • Infrastructure-as-a-Service (IaaS): Provides hardware and basic software infrastructure on which an enterprise application can be deployed and executed. It offers computing, storage and network resources. Example: Amazon EC2 or Google Compute.
  • Platform-as-a-Service (PaaS): Provides, on top of an IaaS, a predefined development environment, such as Java, ABAP or PHP, with various additional services (e.g. database, analytics or authentication). Example: Google App Engine or Agito BPM PaaS.
  • Software-as-a-Service (SaaS): Provides, on top of an IaaS or PaaS, a specific application over the Internet, such as a CRM application. Example: SalesForce.com or Netsuite.com.

When designing and implementing/buying your enterprise application, e.g. a customer relationship management (CRM) system, you need to decide which parts to put in the cloud. For instance, you can keep it fully on-premise or put it on a cloud in the Internet. However, different cloud vendors exist, such as Amazon, Microsoft, Google or Rackspace, and they offer different flavors of cloud computing. Depending on the design of your CRM, you can put it either on an IaaS, PaaS or SaaS cloud or a mixture of them. Furthermore, you may only put selected modules of the CRM on the cloud in the Internet, e.g. a module for doing anonymized customer analytics. You will also need to think about how this CRM system is integrated with your other enterprise applications.

Inter-Cloud Scenario and Challenges

Basically, the exemplary CRM application runs partially in the private cloud and partially in different public clouds. The CRM database is stored in the private cloud (IaaS), and some (anonymized) data is sent to different public clouds on Amazon EC2 (IaaS) and Microsoft Azure (IaaS) for number-crunching analysis. Paypal.com is used for payment processing. Besides customer data and buying history, the database contains sensor information from different points of sale, such as how long a customer was standing in front of an advertisement. Additionally, the sensor data can be used to trigger actuators, such as posting on the shop’s Facebook page what is currently trending, using the cloud service IFTTT. Furthermore, the graphical user interface presenting the analysis is hosted on Google App Engine (PaaS). The CRM is integrated with Facebook and Twitter to enhance the data with social network analysis. This is not an unrealistic scenario: many (grown) startups already deploy a similar setting and established corporations experiment with it. Clearly, this scenario lends itself to cloud-bursting, because the cloud is used heavily.

I present in the next figure the aforementioned scenario of an inter-cloud enterprise application leveraging various cloud providers.

(Figure: intercloudarchitecture)

There are several challenges involved when you distribute your business application over your private and several public clouds.

  • API Management: How do you describe different types of business and cloud resources, so you can make efficient and cost-effective decisions about where to run the analytics at a given point in time? Furthermore, how do you represent different storage capabilities (e.g. in-memory, on-disk) in different clouds? This goes further up to the level of the business application, where you need to harmonize or standardize business concepts, such as “customer” or “product”. For instance, a customer described in “Twitter” terms is different from a customer described in “Facebook” or “Salesforce.com” terms. You should also keep in mind that semantic definitions change over time, because a cloud provider changes its capabilities, such as new computing resources, or its focus. Additionally, you may want to dynamically change your cloud provider without disrupting the operation of the enterprise application.
  • Privacy, Risk and Security: How do you articulate your privacy, risk and security concerns? How do you enforce them? While technology and standards for this already exist, the cloud setting imposes new problems. For example, if you update encrypted data regularly, the cloud provider may be able to infer parts or all of your data from the differences. Furthermore, it may maliciously change the data. Finally, the market is fragmented without an integrated solution.
  • Social Network Challenge: Similarly to the semantic challenge, the problem of semantically describing social data and doing efficient analysis over several different social networks exists. Users may also change their privacy preferences arbitrarily, making reliable analytics difficult. Additionally, your whole company organizational structure and the (in-)official networks within your company are already exposed in social business networks, such as LinkedIn or Xing. This further blurs the borders of your enterprise, to which it has to adapt by integrating social networks into its business applications. For instance, your organizational hierarchy, informal networks or your company’s address book probably already exist partly in social networks.
  • Internet of Things: The Internet of Things consists of sensors and actuators delivering data or executing actions in the real world, supported by your business applications and processes. Different platforms exist to source real-world data or schedule actions in the real world using actuators. The API Management challenge exists here as well, but it goes even further: you create dynamic semantic concepts and relate your Internet of Things data to them. For example, you have attached an RFID tag and a temperature sensor to your parcels. Their data needs to be added to the information about your parcel in the ERP system. Besides the semantic concept “parcel” you also have that of a “truck” transporting your “parcel” to a destination, i.e. you have additional location information. Furthermore, the parcel may be stored temporarily in a “warehouse”. Different business applications and processes may need to know where the parcel is. They do not query the sensor data directly (e.g. “give me data from tempsen084nl_98484”), but rather formulate a query such as “list all parcels in warehouses with a temperature above 0 °C” or “list all parcels in transit”. Hence, Internet of Things data needs to be dynamically linked with business concepts used in different clouds (see the sketch after this list). This is particularly challenging for SaaS applications, which may have different conceptualizations of the same thing.
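
To illustrate the last point, here is a minimal Python sketch that links raw sensor readings to the business-level concept “parcel”, so that applications can query on the business level instead of addressing individual sensors. The parcel numbers, locations and the second sensor ID are hypothetical examples; only tempsen084nl_98484 is taken from the text above.

```python
# Minimal sketch: linking raw IoT sensor readings to the business concept "parcel"
# so that queries can be formulated on the business level.
# Parcel numbers, locations and sensor IDs are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Parcel:
    parcel_id: str
    location: str            # e.g. "warehouse-rotterdam" or "in-transit"
    temperature_sensor: str  # ID of the attached sensor

# Latest raw readings keyed by sensor ID (as delivered by an IoT platform)
sensor_readings = {
    "tempsen084nl_98484": 2.5,   # degrees Celsius
    "tempsen112de_00121": -1.0,
}

parcels = [
    Parcel("P-1001", "warehouse-rotterdam", "tempsen084nl_98484"),
    Parcel("P-1002", "in-transit", "tempsen112de_00121"),
]

def parcels_in_warehouses_above(threshold_celsius: float):
    """Business-level query: 'list all parcels in warehouses with a temperature above X'."""
    return [
        p for p in parcels
        if p.location.startswith("warehouse")
        and sensor_readings.get(p.temperature_sensor, float("-inf")) > threshold_celsius
    ]

print(parcels_in_warehouses_above(0.0))  # -> [Parcel(parcel_id='P-1001', ...)]
```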

Enterprise Architecture for Inter-Cloud Applications

You may wonder how you can integrate the above scenario into your application landscape at all and why you should do it. The basic promise of cloud computing is that it scales according to your needs and that you can outsource infrastructure to people who have the knowledge and capabilities to run it. Particularly small and medium-sized enterprises benefit from this and from the cost advantage. It is not uncommon that modern startups start their IT in the cloud (e.g. FourSquare).

However, also large corporations can benefit from the cloud, e.g. as a “neutral” ground for a complex supply chain with a lot of partners or to ramp up new innovative business models where the outcome is uncertain.

Be aware that in order to offer a solution based on the cloud, you first need a solid maturity of your enterprise architecture. Without it you are doomed to fail, because you cannot do proper risk and security analysis, scale appropriately or benefit from cost reductions and innovation.

I propose in the following figure an updated model of the enterprise architecture with new components for managing cloud-based applications. The underlying assumption is that you have an enterprise architecture, more particularly a semantic model of business objects and concepts.

(Figure: intercloudarchitecturenew)

  • Public/Private Border Gateway: This gateway is responsible for managing the transition between your private cloud and different public clouds. It may also deploy agents on each cloud to enable secure direct communication between different cloud platforms without the necessity to go through your own infrastructure. You might have more fine-granular gateways, such as private, closest supplier and public. A similar idea came to me a few years ago when I was working on inter-organizational crisis response information systems. The gateway does not only work on the lower network level, but also on the level of business processes and objects. It is business-driven: depending on business processes and rules, it decides dynamically where the borders should be set. This may also mean that different business processes have access to different things in the Internet of Things.
  • Semantic Matcher: The semantic matcher is responsible for translating business concepts from and to different technical representations of business objects in different cloud platforms (see the sketch after this list). This can involve simple transformations of non-matching data types, but also enrichment of business objects from different sources. This goes well beyond current technical standards, such as EDI or ebXML, which I see as a starting point. Semantic matching is done automatically – there is no need for creating time-consuming manual mappings. Furthermore, the semantic matcher enhances business objects with Internet of Things information, so that business applications can query or trigger them on the business level as described before. The question here is how you can keep people in control of this (see Monitor) and leverage semantic information.
  • API Manager: Cloud API management is the topic of the coming years. Besides the semantic challenge, this component provides all necessary functionality to bill, secure and publish your APIs. It keeps track of who is using your API and what impact changes to it may have. Furthermore, it supports you in composing new business software distributed over several cloud platforms using different APIs subject to continuous change. The API Manager will also have a registry of APIs with reputation and quality-of-service measures. We already see a huge variety of different APIs by different service providers (cf. ProgrammableWeb). However, the scientific community and companies have not yet picked up the inherent challenges, such as the aforementioned semantic matching, monitoring of APIs, API change management and alternative API compositions. While some work exists in the web service community, it has not yet been extended to the full Internet dimension as described in the scenario here. Additionally, it is unclear how these approaches integrate the Internet of Things paradigm.
  • Monitor: Monitoring is of key importance in this inter-cloud setting. Different cloud platforms offer different and possibly very limited means for monitoring. A key challenge here will be to consolidate the monitoring data and provide an adequate visual representation to do risk analysis and to select alternative deployment strategies on the aggregated business process level. For instance, by leveraging semantic integration we can schedule requests to semantically similar cloud and business resources. Particularly in the Internet of Things setting, we may observe unpredictable delays, which lead to delayed execution of real-world activities, e.g. a robot is notified only after 15 minutes that a parcel fell off the shelf.
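
The semantic matcher idea can be sketched in a few lines of Python: translating the business concept “customer” from one technical representation into a canonical one and enriching it along the way. The field names, the mapping rules and the derived influence score are hypothetical; a real matcher would derive such mappings (semi-)automatically from semantic models rather than hard-coding them.

```python
# Minimal sketch of the semantic matcher idea: translating the business concept
# "customer" between different technical representations used by different
# cloud services. Field names, mapping rules and the influence score are
# hypothetical illustrations.

# A "customer" as a hypothetical social-network API might represent it
twitter_like_record = {"handle": "@jane_doe", "display_name": "Jane Doe", "followers": 1520}

# Mapping rules from the source representation to the canonical business object
TWITTER_TO_CANONICAL = {
    "handle": "social_id",
    "display_name": "full_name",
}

def to_canonical_customer(record: dict, mapping: dict) -> dict:
    """Translate a source record into the canonical 'customer' business object."""
    customer = {target: record[source] for source, target in mapping.items() if source in record}
    # Enrichment: derive an additional business attribute from source-specific fields
    customer["influence_score"] = min(record.get("followers", 0) / 1000.0, 10.0)
    return customer

print(to_canonical_customer(twitter_like_record, TWITTER_TO_CANONICAL))
# {'social_id': '@jane_doe', 'full_name': 'Jane Doe', 'influence_score': 1.52}
```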

Developing and Managing Inter-Cloud Business Applications

Based on your enterprise architecture, you should ideally employ a model-driven engineering approach. This approach enables you to automate parts of the software development process. Be aware that this is not easy to do and has often failed in practice; however, I have also seen successful approaches. It is important that you select the right modeling languages, and you may need to implement your own translation tools.

Once you have all this infrastructure, you should think about software factories, which are ideal for developing and deploying standardized services for selected platforms. I imagine that in the future we will see small emerging software factories focusing on specific aspects of a cloud platform. For example, you will have a software factory for designing graphical user interfaces using map applications enhanced with selected OData services (e.g. warehouse or plant locations). In fact, I expect a market for software factories to emerge soon, extending the idea of very basic crowdsourcing platforms, such as the Amazon Mechanical Turk.

Of course, as more and more business applications shift towards private and public clouds, you will introduce new roles in your company, such as the Chief Cloud Officer (CCO). This role is responsible for managing the cloud suppliers, integrating them into your enterprise architecture and ensuring proper controlling as well as risk management.

Technology

The cloud exists already today! More and more tools emerge to manage it. However, they do not take the complete picture into account. I described several components for which no technologies exist yet; however, some existing tools go in the right direction, as I will briefly outline.

First of all, you need technology to manage your APIs to provide a single point of management towards your cloud applications. For instance, Apache Deltacloud allows managing different IaaS providers, such as Amazon EC2, IBM SmartCloud or OpenStack.
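
To give a flavor of what such a unified IaaS API looks like, here is a minimal sketch using Apache Libcloud, a related Apache project that provides a single Python API over many IaaS providers. The credentials, region and the OpenStack endpoint are placeholders; this is an illustration of the idea, not a production setup.

```python
# Minimal sketch: listing compute nodes of two different IaaS providers through
# one unified API (Apache Libcloud). Credentials and endpoints are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2_driver = get_driver(Provider.EC2)("ACCESS_KEY_ID", "SECRET_KEY", region="eu-central-1")
openstack_driver = get_driver(Provider.OPENSTACK)(
    "username", "password",
    ex_force_auth_url="https://keystone.example.com:5000",  # placeholder endpoint
    ex_force_auth_version="3.x_password",
)

# The same calls work against both providers
for driver in (ec2_driver, openstack_driver):
    for node in driver.list_nodes():
        print(driver.name, node.name, node.state)
```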

IBM Research also provides a single point of management API for cloud storage. This goes beyond simple storage and enables fault tolerance and security.

Other providers, such as Software AG, Tibco, IBM or Oracle, provide “API Management” software, which covers only a special case of API management. In fact, they provide software to publish, manage the lifecycle of, monitor, secure and bill your own APIs offered to the public on the web. Unfortunately, they do not describe the necessary business processes to enable their technology in your company. Besides that, they do not support B2B interaction very well, but focus on business-to-developer aspects only. Additionally, you find registries for public web APIs, such as ProgrammableWeb or APIHub, which are a first starting point for finding APIs. Unfortunately, they do not feature semantic descriptions and thus no semantic matching towards your business objects, which means a lot of laborious manual work for matching them to your application.

There is not much software for managing the borders between private and public clouds, or even for allowing more fine-granular borders, such as private, closest partner and public. There is software for visualizing and monitoring these borders, such as the eCloudManager by Fluid Operations, which features semantic integration of different cloud resources. However, it is unclear how you can enforce these borders, how you control them and how you can manage the different borders. Dome9 goes in this direction, but focuses only on security policies for IaaS applications. It only understands data and low-level security, not security and privacy over business objects. Deployment configuration software, such as Puppet or Chef, is only a first step, since it focuses only on deployment, but not on operation.

On the monitoring side you will find a lot of software, such as Apache Flume or Tibco HAWK. While these operate more on the lower level of software development, IFTTT enables the execution of business rules over data from several cloud providers offering public APIs. Surprisingly, it currently considers itself more as an end-user-facing company. Additionally, you find approaches for monitoring distributed business processes in the academic community.
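
As a toy illustration of such a business rule evaluated over public cloud APIs, consider the following sketch. Both URLs and the JSON structure are purely hypothetical placeholders; the point is the pattern of polling a provider metric and triggering a business-level action.

```python
# Minimal sketch of an IFTTT-style business rule over a cloud API:
# "if the average analytics-job latency on provider X exceeds a threshold,
# notify operations". The endpoints and JSON fields are hypothetical.
import requests

METRICS_URL = "https://api.cloud-provider.example.com/v1/metrics/latency"  # placeholder
ALERT_URL = "https://hooks.chat.example.com/ops-channel"                   # placeholder

def check_latency_rule(threshold_ms: float = 500.0) -> None:
    metrics = requests.get(METRICS_URL, timeout=10).json()
    avg_latency = metrics.get("average_ms", 0.0)
    if avg_latency > threshold_ms:
        requests.post(ALERT_URL, json={"text": f"High latency: {avg_latency} ms"}, timeout=10)

check_latency_rule()
```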

Unfortunately, we find little ready-to-use software in the “Internet of Things” area. I have worked myself with several R&D prototypes enabling cloud integration and gateways, but they are not ready for the market. Products have emerged, but they target only special niches, e.g. Internet-of-Things-enabled point-of-sale shops. They particularly lack a vision of how they can be used in an enterprise-wide application landscape or within a B2B enterprise architecture.

Conclusion

I described in this blog the challenges of inter-cloud business applications. I think that in the near future (3-5 years) all organizations will have some of them. Technically they are already possible and exist to some extent. For many companies the risk and costs will be lower than managing everything on their own. Nevertheless, a key requirement is that you have a working enterprise architecture management strategy. Without it you won’t have any benefits. More particularly, from the business side you will need adequate governance strategies for different clouds and APIs.

We have already seen key technologies emerging, but there is still a lot to do. Despite decades of research on semantic technologies, there exists today no software that can perform automated semantic matching of cloud and business concepts existing in different components of an inter-cloud business application. Furthermore, there are no criteria for selecting a semantic description language for business purposes as broad as described here. Enterprise Architecture Management tools in this area emerge only slowly. Monitoring is still fragmented, with many low-level tools but only few high-level business monitoring tools. They cannot answer simple questions, such as “if cloud provider A goes down, how fast can I recover my operations and what are the limitations?”. API Management is another evolving area which will have a significant impact in the coming years. However, current tools only consider low-level technical aspects and not high-level business concepts.

Finally, you see that many of the challenges mentioned in the beginning, such as the social network challenge or the Internet of Things challenge, are simply not yet solved, but large-scale research efforts are on their way. This means further investigation is needed to clarify the relationships between the aforementioned components. Unfortunately, many of the established middleware vendors lack a clear vision of cloud computing and the Internet of Things. Hence, I expect this gap to be filled by startups in this area.

Modularizing your Business and Software Component Design

In this blog, I will talk about modularizing your enterprise from a business and software perspective. We start from the business perspective, where I provide some background on how today’s businesses are modularized. Afterwards, we will investigate how we can support the modularized business with software components and how they can be designed. Finally, we will look at some software tools enabling component-based design for a modularized business, such as the Service Component Architecture (SCA) or OSGi.

Business perspective

You will find a lot of different definitions of what a business module is and how a business can be modularized. Most commonly, business modules are known as business functions, such as controlling, finance, marketing, sales or production. Of course, you can also view this on a more fine-granular level. Furthermore, we may have several instances of the same module. This is illustrated in the following figure. On the left-hand side the business modules of a single enterprise are shown. On the right-hand side you see the business modules of a decentralized organization, where the enterprise is split up into several enterprises, one for each region. Business modules are replicated across regions, but adapted to local needs.

(Figure: businessarchitecture)

A module has usually clear interfaces to other modules. For instance, in earlier times you used paper forms to order something from the production department.

One of the most interesting questions is how one should design business modules. There is no easy answer to this, but one goal is to reduce complexity between modules. This means there should not be many dependencies between modules, if any. Within one module, however, there can be many dependencies. For instance, people in the production department work very closely together, because they share common knowledge and resources, such as machines or financial resources.

On the other side, production and sales have some very different business processes. Obviously, they still depend on each other, but this dependency should be handled through a clear interface between them. For example, there can be regular feedback from a sales person to the production engineer on what the customer needs.

Clearly, how you define business modules and the organization depends on the economic environment. However, this environment changes, which means business modules can be retired and new interfaces or completely new business modules created.

Unfortunately, this is usually not well documented and communicated in many businesses. Particularly, the reasons why a business has been designed out of a given set of modules and dependencies usually exist only in the heads of a few people. Additionally, the interfaces between business modules and their purpose are often not obvious. This can mean a significant loss of competitive advantage.

Linking Business and IT Perspective: Enterprise Architecture

Business and IT do not necessarily have the same goals. This means they need to be aligned so that they do not conflict. Hence, you need to map your business modules to your IT components. This process is called Enterprise Architecture Management. During this process the Enterprise Architecture is constantly modified and adapted to the economic environment.

Of course, you cannot map all your business and your whole IT, because this would be too costly. Nevertheless, you need to choose the important parts that you want to map. Additionally, you will need to define governance processes and structures related to this, but this is not part of this blog entry.

One popular, but simple, illustration is an enterprise architecture composed of four architectures:

  • The Business Architecture describes the business functions/modules, their relations within business processes, people, the underlying strategy, business goals and the relevant economic environment.
  • The Information Architecture is about the business data, their relationships to business functions/modules and processes, the people, their value as well as security concerns.
  • The Software Architecture depicts different kinds of components according to IT goals, their relations to business data and business functions/modules.
  • The Technology Architecture is about the technology foundation for enabling the other architectures. It describes the basic infrastructure in the form of hardware and software components. This includes local environments as well as cloud environments, such as OpenStack, Google Compute or Amazon EC2.

Some people additionally advocate an IT security architecture. I propose not to model it as an additional architecture, but to include IT security concerns in each of the aforementioned architectures. This increases the awareness for IT security in your business. Nevertheless, appropriate tools can generate from the models a complete security view over all architectures.

There are many popular tools, such as the ARIS toolset, to map your enterprise architecture.

Of course, you cannot define only top-down from business to IT how this architecture should be designed. You need to take the IT perspective into account as well.

IT perspective

As mentioned, IT and business goals are not necessarily the same. IT focuses on three areas: storing information (storage), processing information (computation) and transporting information (network). The goal is to do this in an efficient manner: only the minimum of information should be stored, processing information should be as fast as possible and transporting information should consume only minimal resources. Clearly, there are constraints ranging from physical laws over business goals to IT security concerns. For instance, the three key IT security goals, namely confidentiality, integrity and availability, often have a negative impact on all three IT goals.

As I have explained before, business modules are mapped to software components and vice versa. One big question here is the design of software components, i.e. which software functionality (representing business functionality) should be part of one software component rather than another. Software components are usually different from the business modules, because they have different design criteria.

In the past, people often used heuristics, e.g. they introduced “data components” and “functional components”. This makes sense, because you should not have 50 different databases, but only the right number of databases for your purpose (e.g. one NoSQL, one SQL and/or a probabilistic database). This reduces resource needs and avoids inconsistent data. However, there is no general approach for how these heuristics should be applied by different enterprise architects. Another problem is that communication patterns (e.g. via message brokers, such as RabbitMQ) are completely left out.

Hence, I think a more scientific or general approach should be taken towards the design of components, because these heuristics do not give you good guidelines for a sustainable architecture. Based on the three IT focus areas, I propose to have software components for storage (e.g. databases), computation (e.g. business logic) and network (e.g. message brokers). Once you have identified these components, you need to decide which functionality to put into which component. One key goal should be to reduce the dependencies between components: the more communication you have, the more dependencies exist between the different functions in components. Evaluating this manually can be costly and error-prone. Luckily, some approaches do this for you, and they can be integrated with business modeling as well as software component management tools (cf. here an approach that derives the design of software components (managed using the Service Component Architecture (SCA)) from the communication patterns in business processes (modeled using the business process modeling notation (BPMN))).
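
The underlying idea can be sketched in a few lines of Python: count the messages exchanged between business functions in the process models and keep functions that communicate heavily in the same software component. The business functions, message counts and the threshold below are made-up illustrative values, not derived from any real process model.

```python
# Minimal sketch: grouping business functions into software components based on
# how much they communicate in the business process models. Functions connected
# by message volumes above a threshold end up in the same component.
# The functions, message counts and threshold are made-up illustrative numbers.
from collections import defaultdict

# (function A, function B) -> number of messages exchanged in the processes
message_counts = {
    ("order_entry", "billing"): 120,
    ("order_entry", "shipping"): 15,
    ("billing", "dunning"): 95,
    ("shipping", "warehouse"): 200,
}

THRESHOLD = 50  # messages; tune to trade component size against coupling

# Union-find style grouping: heavily communicating functions share a component
parent = {}

def find(x):
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])
    return parent[x]

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), count in message_counts.items():
    find(a); find(b)             # register both functions
    if count >= THRESHOLD:
        union(a, b)              # heavy communication -> same component

components = defaultdict(set)
for function in parent:
    components[find(function)].add(function)
print(list(components.values()))
# e.g. [{'order_entry', 'billing', 'dunning'}, {'shipping', 'warehouse'}]
```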

Another means for coherent software component design is to have the enterprise architects responsible for mapping one part of the business (e.g. controlling) review the software architecture of another part of the business (e.g. marketing). I introduced such a process in an enterprise some time ago. Such an approach ensures that architecture decisions are made consistently across the enterprise architecture and also fosters learning from other areas.

Finally, a key problem that you need to consider is the lifecycle management of a software component. Similar to the lifecycle of business modules, software components are designed, implemented, deployed and eventually retired. You need tools to appropriately manage software components.

Tools for Managing Software Components

Previously, I elaborated on the requirements for managing software components:

  • Handle interfaces with other components

  • Support the lifecycle of software components

Two known information technologies for managing software components are OSGi and the Service Component Architecture (SCA).

OSGi

OSGi is a framework for managing software components and their dependencies/interfaces to other components. It is developed by the OSGi Alliance. It originates from the Java world and is mostly suitable for Java components, although it has limited support for non-Java platforms. It considers the lifecycle of components by ensuring that all needed components are started when a component is started and by being able to stop components during runtime. Furthermore, other components and their interfaces can be discovered at runtime. However, no deployment method for software components is part of the standard.
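
OSGi itself is a Java framework with its own APIs, but the core lifecycle idea can be illustrated with a small conceptual sketch (in Python, for consistency with the other examples in this blog): components declare dependencies, and the runtime starts them in dependency order and stops them in reverse order. This is not the OSGi API, just an illustration of the principle; the component names are hypothetical.

```python
# Conceptual sketch (not the OSGi API): components declare dependencies and the
# runtime starts them in dependency order and stops them in reverse order.
class Component:
    def __init__(self, name, requires=()):
        self.name, self.requires = name, tuple(requires)
    def start(self): print(f"starting {self.name}")
    def stop(self): print(f"stopping {self.name}")

class Runtime:
    def __init__(self):
        self.components, self.started = {}, []
    def install(self, component):
        self.components[component.name] = component
    def start(self, name):
        component = self.components[name]
        for dep in component.requires:            # start dependencies first
            if self.components[dep] not in self.started:
                self.start(dep)
        if component not in self.started:
            component.start()
            self.started.append(component)
    def shutdown(self):
        for component in reversed(self.started):  # stop in reverse start order
            component.stop()
        self.started.clear()

runtime = Runtime()
runtime.install(Component("database"))
runtime.install(Component("order-service", requires=["database"]))
runtime.start("order-service")   # starts "database" first, then "order-service"
runtime.shutdown()               # stops "order-service", then "database"
```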

Since Java can run on many different devices, OSGi is available for Android, iOS, embedded devices, personal computers and servers.

Unfortunately, tool support for linking OSGi and the business or information architecture is very limited. Furthermore, automatic generation and deployment of OSGi components from your enterprise architecture does not exist at the moment. This makes it difficult to understand software components and their relations within your enterprise architecture.

Many popular software projects are based on OSGi, such as the Eclipse project.

Service Component Architecture (SCA)

The Service Component Architecture is a specification for describing software components, their interfaces and their dependencies. It is developed by members of the Organization for the Advancement of Structured Information Standards (OASIS). It does not depend on a specific programming platform, e.g. it supports Java and C++. It supports policies that govern components, sets of components or their communication. However, SCA does not consider the software component lifecycle or how components are deployed exactly.

It is supported by many middleware frameworks, such as TIBCO Active Matrix or Oracle Fusion Middleware.

Similarly to OSGi, there is little tool support for linking SCA components and the business or information architecture. However, the SCA specification includes a graphical modeling guideline, and some recent work describes how SCA components can be linked via business processes. Since enterprise-architecture-relevant modeling notations such as BPMN are also standardized (by the OMG), it can be expected that enterprise architecture tools can be adapted to provide support for linking different parts of the enterprise architecture.

Conclusion

Modularizing your business and designing software components is a difficult task. Not many people understand the whole chain from business to software components. While enterprise architecture and modeling have become popular topics in research and practice, the whole tool chain from business to software components has not. There have been attempts to introduce model-driven architecture (MDA), but the supported models were mostly restricted to the Unified Modeling Language (UML), which is not very suitable for business modeling and can be very complex. Additionally, it does not take into account the software component lifecycle. Furthermore, the roles of the different stakeholders (e.g. business and IT) using these tools are unclear.

Nevertheless, new approaches based on the business process modeling notation and on frameworks for managing software components make me confident that this will change in the near future. Growing IT complexity in terms of communication and virtualization infrastructure will require software support for managing software components. Companies should embrace these new tools to gain competitive advantages in terms of agility and flexibility.