Lambda, Kappa, Microservice and Enterprise Architecture for Big Data

In the years since the Lambda Architecture was introduced, several new architectures for Big Data have appeared. I will present them and illustrate their use case scenarios. These are IT architectures, but towards the end of this blog post I will also describe the corresponding Enterprise Architecture artefacts, which are sometimes referred to as the Zeta Architecture.

Lambda Architecture

I have blogged before about the Lambda-Architecture. Basically this architecture consists of three layers:

  • Batch-Layer: This layer executes long-running batch processes to analyze large amounts of historical data. The scope is data from several hours up to weeks or even years. Here, usually Hadoop MapReduce, Hive, Pig, Spark or Flink are used together with orchestration tools, such as Oozie or Falcon.

  • Speed-Layer/Stream Processing Layer: This layer executes (small/"mini") batch processes on data within a time window (e.g. 1 minute) to analyze the most recent data. The scope is data from several seconds up to several hours. Here one may use, for example, Flink, Spark or Storm.

  • Serving Layer: This layer combines the results from the batch and stream processing layers to enable fast interactive analyses by users. It usually leverages relational databases, but also NoSQL databases, such as graph databases (e.g. TitanDB or Neo4j), document databases (e.g. MongoDB, CouchDB), column stores (e.g. HBase), key-value stores (e.g. Redis) or search technologies (e.g. Solr). For certain use cases, NoSQL databases provide more adequate and better-performing data structures, such as graphs/trees, hash maps or inverted indexes.

In addition, I proposed a long-term storage layer to offer even cheaper storage for data that is rarely accessed but may still be needed eventually. All layers are supported by a distributed file system, such as HDFS, to store and retrieve data. A core concept is that computation is brought to the data (cf. here). On the analysis side, usually standard machine learning algorithms, but also online machine learning algorithms, are used.

As you can see, the Lambda-Architecture can be realized using many different software components and combinations thereof.
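To make the layering a bit more concrete, here is a minimal sketch using PySpark and Structured Streaming (assuming the Kafka connector package is available); the HDFS paths, Kafka topic and grouping columns are purely illustrative and not part of any reference implementation:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lambda-sketch").getOrCreate()

# Batch layer: long-running aggregation over historical data in HDFS.
historical = spark.read.parquet("hdfs:///events/")
batch_view = historical.groupBy("user_id").count()
batch_view.write.mode("overwrite").parquet("hdfs:///views/batch/")

# Speed layer: micro-batch aggregation over a recent time window from Kafka.
live = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load())
speed_view = (live
              .withWatermark("timestamp", "10 minutes")
              .groupBy(F.window("timestamp", "1 minute"), "key")
              .count())

# Serving layer (greatly simplified): keep the streaming view queryable in memory,
# so interactive queries can combine it with the precomputed batch view.
(speed_view.writeStream
 .outputMode("complete")
 .format("memory")
 .queryName("speed_view")
 .start())
```

In a real Lambda deployment, each layer would of course run as its own long-lived job on the cluster and the serving layer would be a proper database rather than an in-memory table.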

While the Lambda Architecture is a viable approach to tackle Big Data challenges, other architectures have emerged that focus only on certain aspects, such as data stream processing, or that integrate Big Data with cloud concepts.

Kappa Architecture

The Kappa Architecture focuses solely on data stream processing, i.e. "real-time" processing of "live" discrete events. Examples are events emitted by devices from the Internet of Things (IoT), social networks, log files or transaction processing systems. The original motivation was that the Lambda Architecture is too complex if you only need to do event processing.

The following assumptions exist for this architecture:

  • You have a distributed ordered event log persisted to a distributed file system, where stream processing platforms can pick up the events

  • Stream processing platforms can (re-)request events from the event log at any position. This is needed in case of failures or upgrades to the stream processing platform.

  • The event log is potentially large (several terabytes of data per hour)

  • Mostly online machine learning algorithms are applied, because new data arrives constantly and is more relevant than the old, already processed data

Technically, the Kappa Architecture can be realized using Apache Kafka for managing the data streams, i.e. providing the distributed ordered event log. Apache Samza, which integrates closely with Kafka, can be used to process the event log and persist state and results to HDFS for fault tolerance and scalability. Examples of stream processing platforms are Apache Flink, Apache Spark Streaming or Apache Storm. The serving layer can in principle use the same technologies as described for the serving layer of the Lambda Architecture.
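As a small illustration of the replayable ordered log, here is a sketch using the kafka-python client (assuming a broker at localhost:9092); the topic name and the processing function are placeholders:

```python
from kafka import KafkaConsumer, TopicPartition

def process(event_bytes):
    print("processing", event_bytes)   # placeholder for the real stream processing logic

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="kappa-sketch",
    enable_auto_commit=False,          # commit offsets only after successful processing
)

# The Kappa assumption: the stream processor can (re-)request events from any
# position in the ordered log, e.g. after a failure or an upgrade.
partition = TopicPartition("iot-events", 0)
consumer.assign([partition])
consumer.seek(partition, 0)            # replay this partition from the beginning

for record in consumer:
    process(record.value)
    consumer.commit()                  # mark progress explicitly
```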

There are some pitfalls with the Kappa architecture that you need to be aware of:

  • End-to-end ordering of events: While technologies such as Kafka can provide events in an ordered fashion, they rely on the source system to deliver the events in order in the first place. For instance, I had a case where a system sent events in order during normal operations, but not after communication errors: it stored the events it could not send and retransmitted them at a later point, while in the meantime, once communication was re-established, it already sent newer events. The source system had to be adapted to handle these situations correctly. Alternatively, you can only ensure a partial ordering using vector clocks or a similar mechanism implemented at the event log or stream processing level.

  • Delivery guarantees for how events are delivered to (or fetched by) the stream processing platform

    • At least once: Each event is guaranteed to be delivered, but the same event might be delivered twice or more due to processing errors or communication/operation errors within Kafka. For instance, the stream processing platform might crash before it can mark events as processed, although it has already processed them. This might have undesired side effects, e.g. the same event “user A liked website W” is counted several times.

    • At most once: The event will be delivered at most once (this is the default Kafka behavior). However, it might also get lost and not be delivered at all. This could have undesired side effects, e.g. the event “user A liked website W” is not taken into account.

    • Once and only once (exactly once): The event is guaranteed to be delivered once and only once. This means it will not get lost and it will not be delivered twice or more. However, this is not simply a combination of the above scenarios. Technically, you need to make sure in a multi-threaded, distributed environment that an event is processed exactly once. This means that (1) the event is processed by only one sequential process in the stream processing platform and (2) all other processes related to the event are made aware that one of them is already processing it. Both features can be implemented using distributed-systems techniques, such as semaphores or monitors, realized with distributed cache systems such as Ignite or Redis, or to a limited extent ZooKeeper. Another simple possibility would be a relational database, but this would quickly fail to scale with large volumes (a toy sketch of an idempotent alternative follows this list).
      • Needless to say: the source system must also make sure that it delivers the same event once and only once to the ordered event log.

  • Online machine learning algorithms constantly change the underlying model to adapt it to new data. This model is used by other applications to make predictions (e.g. predicting when a machine has to go into maintenance). This also means that in case of failure we may temporarily have an outdated or simply wrong model (e.g. in case of at-least-once or at-most-once delivery). Hence, the applications need to incorporate some business logic to handle this (e.g. do not register a machine twice for maintenance, or avoid permanently registering/unregistering a machine for maintenance).
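To illustrate the “once and only once” pitfall, here is a toy sketch that makes processing idempotent by tracking already processed event IDs; the in-memory set stands in for a shared store such as Redis or Ignite, and in a real distributed setting the check, the side effect and the marking would have to happen atomically:

```python
like_counts = {}
processed_ids = set()   # in production: a shared store (e.g. Redis or Ignite)

def handle(event):
    """Apply the side effect of an event at most once, even if it is delivered twice.
    In a distributed setting the check, the side effect and the marking must be atomic."""
    if event["event_id"] in processed_ids:
        return                                        # duplicate delivery: ignore it
    like_counts[event["website"]] = like_counts.get(event["website"], 0) + 1
    processed_ids.add(event["event_id"])

# The same "user A liked website W" event delivered twice changes the count only once.
handle({"event_id": "e-1", "user": "A", "website": "W"})
handle({"event_id": "e-1", "user": "A", "website": "W"})
print(like_counts)                                    # {'W': 1}
```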

Although technologies such as Kafka can help you with these pitfalls, implementing such a solution requires a lot of thought as well as experienced developers and architects. The batch-processing layer of the Lambda Architecture can somewhat mitigate the aforementioned pitfalls, but it can also be affected by them.

Last but not least, although the Kappa Architecture seems intriguing due to its simplification compared to the Lambda Architecture, not everything can be conceptualized as events. For example, company balance sheets, end-of-month reports, quarterly publications etc. should not be forced into an event representation.

Microservice Architecture for Big Data

The Microservice Architecture did not originate in the Big Data movement, but is slowly being picked up by it. It is not a precisely defined style, but several common aspects exist. Basically, it is driven by the following technology advancements:

  • The implementation of applications as services instead of big monoliths
  • The emergence of software containers to deploy each of those services in isolation from each other. Isolation means that they are put in virtual environments sharing the same operating system (i.e. they are NOT in different virtual machines) and are connected to each other via virtualized networks and virtualized storage. Containers use the available resources much more efficiently than virtual machines (a minimal deployment example follows this list).
    • Additionally, the emergence of repositories for software containers, such as the Docker registry, to quickly version, deploy and upgrade dependent containers and to test upgraded containers.
  • The deployment of container operating systems and orchestration platforms, such as CoreOS, Kubernetes or Apache Mesos, to efficiently manage software containers, manage their resources, schedule them onto physical hosts and dynamically scale applications according to need.
  • The development of object stores, such as OpenStack Swift, Amazon S3 or Google Cloud Storage. These object stores are needed to store data beyond the lifecycle of a software container in a highly dynamic cloud or a scaling on-premise environment.
  • The DevOps paradigm – especially the implementation of continuous integration and delivery processes with automated testing and static code analysis to improve software quality. This also includes quick delivery of individual services into production at any time, independently of each other.
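As a small illustration of deploying a service as an isolated container, here is a sketch using the Docker SDK for Python (assuming a local Docker daemon); the image and port mapping are arbitrary examples:

```python
import docker

client = docker.from_env()

# Deploy one service in isolation as a container and expose it on the host.
container = client.containers.run(
    "nginx:latest",                 # image pulled from a registry such as Docker Hub
    detach=True,
    ports={"80/tcp": 8080},         # map container port 80 to host port 8080
    name="demo-service",
)
print(container.name, container.status)

# Upgrading means replacing the container with a newer image version,
# not patching a running monolith.
container.stop()
container.remove()
```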

An example of the Microservice architecture is AWS Lambda (not to be confused with the Lambda Architecture) and related services provided by Amazon AWS.

Nevertheless, the Microservice Architecture poses some challenges for Big Data architectures:

  • What should be a service? For instance, Apache Spark or Apache Flink applications run on a cluster. Should each application get its own dedicated cluster of software containers, or should you provide one shared cluster of software containers? The first solution, a dedicated cluster per application, can make sense due to the different scaling and performance needs of each application.
  • The usage of object stores. Object stores are needed as a large-scale, dynamically scalable storage that is shared among containers. However, currently there are some issues, such as performance and consistency models (“eventually consistent”). Here, the paradigm of “Bring Computation to Data” (cf. here) is violated. Nevertheless, this can be mitigated either by using HDFS as a temporary file system on the containers and fetching the data beforehand from the object store, or by using an in-memory caching solution, such as provided by Apache Ignite or, to some extent, Apache Spark or Apache Flink (see the sketch below).
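The fetch-then-compute mitigation described above could look roughly like the following boto3 sketch; bucket names, keys and local paths are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Fetch the working set from the object store to container-local storage ...
s3.download_file("analytics-data", "events/2016/01/events.parquet",
                 "/tmp/events.parquet")

# ... run the Spark/Flink job against the local copy (not shown) ...

# ... and write the results back to the object store afterwards.
s3.upload_file("/tmp/result.parquet", "analytics-data", "results/result.parquet")
```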

I expect that in these environments the role of software-defined networking (SDN) will become crucial, not only in cloud data centers but also in on-premise data centers. SDN (which should NOT be confused with virtualized networks) enables centrally controlled, intelligent routing of network flows, as needed in the dynamically scaling platforms required by the Microservice architecture. The old decentralized definition of the network, e.g. in the form of decentralized routing, simply does not scale here to enable optimal performance.

Conclusion

I presented here several architectures for Big Data that have emerged recently. Although they are based on technologies that are already several years old, I observe that many organizations are overwhelmed by these new technologies and struggle to adopt and fully leverage them. This has several reasons.

One tool to manage this could be proper Enterprise Architecture Management. While there are many benefits of Enterprise Architecture Management, I want to highlight the benefit of managed evolution. This paradigm enables aligning business and IT, although both change constantly, independently (and dependently) of each other, with goals that are not necessarily aligned, as illustrated in the following picture.

[Figure: Managed evolution of business and IT through Enterprise Architecture Management]

As you can see from the picture both are constantly diverging and Enterprise Architecture Management is needed to unite them again.

However, reaching a managed evolution of Enterprise Architecture usually requires many years and the commitment of both business and IT. Enterprise Architecture for Big Data is a relatively new concept, which is still subject to change. Nevertheless, some common concepts can be identified. Some people also refer to Enterprise Architecture for Big Data as the Zeta Architecture. It does not only encompass Big Data processing, but, in the context of the Microservice architecture, also web servers providing the user interface for analytics (e.g. Apache Zeppelin) and further management workflows, such as backup or configuration, deployed in the form of containers.

This enterprise architecture for Big Data describes some integrated patterns for Big Data and Microservices, so that you can consistently document and implement your Lambda, Kappa or Microservice architecture, or a mixture of them. Examples of artefacts of such an enterprise architecture are (cf. also here):

  • Global Resource Management to manage the physical and virtualized resources as well as scaling them (e.g. Apache Mesos and Software Defined Networking)

  • Container Management to deploy and isolate containers (e.g. Apache Mesos)

  • Execution engines to manage different processing engines, such as Hadoop MapReduce, Apache Spark, Apache Flink or Oozie

  • Storage Management to provide Object Storage (e.g. OpenStack Swift), Cache Storage (e.g. Ignite HDFS Cache), a Distributed Filesystem (e.g. HDFS) and a Distributed Ordered Event Log (e.g. Kafka)

  • Solution architecture for one or more services that address one or more business problems. It should be separated from the enterprise architecture, which is focusing more on the strategic global picture. It can articulate a Lambda, Kappa, Microservice architecture or mixture of them.

  • Enterprise applications that describe a set of services (including user interfaces)/containers to solve a business problem, including appropriate patterns for Polyglot Persistence to provide the right data structure (such as graph, columnar or hash map) for enabling fast, interactive analytics for the users based on SQL and NoSQL databases (see above)

  • Continuous Delivery that describes how enterprise applications are delivered to production while ensuring their quality (e.g. Jenkins, SonarQube, Gradle, Puppet etc.)


DevOps for your business? – About Uniting Development and Operations

DevOps has become in recent years a term for a new paradigm of integrating and managing development as well as operations of software within and across organizations. In this blog entry I will describe what DevOps is and relate it to existing methodologies, such as agile development, and organizational structures. Basically, DevOps is a broad term that summarizes a set of best practices supported by a high degree of automation of all development and operational processes around the software delivery process, using advanced public and/or private cloud technologies. I will conclude with a brief summary of the impact of DevOps on Big Data applications.

The Situation

Many companies have separate development and operations departments. Both usually work under high pressure to deliver and operate applications for business processes of strategic importance.

In recent years it has been shown that the development department has to work more closely with internal and external customers to develop the right solutions that the customer can accept. Agile methodologies have been advocated to manage problems arising from the uncertainty of the business environment in which the future solution will be deployed and/or from the lack of understanding of customer or IT requirements. These agile methodologies broke with the paradigm of a clearly defined long-term process (e.g. the waterfall model) where the customer is only involved in the beginning and – often too late – at the end, so that mistakes were costly to correct.

At the same time, the operations department faced similar challenges as the development department. Given the uncertainties of the business environment, customers ask for a lot of new services and IT infrastructure changes, but there was little understanding on the customer side of what effects these have, which created high pressure on the operations department, which usually has a low budget to deal with these changes. Hence, a clearly structured and governed process was needed to handle customer requests. Thus, popular IT Service Management frameworks, such as the Information Technology Infrastructure Library (ITIL), were born and used globally. Since critical business applications are operated, the operations department tends to be much more risk-averse and tries to avoid unknown technologies.

It can be observed that both departments needed and implemented different approaches to deal with their customers. This has led to the problem that both departments were divided not only from an organizational perspective, but also from a cultural one. For instance, the development department developed technology with little consultation of the operations department, and after some time they just threw a complex piece of software over the “fence” and told the operations department to operate it. However, since they did not collaborate much, there were a lot of (extremely) costly problems during software delivery and operations. For example, the different environments, such as development or test environments, did not match the required infrastructure. The newest updates needed to fulfill requirements could not be installed fast enough, so that development was delayed. Operational staff required training and this was not considered. Many more problems occurred, because there was a strong interdependency between both departments due to the software delivery process.

Recently, DevOps has been pushed to handle this lack of collaboration between the two departments with respect to the software delivery process. Additionally, it addresses the challenges of new technologies, such as the cloud and software-defined infrastructures, which require strong development skills in both the development and the operations department.

DevOps – a New Paradigm?

DevOps is not a clearly defined term and there is no reference model, such as ITIL, behind it. However, we can identify some common aspects that can be found in many different papers on the topic (cf. Gartner, this blog entry or this article):

  • A clearly defined as well as highly automated continuous software delivery pipeline across different software environments and the development as well as operations departments
    • Describe involved stakeholders with roles and responsibilities.
    • Defined and measured key performance indicators, e.g. “after committing some software source code in the development environment it is fully tested as well as deployed in the production environment and can be used by business processes within one hour”.
    • Clearly defined environments, e.g. development environment, test environment, acceptance environment and production environment. This implies an unambiguous and idempotent description (i.e. a set of scripts) of how they are created, operated and destroyed, and what virtual resources (computation, memory and network) they require. This means you have to fully leverage private and public cloud technologies. Fully virtualized environments can be created by anyone using just a graphical browser interface (see next section).
    • Avoid reconfiguration of software for different environments. Environments should ideally be the same (e.g. same network addresses and same hardware).
    • Continuous integration of software components delivered by different teams.
    • Fully automated deployment procedure in different environments. Manual deployment requiring human intervention is forbidden.
    • Fully automated regression, integration and acceptance testing. Manual test activities should be reduced to nearly zero, because rapidly changing economic environments require rapid deployment of new solutions.
    • Test-driven development: write the tests for parts of the software before you develop the software itself (see the sketch after this list).
    • Deploy software to production incrementally, in small chunks and often (e.g. each week), as is appropriate when using Service-oriented Architectures (SOA). Then you avoid big mistakes, and changes cannot have a catastrophic impact if they do not work. If you continue the traditional way of deploying large chunks of software with a lot of changes every few months, you will continue to pay the price of production outages or obstacles to your business processes.
    • Have a consistent fully automated monitoring approach for your software environments. Leverage machine learning techniques to predict software problems before they happen.
    • Allow each stakeholder (development, test, operations and customer) to use a current build of the software deployed in a virtual environment by just using a browser interface
  • Integration of people from the operations department in the agile software development process. Operations has development skills and development has operation skills.
  • Integration of people from the development department in strategic IT operations processes
  • Clearly defined governance structures – have a sound program and project management
  • Integration of best practices, such as ITIL or agile methodologies
  • Fehlerkultur (error culture): Do not blame mistakes on each other, but solve them collaboratively. Finding out whose fault it was costs time and money and is usually not important – just solve them together as they occur. Be positive about errors and confident about handling them. Make error management part of your daily life. Each error is an opportunity to learn for all involved parties.
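To make the test-driven development bullet concrete, here is a minimal pytest-style sketch where the test is written first and drives a (made-up) implementation:

```python
# Test written first: it fails until apply_discount is implemented correctly.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

# Implementation written afterwards, driven by the test above.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100.0), 2)
```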

DevOps is NOT about flat organizations. Flat organizations only make sense if your organization has just one business area and your employee structure is relatively homogeneous rather than heterogeneous. This does not imply that a strict hierarchical model is better, but you need to carefully combine control and agility.

Is DevOps right for your Organization?

Every organization should leverage and adapt DevOps practices, because they lead to significant benefits for all involved stakeholders (cf. also here). It makes sense for startups as well as large corporations. However, there are exceptions, such as research & development of disruptive new technologies. When researching and developing completely new technologies, where the impact on the infrastructure is completely unclear, or where you just want to explore very risky software that you are not certain you will use in the future, the overhead of a DevOps organization is too high. From my experience, there you have small teams which are – on purpose – separated from the others to develop a new way of thinking and to use novel technologies. They very often have to redo everything from scratch, potentially using very diverse software technologies in a short time frame. Nevertheless, they may still employ some DevOps aspects, such as full computing, memory and network virtualization provided by cloud technologies.

What tools can I use?

You will find a lot of tools for enabling DevOps in your organization. Indeed, tools are an important aspect, because you need a high degree of automation of previously manual activities and you will manage your environments using cloud virtualization technologies.

However, do not forget that it is also about organization and culture. Simply having the tools won’t help you much.

I present here only a few of the many tools that can be used.

Cloud

As I mentioned before, you want to create and manage software environments fast and in a highly automated fashion. Each environment should ideally be identical to avoid errors due to reconfiguration of software. This means they should have the same underlying software, configuration, (virtualized) hardware and the same network configuration (including the same IP addresses). Large-scale public cloud providers, such as Amazon EC2 or Google Compute, already have technologies that make this feasible. If you prefer a private cloud, you can use OpenStack, which is a Linux-based cloud computing distribution offering functionality similar to Amazon EC2. It offers a web interface for creating new environments in a browser.

Additionally, you can use tools, such as Vagrant, to automate creation and management of software environments.

Continuous Integration

Continuous integration supports the integration of different software components from different teams. It automatically builds and tests complete applications every time a new piece of software source code is committed to the version management system, such as Git. Popular tools are Jenkins or Travis.

Besides normal unit and integration testing, one should look at acceptance testing tools, such as Cucumber. There, acceptance tests can be described by business users in (nearly) natural language. This makes them repeatable and reliable.

Continuous Deployment

Deployment should be automated and repeatable, independent of the environment. This means no manual configuration or manual deployment steps. You have to develop scripts that ensure that the target system is in the desired state after a deployment – independent of the state it is currently in. Popular tools such as Puppet, Chef or Vagrant help you with this task.
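As a toy illustration of what an idempotent deployment step means, here is a plain-Python sketch; real deployments would use Puppet, Chef or similar, and the path and file contents are made up:

```python
import os

DESIRED_CONFIG = "listen_port=8080\nlog_level=INFO\n"
CONFIG_PATH = "/tmp/myapp/app.conf"

def ensure_config():
    os.makedirs(os.path.dirname(CONFIG_PATH), exist_ok=True)   # create dir only if missing
    current = None
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            current = f.read()
    if current != DESIRED_CONFIG:                               # change only on drift
        with open(CONFIG_PATH, "w") as f:
            f.write(DESIRED_CONFIG)

ensure_config()
ensure_config()   # second run is a no-op: the target is already in the desired state
```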

Recently, web interfaces have become popular, so you can do deployment of complex applications just using your browser (cf. Ubuntu Juju).

Monitoring

Basically, your monitoring infrastructure has to collect all the messages from the various applications deployed in an environment and be able to take action on them. Usually there are a lot of messages in various text formats. This means you have to find the right tools to collect and analyze a large amount of data.

You should not only present results (e.g. critical conditions), but also be able to handle them automatically (e.g. repair broken applications). Amazon OpsWorks is one product that can do this for applications deployed on the Amazon EC2 cloud.
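A minimal sketch of the collect-and-act idea, assuming log lines of the form “LEVEL app message”; the threshold and the restart handler are illustrative stand-ins for a real monitoring and remediation pipeline:

```python
from collections import Counter

ERROR_THRESHOLD = 3
error_counts = Counter()

def restart(app):
    print("restarting", app)         # in reality: call the deployment/orchestration API

def ingest(line):
    level, app, _ = line.split(" ", 2)
    if level == "ERROR":
        error_counts[app] += 1
        if error_counts[app] >= ERROR_THRESHOLD:
            restart(app)
            error_counts[app] = 0    # reset after taking action

for line in ["ERROR billing timeout", "INFO billing ok",
             "ERROR billing timeout", "ERROR billing timeout"]:
    ingest(line)
```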

An interesting application is the Netflix ChaosMonkey. It is an excellent example of a tool supporting the Fehlerkultur (error culture) mentioned before. Basically, it switches off machines on which your software is deployed at random, but only within a certain time frame (e.g. from 9 to 17 o’clock). This means errors will be detected more easily and can be handled while all employees of the company are there. Hence, you will have no or fewer errors on Sunday mornings, when it is difficult to get the right people to work on a problem. It should be noted that Netflix, a media streaming service, requires strict quality of service and cannot afford, for example, that streaming of media to the customer is interrupted or disturbed. Nevertheless, the ChaosMonkey is switching off machines in their production environment.
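The core idea of the ChaosMonkey can be sketched in a few lines; the instance list and the terminate() function are hypothetical placeholders for real cloud provider API calls:

```python
import random
from datetime import datetime

INSTANCES = ["web-1", "web-2", "worker-1", "worker-2"]

def terminate(instance):
    print("terminating", instance)   # in reality: call the cloud provider's API

def chaos_round(now=None):
    now = now or datetime.now()
    if 9 <= now.hour < 17:                       # only between 9 and 17 o'clock
        terminate(random.choice(INSTANCES))

chaos_round()
```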

The Netflix ConformityMonkey and JanitorMonkey check whether the state of the systems is still acceptable or degraded. If it is not acceptable anymore, the instances are automatically switched off and rebooted, so that an acceptable state is available at all times. Furthermore, they switch off unused instances to reduce costs.

Recently, predictive software maintenance has become a hot topic, where you predict, given the environmental conditions, when your application will fail or slow down, before the event takes place.

DevOps and Big Data

Big Data is about processing a large amount of data of different nature (e.g. structured or unstructured) in acceptable time. There is a trend to leverage Big Data to enable new sophisticated statistical models of complex real-world dependencies. For instance, you want to predict what customers will buy next, or when, for example, company cars are likely to fail – even before the event happens.

Having DevOps is mandatory for Big Data. You will have a limited set of resources, you have very business-critical applications, and change will come often, e.g. new prediction models. You may also need to do experiments in production environments. For instance, you want to evaluate new machine learning technologies involving real live business data.

Furthermore, another challenge that many people are not yet aware of is that business users will actually develop code and deploy it to production. For example, marketing specialists will develop and test new models using R. Similarly, hardware engineers will develop and deploy prediction algorithms for the reliability of hardware assets. This code will become part of existing critical business applications.

For example, suppose you own a bus company. Your mechanics have a good understanding of the probability distributions for hardware errors, fuel consumption etc. They will develop a model in a statistical programming language, such as R, to predict failure and maintenance needs of your buses based on sensor data as well as maintenance reports. The output of this is used by the scheduling application developed by your development department to avoid delays of bus services. Similarly, a statistical marketing program developed by your marketing department will predict how many customers to expect and when, based on historical data but also on current events (e.g. a soccer match, news or tweets). This can also be included in the scheduling application. It is not very realistic that the marketing department will ask the development department to implement a statistical model proposed by them. The overhead is simply too large for experimental Big Data applications.

In fact, this already happened a long time ago in the finance industry, where business analysts created macros in Excel/Access or other office software to support the calculation of their complex financial statistical models for creating more and more complex financial products. The problem there was that you had a variety of software somewhere in the business with critical impact, no versioning or backup, and unknown dependencies to other software or data. This is obviously bad and can even lead to bankruptcy.

How this can be handled has not yet been the subject of extensive research or experiments.

Scenarios for Inter-Cloud Enterprise Architecture

The unstoppable cloud trend has arrived at end users and companies. Particularly the former openly embrace the cloud, for instance by using services provided by Google or Facebook. The latter are more cautious, fearing vendor lock-in or exposure of secret business data, such as customer records. Nevertheless, for many scenarios the risk can be managed and is accepted by companies, because the benefits, such as scalability, new business models and cost savings, outweigh the risks. In this blog entry, I will investigate in more detail the opportunities and challenges of inter-cloud enterprise applications. Finally, we will have a look at technology supporting inter-cloud enterprise applications via cloud bursting, i.e. enabling them to be extended dynamically over several cloud platforms.

What is an inter-cloud enterprise application?

Cloud computing encompasses all means to produce and consume computing resources, such as processing units, networks and storage, existing in your company (on-premise) or on the Internet. Particularly the latter enables dynamic scaling of your enterprise applications, e.g. when you suddenly get a lot of new customers but do not have the necessary resources to serve them all using your own computing resources.

Cloud computing comes in different flavors and combinations of them:

  • Infrastructure-as-a-Service (IaaS): Provides hardware and basic software infrastructure on which an enterprise application can be deployed and executed. It offers computing, storage and network resources. Example: Amazon EC2 or Google Compute.
  • Platform-as-a-Service (PaaS): Provides on top of an IaaS a predefined development environment, such as Java, ABAP or PHP, with various additional services (e.g. database, analytics or authentication). Example: Google App Engine or Agito BPM PaaS.
  • Software-as-a-Service (SaaS): Provides on top of an IaaS or PaaS a specific application over the Internet, such as a CRM application. Example: SalesForce.com or Netsuite.com.

When designing and implementing/buying your enterprise application, e.g. a customer relationship management (CRM) system, you need to decide where to put it in the cloud. For instance, you can put it fully on-premise or you can put it on a cloud in the Internet. However, different cloud vendors exist, such as Amazon, Microsoft, Google or Rackspace, and they offer different flavors of cloud computing. Depending on the design of your CRM, you can put it either on an IaaS, PaaS or SaaS cloud, or on a mixture of them. Furthermore, you may put only selected modules of the CRM on the cloud in the Internet, e.g. a module for doing anonymized customer analytics. You will also need to think about how this CRM system is integrated with your other enterprise applications.

Inter-Cloud Scenario and Challenges

Basically, the exemplary CRM application is running partially in the private cloud and partially in different public clouds. The CRM database is stored in the private cloud (IaaS), some (anonymized) data is sent to different public clouds on Amazon EC2 (IaaS) and Microsoft Azure (IaaS) for number-crunching analysis, and Paypal.com is used for payment processing. Besides customer data and buying history, the database contains sensor information from different points of sale, such as how long a customer was standing in front of an advertisement. Additionally, the sensor data can be used to trigger actuators, such as posting on the shop’s Facebook page what is currently trending, using the cloud service IFTTT. Furthermore, the graphical user interface presenting the analysis is hosted on Google App Engine (PaaS). The CRM is integrated with Facebook and Twitter to enhance the data with social network analysis. This is not an unrealistic scenario: many (grown) startups already deploy a similar setting and established corporations experiment with it. Clearly, this scenario supports cloud bursting, because the cloud is used heavily.

I present in the next figure the aforementioned scenario of an inter-cloud enterprise application leveraging various cloud providers.

[Figure: Inter-cloud enterprise application scenario leveraging various cloud providers]

There are several challenges involved when you distribute your business application over your private and several public clouds.

  • API Management: How do you describe different types of business and cloud resources, so you can make efficient and cost-effective decisions about where to run the analytics at a given point in time? Furthermore, how do you represent different storage capabilities (e.g. in-memory, on-disk) in different clouds? This goes further up to the level of the business application, where you need to harmonize or standardize business concepts, such as “customer” or “product”. For instance, a customer described in “Twitter” terms is different from a customer described in “Facebook” or “Salesforce.com” terms. You should also keep in mind that semantic definitions change over time, because a cloud provider changes its capabilities, such as new computing resources, or its focus. Additionally, you may want to dynamically change your cloud provider without disruption to the operation of the enterprise application.
  • Privacy, Risk and Security: How do you articulate your privacy, risk and security concerns? How do you enforce them? While there are already technologies and standards for this, the cloud setting imposes new problems. For example, if you update encrypted data regularly, the cloud provider may be able to infer parts or all of your data from the differences. Furthermore, it may maliciously change it. Finally, the market is fragmented without an integrated solution.
  • Social Network Challenge: Similarly to the semantic challenge, the problem exists of semantically describing social data and doing efficient analysis over several different social networks. Users may also arbitrarily change their privacy preferences, making reliable analytics difficult. Additionally, your whole company organizational structure and the (in-)official networks within your company are already exposed in social business networks, such as LinkedIn or Xing. This blurs the borders of your enterprise further, and it has to adapt by integrating social networks into its business applications. For instance, your organizational hierarchy, informal networks or your company’s address book probably already exist partly in social networks.
  • Internet of Things: The Internet of Things consists of sensors and actuators delivering data or executing actions in the real world, supported by your business applications and processes. Different platforms exist to source real-world data or schedule actions in the real world using actuators. The API Management challenge exists here, too, but it goes even further: you create dynamic semantic concepts and relate your Internet of Things data to them. For example, you have attached an RFID tag and a temperature sensor to your parcels. Their data needs to be added to the information about your parcel in the ERP system. Besides the semantic concept “parcel”, you also have that of a “truck” transporting your “parcel” to a destination, i.e. you have additional location information. Furthermore, it may be stored temporarily in a “warehouse”. Different business applications and processes may need to know where the parcel is. They do not query the sensor data (e.g. “give me data from tempsen084nl_98484”), but rather formulate queries such as “list all parcels in warehouses with a temperature above 0 °C” or “list all parcels in transit”. Hence, Internet of Things data needs to be dynamically linked with business concepts used in different clouds. This is particularly challenging for SaaS applications, which may have different conceptualizations of the same thing (a toy sketch of such a business-level query follows this list).
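As a toy sketch of querying on the business level rather than the sensor level, assume some hypothetical sensor IDs and a simple in-memory mapping from sensors to the business concept “parcel”:

```python
readings = {                       # raw IoT data, keyed by sensor ID
    "tempsen084nl_98484": -2.5,
    "tempsen113de_11923": 4.0,
}

parcels = [                        # business objects enriched with sensor links
    {"parcel_id": "P-1", "location": "warehouse", "temp_sensor": "tempsen084nl_98484"},
    {"parcel_id": "P-2", "location": "warehouse", "temp_sensor": "tempsen113de_11923"},
    {"parcel_id": "P-3", "location": "in transit", "temp_sensor": None},
]

# "List all parcels in warehouses with a temperature above 0 °C"
warm = [p["parcel_id"] for p in parcels
        if p["location"] == "warehouse"
        and p["temp_sensor"] is not None
        and readings[p["temp_sensor"]] > 0]
print(warm)                        # ['P-2']

# "List all parcels in transit"
print([p["parcel_id"] for p in parcels if p["location"] == "in transit"])
```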

Enterprise Architecture for Inter-Cloud Applications

You may wonder how you can integrate the above scenario into your application landscape at all, and why you should do it at all. The basic promise of cloud computing is that it scales according to your needs and that you can outsource infrastructure to people who have the knowledge and capabilities to run it. Particularly small and medium-sized enterprises benefit from this and from the cost advantage. It is not uncommon that modern startups start their IT in the cloud (e.g. FourSquare).

However, also large corporations can benefit from the cloud, e.g. as a “neutral” ground for a complex supply chain with a lot of partners or to ramp up new innovative business models where the outcome is uncertain.

Be aware that in order to offer a solution based on the cloud, you first need a solid maturity of your enterprise architecture. Without it you are doomed to fail, because you cannot do proper risk and security analysis or scaling, and you will not benefit from cost reductions and innovation.

I propose in the following figure an updated model of the enterprise architecture with new components for managing cloud-based applications. The underlying assumption is that you have an enterprise architecture, more particularly a semantic model of business objects and concepts.

[Figure: Updated enterprise architecture model with new components for managing cloud-based applications]

  • Public/Private Border Gateway: This gateway is responsible for managing the transition between your private cloud and different public clouds. It may also deploy agents on each cloud to enable secure direct communication between different cloud platforms without the necessity to go through your own infrastructure. You might have more fine-grained gateways, such as private, closest supplier and public. A similar idea came to me a few years ago when I was working on inter-organizational crisis response information systems. The gateway works not only on the lower network level, but also on the level of business processes and objects. It is business-driven and, depending on business processes as well as rules, it decides where the borders should be set dynamically. This may also mean that different business processes have access to different things in the Internet of Things.
  • Semantic Matcher: The semantic matcher is responsible for translating business concepts from and to different technical representations of business objects in different cloud platforms. This can be simple transformations of non-matching data types, but also enrichment of business objects from different sources. This goes well beyond current technical standards, such as EDI or ebXML, which I see as a starting point. Semantic matching is done automatically – there is no need for creating time-consuming manual mappings (a toy sketch follows this list). Furthermore, the semantic matcher enhances business objects with Internet of Things information, so that business applications can query or trigger them on the business level as described before. The question here is how you can keep people in control of this (see Monitor) and leverage semantic information.
  • API Manager: Cloud API management is the topic of the coming years. Besides the semantic challenge, this component provides all necessary functionality to bill, secure and publish your APIs. It keeps track of who is using your API and what impact changes to it may have. Furthermore, it supports you in composing new business software distributed over several cloud platforms using different APIs that are subject to continuous change. The API Manager will also have a registry of APIs with reputation and quality-of-service measures. We see now a huge variety of different APIs by different service providers (cf. ProgrammableWeb). However, the scientific community and companies have not yet picked up the inherent challenges, such as the aforementioned semantic matching, monitoring of APIs, API change management and alternative API compositions. While there exists some work in the web service community, it has not yet been extended to the full Internet dimension as described in the scenario here. Additionally, it is unclear how they integrate the Internet of Things paradigm.
  • Monitor: Monitoring is of key importance in this inter-cloud setting. Different cloud platforms offer different and possibly very limited means for monitoring. A key challenge here will be to consolidate the monitoring data and provide an adequate visual representation to do risk analysis and to select alternative deployment strategies on the aggregated business process level. For instance, by leveraging semantic integration we can schedule requests to semantically similar cloud and business resources. Particularly in the Internet of Things setting, we may observe unpredictable delays, which lead to delayed execution of real-world activities, e.g. a robot is notified only after 15 minutes that a parcel fell off the shelf.
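Here is a toy sketch of the semantic matcher idea, assuming two hypothetical technical representations of the business concept “customer” (one Salesforce-like, one Twitter-like) and a declarative field mapping into one canonical business object:

```python
CANONICAL_FIELDS = {
    "salesforce": {"Name": "name", "Email": "email", "AccountId": "external_id"},
    "twitter":    {"screen_name": "name", "id_str": "external_id"},
}

def to_business_object(source, record):
    """Translate a source-specific record into the canonical 'customer' concept."""
    mapping = CANONICAL_FIELDS[source]
    customer = {canonical: record[field]
                for field, canonical in mapping.items() if field in record}
    customer["source"] = source
    return customer

print(to_business_object("salesforce",
                         {"Name": "ACME Corp", "Email": "it@acme.example", "AccountId": "001"}))
print(to_business_object("twitter", {"screen_name": "acme", "id_str": "12345"}))
```

In reality, such mappings would of course have to be derived and maintained (semi-)automatically rather than hand-coded, which is exactly the open challenge described above.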

Developing and Managing Inter-Cloud Business Applications

Based on your enterprise architecture you should ideally employ a model-driven engineering approach. This approach enables you to automate the software development process. Be aware that this is not easy to do and has often failed in practice; however, I have also seen successful approaches. It is important that you select the right modeling languages, and you may need to implement your own translation tools.

Once you have all this infrastructure, you should think about software factories, which are ideal for developing and deploying standardized services for selected platforms. I imagine that in the future we will see small emerging software factories focusing on specific aspects of a cloud platform. For example, you will have a software factory for designing graphical user interfaces using map applications enhanced with selected OData services (e.g. warehouse or plant locations). In fact, I expect soon a market for software factories which enhances the idea of very basic crowdsourcing platforms, such as the Amazon Mechanical Turk.

Of course, since more and more business applications shift towards the private and public clouds, you will introduce new roles in your company, such as the Chief Cloud Officer (CCO). This role is responsible for managing the cloud suppliers, integrating them in your enterprise architecture and proper controlling as well as risk management.

Technology

The cloud exists already today! More and more tools emerge to manage it. However, they do not take into account the complete picture. I described several components for which no technologies exist. However, some go in the right direction as I will briefly outline.

First of all, you need technology to manage your APIs to provide a single point of management towards your cloud applications. For instance, Apache Deltacloud allows managing different IaaS providers, such as Amazon EC2, IBM SmartCloud or OpenStack.

IBM Research also provides a single point of management API for cloud storage. This goes beyond simple storage and enables fault tolerance and security.

Other providers, such as Software AG, Tibco, IBM or Oracle, provide “API Management” software, which covers only a special case of API Management. In fact, they provide software to publish, manage the lifecycle of, monitor, secure and bill your own APIs for the public on the web. Unfortunately, they do not describe the necessary business processes to enable their technology in your company. Besides that, they do not support B2B interaction very well, but focus on business-to-developer aspects only. Additionally, you find registries for public web APIs, such as ProgrammableWeb or APIHub, which are a first starting point to find APIs. Unfortunately, they do not feature semantic descriptions and thus no semantic matching towards your business objects, which means a lot of laborious manual work for doing the matching towards your application.

There is not much software for managing the borders between private and public clouds, or even allowing more fine-grained borders, such as private, closest partner and public. There is software for visualizing and monitoring these borders, such as the eCloudManager by Fluid Operations, which features semantic integration of different cloud resources. However, it is unclear how you can enforce these borders, how you control them and how you can manage the different borders. Dome9 goes in this direction, but focuses only on security policies for IaaS applications. It only understands data and low-level security, but not security and privacy over business objects. Deployment configuration software, such as Puppet or Chef, is only a first step, since it focuses only on deployment, but not on operation.

On the monitoring side you will find a lot of software, such as Apache Flume or Tibco HAWK. While these operate more on the lower level of software development, IFTTT enables the execution of business rules over data on several cloud providers with public APIs. Surprisingly, it considers itself at the moment more as an end-user-facing company. Additionally, you find in the academic community approaches for monitoring distributed business processes.

Unfortunately, we find little ready-to-use software in the area of the “Internet of Things”. I have worked myself with several R&D prototypes enabling cloud and gateway integration, but they are not ready for the market. Products have emerged, but they serve only a special niche, e.g. Internet-of-Things-enabled point-of-sale shops. They particularly lack a vision of how they can be used in an enterprise-wide application landscape or within a B2B enterprise architecture.

Conclusion

I described in this blog the challenges of inter-cloud business applications. I think in the near future (3-5 years) all organizations will have some of them. Technically they are already possible and exist to some extent. The risk and costs will be lower for many companies than managing everything on their own. Nevertheless, a key requirement is that you have a working enterprise architecture management strategy. Without it you won’t have any benefits. More particularly, from the business side you will need adequate governance strategies for different clouds and APIs.

We have seen key technologies emerging already, but there is still a lot to do. Despite decades of research on semantic technologies, there exists today no software that can perform automated semantic matching of cloud and business concepts existing in different components of an inter-cloud business application. Furthermore, there are no criteria on how to select a semantic description language for business purposes as broad as described here. Enterprise Architecture Management tools in this area are only slowly emerging. Monitoring is still fragmented, with many low-level tools but only few high-level business monitoring tools. They cannot answer simple questions, such as “if cloud provider A goes down, how fast can I recover my operations and what are the limitations?”. API Management is another evolving area which will have a significant impact in the coming years. However, current tools only consider low-level technical aspects and not high-level business concepts.

Finally, you see that a lot of the challenges mentioned in the beginning, such as the social network challenge or the Internet of Things challenge, are simply not yet solved, but large-scale research efforts are under way. This means further investigation is needed to clarify the relationships between the aforementioned components. Unfortunately, many of the established middleware vendors lack a clear vision for cloud computing and the Internet of Things. Hence, I expect this gap will be filled by startups in this area.