The Emergence of Rule Management Systems for End Users

Long envisioned in research and industry, they have quietly emerged: rule management systems that let end users manage their personal (business) processes. End users define rules in these systems that automate parts of their daily routine, freeing time for other things. In this blog entry I explain new trends and challenges as well as business opportunities.

What is it about?

Our life is full of decisions. Some are complex and unique, but many are simple and recurring, such as switching off the power when we leave home or sending a message to our family that we have left work. A more recent example is taking a picture and uploading it automatically to Facebook and Twitter. Making and executing simple decisions takes time that could be spent on something more important. Hence, we want to automate them as much as possible.

The idea is not new. Technically inclined people have used computers or home automation systems for years, even decades. However, these systems took a lot of time to learn, were complex to configure, required software engineering skills, were proprietary, and it was very uncertain whether the manufacturer would still support them in a few years.

This has changed. Technology is part of everybody’s life, especially since the smartphone and cloud boom. We use apps on our smartphones, or as part of the cloud, that help us automate our lives. In the following I present two apps in this area. They can be used by anyone, not only by software engineers.

What products exist?

Ifttt

The first app I am going to present is a cloud-based app called ifttt (if this then that), created by a startup. You use a graphical tool to describe rules for integrating various cloud services, such as Facebook, Dropbox, Foursquare, Google Calendar, Stocks or Weather. In ifttt, these cloud services are called channels.

A rule has the simple format “if this then that”. The “this” part refers to triggers that start actions specified by the “that” part. The ifttt application polls the channels every 15 minutes for new data and evaluates whether that data triggers any of the actions specified in the “that” part.
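To make the trigger/action structure concrete, here is a minimal sketch in Python of such a polling rule engine. The channel, the rule class and the polling cycle are simplified illustrations of the idea, not the actual ifttt implementation:

```python
class Rule:
    def __init__(self, trigger, action):
        self.trigger = trigger  # the "this" part: predicate over new channel data
        self.action = action    # the "that" part: callable executed on a match

def poll_channels(channels, rules):
    """One polling cycle: fetch new items and fire matching rules."""
    for channel in channels:
        for item in channel.fetch_new_items():
            for rule in rules:
                if rule.trigger(item):
                    rule.action(item)

class FakeWeatherChannel:
    """Stand-in for a real channel; hands out its canned data once."""
    def __init__(self, items):
        self.items = list(items)
    def fetch_new_items(self):
        new, self.items = self.items, []
        return new

log = []
rules = [Rule(trigger=lambda item: item.get("condition") == "rain",
              action=lambda item: log.append("send umbrella reminder"))]
weather = FakeWeatherChannel([{"condition": "rain"}])
poll_channels([weather], rules)  # ifttt would run such a cycle every 15 minutes
print(log)  # ['send umbrella reminder']
```

In a real system the polling loop would run on a timer, which is exactly where the 15-minute latency discussed later comes from.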

Defining rules is an easy task, but a user does not necessarily have to do this: a huge community has already created many so-called recipes, predefined rules that any other user can adopt without effort.

On(X)

On(X) is a mobile and cloud application created by Microsoft Research. It leverages sensor information from your mobile device to evaluate rules and trigger actions. Unlike the aforementioned app, it is thus not limited to data from other cloud platforms.

Rules are described in a JavaScript-based language (find some examples here). This means end users cannot use a graphical tool and need some scripting knowledge. Luckily, the platform also offers some recipes.

We see that these recipes are more sophisticated, because they can leverage sensor information from the device (location/geofencing or movement).

What is in it for business?

Using these personal rule engines for business purposes is an unexplored topic. I expect that they could lead to more efficiency and enable new business models.

Furthermore, we can envision improved service quality by using personal rule engines in various domains:

  • Medicine: if the user is in the kitchen, then remind them to take their medicine, if it has not been taken yet today
  • Energy saving: if I leave my house, then shut down all energy consumers except the heater
  • Food delivery: if I am within 10 km of my destination, then start delivering the pizza I have ordered
  • Car sharing: if I leave work, then send an SMS to all the colleagues I share my car with
  • Team collaboration: we can evaluate whether team members, or members of different teams, want to perform the same actions or are waiting for similar triggers; based on their defined rules, they can be brought together to improve their work or split it more efficiently
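As a small illustration, the energy-saving rule above could be expressed as a simple trigger/action function. The device names and the location event are invented for this sketch:

```python
# Sketch of the energy-saving rule "if I leave my house, then shut down
# all energy consumers except the heater". Devices and the event name
# are illustrative assumptions, not a real home automation API.

def on_location_event(event, devices):
    """Return the devices to switch off when the user leaves home."""
    if event == "left_home":
        return [d for d in devices if d != "heater"]  # keep the heater running
    return []

devices = ["lights", "tv", "oven", "heater"]
to_switch_off = on_location_event("left_home", devices)
print(to_switch_off)  # ['lights', 'tv', 'oven']
```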

Future research

The aforementioned applications are prototypes. They need to be enhanced, and business models need to be defined for them. First of all, we need to be clear about what we want to achieve by automating simple decisions, e.g.:

    • Cost savings
    • Categorizing information for quicker retrieval and publishing
    • Socializing

An important research direction is how we could mine the rules, e.g. for offering advertisements or bringing people together. Most mining algorithms today focus on unstructured or unrelated data, but how can we mine the rules of different users and make sense of them?

Another technical problem is the delay between rule evaluation and execution. For instance, ifttt polls its data sources only every 15 minutes to check whether actions in a rule should be triggered. This can be too late in time-critical situations or can lead to confusing actions.

From a business point of view, it would be interesting to investigate the integration of personal rule management into Enterprise Resource Planning (ERP) systems as well as to provide a social platform to optimize and share rules.

Finally, I think it is important to think about rules involving or affecting several persons. For example, let us assume that user “A” defined the rule “when I leave work then inform my car sharing colleagues”. One of the car sharing colleagues has the rule “when a user informs me about car sharing then inform all others that I do not need a seat in a car anymore”. If user “A” now cannot take that colleague home, then he or she has a problem.

An even simpler example: user “A” defines the rule “if user ‘B’ sends me a message then send him a message back”, and user ‘B’ defines the rule “if user ‘A’ sends me a message then send him a message back”. This would lead to an infinite message exchange between the two users.

Here we need to be able to identify and handle conflicts.
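One way to identify such conflicts is to analyze the rules statically before deploying them. The sketch below models each “reply” rule as a directed edge between users and searches for cycles; the user names follow the example above, everything else is an illustrative assumption:

```python
# Static conflict detection for the message-loop example: each rule
# "if X messages me then message X back" owned by user Y becomes an
# edge Y -> X. A cycle in this graph means an infinite exchange.

def find_cycle(edges):
    """Detect a cycle in a directed rule graph via depth-first search."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    visiting, done = set(), set()
    def dfs(node, path):
        if node in visiting:
            return path[path.index(node):]  # cycle found
        if node in done:
            return None
        visiting.add(node)
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [nxt])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None
    for start in list(graph):
        cycle = dfs(start, [start])
        if cycle:
            return cycle
    return None

# A replies to B, and B replies to A: an infinite exchange.
rules = [("A", "B"), ("B", "A")]
print(find_cycle(rules))  # ['A', 'B', 'A'] -> conflict
```

A real rule platform would need to go further, since actions may trigger each other only under certain conditions, but a cycle check like this already catches the unconditional loops.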


OData – The Evolution of REST

Introduction

One of the key challenges today is making the data hidden in various cloud platforms available and understandable, so that it can be processed by software services as well as human beings. Tim Berners-Lee coined the term “Linked Data” for this challenge. However, he remained rather vague on how this data should be linked technically. An initiative of major software vendors, such as Citrix, IBM, Microsoft, Progress Software, SAP and WSO2, addressed this problem by proposing the Open Data Protocol (OData) standard to the Organization for the Advancement of Structured Information Standards (OASIS). All kinds of software can leverage OData, but it is particularly interesting for business software, such as business process management systems, business rule systems and complex event processing middleware. OData can be seen as the technical foundation for open data, which comprises initiatives from various governments and organizations to make data publicly available.

Foundations

OData is inspired by the Representational State Transfer (REST) approach. Basically, REST is about clients creating, accessing, deleting or modifying resources on a server, identified by Uniform Resource Identifiers (URIs). A resource can represent any concept in the form of any data structure. Data structures can be described in a plethora of formats, such as the JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). REST has become increasingly popular with the emergence of complex web applications using HTML5 and JavaScript.

Although not limited to any specific protocol between clients and servers, REST was originally described using, and is now mostly implemented over, the Hypertext Transfer Protocol (HTTP). This allows creating, modifying, deleting or reading resources via the standard HTTP methods. Furthermore, we can build more sophisticated Internet-based architectures for managing the data (e.g. proxies or IT security concepts).

Example (REST) We assume that a client (C) communicates with the server (S) using HTTP. The client requests the data of a book identified by the following URI: http://example.com/library/book/ISBN978-3787316502. The server answers with basic book data using the JSON format. The protocol can be described as follows:

(C) Request:

GET http://example.com/library/book/ISBN978-3787316502 HTTP/1.1

(S) Response:

HTTP/1.1 200 OK

{
  "title": "Kritik der praktischen Vernunft",
  "author": "Immanuel Kant",
  "year": "1788/2003"
}

Similarly, we can create, modify or delete resources.
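To illustrate this, the following toy sketch maps the four HTTP verbs to create, read, update and delete operations on an in-memory resource store. The book URI is taken from the example above; the store itself is invented for illustration:

```python
# Toy server-side resource store showing how HTTP verbs map to CRUD.
# This is a sketch of the REST idea, not a real web framework.

class ResourceStore:
    def __init__(self):
        self.resources = {}

    def handle(self, method, uri, body=None):
        """Dispatch an HTTP method to the corresponding CRUD operation."""
        if method == "PUT":          # create or replace a resource
            self.resources[uri] = body
            return 200, body
        if method == "GET":          # read a resource
            if uri in self.resources:
                return 200, self.resources[uri]
            return 404, None
        if method == "DELETE":       # delete a resource
            self.resources.pop(uri, None)
            return 204, None
        return 405, None             # method not allowed

store = ResourceStore()
uri = "/library/book/ISBN978-3787316502"
store.handle("PUT", uri, {"title": "Kritik der praktischen Vernunft"})
status, book = store.handle("GET", uri)
print(status, book["title"])        # 200 Kritik der praktischen Vernunft
store.handle("DELETE", uri)
print(store.handle("GET", uri))     # (404, None)
```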

OData

It has been shown that REST provides a lot of flexibility for managing data over the Internet, as can be seen from the plethora of web applications using it. However, it has some limitations. For example, we can only express simple update or retrieval operations via the URI; we cannot articulate more complex queries, nor represent media streams. An example of a complex query is the following (using OData notation):

http://example.com/travelshop/Customer?$filter=Revenue gt 1000.00

This query asks for all the customers with revenue greater than 1000.00 Euro. The query can be executed similarly to the REST example presented before.
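As a sketch, the following Python snippet builds the percent-encoded query URL and shows what a conforming service would do with the filter expression. The customer records are invented sample data; “gt” (greater than) is one of the comparison operators defined by OData:

```python
from urllib.parse import urlencode

def build_filter_url(base, expression):
    """Percent-encode an OData $filter expression into a query URL."""
    return base + "?" + urlencode({"$filter": expression})

url = build_filter_url("http://example.com/travelshop/Customer",
                       "Revenue gt 1000.00")
print(url)  # $ and spaces are percent-encoded on the wire

# What a conforming service would do with "Revenue gt 1000.00"
# (invented sample records):
customers = [{"name": "Miller", "Revenue": 1500.0},
             {"name": "Schmidt", "Revenue": 800.0}]
matches = [c["name"] for c in customers if c["Revenue"] > 1000.00]
print(matches)  # ['Miller']
```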

OData standardizes a wide range of queries at the service level, which makes it very easy to reuse OData services or to add flexibility/agility to your business processes.

Of course, we can use this query language also to update or delete resources.


Selected OData Concepts

OData can leverage and extend the Atom Syndication Format and the Atom Publishing Protocol. You probably know these standards from web feeds (e.g. news) in your browser. They basically describe semantics for news items, such as headlines, pictures or full text. These standards can be used to represent answers to queries containing more than one entity (e.g. multiple customer records).
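As a small illustration, the following sketch parses a minimal Atom feed such as a service might return for a multi-entity answer. The feed content is invented; real OData payloads carry additional OData-specific namespaces and elements:

```python
import xml.etree.ElementTree as ET

# Invented minimal Atom feed with one <entry> per entity in the answer.
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Customers</title>
  <entry><title>Miller</title></entry>
  <entry><title>Schmidt</title></entry>
</feed>"""

ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(FEED)
# Each <entry> corresponds to one record, e.g. one customer.
names = [e.find("atom:title", ns).text for e in root.findall("atom:entry", ns)]
print(names)  # ['Miller', 'Schmidt']
```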

OData services can articulate what data they understand and produce. Clients can use this information to compose one or more OData services to fulfill their intents. For example, OData services are described using the metadata document and the service document. The first one is conceptually similar to the Web Service Description Language (WSDL). It describes operations and data provided by the OData service. The service document describes a set of entity collections that can be queried from the services. For instance, all customers that have a pay-tv subscription.

Unfortunately, it is not clear whether OData is supposed to support queries requiring the construction of a transitive closure. These types of queries are useful, for instance, to retrieve all the indirect flights between two airports. They may be supported by OData functions or implemented on the client side, but they are not part of the standard.
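A client-side workaround can be sketched as follows: fetch the direct flights from the service and compute the reachable airports yourself. The flight list is invented sample data:

```python
from collections import deque

def reachable_airports(direct_flights, start):
    """Breadth-first search over direct flights, i.e. one row of the
    transitive closure: every airport reachable from `start`."""
    graph = {}
    for src, dst in direct_flights:
        graph.setdefault(src, []).append(dst)
    seen, queue = set(), deque([start])
    while queue:
        airport = queue.popleft()
        for nxt in graph.get(airport, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Invented direct flights; SFO is only reachable indirectly via JFK.
flights = [("FRA", "JFK"), ("JFK", "SFO"), ("FRA", "CDG")]
print(sorted(reachable_airports(flights, "FRA")))  # ['CDG', 'JFK', 'SFO']
```

The drawback, of course, is that the client must first download all direct connections, which is exactly why server-side support would be desirable.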

Conclusion and Outlook

OData has a lot of benefits. It leverages well-proven Internet concepts. HTML5 web applications can immediately start using it without changes to existing browsers. Finally, it is supported by major vendors of business software.

There have been some competing proposals for managing data over the internet, such as GData by Google, but they seem to be deprecated by now.

However, some elements are missing from the OData proposal. For example, it is difficult to describe temporal queries, such as “provide me all activities that have finished at the same time in the production process”. A lot of research has been done on representing temporal data in information systems (e.g. I used it to provide information system support for managing highly dynamic processes in crisis response). Thus, I think this would be a beneficial feature. In fact, there have been proposals for extending the OData standard with temporal data. However, they remain rather simple and would not allow dealing with qualitative temporal queries (“finished at the same time”) such as the one described before.
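To make the qualitative query concrete, here is a client-side sketch that groups activities by identical finish timestamps. The activities and timestamps are invented sample values:

```python
from collections import defaultdict

def finished_together(activities):
    """Group activity names by identical finish timestamps and return
    the groups with more than one member ("finished at the same time")."""
    by_finish = defaultdict(list)
    for name, start, finish in activities:
        by_finish[finish].append(name)
    return [names for names in by_finish.values() if len(names) > 1]

# Invented production-process activities as (name, start, finish).
activities = [("assemble", 1, 5), ("paint", 3, 5), ("pack", 5, 8)]
print(finished_together(activities))  # [['assemble', 'paint']]
```

Expressing this in OData itself would require comparing entities against each other, which plain `$filter` expressions over single entities cannot do.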

The same holds for geospatial queries. An example of a geospatial query is “provide me all the warehouses in the disaster-affected area”. Luckily, here too there are proposals to extend the OData standard to support these types of queries.
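Until such extensions are standardized, a client can approximate the query itself. The sketch below filters warehouses by great-circle distance from a point; the coordinates and the 50 km radius are invented sample values:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Invented warehouse coordinates and affected-area center/radius.
warehouses = {"north": (49.5, 8.5), "south": (48.1, 11.6)}
center, radius_km = (49.4, 8.7), 50
affected = [name for name, (lat, lon) in warehouses.items()
            if haversine_km(lat, lon, *center) <= radius_km]
print(affected)  # ['north']
```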

What is next?

We have a lot of redundant and related data provided by different organizations. OData focuses on single data sources, and at the moment it is unclear how to relate and integrate many different sources from different information providers. This is mostly a semantic problem. For instance, what one company calls a manager may be described as a director in another, and vice versa. A solution could be to agree on OData metadata documents to define semantics. This has to be done by domain experts, as has occurred in the area of reference models, such as the one I explained before about humanitarian supply chain management (cf. also the Humanitarian eXchange Language). Finally, we may use translation languages, such as XSLT, to automate the integration of semantically different information.
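The manager/director problem can be illustrated with a hand-made term mapping. The mapping below is invented; in practice, domain experts would agree on such a shared vocabulary, e.g. via common OData metadata documents:

```python
# Invented mapping of organization-specific job titles to shared terms.
TERM_MAP = {
    "company_a": {"director": "manager"},
    "company_b": {"manager": "manager"},  # already canonical
}

def normalize_record(company, record):
    """Translate a record's role into the agreed shared vocabulary."""
    mapping = TERM_MAP.get(company, {})
    record = dict(record)  # do not mutate the caller's record
    record["role"] = mapping.get(record["role"], record["role"])
    return record

print(normalize_record("company_a", {"name": "Kim", "role": "director"}))
# {'name': 'Kim', 'role': 'manager'}
```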


Research Challenges for Case Management

In this blog entry I describe research and innovation challenges for case management from a business and a technical perspective. It complements my existing blog entries on case management: introduction & standards, what constitutes a case, and system concepts.

This entry is relevant for managers, business analysts, enterprise architects, researchers and developers.

Business Challenges

I have explained before that case management is about dynamically evolving processes. We want to manage them in a structured way, because we expect an improved process execution in terms of quality, better cost management and enhanced innovation potential. To leverage these benefits, we need to determine the processes that should be subject to a case management approach and we need to be able to compare as well as integrate different cases.

In a previous blog entry I presented guidelines for modeling cases so that they deliver a business benefit. However, it is still an open issue how benefits can be realized across the case lifecycle and which cases/processes are suitable for case management. A good starting point is the guide for business processes in [1]. Although it is about managing, deploying and monitoring more standardized business processes, it gives useful hints that are also valid for case management, including deployment, monitoring/optimization and learning aspects.

Obviously, some processes are more suitable for case management than others. For example, processes that are always executed similarly and are highly standardized with predictable exceptions should be managed using business process management techniques (e.g. workflow systems or Six Sigma). The issue is that cases are not necessarily executed similarly, yet we wish to be able to compare cases so that we can learn from them and foster innovation; Six Sigma, for example, cannot be transferred directly to case management. Furthermore, comparability is needed to enable quality and cost management.

Additionally, we may need to think about new roles governing the management of cases. In a previous blog entry I gave the example of a global business rule designer who ensures consistency of business rules across cases while still allowing for individual case rules.

Nevertheless, it is still important to glue the outputs generated in different cases together to form the big picture of the enterprise and steer it into the right direction. Case management can offer here the right flexibility to act top-down and bottom-up. However, it is still unclear how different cases can and should be linked, especially on the inter-organizational level, where we have different rules, cultures and regulations of the involved organizations.

Another interesting aspect is that case management also enables new, innovative models for paying employees. For instance, workers could be paid based on the complexity of a case and the approach they took to solve it, as determined by manager and customer together. There is no requirement anymore to stay in the office for a certain time: case workers can solve cases wherever they are and whenever they want, and work as much as they need to satisfy their needs.

Finally, I expect that case management can be a building block of a solution for managing personal processes, such as founding a company, buying a house or getting married.

Technical Aspects – Case Management Engine

The case management engine should enable users to model, execute and monitor dynamically evolving processes in a structured manner. It supports the user in creating case objects and defining processes as well as rules for them. The engine can verify the created models for correctness, i.e. that the case can be executed without violating any rules or processes (see also [2]).

Rules and processes may be enforced, but for dynamic processes it often makes more sense to detect deviations from rules and processes and evaluate their impact later (cf. [2]).
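This detect-instead-of-enforce idea can be sketched as follows: events are executed freely, but deviations from a declared ordering rule are logged for later impact analysis. The rule (“review must precede approve”) and the events are invented for illustration:

```python
# Sketch of deviation detection: nothing is blocked, but events that
# occur before their declared prerequisite are recorded for later
# impact analysis. Rule and events are invented sample values.

def detect_deviations(events, must_precede):
    """Report events whose prerequisite has not yet occurred."""
    seen, deviations = set(), []
    for event in events:
        prerequisite = must_precede.get(event)
        if prerequisite and prerequisite not in seen:
            deviations.append((event, "missing prerequisite: " + prerequisite))
        seen.add(event)
    return deviations

rule = {"approve": "review"}   # "review" must happen before "approve"
events = ["draft", "approve", "review"]
print(detect_deviations(events, rule))
# [('approve', 'missing prerequisite: review')]
```

The case worker is never blocked; the engine merely records that the approval happened before the review, leaving the judgment of whether that matters to a later evaluation.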

While many rule and process formalisms are known (cf. [2] for an overview), it is still an open challenge which one should be used and which one makes sense for a given case. Here, we need to evaluate and compare case management solutions in a real company setting.

Technical Aspects – Graphical User Interface/Visualizations/Pervasive Interactions

Another question is how case workers and their enterprises can get the maximum benefit from case management. By following a structured approach, we expect that case workers can make more sense of cases and dynamically evolving processes, so that they can react better to a given environment. This requires new visualization techniques showing the case evolution, so that it is clear what has been done, what is currently going on and what the next steps are.

However, recent developments show that the workforce does not want or need to sit in an office all day long. Workers need to be at the customer site, do sports, stay with their families or simply want to enjoy the world. This means that we have to support their contribution to cases wherever they are. Novel solutions need to be designed so that they can provide their input to a dynamic case process at the right time, at the right place and using any device (e.g. screens, walls, voice or gestures). This also includes proactively recommending case objects to the case worker (e.g. based on expertise, skills or previous case executions). Appropriate recommendation algorithms that also consider long-term aspects need to be invented.

Obviously, we need to integrate our existing collaboration and communication components (e.g. voice chat, collaborative text editing or version control management systems). I observe a lot of new development related to novel Web standards supporting this (e.g. OpenSocial or Web Intents). Nevertheless, there is still some research needed on how we can leverage these emerging standards.

Technical Aspects – Inter-organizational Distributed Level

I think the real challenge in case management is supporting cross-organizational cases and dynamic processes. Business process management has failed terribly in this area, not only technically but also from a business perspective. However, if we manage it right, we can expect a lot of benefits.

From a technical perspective, we need to consider that the organizations working on a case cannot and do not have a complete overview of it, due to privacy, regulatory or strategic reasons. Furthermore, an inter-organizational case has to be embedded in the different environments (e.g. business goals or regulatory rules) of the organizations.

This also implies that a case is distributed over potentially several organizations that work on parts of it concurrently within their given environments of business rules, artifacts, organizational structures and processes (e.g. an invoice is part of both a supplier case and a consumer case). Research in the area of distributed systems has shown that this can quickly lead to diverging views on activities, artifacts, rules and data. Clearly, this is undesirable, because it introduces coordination problems, and case management won’t deliver its benefits. Thus, case management systems have to provide a converging view, i.e. a common picture, of inter-organizational cases. However, classical synchronization and transaction mechanisms in distributed systems do not scale well to the inter-organizational level and do not deliver what users expect. Novel mechanisms need to be designed and tested (cf. also [2]).

Conclusion

I have presented in this blog entry several innovation and research challenges from a business as well as a technical perspective. These challenges have not yet been solved adequately, but I see continuous improvement, so it can be expected that they will be addressed by consultancies and research organizations.

Stay tuned for my next blog entry where I analyze limitations of existing open source solutions with respect to case management.

References

[1] Becker, Jörg; Kugeler, Martin; Rosemann, Michael (Eds.): Process Management: A Guide for the Design of Business Processes, Springer, 2011, ISBN 978-3642151897

[2] Franke, Jörn: Coordination of Distributed Activities in Dynamic Situations. The Case of Inter-organizational Crisis Management, PhD Thesis (Computer Science), English, LORIA-INRIA-CNRS, Université de Nancy/Université Henri Poincaré, France, 2011.