Channel: Enterprise Integration for Beginners

Democratizing the Digital Transformation

We know for a fact that there are more mobile phones than people in the world right now (November 2017). Check the graphic below if you aren't aware of it.


But this does not mean that every person has a mobile phone. Rather, almost every person who can handle a mobile phone will have one by 2020. Check the graphic below from Cisco, where they predict that by 2020 there will be more people with mobile phones than people with electricity.


I will stop right here without going deeper into mobile phones. But the above 2 graphics clearly show us that mobile phones have been democratized so quickly that they have even overtaken an essential need like electricity. Most of these mobile devices are in the category of “smart phones” with connectivity to the internet. 10 years back, a farmer living in a rural village in Dambulla, Sri Lanka may not have even dreamt about having such a device in his hand. But it happened in a way no one could predict. Mobile phones have been democratized across the globe. A company located in the USA, Japan or Korea has acquired markets far away in the East.

Every business organization wants to expand its business horizon to all parts of the globe. Apple CEO Tim Cook may be thrilled to see that the iPhone is helping to change people's lives in a far-distant country like Lesotho. It is more or less the same ambition for Elon Musk, Jeff Bezos or Satya Nadella. They want to see their products and services expand to far-distant limits so that they can change human behavior for the good (and make themselves profitable).

The first phase of democratizing the digital transformation is already here. It happened within the last 10 years. Here are 5 companies which led this phase:

  • Uber
  • AirBnB
  • Amazon
  • Alibaba
  • Facebook

Uber made everyone a global passenger of the world's largest taxi service, which does not own a single taxi. It democratized the transportation industry (especially taxi services). There are so many localized companies coming up every day to copy the concept and help the process along.

AirBnb allowed travellers across the globe to fulfill their dreams without spending a lot of money on accommodation. It became the largest hospitality service company without owning (relatively speaking) any real estate or hotels. It has democratized the travel and hospitality industry.

Likewise, Amazon and Alibaba democratized the way people shop and the process of buying and selling products online. Facebook built the largest media owner in the world without having any media or media personnel.

It is time for another revolution in human existence using technology, specifically using digital transformation. If you are in the technology industry, you may have heard the term “Digital Transformation” along with words like Microservices Architecture (MSA), DevOps, Containers, Service Mesh, Artificial Intelligence and Analytics. These technical terms shape the next wave of the technology revolution.

Digital Transformation is a term heavily used in analysts' reports recently. Based on a recent survey done by Gartner (April 2017), 42% of CEOs of different organizations have already started working on DT. This survey did not include tech giants like Amazon, Facebook, Google or Microsoft; it covers companies which do not have a direct connection to technology products. The main intention of this post is to talk about the social impact of the digital transformation rather than the enterprise side of it.

Let's talk about the political side of the story. Socialism is the best, but the hardest to implement and sustain. Democracy is good, and not that hard to implement. Super democracy (as we know it) is not that good. Let's focus on the topic of “Democratizing the Digital Transformation” rather than day-to-day politics.

Democracy in layman's terms is offering everyone equal opportunities to prosper. Digital Transformation is a bit harder to describe in layman's terms, but it can be defined as “transforming your assets (products, services, customers, employees, etc.) into a digitally accessible form”. As an example, if you are a shoe company, customers would be happy to go through the catalog of shoes using a digital device like a phone, tablet or PC before coming into the brick-and-mortar store to make the purchase.

That is Digital Transformation as we know it. But democratizing DT requires some additional technological advancements and agreements between different competing organizations. Let's take one industry and try to simplify the idea. Since I began the story with mobile phones, let's take the telecommunication industry. In most of the countries where mobile network operators (MNOs) operate, there is more than one competing vendor. The overlap between the subscribers (users) of each operator and the population can be depicted in the following Venn diagram.

As depicted in the above figure, there are already people who are using multiple service providers (MNOs), but that is invisible to the operators. One of the main business trends in the telecom industry is value added services (VAS). But these services are limited to the customer base owned by that particular MNO.

These telecom operators (MNOs) have tried to democratize the industry on their own, but they failed. GSMA is the global body responsible for the governance of MNOs across the globe. They came up with the idea of a standard set of APIs which can be used by different MNOs across the globe to expose and exchange information about their customers and provide value added services across different operators.

As depicted in the above figure, once all the MNOs expose their customer information with the customer's consent, individual operators can extend their value added services to customers who are using other mobile operators in an area where the first operator's coverage is not present. In addition to that, independent service providers (3rd parties) can build applications on top of these APIs and extend their services to new customer bases. The impact this kind of architecture can have on an ordinary person's life can be immense. A farmer living in a developing country can use services like mobile banking or AliExpress to buy and sell items through his mobile phone (which may or may not be a smartphone). This is no longer just a story; it has already been realized in some parts of the world. It is only a matter of time until it comes to your doorstep.

The above mentioned use case showcases how to democratize the digital transformation within the telecommunication industry. The same concept can be applied in many other industries. Another frontrunner in this space is the financial industry.

The European Union has recently released a regulation requiring all banks operating in EU countries to expose their customer and payment information securely, with the customer's consent, through a standard set of APIs. This is known as the Payment Services Directive 2 (PSD2). This is a scenario where democratizing DT comes as a regulation. Financial institutions like banks were a bit hesitant at the beginning to adhere to this regulation, but after they understood the potential, they are quickly moving towards PSD2 compliance. This will allow the financial industry in the EU to reach new heights in its operations, and customers will get more benefits.

With these 2 practical examples of the democratization of DT, the same concept can be extended to other industries such as Healthcare, Manufacturing, Retail, Governance, etc. This will allow people from different levels of society and different geographical locations to reap the benefits of technology and, specifically, the Digital Transformation.

A more generalized form of the above architecture is depicted below.


Understanding Hybrid Integration Platforms

Based on research done by researchandmarkets.com, the market for Hybrid Integration Platforms (HIP) will grow from $17.14B in 2017 to $33.60B in 2022. This shows the importance of Hybrid Integration Platforms and their relevance to enterprises. First of all, it is essential to understand the concept. The term “Hybrid” means a combination of more than one entity (most of the time, two). I drive a hybrid Toyota vehicle which runs on electricity (battery) as well as fuel (petrol). In the world of integration, hybrid means integrating systems which reside

  • On premise and
  • Cloud


When it comes to “on-premise” systems, they can be running on physical hardware, virtual machines, containers or in a virtual private cloud. What an on-premise system means in the world of integration is that the user has control over application maintenance.

“Cloud” systems means systems which are running on a public cloud, hosted either in the vendor's own data centers or on a public IaaS cloud such as EC2, Azure or GCloud, and fully managed by the vendor. Sometimes users may get some admin privileges, but most of the time it is the vendor who does the maintenance of the system.

The term Hybrid Integration Platform means a platform which can interconnect both on-premise and cloud based systems. Even though we differentiate the systems based on their installed location and maintenance responsibilities, all the systems communicate with the integration platform based on 2 main aspects.

  • Communication (Transport) protocol (e.g. HTTP, JMS, TCP, IDOC, etc.)
  • Content Type (message format) (e.g. XML, JSON, Binary, Text, etc.)

From the Integration Platform's perspective, what matters is to understand the transport protocol and the message format. In most of the well known integration platforms, these are handled through

  • Protocol Connectors - to deal with communication protocols (HTTP, JMS, TCP, etc)
  • Message Builder/Formatter - to deal with message formats (XML, JSON, Binary, etc)

With the above mentioned 2 components, an Integration Platform can interface with any system and integrate it with other systems, optionally using a canonical internal message representation. Some platforms keep a canonical representation while others do the transformations as and when necessary.
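As a rough illustration of these two components, here is a minimal, hypothetical sketch in Java. The names (ProtocolConnector, MessageBuilder, CanonicalMessage) are illustrative only and are not taken from any specific product.

// Platform-internal representation of a message; a real platform would also
// carry headers, properties and a typed payload.
class CanonicalMessage {
    private final Object payload;

    CanonicalMessage(Object payload) {
        this.payload = payload;
    }

    Object getPayload() {
        return payload;
    }
}

// Deals with the communication (transport) protocol: HTTP, JMS, TCP, etc.
interface ProtocolConnector {
    byte[] receive();
    void send(byte[] rawPayload);
}

// Deals with the message format: converts raw bytes (XML, JSON, binary, ...)
// into the canonical representation and formats it back for the target system.
interface MessageBuilder {
    CanonicalMessage build(byte[] rawPayload);
    byte[] format(CanonicalMessage message);
}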

On top of these standard protocol connectors, some integration platforms have built specific connectors for cloud APIs like Salesforce, Twitter, Gmail, Facebook, etc. so that integration with those systems is abstracted, making it easier for developers. These cloud connectors are one of the key factors when selecting a proper Hybrid Integration Platform. Some vendors offer these connectors for free, while others offer them at a cost.

Another important aspect of a HIP is the ability to extend its core functionality through simple and well defined interfaces. When it comes to integration, there can be many on-premise systems which have been written with proprietary standards (especially Commercial Off The Shelf (COTS) systems), and it is really important that the HIP can extend its capabilities to deal with these kinds of systems.


Another aspect of a HIP is deployment flexibility. The platform must be able to be deployed on premises as well as in the cloud. It would also be a value addition if it is available as a public cloud offering.


If a particular integration platform or product offers the above mentioned capabilities, it can be considered a good candidate for your Hybrid Integration Platform requirement.

On top of these technical capabilities, another key factor in identifying a good HIP is the user experience when it comes to integrating multiple systems. The concept of the Citizen Integrator has become a key factor in selecting a good HIP. The term Citizen Integrator has many different meanings, but the high level idea is that a person who understands the business value of systems and their interconnections becomes a citizen integrator if he is willing to learn how to use the technology without going into the details of writing code. Due to this factor, most vendors are building visual tools to help Citizen Integrators easily achieve their targets without worrying about writing code.

In addition to the above mentioned capabilities, the following features would be the icing on the cake.

  • Analytics capabilities
  • API management capabilities
  • Identity and Access Management capabilities

Given below are some of the leaders in this space:
  • Software AG (Darmstadt, Germany)
  • Informatica (California, US)
  • Dell Boomi (Pennsylvania, US)
  • MuleSoft (California, US)
  • IBM (New York, US)
  • TIBCO Software (California, US)
  • Oracle (California, US)
  • Liaison Technologies (Georgia, US)
  • WSO2 (California, US)
  • SnapLogic (California, US)
  • Red Hat (North Carolina, US)
  • Axway (Puteaux, France)
  • SEEBURGER (Bretten, Germany)
  • Microsoft (Washington, US)

Understanding WSO2 API Manager Deployment patterns

WSO2 API Manager comes with a modularized architecture so that users can scale the components based on their needs. When scaling WSO2 API Manager deployments, it is essential to understand the interconnections between the different components. As depicted in the figure below, WSO2 API Manager has 6 main components.

Figure: WSO2 API Manager components

  • Publisher - Create and publish APIs and manage the API lifecycle
  • Developer Portal (Store) - Discover APIs, subscribe to APIs, test APIs
  • Gateway - Processes API requests; the traffic handling component
  • Key Manager - Validates the authenticity of the requests (traffic) coming into the gateway
  • Analytics - Provides analytics on API usage
  • Traffic Manager - Provides throttling and rate limiting of API access

The interconnections of these components happen through database and user store (LDAP) sharing. The interconnections are depicted in the figure below.

Figure: WSO2 API Manager component interconnections
As depicted in the above figure, once an API is created and published from the API Publisher, its metadata will be pushed into the registry database. From there, the API Developer Portal will get the meta information and show it in the portal. In the meantime, the API artifact will be deployed to the API Gateway through a web service (admin service) call from the Publisher to the Gateway. If there are any throttling policies attached to a given API, that information will be pushed to the Traffic Manager. User information will be shared through the user management database as well as the user store (LDAP). When an API user generates a token for his application, that information will be stored in the API management database, which is shared across the Publisher, Developer Portal and Key Manager. When the application sends an API call to the gateway, it will call the Key Manager and validate the token using the API management database.

That is the basic information about the componentized architecture and how the components interact with each other during the API creation, publishing, subscription and invocation processes. When these components are distributed for scalability, these interconnections need to be maintained using the relevant database sharing options mentioned above.

WSO2 API Manager All-in-one deployment

This is the simplest deployment pattern which can be set up using the WSO2 API Manager and APIM Analytics components. It is suitable if you have a considerably low traffic requirement (<1000 TPS) and do not need to worry about scalability as such. The deployment would be 2 all-in-one nodes in active/active mode with one APIM Analytics node.

Figure: All-in-one deployment
If you are going with this approach, it is essential to deploy the load balancer in the DMZ and put the APIM deployment within the LAN, as depicted above.

WSO2 API Manager partially distributed deployment

The next level of deployment pattern is to partially distribute the API Manager components so that scaling and non-scaling profiles are separated. In WSO2 APIM, we have 3 major scaling components:
  • Gateway
  • Key Manager
  • Traffic Manager
Compared to the Gateway and Key Manager, the Traffic Manager scales at a low rate (10:1) and hence can be considered a non-scaling component. Due to that reason, if you need some level of scalability while keeping the setup less complex and less costly, you can use the partially distributed deployment shown in the figure below.

In the above mentioned setup, the Key Manager and Gateway components are installed in separate JVMs, while the Publisher, Store and Traffic Manager components are installed in the same JVM. If you need to further scale down this solution, you can think of keeping the Gateway and Key Manager combined in one JVM so that you can further reduce the number of nodes by 2. But that is optional.

WSO2 API Manager fully distributed deployment

If users need full control of the WSO2 API Manager deployment with scalability, they can utilize the fully distributed deployment as depicted in the figure below. In this setup, all the components are deployed in separate JVM instances and, based on the requirements, the scaling profiles can be scaled up and down.

Figure: Fully distributed deployment

WSO2 API Manager Internal/External deployment

If your organization needs to separate out the internal and external API gateways, that can be achieved by using this deployment pattern. In this setup, there is only one API publisher which is publishing the APIs. There are separate gateways for handling internal and external traffic.


When publishing the APIs, users can select which gateways the API should be published to, and the store will only display the APIs relevant to specific tenants which are accessible from outside.

Understanding General Data Protection Regulation (GDPR) and the business impact

Do you know how your personal data is used and exchanged within Facebook or Google? How do you feel when you see your personal information exposed to a digital marketer who keeps bombarding you with promotional offers which you are not interested in? Whether you like it or not, this is how it works today. Your personal information is there to be used by anyone who can pay a small amount of money or is capable of doing some kind of hacking. Did you ever think about why someone wants your full details when you register for some kind of service?

Within the first 6 months of 2017, more than 6 billion personal records were exposed through data breaches. These breaches cost organizations millions of dollars today and in the future. Trust and confidence are the most important factors in today's business. 70% of customers report that they would be less inclined to work with a business that suffered a public disclosure of a data breach.

From the above facts, it is self evident that data protection is important not only to customers but also to businesses. Even though business leaders understand the value of data, that understanding has not translated into careful data stewardship. But with the impact they have seen from these data breaches on other businesses, everyone is now keen on better protecting their data.

Source: https://marketoonist.com/2017/10/gdpr.html

The General Data Protection Regulation, or GDPR, provides the much needed kick in the ass to many businesses that have become complacent about data security. All businesses dealing with data about EU citizens (in and out of the EU) need to comply with this regulation by May 2018. It is the successor to the previous Data Protection Directive, which was introduced in 1995.

Even though this is a regulation, it includes many useful practices which any business can use to its benefit.

Forcing awareness about entire data web

  • Business leaders are forced to understand their data landscape, no matter whether the company is a small business or a large multinational with subsidiaries and hundreds of partners. All incoming and outgoing data must be well understood.
  • If the business has subsidiaries and partners, the entire data web needs to be well understood.

Demanding knowledge of data sources and origin countries

  • Every data source (Partners, Customers, Subsidiaries) feeding data into the organization must be vetted and documented.
  • GDPR is the first global data protection law
  • Applies to any business which processes data about EU citizens

Advising data minimization

  • Companies must state a planned use for all the personal data they obtain. It is recommended to collect only the data which is absolutely necessary; no additional data should be kept for future use.
  • Not holding data for any longer than absolutely necessary
  • Not changing the original purpose of the data capture
  • Deleting any data at the request of the data subject (customer)

Spotlighting data sharing

  • Data in transit needs to be properly secured.
  • Businesses must be able to document appropriate security measures for every step in the data's life cycle

Acquiring consent

  • Requires clear, affirmative consent of use for personal information of EU citizens. 
  • Lack of response is not considered automatic consent

Breach monitoring and response

  • Breach notifications need to be sent within 72 hours of breach detection
  • Breach policies need to be carefully set up with partners and well documented
Even though this looks like something annoying for a business, it really has some good aspects which every company can gain from. This regulation encourages careful design of your business data, avoids keeping unnecessary data within your organization, and hence reduces operational expenditure.

In addition to the above mentioned points, following link provides a list of major changes which are coming with the GDPR.

https://www.eugdpr.org/key-changes.html


Microservices Solution Patterns

Microservices Architecture (MSA) is reshaping the enterprise IT ecosystem. It started as a mechanism to break large monolithic applications into a set of independent, functionality focused applications which can be designed, developed, tested and deployed independently. The early adopters of MSA used this pattern to implement their back end systems or business logic. Once they had implemented these so-called back end systems, then came the idea of implementing the same pattern across the board. The idea of this article is to discuss the possible solution patterns which can be used in an MSA driven enterprise.

At 33,000 feet, an enterprise IT system looks like the picture shown below.
Figure 1: Enterprise IT system at 33,000 feet

As depicted in figure 1 above, the system is composed of multiple layers, horizontally and vertically. The business logic is implemented in back end systems as monolithic applications, and there are third party systems like ERP, CRM, etc. On top of the back end systems, there is the integration layer which interconnects heterogeneous back end systems. Once these services are integrated, they need to be exposed to internal and external users as managed APIs through the API management layer. Security and analytics are used across all those 3 layers. With the introduction of MSA, this entire ecosystem is opened up with new solution patterns which can serve the purpose of different enterprises based on their needs.

Pattern 1 - MSA BE + Monolithic ESB + Monolithic APIM
This is the most common and the starting pattern of microservices implementations, where the back end systems (business logic) are developed as microservices while keeping the other layers as they are.
Figure 2: Pattern 1 - MSA BE + Monolithic ESB + Monolithic APIM

The above mentioned pattern is a good starting point for any organization to bring in MSA and evaluate the pros and cons of the approach. Based on the results of this approach, wider adoption can be achieved by moving to the patterns mentioned below (2, 3, 4, 5).

Pattern 2 - Monolithic APIM + Service Mesh + Message Broker
The latest development in MSA is the concept of a “service mesh”, which provides the additional functionality required for communication between microservices. A message broker is used as the communication channel (dumb pipe) for message exchange across microservices. This reduces the need for an integration layer in some use cases.

Figure 3: Pattern 2 - MSA BE + Service Mesh + Message Broker

As depicted in figure 3 above, in this solution pattern MSA is extended to the integration layer, removing the need for a separate integration layer. Even though this pattern can be implemented for a greenfield IT organization, it will be harder for larger enterprises with many additional systems.

Pattern 3 - Monolithic APIM + Micro Integration + Service Mesh
One of the challenges of Pattern 2 is integrating CRM- and ERP-type systems with the microservices layer. Instead of doing the integration at the microservices layer or through the message broker, it is possible to bring in a micro integration layer to fulfill this requirement. With that, the Message Broker functionality (messaging channel) can also be moved into the micro integration layer.

Figure 4: Pattern 3 - Monolithic APIM + Service Mesh + Micro Integration

As depicted in the above figure 4, microservices, database systems and COTS systems (ERP, CRM) are integrated using the micro integration layer.

Pattern 4 - Service Mesh + Micro Integration + Edge Gateway
With the widespread adoption of microservices architecture within the enterprise, the monolithic API gateway can also be replaced with a micro API gateway or an edge gateway. This is the next solution pattern which can be implemented.

Figure 5: Pattern 4 - Edge gateway + Service Mesh + Micro Integration
In this pattern, the monolithic API management layer is replaced with a set of edge gateways which are deployed based on the APIs exposed to the users. These edge gateways behave similarly to microservices when it comes to design, development, testing and deployment, while providing the API management functionality.

Pattern 5 - All MSA
Once the microservices wave is in full swing, the security, analytics and database layers can also be implemented as microservices. Except for the COTS systems, the entire enterprise IT ecosystem is implemented as microservices in this solution pattern.
Figure 6: Pattern 5 - All MSA

As depicted in figure 6 above, all the components within the enterprise are implemented as microservices except the third party systems. This pattern can be extended so that the service mesh spans the micro integration, analytics, security and database microservices.

The Magic Bullet of Integration - WSO2 Enterprise Integrator (WSO2 EI)

Enterprise Application Integration (EAI) has been a challenging but mandatory requirement within any enterprise IT system. According to a recent survey, 60% of digital transformation stories involve integration requirements. So it is quite evident that integration is an integral part of your digital transformation journey. If you are an experienced integration architect or developer, you know about the plethora of vendors and solutions available in the market. They have different strengths and weaknesses. But we know for a fact that there is no “Silver Bullet” in the integration landscape. Even though that is the case, we have a “Magic Bullet” which can serve a lot of the requirements you have in your integration project (like the Magic Bullet blender shown below).

Figure 1 : Magic Bullet (Source: Amazon.com)

WSO2 has been building a collection of middleware products which can serve almost all the requirements which may occur in a digital transformation project (or any IT project). There was a time when it produced 20+ products serving different requirements. Even though that model had advantages for some of the customers who start small and use only a portion of the platform, it had more disadvantages for the customers as well as for WSO2 when it comes to maintenance and releases. About one and a half years ago, WSO2 decided to build 5 products which can be used for any enterprise middleware requirement when going through a digital transformation.

As a result of this 5 product strategy, WSO2 introduced a new product called WSO2 Enterprise Integrator. The latest version of the product is WSO2 EI 6.1.1. This product aggregates several WSO2 products which have been used in different parts of enterprise integration projects. These different components are included in the WSO2 EI product as separate “profiles”. It has 4 main profiles which can be run as separate JVMs for different integration requirements.

  • WSO2 EI - Integration Profile (or ESB profile)
  • WSO2 EI - Business Process Profile (or BPS profile)
  • WSO2 EI - Message Broker Profile (or MB profile)
  • WSO2 EI - Analytics profile

The figure below showcases these profiles visually.

Figure 2: WSO2 Enterprise Integrator capabilities (Source: WSO2 Documentation)


WSO2 EI - Integration Profile (or ESB profile)
This profile contains the functionality of both the WSO2 Enterprise Service Bus (ESB) and the WSO2 Data Services Server (DSS). The Integration profile can be used for any system integration or data integration requirement, such as

  • Message routing
  • Message transformation
  • Protocol translation
  • Data mapping
  • Service orchestration
  • Exposing databases as SOAP/REST services
  • Executing scheduled tasks
  • Cloud service integration (Salesforce, Peoplesoft, Twitter, etc)

In addition to the above mentioned use cases, the WSO2 EI Integration profile comes with various other capabilities as well. You can refer to the WSO2 documentation for more information. The WSO2 EI Integration profile comes with an Eclipse-based IDE called WSO2 EI Tooling, which can be downloaded as a separate tool and used to develop the integration use cases mentioned above. These integrations are done using the XML-based “synapse” mediation language for system integration use cases (ESB) and WSO2's own XML-based “data services” language for data integration requirements. Users can use the WSO2 EI Tooling component to visually build these integrations using drag and drop functionality.

WSO2 EI - Business Process Profile (or BPS profile)
If your integration project requires a tool to model long running business processes, which may or may not have human interactions in the middle of the process, you will want a proper BPEL or BPMN modelling tool to model and execute these processes. WSO2 EI comes with a profile which can be used for this purpose: the WSO2 EI Business Process profile (or BPS profile) is the perfect tool for the task. It provides the following capabilities to users:

  • Enables developers to easily develop, deploy and manage long-running integration processes (business processes)
  • Processes can be implemented using either the BPMN 2.0 or the WS-BPEL 2.0 standard.
  • Powered by the Activiti BPMN Engine 5.21.0 and Apache Orchestration Director Engine (ODE) BPEL engine
  • Comes with a complete web-based graphical management console, enabling users to easily deploy, manage, view and execute processes as well as human tasks
  • Allows processes to be integrated with human tasks

WSO2 EI Tooling component comes with a graphical tool to build BPEL as well as BPMN processes. A good starting point to learn about the capabilities and how to implement a business process using WSO2 EI - BPS profile is the WSO2 documentation BPS tutorial section.

WSO2 EI Message Broker Profile (or MB profile)
Another core requirement of most integration projects is the ability of the platform to provide guaranteed message delivery without losing any data, even when different systems operate at different speeds (in terms of message processing). There can be millions of users sending data (requests) to your system, but your back end systems cannot process them at the same rate. Then you need an intermediate data store which can store the incoming data at high speed and allow the back end systems to consume that data at their own rate. This is the core functionality of a Message Broker. The WSO2 EI Message Broker profile (or MB profile) is designed for these kinds of use cases. The main functionalities of this profile include
  • Interoperable Message Broker
    • Support for the JMS v1.0 and v1.1 APIs
    • Supports AMQP 0-9-1
    • Supports MQTT, including all QoS levels
  • Distributable Queues
    • Provides strict or best-effort in-order message delivery in a cluster
  • Shared subscription
    • Durable subscription.
    • Shared durable subscription.
  • Dead letter channel
  • Flow control
  • Browsing Queues
This profile comes with a management console to manage the queues/topics created on the broker, as well as to monitor the broker.
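To make the producer/consumer idea above concrete, here is a minimal, hypothetical JMS producer sketch. It uses only the standard JMS 1.1 API; the JNDI names ("ConnectionFactory", "OrderQueue") and the payload are placeholders and depend entirely on how the broker is actually configured.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class OrderProducer {
    public static void main(String[] args) throws Exception {
        // Look up the connection factory and queue; the JNDI configuration
        // (jndi.properties) must point at the running broker.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("OrderQueue");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // The broker stores the message until the (possibly slower) consumer picks it up.
        producer.send(session.createTextMessage("{\"orderId\": \"1001\"}"));
        connection.close();
    }
}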

WSO2 EI Analytics profile
Did you notice that in the Magic Bullet you can see what you put into the jar and how things are going, through the transparent container? Likewise, with all the integration functionality covered by the above mentioned 3 profiles, the enterprise needs to monitor what happens within these components. It is essential to monitor the latencies within these layers so that the required SLAs are achieved at the client side. The WSO2 EI Analytics component provides the capability to monitor both the Integration profile and the BPS profile.

Integration Profile Analytics
There is a separate dashboard to monitor the statistics of the integration components that users develop within the Integration profile. It provides capabilities such as
  • Statistics for APIs, Proxy Services, Sequences, Inbound Endpoints and endpoints which includes
    • Request counts
    • Success/Failure counts
    • Latencies
    • Mostly used components
  • Mediator level statistics like latency, request counts
  • Message level monitoring with tracing enabled
In addition to these out-of-the-box graphs, users can publish custom information to the analytics component and then build their own dashboards and graphs.

Business Process profile analytics
There is a separate dashboard for monitoring the business processes and their activities within the EI Analytics profile. This dashboard provides information about the running business processes and their statistics. It provides
  • Available business processes
  • Details about the running instances of the process/human tasks
  • Average execution times of processes/human tasks
  • User involvement in processes/human tasks

With all these capabilities coming in a single product distribution, users can build their own integration projects using whichever profile is required based on the project requirements. It is similar to using the proper jar and the proper blade in the “magic bullet” device.

Ultimately, we have a “magic bullet” for integration projects: WSO2 Enterprise Integrator (WSO2 EI).



Understanding Microservices communication and Service Mesh

Microservices architecture (aka MSA) is reaching a point where handling the complexity of the system has become vital to reap the benefits the very same architecture brings to the table. Implementing microservices is not a big deal, given the comprehensive set of frameworks available (like Spring Boot, Dropwizard, Node.js, MSF4J, etc.). The deployment of microservices is also well covered with containerized deployment tools like Docker, Kubernetes, Cloud Foundry, OpenShift, etc.

The real challenge with microservices-style implementations is inter-microservice communication. We have gone back to the age of the spaghetti deployment architecture, where service to service communication happens in a point to point manner. Enterprises have come a long way from that position through ESBs, API gateways, edge proxies, etc., and it is not possible to go back to the same point in history. This requirement of inter-microservice communication was not an afterthought. The proposed method at the beginning of the microservices wave was to use

  • Smart endpoints and
  • Dumb pipes

Developers have used message brokers as the communication channel (dumb pipe) while making the microservice itself a smart endpoint. Even though this concept worked well for initial microservices implementations, as implementations grew in scale and complexity it was no longer enough to handle inter-microservice communication.

This is where the concept of a service mesh came into the picture, with a set of features which are required for pragmatic microservices implementations. A service mesh can be considered the evolution of concepts like the ESB, API gateway and edge proxy from the monolithic SOA world to the microservices world.


A service mesh follows the sidecar design pattern, where each service instance has its own sidecar proxy which handles the communication with other services. Service A does not need to be aware of the network or the interconnections with other services. It only needs to know about the existence of the sidecar proxy and communicate through it.
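From the application's point of view, this can be as simple as calling a local address and letting the sidecar handle discovery, load balancing, retries and security. The sketch below is purely illustrative; the service name, path and sidecar port are assumptions and not part of any specific mesh implementation.

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class InventoryClient {

    // The application only talks to its local sidecar proxy; the proxy resolves
    // "inventory-service", load balances, retries and secures the actual call.
    private static final String SIDECAR = "http://localhost:15001";

    public String getItem(String itemId) throws IOException {
        URL url = new URL(SIDECAR + "/inventory-service/items/" + itemId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes());
        }
    }
}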

At a high level, a service mesh can be considered a dedicated software infrastructure for handling inter-microservice communication. The main responsibility of the service mesh is to deliver requests from service X to service Y in a reliable, secure and timely manner. Functionality-wise, this is somewhat similar to the ESB, which interconnects heterogeneous systems for message communication. The difference here is that there is no centralized component, but rather a distributed network of sidecar proxies.

A service mesh is analogous to the TCP/IP network stack at a functional level. In TCP/IP, the bytes (network packets) are delivered from one computer to another via the underlying physical layer, which consists of routers, switches and cables. It has the ability to absorb failures and make sure that messages are delivered properly. Similarly, the service mesh delivers requests from one microservice to another on top of the service mesh network, which runs on top of an unreliable microservices network.
Even though there are similarities between TCP/IP and a service mesh, the latter demands much more functionality within a real enterprise deployment. Given below is a list of functionalities which are expected from a good service mesh implementation.

  • Eventually consistent service discovery
  • Latency aware load balancing
  • Circuit breaking/ Retry/ Timeout (deadlines)
  • Routing
  • Authentication and Authorization (Security)
  • Observability

There can be more features than the above list, but we can consider any framework a good one if it offers the above mentioned list. These functionalities are executed at the sidecar proxy, which directly connects with the microservice. This sidecar proxy can live inside the same container or within a separate container.

With the advent of more and more features within the service mesh architecture, it was evident that there should be a mechanism to configure these capabilities through a centralized or common control plane. This is where the concepts of the “data plane” and the “control plane” come into the picture.

Data plane


At a high level, the responsibility of the “data plane” is to make sure that requests are delivered from microservice X to microservice Y in a reliable, secure and timely manner. So the functionalities like

  • Service discovery
  • Health checking
  • Routing
  • Load balancing
  • Security
  • Monitoring

are all parts of the data plane functionality.

Control plane

Even though the above mentioned functionalities are provided within the data plane on the sidecar proxy, the actual configuration of these functionalities is done within the control plane. The control plane takes all the stateless sidecar proxies and turns them into a distributed system. If we continue the TCP/IP analogy, the control plane is similar to configuring the switches and routers so that TCP/IP works properly on top of them. In a service mesh, the control plane is responsible for configuring the network of sidecar proxies. Control plane functionalities include configuring

  • Routes
  • Load Balancing
  • Circuit Breaker / Retry / Timeout
  • Deployments
  • Service Discovery

The following figure explains the functionality of the data plane and the control plane well.

As a summary of things which we have discussed above,

  • The data plane touches every request passing through the system and executes functionalities like discovery, routing, load balancing, security and observability
  • The control plane provides the policies and configuration for all the data planes and makes them into a distributed network

Here are some of the projects which have implemented these concepts.

Data planes - Linkerd, NGINX, Envoy, HAProxy

Control planes - Istio, Nelson

Even though these are categorized into 2 sections, some frameworks have functionalities which relate to both the data plane and the control plane.





Writing Java microservices with WSO2 Microservices Framework For Java (MSF4J)

Microservices are spreading all over the enterprise. They have changed the way people write software within the enterprise ecosystem. The advantages of containerized deployments, CI/CD processes and cloud-native application architectures provided the base platform for wider microservices adoption. When it comes to enterprise software, Java has been the first choice for decades, and that is still the case. Writing a microservice in Java is pretty simple and straightforward. You can just write a JAX-RS application and run it within a web container like Tomcat, or use an embedded server like Jetty. Even though you can write a simple microservice like that, when deploying the microservice within a production system, it needs a lot more capabilities. Some of them are
  • Monitoring
  • Performance
  • Error handling
  • Extensibility

Microservices frameworks come into the picture with these types of requirements. WSO2 Microservices Framework for Java (MSF4J) is a framework designed from scratch to fulfill these requirements and many others.

In this article, I will guide you through a 5-step tutorial on how to implement Java microservices with WSO2 MSF4J.

Create your first microservice

If you have Java and Maven installed on your computer, creating your first microservice is as easy as entering the command below in a terminal.

mvn archetype:generate -DarchetypeGroupId=org.wso2.msf4j -DarchetypeArtifactId=msf4j-microservice -DarchetypeVersion=1.0.0 -DgroupId=org.example -DartifactId=stockquote -Dversion=0.1-SNAPSHOT -Dpackage=org.example.service -DserviceClass=HelloService

Once you execute the above command, it will create a directory called “stockquote” and within that directory, you will find the artifacts related to your first microservice.

  • pom.xml - This is the maven project file for the created microservice
  • src - This directory contains the source code for the microservice

With the above command, we specified the package name as “org.example.service” and the service class as “HelloService”. Within the src directory, you can find the package hierarchy and the source file with the name “HelloService.java”. In addition to this source file, there is another source file with the name “Application.java”, which will be used to run the microservice in the pre-built service runtime.
After generating the microservice, you can create an IDE project by using either of the below mentioned commands based on the preferred IDE.

mvn idea:idea (for Intellij Idea)
mvn eclipse:eclipse (for eclipse)

Once this is done, you can open the generated source files using the preferred IDE. Let’s have a look at the source files.

HelloService.java

@Path("/service")
public class HelloService {

  @GET
  @Path("/")
  public String get() {
      // TODO: Implementation for HTTP GET request
      System.out.println("GET invoked");
      return "Hello from WSO2 MSF4J";
  }
// more code lines here

This is the main service implementation class, which executes the business logic and carries the microservice related configuration through annotations, which are a subset of the JAX-RS annotations.

  • @Path - This annotation is used to specify the base path of the API as well as resource level path (We have “/service” as the base path and “/” as the resource path)
  • @GET - This is the http method attached to a given method (for get() method)

In the above class, change the base path context from “/service” to “/stockquote” by changing the “@Path” annotation at the class level.

@Path("/stockquote")
public class HelloService {


In addition to this class, there is another class generated with the command which is “Application.java”.

Application.java

public class Application {
  public static void main(String[] args) {
      new MicroservicesRunner()
              .deploy(new HelloService())
              .start();
  }
}

This class is used to run the microservice within the MSF4J runtime. Here we create an instance of the “MicroservicesRunner” class, deploy an object of the “HelloService” class in which we have implemented our microservice logic, and then start the runtime with the “start” method call.

We are all set to run our first microservice. Go inside the stockquote directory and execute the following maven command to build the microservice.

mvn clean install

After the command completes, you will see that a jar file has been generated in the target/ directory with the name “stockquote-0.1-SNAPSHOT.jar”. This is your “fat” jar, which contains the “HelloService” as well as the MSF4J runtime. Now you can run the microservice with the following command.

java -jar target/stockquote-0.1-SNAPSHOT.jar

If all goes well, you will see the following log lines in the console which will confirm that your microservice is up and running.

2017-12-15 15:17:03 INFO  MicroservicesRegistry:76 - Added microservice: org.example.service.HelloService@531d72ca
2017-12-15 15:17:03 INFO  NettyListener:56 - Starting Netty Http Transport Listener
2017-12-15 15:17:04 INFO  NettyListener:80 - Netty Listener starting on port 8080
2017-12-15 15:17:04 INFO  MicroservicesRunner:122 - Microservices server started in 180ms

Now it is time to send a request and see whether this service is actually doing what it is supposed to do (which is echoing back the message “Hello from WSO2 MSF4J”). Open another terminal and run the curl command mentioned below.

curl -v http://localhost:8080/stockquote

You will get the following result once the command is executed

Hello from WSO2 MSF4J

Add GET/POST methods to your microservice

Let’s create a microservice which produces something useful. Let’s expand the generated HelloService class to provide stock details based on the user input. We can rename the HelloService class as “StockQuoteService” and implement the methods for “GET” and “POST” operations.

StockQuoteService.java

@Path("/stockquote")
public class StockQuoteService {

  private Map<String, Stock> quotes = new HashMap<>();

  public StockQuoteService() {
      quotes.put("IBM", new Stock("IBM", "IBM Inc.", 90.87, 89.77));
  }

  @GET
  @Path("/{symbol}")
  @Produces("application/json")
  public Response get(@PathParam("symbol") String symbol) {
      Stock stock = quotes.get(symbol);
      return stock == null ?
              Response.status(Response.Status.NOT_FOUND).entity("{\"result\":\"Symbol not found = "+ symbol + "\"}").build() :
              Response.status(Response.Status.OK).entity(stock).build();
  }

  @POST
  @Consumes("application/json")
  public Response addStock(Stock stock) {
      if(quotes.get(stock.getSymbol()) != null) {
          return Response.status(Response.Status.CONFLICT).build();
      }
      quotes.put(stock.getSymbol(), stock);
      return Response.status(Response.Status.OK).
              entity("{\"result\":\"Updated the stock with symbol = "+ stock.getSymbol() + "\"}").build();
  }
Here, we have used a new annotation, “@Produces”, to specify the response message content type as “application/json”. Also, the get method returns a “Response” object, which comes from the JAX-RS API supported by the MSF4J runtime. We have also used the “@PathParam” annotation to access the path parameter variable we have defined within the “@Path” annotation.

In this class, we have used a new class, “Stock”, which holds the information about a particular stock symbol. The source code for this class can be found in the GitHub location mentioned at the end of the tutorial.
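For completeness, a minimal sketch of such a POJO could look like the following. The field names are assumed from the JSON payloads used below; refer to the repository for the actual class.

public class Stock {

    private String symbol;
    private String name;
    private double high;
    private double low;

    // A default constructor is needed so the JSON payload of the POST request
    // can be deserialized into a Stock instance.
    public Stock() {
    }

    public Stock(String symbol, String name, double high, double low) {
        this.symbol = symbol;
        this.name = name;
        this.high = high;
        this.low = low;
    }

    public String getSymbol() {
        return symbol;
    }

    public void setSymbol(String symbol) {
        this.symbol = symbol;
    }

    // Getters and setters for name, high and low follow the same pattern.
}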

Let’s go inside the source directory “step2-get-post/stockquote” and execute the following maven command.

mvn clean install

Now the microservice fat jar file is created in the target directory. Let's run it using the following command.

java -jar target/stockquote-0.1-SNAPSHOT.jar

Now the microservice is up and running on the default 8080 port. Let's execute the “GET” and “POST” methods with the following commands.

curl -v http://localhost:8080/stockquote/IBM

{"symbol":"IBM","name":"IBM Inc.","high":90.87,"low":89.77}

curl -v -X POST -H "Content-Type:application/json" -d '{"symbol":"GOOG","name":"Google Inc.", "high":190.23, "low":187.45}' http://localhost:8080/stockquote

{"result":"Updated the stock with symbol = GOOG"}

Add interceptors to your microservice

Now we have written a useful microservice with MSF4J, and it is working as expected. An interceptor is a component which can be used to intercept all the messages coming into a particular microservice. The interceptor accesses the message prior to the execution of the microservice logic. It can be used for purposes like
  • authentication
  • throttling
  • rate limiting
and the interceptor logic can be implemented as a component separate from the business logic of the microservice. It can also be shared across multiple microservices with the model supported by MSF4J.

Let’s see how to implement an interceptor logic.

RequestLoggerInterceptor.java

public class RequestLoggerInterceptor implements RequestInterceptor {

  private static final Logger log = LoggerFactory.getLogger(RequestLoggerInterceptor.class);

  @Override
  public boolean interceptRequest(Request request, Response response) throws Exception {
      log.info("Logging HTTP request { HTTPMethod: {}, URI: {} }", request.getHttpMethod(), request.getUri());
      String propertyName = "SampleProperty";
      String property = "WSO2-2017";
      request.setProperty(propertyName, property);
      log.info("Property {} with value {} set to request", propertyName, property);
      return true;
  }
}

The above class implements the “RequestInterceptor” interface, which comes from the MSF4J runtime and is used to intercept requests coming into a microservice. Within this class, it has overridden the “interceptRequest” method, which executes once a request is received. The interceptor logic is written here.

Similarly, response can be intercepted by using the “ResponseLoggerInterceptor” which implements the “ResponseInterceptor” interface of the msf4j runtime.

Once the interceptor logic is implemented, we need to engage this interceptor with the microservice using the annotations.

StockQuoteService.java

@Path("/stockquote")
public class StockQuoteService {

  private Map<String, Stock> quotes = new HashMap<>();

  public StockQuoteService() {
      quotes.put("IBM", new Stock("IBM", "IBM Inc.", 90.87, 89.77));
  }

  @GET
  @Path("/{symbol}")
  @Produces("application/json")
  @RequestInterceptor(RequestLoggerInterceptor.class)
  @ResponseInterceptor(ResponseLoggerInterceptor.class)
  public Response get(@PathParam("symbol") String symbol) {
      Stock stock = quotes.get(symbol);
      return stock == null ?
              Response.status(Response.Status.NOT_FOUND).build() :
              Response.status(Response.Status.OK).entity(stock).build();
  }

In this main microservice class, we have used the “@RequestInterceptor” and “@ResponseInterceptor” annotations to bind the interceptor classes which carry the implementation logic.

Let’s go inside the source directory “step3-interceptor/stockquote” and execute the following maven command.

mvn clean install

Now the microservice fat jar file is created in the target directory. Let's run it using the following command.

java -jar target/stockquote-0.1-SNAPSHOT.jar

Now the microservice is up and running on the default 8080 port. Let's execute the “GET” method with the following command.

curl -v http://localhost:8080/stockquote/IBM

{"symbol":"IBM","name":"IBM Inc.","high":90.87,"low":89.77}

In the terminal window in which the microservice was started, you will see the log entries below, printed by the interceptors we wrote.

2017-12-18 12:08:01 INFO  RequestLoggerInterceptor:19 - Logging HTTP request { HTTPMethod: GET, URI: /stockquote/IBM }
2017-12-18 12:08:01 INFO  RequestLoggerInterceptor:23 - Property SampleProperty with value WSO2-2017 set to request
2017-12-18 12:08:01 INFO  ResponseLoggerInterceptor:18 - Logging HTTP response
2017-12-18 12:08:01 INFO  ResponseLoggerInterceptor:21 - Value of property SampleProperty is WSO2-2017

Add custom errors with ExceptionMapper

One of the major requirements when writing microservices is the ability to handle errors and provide customized error responses. This can be achieved by using the ExceptionMapper concept of MSF4J.

ExceptionMapper has 2 parts.
  • Exception type
  • Exception Mapper

First, the user has to define the exception type as a Java exception.


public class SymbolNotFoundException extends Exception {
  public SymbolNotFoundException() {
      super();
  }

  public SymbolNotFoundException(String message) {
      super(message);
  }

  public SymbolNotFoundException(String message, Throwable cause) {
      super(message, cause);
  }

  public SymbolNotFoundException(Throwable cause) {
      super(cause);
  }

  protected SymbolNotFoundException(String message, Throwable cause,
                                    boolean enableSuppression, boolean writableStackTrace) {
      super(message, cause, enableSuppression, writableStackTrace);
  }
}

Then the ExceptionMapper can be used to map this exception to a given customized error message.


public class SymbolNotFoundMapper implements ExceptionMapper<SymbolNotFoundException> {

  public Response toResponse(SymbolNotFoundException ex) {
      return Response.status(404).
              entity(ex.getMessage() + " [from SymbolNotFoundMapper]").
              type("text/plain").
              build();
  }
}

The “SymbolNotFoundMapper” implements the “ExceptionMapper” interface, and we specify “SymbolNotFoundException” as the generic type that maps to this exception mapper. Inside the “toResponse” method, the customized response generation is done.

Now, let's see how this exception mapper is bound to the main microservice. In the “StockQuoteService” class, within the method bound to the relevant HTTP method, the specific exception needs to be thrown as shown below.

StockQuoteService.java

@Path("/stockquote")
public class StockQuoteService {

  private Map<String, Stock> quotes = new HashMap<>();

  public StockQuoteService() {
      quotes.put("IBM", new Stock("IBM", "IBM Inc.", 90.87, 89.77));
  }

  @GET
  @Path("/{symbol}")
  @Produces("application/json")
  @Timed
  public Response get(@PathParam("symbol") String symbol) throws SymbolNotFoundException {
      Stock stock = quotes.get(symbol);
      if (stock == null) {
          throw new SymbolNotFoundException("Symbol "+ symbol + " not found");
      }
      return Response.status(Response.Status.OK).entity(stock).build();
  }

In the above implementation, the “get” method throws “SymbolNotFoundException”, which is mapped to the “SymbolNotFoundMapper” mentioned above. When an error occurs within this method, it will respond with the customized error.

Let’s go inside the source directory “step4-exception-mapper/stockquote” and execute the following maven command.

mvn clean install

Now the microservice fat jar file is created in the target directory. Let's run it using the following command.

java -jar target/stockquote-0.1-SNAPSHOT.jar

Now the microservice is up and running on the default 8080 port. Let's execute the “GET” method with the following command.

curl -v http://localhost:8080/stockquote/WSO2

Symbol WSO2 not found [from SymbolNotFoundMapper]

Monitor your microservice with WSO2 Data Analytics Server (WSO2 DAS)

Now that the microservice is implemented with error handling and interceptors, it is essential to monitor it. WSO2 MSF4J comes with a built-in set of annotations to monitor microservices. This monitoring data can be published to
  • Terminal output
  • WSO2 Data Analytics Server
  • JMX
  • CSV file
  • SLF4J API

Details about the monitoring annotations can be found at the following link.

StockQuoteService.java

@Path("/stockquote")
public class StockQuoteService {

  private Map<String, Stock> quotes = new HashMap<>();

  public StockQuoteService() {
      quotes.put("IBM", new Stock("IBM", "IBM Inc.", 90.87, 89.77));
  }

  @GET
  @Path("/{symbol}")
  @Produces("application/json")
  @Timed
  @HTTPMonitored(tracing = true)
  public Response get(@PathParam("symbol") String symbol) {
      Stock stock = quotes.get(symbol);
      return stock == null ?
              Response.status(Response.Status.NOT_FOUND).build() :
              Response.status(Response.Status.OK).entity(stock).build();
  }

  @POST
  @Consumes("application/json")
  @Metered
  @HTTPMonitored(tracing = true)
  public Response addStock(Stock stock) {
      if(quotes.get(stock.getSymbol()) != null) {
          return Response.status(Response.Status.CONFLICT).build();
      }
      quotes.put(stock.getSymbol(), stock);
      return Response.status(Response.Status.OK).
              entity("http://localhost:8080/stockquote/"+ stock.getSymbol()).build();
  }
}

Here, we have used the “@Metered” and “@Timed” annotations to measure metrics for the respective HTTP methods. In addition, the “@HTTPMonitored” annotation publishes metrics data to WSO2 Data Analytics Server.

We need to setup the WSO2 DAS to test the HTTP Monitoring dashboard.

Download WSO2 DAS

Download WSO2 DAS and unpack it to some directory. This will be the DAS_HOME directory.

Configure DAS

Run "das-setup/setup.sh" to setup DAS. Note that the DAS Home directory in the above step has to be provided as an input to that script.The setup script will also copy the already built MSF4J HTTP Monitoring Carbon App (CAPP) to DAS.

Start DAS

From DAS_HOME, run bin/wso2server.sh to start DAS and make sure that it starts properly. You need to start DAS with the following command to avoid the SSL certificate exception caused by the self-signed certificates.

sh bin/wso2server.sh -Dorg.wso2.ignoreHostnameVerification=true

Execute following Commands

1. curl -v http://localhost:8080/stockquote/IBM
2. curl -v -X POST -H "Content-Type:application/json" -d '{"symbol":"GOOG","name":"Google Inc.", "high":190.23, "low":187.45}' http://localhost:8080/stockquote

If everything works fine, you can observe the metrics information printed on the console, similar to the output below.

org.example.service.StockQuoteService.get
            count = 4
        mean rate = 0.01 calls/second
    1-minute rate = 0.00 calls/second
    5-minute rate = 0.04 calls/second
   15-minute rate = 0.12 calls/second
              min = 1.81 milliseconds
              max = 85.22 milliseconds
             mean = 2.47 milliseconds
           stddev = 1.47 milliseconds
           median = 2.20 milliseconds
             75% <= 3.34 milliseconds
             95% <= 3.34 milliseconds
             98% <= 3.34 milliseconds
             99% <= 3.34 milliseconds

Access the HTTP Monitoring dashboard

Go to http://localhost:9763/monitoring/. If everything works fine, you should see the metrics & information related to your microservices on this dashboard. Please allow a few minutes for the dashboard to be updated because the dashboard update batch task runs every few minutes.
Source code for this tutorial can be found at the following github repository.



Understanding Serverless Architecture advantages and limitations

Just before all the hype about Microservices Architecture is gone, there is another term looming within the technology forums. Even though this is not an entirely new concept, it has become a talking point recently. This hot topic is Serverless Architecture and Serverless computing. As I have already mentioned, this has been around for some time with the advent of Backend as a Service, or BaaS or MBaaS. But it was at a different scale before AWS Lambda, Azure Functions and Google Cloud Functions came into the picture with their own serverless solutions.
In layman’s terms, Serverless Architecture or Serverless Computing means that your backend logic runs on some third party vendor’s server infrastructure which you don’t need to worry about. It does not mean that there is no server to run your backend logic; rather, you do not need to maintain it. That is the business of third party vendors like AWS, Azure and Google. Serverless computing has 2 variants.
  • Backend as a Service (BaaS or MBaaS — M for Mobile)
  • Function as a Service (FaaS)
With BaaS or MBaaS, the backend logic runs on a third party vendor’s infrastructure. Application developers do not need to provision or maintain the servers or the infrastructure which runs these backend services. In most cases, these backend services run continuously once they are started. Instead, application developers pay a subscription to the hosting vendor, usually on a weekly, monthly or yearly basis. Another important aspect of BaaS is that it runs on shared infrastructure and the same backend service is used by multiple different applications.
The second variant, which is the more popular one these days, is Function as a Service or FaaS. Most of the popular technologies like AWS Lambda, Microsoft Azure Functions and Google Cloud Functions fall into this category. With FaaS platforms, application developers (users) can implement their own backend logic and run it within the serverless framework. Running this functionality on a server is handled by the serverless framework, which also takes care of scalability, reliability and security. Different vendors provide different options to implement these functions with popular programming languages like Java and C#.
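To make this concrete, a FaaS function is usually just a small handler class. The following is a minimal sketch of an AWS Lambda function written in Java against the aws-lambda-java-core library; the class name and the plain String payload are illustrative assumptions rather than part of any particular vendor sample.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal FaaS function: the developer supplies only this handler, while the
// platform provisions, runs and scales it on demand (names here are illustrative).
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name;
    }
}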
Once these functions are implemented and deployed on the FaaS framework, the services offered by them can be triggered via events from vendor specific utilities or via HTTP requests. If we consider the most popular FaaS framework, AWS Lambda, it allows users to trigger these functions through HTTP requests by interfacing the Lambda functions with AWS API Gateway. There are a few main differences between FaaS and BaaS.
  • FaaS function will run for a short period of time (at max 5 mins for Lambda function)
  • Cost would be only for the amount of resources used (per minute level charging)
  • Ideal for hugely fluctuating traffic as well as typical user traffic
All the above mentioned points prove that Serverless Computing or Serverless Architecture is worth giving a shot. It simplifies the maintenance of backend systems while giving cost benefits for handling all sorts of different user behaviors. But, as Newton’s third law states, every action has a reaction, meaning there are some things we need to be careful about when dealing with serverless architectures.
  • Lack of monitoring and debugging capabilities for the user about the production system and its behavior. Users have to trust whatever monitoring options the vendor provides
  • Vendor lock-in can cause problems like frequent mandatory API changes, pricing structure changes and technology changes
  • Latency on initial requests (cold starts) can make it challenging to provide good SLAs across multiple concurrent users
  • Since server instances come and go, maintaining the state of an application is really challenging with these types of frameworks
  • Not suitable for long running business processes, since these function (service) instances are destroyed after a fixed time duration
  • There are some other limitations, such as the maximum TPS which can be handled by a given function within a given user account (AWS) being fixed by the vendor
  • End to end testing or integration testing is not easy with functions coming and going
Having said all that, these limitations will change in the future as more and more vendors come along with improved versions of their platforms.
Another important thing is the difference between a Platform as a Service (PaaS) and FaaS (or BaaS or MBaaS). With PaaS platforms, users can implement their business logic with a polyglot of programming languages as well as other well known technologies. There will always be servers running in the backend specifically for the user applications, and they run continuously. Due to this behavior of PaaS frameworks, pricing is based on large chunks such as weekly, monthly or yearly subscriptions. Another important aspect is that automatic scalability of resources is not that easy with these platforms. If you have these requirements, considering a Serverless Computing platform (FaaS) would be a better choice.
Finally, given the popularity achieved by the Microservices Architecture, Serverless Architecture is well suited for adopting an MSA without the hassle of maintaining servers or the scalability and availability headaches. Even though Serverless computing has the capabilities to extend the popularity of MSA, it has some limitations when it comes to practical implementations due to the nature of vendor specific concepts. Hopefully these things will eventually go away with concepts like the Serverless Framework.

5 technology trends that will change enterprise software in 2018

Enterprise software used to be slow moving, legacy, safe-playing and conservative. But that era is long gone with the advancements we have seen in the last 5 to 10 years. This trend of innovation and rapid change will accelerate during 2018. Here are 5 main technology areas that will change the way people build, sell and use enterprise software.
  • IaaS to PaaS to FaaS — Maintaining a server infrastructure was challenging even for very large enterprises due to the various levels of expertise needed to maintain it. But this changed a long way with the introduction of Amazon Web Services (AWS) in 2006 and their EC2 Infrastructure as a Service (IaaS). All the other cloud vendors followed AWS and offered their own solutions, like Microsoft Azure and Google Cloud Platform. Then came the next level of cloud computing frameworks, Platform as a Service (PaaS), where cloud vendors provide the infrastructure as well as the operating systems and a common platform to deploy and run applications developed by users. This reduced costs within enterprises even further. Amazon and Microsoft have delivered a plethora of platform as a service capabilities through their cloud platforms. Even with PaaS frameworks, users have to keep servers running in the background when there is no user activity, and this costs them additional amounts for no reason. This is one of the driving factors for the next level of abstraction in the cloud, Function as a Service (FaaS) or serverless frameworks, which allow users to run their applications in the cloud only when there is user demand. That has hugely reduced the cost for enterprise users. This trend will keep going during 2018.
  • Open source is the new standard — Proprietary, closed source, large software vendors cannot FUD open source solutions any longer. Open source software has become the de facto standard in enterprise software. With more and more people contributing to open source projects, trust in those projects has increased immensely within the enterprise. With millions of projects hosted on GitHub, open source projects speak for their own stability and trustworthiness through stars, issues and conversations which are public and fully transparent.
  • Containers and Microservices architectures will dig deeper — Containers are all over the place, and Docker has established itself as the major container technology. On top of Docker, several container management software layers have been developed to make life easier for application developers. The following diagram showcases the various technologies around containers, and specifically Docker.
Container technology stack
  • Artificial Intelligence and Deep Learning — The impact which AI can make on software is well understood from the usage of AI in Google and Facebook type applications. Now it is time to make a similar impact on enterprise software by introducing AI into newly built enterprise software platforms. Deep learning goes one step further than traditional AI and is producing far greater results with increased efficiency within shorter response times. All the big cloud vendors are investing in their infrastructure to offer enough computing power to users through services like Google TensorFlow, IBM Watson and Microsoft Cognitive Services.
  • Cyber security and Data protection regulations — Within the first 6 months of 2017, more than 6 billion personal records were exposed through data breaches. These breaches cost organizations millions of dollars today and in the future. Trust and confidence are the most important factors in today’s business: 70% of customers report that they would be less inclined to work with a business that suffered a public disclosure of a data breach. This is the very reason why regulations such as GDPR and PSD2 are coming up to protect user data within enterprise deployments. This will be a major trend in 2018, with enterprise software needing to comply with these standards and regulations.
There are some other topics which will gain traction during 2018. Some of them are
  • Microservices and Service Mesh
  • Internet of Things (IOT)
  • UX in enterprise software

Connecting the Connected - Reference architecture for Telecommunications Industry


1.0 Introduction

Sending a message from one place to another (telecommunication) has been a revolution since its inception. It helped people save lives, win wars, make peace and many other things. After thousands of attempts, Alexander Graham Bell made it possible to send a message over a wire, and that opened the door for the eventual over the air communication; finally we are in the age of satellite based communication (thanks to Sir Arthur C. Clarke) which made it possible to connect people in far corners of the world within a whisker. Telecommunication (specifically mobile) technology has gone through several iterations (e.g. 2G, 3G, 4G, 5G) and the main focus of each was to improve how information is exchanged across the network. As with any other technology, telecommunication has gone beyond plain message exchanges to data communications, video conferencing, shopping, financing and many more. You can do almost anything at your fingertips with the device (mobile phone) which was once used only to send messages. The organizations who bring this technology to the end user are called Mobile Network Operators (MNOs). Even though they deal with these technical advancements continuously, their main focus is their customer base, which they refer to as subscribers. It is their responsibility to provide value to their subscribers on top of the telecommunication capability which they offer through technology. The main focus of this whitepaper is how MNOs can collectively offer a better experience to their subscribers while increasing their profit margins using digital transformation.

2.0 Understanding the stakeholders in telecommunication ecosystem

Mobile Network Operators (MNOs) are the main players of the telecommunication game with respect to bringing telecommunication capability to the end users. They use complex technological advancements and equipment to build networks which interconnect people in a given region (it can be a village, city, province, country or the entire world). Let’s identify the stakeholders who are engaged with the telecom ecosystem.

Figure 1: Telecommunication industry stakeholders






Mobile Network Operator (MNO)
- The main stakeholder of providing telecommunication services to subscribers. It is responsible for purchasing the frequencies, setting up the infrastructure (signaling towers, base stations, switches, antennas, etc.), designing the network based on capacities, testing the signal strengths and finally marketing and selling the service to the end users.


Subscribers - The end users who are using the network which is built and maintained by the MNO and pay for the subscription.


Equipment manufacturers - Vendors who do research on technological advancements and build the equipment (antennas, routers, switches, servers, etc.) which provides the infrastructure layer for message exchanges.


Internet Service Provider (ISP)s
- These organizations maintain the connectivity between MNO maintained network and the public internet as well as other MNOs. Normally they maintain the backbone of the mobile network.


Application Developers - These people were not stakeholders of the telecommunication ecosystem in the past. But with the invention of customizable mobile operating systems like Android and Apple iOS, application developers also play a major role in providing value added services to subscribers on top of basic telecommunication capabilities like voice and data.


Service Providers - These are 3rd party service providers like taxi services, e-channeling, payment services (banks), location based services and many other types of services which are offered either through the mobile network over signalling channels (SMS and USSD) or through the data network over the internet.

3.0 Understanding the technology landscape of telecommunication ecosystem

In the telecommunication industry, technology plays a major role. It is essential to understand the technology landscape before coming up with any reference model. We can divide the telecommunication technology architecture into 2 main sections.

  1. Networking architecture (Technology focused)
  2. Information Technology architecture (Business focused)


    3.1 Networking architecture



    This is the core of any telecommunication system, which provides the connectivity of subscribers. It requires special technical capability in the field of telecommunication engineering to design, build and maintain the network. It is out of the scope of this article to discuss the networking architecture; the following diagram is added for the sake of completeness.

    Figure 2: Telecommunications network architecture[1]


    3.2 Information technology architecture



    In the telecommunication industry, both the network and the subscribers are equally important to the MNO business. The idea of the information technology architecture is to define the systems and their interactions where subscriber related information resides. It can be billing, charging, value added services, subscriber information, service subscriptions, etc. These types of information reside on different systems which are built in house or bought from outside (COTS or SaaS applications). The interaction of these systems and how they are presented to the subscribers makes the differentiation across multiple MNOs operating in the same region, since most of these MNOs own the same type of network and technological capabilities. Due to this fact, it is essential for an MNO to focus on this part of the architecture and innovate. A typical information technology architecture within an MNO is depicted in the below figure.

    Figure 3: Telecommunications Information technology architecture (basic)





    As depicted in the above figure, in a typical information technology (IT) system within a telecommunication operator (MNO), we can find the below mentioned systems.

    Intelligent Network (IN) - This is the core of the IT architecture within the MNO. It has all the details about the subscribers and their usage. The IN is connected with billing/charging systems, loyalty data systems, Call Data Record (CDR) systems and many other systems. It is responsible for charging the subscribers in a proper and intelligent manner.

    Customer Relationships Management (CRM) - This is the software component which keeps the information about customers and their subscriptions.

    Enterprise Resource Planning (ERP) - This component keeps the information about projects, equipment, accounting, business processes and other business related items.

    Loyalty Data - This system is responsible for keeping information about any loyalty system and marketing promotions and other marketing activities

    Billing/Charging - This component is responsible for charging the customers based on their usage. Various profiles based on usage patterns are defined and maintained here.

    Subscriber Data - This component keeps information about subscribers from the network architecture perspective. This includes information like Call Detail Records (CDR), services offered to a subscriber, profiles bound to subscribers, etc.

    Web Applications - These are the internal and external applications which aggregate data from the different systems mentioned above and present a useful view.

    Integration Bus - This is the central component which interconnects heterogeneous systems through various messaging formats and communication protocols.

    Network Architecture - This is the network portion which is described in previous section.


    The above mentioned architecture is fairly common amongst most telecommunication operators, with more or less additional components. This architecture works perfectly fine, unless other operators are doing innovative things around it to add value to their subscriber base and expand into yours.


    4.0 Adding value to your subscriber base


    Mobile telecommunication technology has reached every corner of the earth. Whether you are a farmer living in a rural village in Uganda, an Eskimo living in Greenland or a yankee living in New York, everyone is connected within a sub second through this technology. The real potential of telecommunications lies on top of the standard voice, data and messaging services. That is where innovation comes into the picture. This technology has enabled a platform which spans beyond any cultural or geographical boundaries. To grasp this vast potential, the network architecture as well as the information technology architecture needs to be upgraded. In order to upgrade the IT architecture within the telecom ecosystem to add value to the subscriber base, first we need to identify the capabilities which are required.

    • Capability to expose subscriber information securely
    • Analyze usage patterns
    • Connect with 3rd party service providers
    • Attract application developers


      The above mentioned capabilities can be added to your IT system with the usage of API management, data analytics, a partner onboarding portal and a developer portal. The improved architecture of the IT system is depicted in the below figure.

      Figure 4: Telecommunications Information Technology reference architecture (advanced)




      In the above figure, there are a few additions on top of the basic IT architecture diagram depicted in figure 3. Those additional components are



      API management - This component is used to expose subscriber information to external systems in a secure and controlled manner. In addition to that, it provides the capabilities to onboard partners and build portals for developers to write mobile applications.


      Security - Once the subscriber information is exposed to third parties, it is essential that we could take the highest possible security measures to protect the data as well as the system 


      Analytics - By analyzing the subscriber usage patterns and their interests, MNOs can offer more and more innovative and attractive services to the subscribers. The analytics component will do the real time as well as time series based analysis of subscriber data. 


      Payment Gateway - This component resides outside of MNO ecosystem and connects with the internal systems through integration bus to handle the payments which are done from/to 3rd party service providers. As an example, a subscriber can pay his mobile subscription charges through his bank account or he can buy a shoe using his mobile subscription credit. 


      Mobile applications - Once the Subscriber information is exposed through APIs, application developers can develop mobile applications alongside 3rd party service providers to offer innovative services. 


      Service Providers - This can be any type of service providers who gets a chance to offer services to a whole new customer base (can be millions). Some examples are money transfer, e-channeling, home catering and banking.


      This architecture can expand the horizon of the MNO’s business capabilities into many other areas through APIs. The subscriber base can benefit from the services which the MNO offers through partnerships. One of the important things with this technology is that services can be provided with a simple messaging based or USSD based application even when there is no internet connectivity. Another important aspect is that these services can be offered through localized applications, so that users who have language barriers can also benefit from this technology.

      5.0 Realizing the modern Information Technology architecture


      The next most important step in modernizing the IT architecture within an MNO is to identify the products which are available to cater to these requirements. WSO2 offers products which are specially built for these requirements. The below figure showcases the products which are well suited for this architecture.

      Figure 5: Telecommunications Information Technology reference architecture (WSO2 mapped)



      Figure 5 above depicts how WSO2 products can be mapped to the different components within the reference architecture for a telecommunication IT system. As shown in the figure, the following WSO2 products can be used to realize the proposed reference architecture.


      WSO2 API Manager - Exposes the internal subscriber information to external users through a set of managed APIs. These APIs can be secured using OAuth2 protocol. These APIs can be displayed in a developer portal so that application developers and 3rd party service providers can discover these APIs and build their own applications. In addition to that, these API requests can be controlled using policies and alerts can be set based on over usage. 

      WSO2 Identity Server - This provides comprehensive security capabilities to the platform by acting as an authorization server for OAuth2 tokens as well as an enabler for SSO and Identity Federation. As an example, MNOs can allow users to log in to various applications using their social logins like Facebook and Google.

      WSO2 Enterprise Integrator - Different systems like IN, ERP, CRM and Loyalty have their own data formats as well as transport protocols and need to be interconnected to provide valuable aggregated data to external applications. This is the central hub which interconnects heterogeneous systems and allows the API management layer to expose these systems and orchestrations as valuable APIs.

      WSO2 Stream Processor - This is the component which analyses the data going through the above mentioned WSO2 products and provides insights on which APIs are heavily used, the latency of different services, available login sessions and many other metrics related to the products. In addition to that, events coming from the network layer towards the IN about usage details can also be monitored in real time and decisions taken accordingly. As an example, if a particular subscriber regularly uses a lot of data during a particular time frame, the system can automatically send him a message offering a package customized for that time period and increase the revenue of the business.



      6.0 Expanding the value addition across multiple operators



      MNOs have their own strengths and weaknesses when it comes to network capabilities and other service offerings to their subscribers. MNOs want to extend their strong capabilities to subscribers within other networks, while subscribers want to get the best possible service no matter who the operator is. This is one of the major reasons for the introduction of GSMA OneAPI. According to Wikipedia, it is defined as


      “ OneAPI is a set of application programming interfaces (APIs) supported by the GSM Association that exposes network capabilities[clarify] over the Internet “.


      In simple terms, it is a mechanism to expose a set of capabilities within MNOs through a standard set of APIs, so that everyone can build applications on top of those APIs, and it allows interoperability across different MNOs (like they do for roaming) for the services which they offer. If we think about roaming capabilities, there has to be an agreement between your local MNO and the visited MNO so that you can use the same SIM card within that visited country or region. With the OneAPI specification, this interoperability is brought to the next level, where users can use services and applications which are offered by the visited MNO.


      With the reference architecture discussed in the previous sections, MNOs can easily adopt the OneAPI standard without major changes or additions to their IT architecture or networking architecture. The only requirement is to implement the set of APIs defined by the OneAPI specification using the API management layer and the relevant system integrations through the integration layer.

      Figure 6: Telecommunications Information Technology reference architecture with GSMA OneAPI



      As depicted in the above figure, when all the MNOs implement the OneAPI standard and expose their network capabilities and subscriber information, 3rd party service providers as well as application developers can reap the benefits of a larger subscriber base. In fact, mobile operators can also sell their services to subscribers on other network operators. Subscribers will get services from multiple operators while having only one subscription. To mediate and govern these interactions, there needs to be a “OneAPI Gateway” which routes the subscriber requests based on a set of defined rules and agreements across different operators. This gateway functionality can be provided by a 3rd party provider (3PP) or by one of the operators themselves. With this architecture, different operators can have their own internal implementations of how their systems are interconnected, but they must expose a standard set of APIs.

      This architecture can be expanded across boundaries so that the entire world is connected into a single network (similar to the internet), and a person with a mobile phone and a subscription to his local operator can access services across the world whether he is travelling or staying in his own country. It is somewhat similar to watching YouTube from your home through the internet: for example, you could use a mobile application to channel a doctor in another country where your mother is living and make the appointment from your phone, paying from your mobile wallet.
      7.0 Summary


      In summary, the technology which is used to connect people needs to be connected in a more meaningful manner. The telecommunication information system reference architecture introduced within this article discusses how to provide innovative services to your subscriber base and drive revenue to your business. By using the GSMA OneAPI specification, it is possible to expand the business to other operators as well as other regions. Subscribers will also benefit from the global connectivity and service sharing across multiple operators.

      8.0 References




      WSO2 API Manager 2.1.0 - Cheat Sheet


      WSO2 API Manager components

      • API Publisher: Used by API owners. Create, Publish and Manage API lifecycle. URL = https://localhost:9443/publisher.
      • API Store (Developer Portal): Used by API users. Discover, register and subscribe to APIs. URL = https://localhost:9443/store.
      • API Gateway: Used by API consumers. All the requests come here, and security and throttling are enforced here. URL = https://localhost:9443/carbon .
      • Key Manager: Used by API Gateway to validate subscriptions, OAuth tokens, and API invocations. Provides a token API to generate OAuth tokens that can be accessed via the Gateway. URL = https://localhost:8243/token
      • Traffic Manager: Used by API Gateway to enforce throttling. Features a dynamic throttling engine (Siddhi) to process throttling policies in real-time. URL = https://localhost:9443/admin
      • API Manager Analytics: Provides a host of statistical graphs, an alerting mechanism on predetermined events and a log analyzer.

      API Manager component interaction

      WSO2 API Manager Users and Roles

      • Creator: Granted permissions to create APIs using the API Publisher and to view APIs in the API Store to understand the feedback on the developed APIs
      • Publisher: Granted permissions to manage the full API life-cycle from creation onwards.
      • Consumer: A consumer uses the API Store to discover APIs, see the documentation and forums, and rate/comment on the APIs. Consumers subscribe to APIs to obtain API keys.
      • Admin: Super user with all the above privileges and administration capabilities

      Lifecycle of an API

      • CREATED: API metadata is added to the API Store, but it is not visible to subscribers yet, nor deployed to the API Gateway.
      • PROTOTYPED: The API is deployed and published in the API Store as a prototype. A prototyped API is usually a mock implementation made public in order to get feedback about its usability. Users can try out a prototyped API without subscribing to it.
      • PUBLISHED: The API is visible in the API Store and available for subscription.
      • DEPRECATED: The API is still deployed in the API Gateway (i.e., available at runtime to existing users) but not visible to subscribers. You can deprecate an API automatically when a new version of it is published.
      • RETIRED: The API is unpublished from the API Gateway and deleted from the Store.
      • BLOCKED: Access to the API is temporarily blocked. Runtime calls are blocked, and the API is not shown in the API Store anymore.


      API Lifecycle Visibility

       

      Database configuration for distributed deployment



      APIM Database configurations across profiles

       

      In addition to the above mentioned databases, the following databases are used based on the usage of metrics and APIM analytics respectively.
      • metrics database (metrics.xml) — once you enable metrics with the JDBC storage type, you need to configure the datasource configurations in the metrics-datasources.xml file.
      • analytics database (WSO2_ANALYTICS_EVENT_STORE_DB) — this database needs to be configured at the WSO2 APIM analytics node to store the raw events coming into it.
      In a fully distributed setup, analytics needs to be configured at each node as mentioned below.


      APIM Analytics database configuration

       

      Supported OAuth2 and extended grant types

      • Authorization Code grant — Validates the application and the end user. Use the authorization endpoint (URL=https://localhost:8243/authorize) to authenticate the user and the token endpoint (URL=https://localhost:8243/token) to request the access token.
      • Password grant — Validate application and the end user (resource owner). Use token endpoint to get the access token directly by sending the username and password of the resource owner along with base64 encoded string of consumer-key:consumer-secret pair.
      • Client credentials grant — Validate only the application (client). Use token endpoint to get the access token by sending the base64 encoded string of consumer-key:consumer-secret pair (see the sketch after this list).
      • Implicit grant — Validate application and the end user (resource owner). Use authorization endpoint to get the token by sending the client ID (only) and user is redirected to provide user credentials. Access token is included in the redirection URL as a URI fragment.
      • Refresh token grant — Used to get a new access token once the existing token is expired. Use token endpoint to get the new token by sending the refresh token and base64 encoded consumer-key:consumer-secret pair.
      • SAML2 extension grant — Validate application and the end user. User will be redirected to IDP to log in to the system and IDP returns a SAML response to the application (SP). Application calls the token endpoint along with SAML token (base64 URL encoded) and consumer-key:consumer-secret pair and get the access token.
      • NTLM extension grant — Validate application and the end user. User needs to get an NTLM token from the running windows server and pass that along with base64 encoded consumer-key:consumer-secret pair to token endpoint and get an access token.
      • Kerberos extension grant — Validate application (client). Application calls token endpoint to get access token by sending base64 encoded consumer-key:consumer-secret pair along with kerberos ticket received from KDC (Key Distribution Centre).
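As an illustration of how the token endpoint is typically called, the sketch below requests an access token with the client credentials grant using the JDK 11 HTTP client. The consumer key and secret are placeholders, and in a default pack the gateway’s self-signed certificate would need to be trusted by the JVM for the call to succeed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ClientCredentialsTokenRequest {

    public static void main(String[] args) throws Exception {
        // consumer-key:consumer-secret pair, base64 encoded (placeholder values)
        String credentials = Base64.getEncoder()
                .encodeToString("myConsumerKey:myConsumerSecret".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:8243/token"))
                .header("Authorization", "Basic " + credentials)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On success the JSON body contains the access token and its expiry
        System.out.println(response.body());
    }
}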

      WSO2 API Manager throttling capabilities



      How throttling is related to applications, users and back end systems

       
       

      • Application -> API throttling — Subscription tiers are available when an application subscribes to an API. Burst control can be configured at subscription tiers.
      • Application -> Token throttling — Different throttling levels are available per token when creating an application
      • All consumers -> API/Resource throttling — Advanced throttling tiers are available at API/Resource level for all the external consumer requests
      • All consumers -> All APIs throttling — Custom throttling policies are defined globally which are applicable for all APIs for all consumer requests
      • API -> back end throttling — Requests going from API to a backend can be throttled with a max back end throughput


      API throttling flow

       

      WSO2 API manager analytics


      API Analytics architecture



      • Raw events are stored in WSO2_ANALYTICS_EVENT_STORE_DB database which is configured within the analytics profile.
      • These events are processed using spark scripts and processed data is stored into the WSO2AM_STATS_DB database.
      • Processed data will be retrieved by the API publisher and API store to showcase the API statistics
      • The Siddhi runtime included within the analytics component analyses the incoming events and sends realtime notifications based on the conditions configured in the node.

      WSO2 API manager extensions



      WSO2 API Manager extension capabilities

       
       

      Happy cheating with WSO2 API Manager !!!

      References:

      Implement user based throttling with WSO2 API Manager


      WSO2 API Manager throttling is implemented in a manner such that API designers have full flexibility to throttle API consumers at all levels. It supports throttling at
      • API level
      • Resource level
      • Application level
      • Subscription level
      • Oauth2 token level
      with out of the box throttling tiers and configurations. Users can manage throttling at the above mentioned levels using the OOTB functionality available in the product. The below 2 figures explain how throttling applies across the different levels and how it is executed during runtime.
      WSO2 API Manager throttling levels and applicability
      The different throttling levels which are configured from the API manager will apply during the runtime as depicted in the below figure.
      WSO2 API Manager throttling execution flow
      In this article, I’m going to discuss how to implement custom throttling rules to throttle API requests based on users. To do this, users need to log in to the admin interface of the WSO2 API Manager runtime (https://localhost:9443/admin in a default deployment).

      Once you log in, you can go to the Throttling Policies tab and navigate to the Custom rules section, where you can add a new custom rule. These custom rules need to be implemented in the Siddhi query language, which is somewhat similar to SQL. The below section is extracted from the WSO2 API Manager documentation.

      Custom throttling allows system administrators to define dynamic rules for specific use cases, which are applied globally across all tenants. When a custom throttling policy is created, it is possible to define any policy you like. The Traffic Manager acts as the global throttling engine and is based on the same technology as WSO2 Complex Event Processor (CEP), which uses the Siddhi query language. Users are therefore able to create their own custom throttling policies by writing custom Siddhi queries. The specific combination of attributes being checked in the policy need to be defined as the key (also called the key template). The key template usually includes a predefined format and a set of predefined parameters. It can contain a combination of allowed keys separated by a colon (:), where each key must start with the prefix $. The following keys can be used to create custom throttling policies:

      resourceKey, userId, apiContext, apiVersion, appTenant, apiTenant, appId

      Let’s write a custom policy to restrict any user from sending more than 5 requests per minute to the APIs running on the gateway.

      FROM RequestStream
      SELECT userId, userId as throttleKey
      INSERT INTO EligibilityStream;
      FROM EligibilityStream#throttler:timeBatch(1 min)
      SELECT throttleKey, (count(userId) >= 5) as isThrottled, expiryTimeStamp group by throttleKey
      INSERT ALL EVENTS into ResultStream;

      In the above script, we select the userId as the throttleKey, check the number of requests received at the gateway within a time span of 1 minute, and throttle if the request count is greater than or equal to 5.
      Let’s write another script to restrict only a given user (“admin”) with a limit of 5 requests per minute.

      FROM RequestStream
      SELECT userId, ( userId == 'admin@carbon.super' ) AS isEligible, str:concat('admin@carbon.super','') as throttleKey
      INSERT INTO EligibilityStream;
      FROM EligibilityStream[isEligible==true]#throttler:timeBatch(1 min)
      SELECT throttleKey, (count(userId) >= 5) as isThrottled, expiryTimeStamp group by throttleKey
      INSERT ALL EVENTS into ResultStream;

      In the above script, we check that the username is “admin” and then apply the throttling limit.
      If we want to restrict all users other than “admin” from sending more than 5 requests per minute, we can implement it as below.

      FROM RequestStream
      SELECT userId, ( userId != 'admin@carbon.super' ) AS isEligible, userId as throttleKey
      INSERT INTO EligibilityStream;
      FROM EligibilityStream[isEligible==true]#throttler:timeBatch(1 min)
      SELECT throttleKey, (count(userId) >= 5) as isThrottled, expiryTimeStamp group by throttleKey
      INSERT ALL EVENTS into ResultStream;

      The above script will block all users other than the “admin” user from sending more than 5 requests per minute.
      Happy throttling with WSO2 API Manager !

      How to manage API development teams with WSO2 API Manager

      WSO2 API Manager recently added a feature to control the visibility and management of APIs in the publisher interface, which allows multiple teams within the same organization to independently develop their APIs without allowing others to edit or modify them. Even though separate teams can achieve the same (or a higher) level of isolation through multi tenancy, that is not a viable option for most user scenarios where APIs need to be exposed through the same tenant without dealing with tenant level complexities.
      The basic requirement to achieve team level isolation is to create a role per team with the necessary permissions to create and publish APIs. You can do this by logging in to the WSO2 API Manager carbon console (https://localhost:9443/carbon), creating a role for team1 and then assigning API creation and publishing permissions to it.

      View user roles created within the API manager

      Permission assignment to role created for team1
      Once this role is created, the members of team1 can be assigned to it. After that, when any member of team1 creates an API, they can select the team1 role under the Access control → Restricted by roles option of the publisher, so that the API is visible in the publisher portal only to other members of the same team (plus the admin).

      API publisher access control
      If the members of another team (group) create an API and follow the same steps mentioned above, those APIs will not be visible to the members of team1 within the publisher portal. This provides team level isolation during API development.
      Even though these APIs are not visible within the publisher portal, the visibility on the Store portal can be different. When the API is created, the user can select the visibility level at the store. That can be set as
      • Role based
      • Tenant domain based
      • Public
      Depending on the selected visibility level on the Store side, other team members might also be able to view the API within the store. But they cannot modify the API, since it is not visible to them on the publisher side.
      Using this method, an organization can easily manage multiple API development teams without them interfering with each other. This feature is available with the latest API Manager 2.1.0 updates.

      The evolution of Distributed Systems

      Distributed systems (to be exact, distributed computer systems) have come a long way from where they started. At the very beginning, one computer could only do one particular task at a time. If we needed multiple tasks to be done in parallel, we needed multiple computers running in parallel. But running them in parallel was not enough to build a truly distributed system, since that requires a mechanism to communicate between different computers (or programs running on these computers). This requirement of exchanging (sharing) data across multiple computers triggered the idea of message-oriented communication, where two computers share data using a message which wraps the data. A few other mechanisms, like file sharing and database sharing, also came into the picture.





      Then came the era of multitasking operating systems and personal computers. With the Windows, Unix and Linux operating systems, it was possible to run multiple tasks on the same computer. This allowed distributed systems developers to build and run an entire distributed system within one or a few computers connected over messaging. This led to Service Oriented Architecture (SOA), where each distributed system could be built by integrating a set of services running on one computer or multiple computers. Service interfaces were properly defined through a WSDL (for SOAP) or WADL (for REST), and the service consumers used those interfaces for their client-side implementations.



      With the reduction in the price of computing power and storage, organizations all over the world started using distributed systems and SOA based enterprise IT systems. Once the number of services or systems increased, point to point connections between these services were no longer scalable or maintainable. This led to the concept of a centralized “Service Bus” which interconnects all the different systems via a hub type architecture. This component is called the ESB (Enterprise Service Bus) and it acts as a language translator or middleman between a set of people who speak different languages but want to communicate with each other. In the enterprise, the languages are analogous to the messaging protocols and messaging formats which different systems use for their communication. The main advantage of this model is that each system can build its server side and client side implementations without worrying about the protocols of the connecting systems.



      This model was working fine and works fine even today. With the popularity of the world wide web and the simplicity of the model, REST-based communication has become more popular than the SOAP-based communication model. This led to the evolution of Application Programming Interface (API) based communication over the REST model. Due to the simplicity of the REST model, features like security (authentication and authorization), caching, throttling and monitoring needed to be implemented on top of the standard REST API implementations. Instead of implementing these capabilities at each and every API separately, there came the requirement for a common component to apply these features on top of the APIs. This requirement led to the evolution of API management platforms, and today API management has become one of the core features of any distributed system.



      Then came the big bang moment of distributed systems, where Internet-based companies like Facebook, Google, Amazon, Netflix, LinkedIn and Twitter became so large that they wanted to build distributed systems spanning multiple geographies and multiple data centres. These types of requirements shifted the technology focus back towards the place where it all began. Engineers started rethinking the concept of a single computer and a single program. Instead of considering one computer as just one computer, they thought about a way to create multiple virtual computers within the same machine. This led to the idea of virtual machines, where the same computer can act as multiple computers and run them all in parallel. Even though this was a good idea, it was not the best option when it comes to resource utilization of the host computer: running multiple operating systems requires additional resources which are not needed when running within the same operating system.

      This led to the idea of containers, where multiple programs and their required dependencies run in separate runtimes on the same host operating system kernel (Linux). Although this concept had been available in the Linux operating system for some time, it became much more popular and improved a lot with the introduction of container-based application deployment. Containers can act much like virtual machines without the overhead of a separate operating system. You can put your application and all the relevant dependencies into a container image, and that can be run on any environment which has a host operating system that can run containers. Docker and Rocket (rkt) are 2 popular container platforms.







      This provided the underlying framework for organizations like Netflix, LinkedIn and Twitter to build their ever demanding, always-on, multi-region, multi-data-centre application platforms. This didn’t come without complexities though. The miniature nature of container based deployment brought the complexity of platform maintenance and orchestration across multiple containers. With the advent of microservices architecture (MSA), the monolithic application was divided into smaller chunks of microservices, each capable of doing a given piece of functionality of the entire service and, in most cases, deployed in containers. This brought a different set of requirements to the distributed systems ecosystem: making the system eventually consistent and letting services communicate with each other without many complexities.






      These requirements eventually led engineers to build container orchestration systems which can be used to maintain the consistency of larger container-based deployments. It is no surprise that the prominent technology in this domain came from Google, given their scale. They built the container orchestration platform called “Kubernetes” (a.k.a. k8s) and it became the de-facto standard for large scale container orchestration requirements. K8s allows engineers to
      • Run containers in large clusters
      • Consider the data center as a single computer
      • Handle communication between services (which are running in containers)
      • Automatically scale and load balance between multiple services
      Kubernetes and Docker made life easier for application programmers. They no longer need to agonize over how an application will behave in different environments (operating systems, DEV, TEST, PROD, etc.), since the container image they build will run almost identically in all environments, given that all dependencies are packaged into it.









      But still, with containers and orchestration frameworks, there has to be a team who manages these servers. The data center needs to be managed using technologies like Docker and Kubernetes to make sure that it feels like a single computer to the applications. Instead of you doing that, how about someone else managing that part for you? That is exactly what is coming with serverless architecture, where your servers will be managed by a 3rd party cloud provider like Amazon (Lambda), Microsoft (Azure Functions) or Google (Cloud Functions). Now the distributed system will be programmed by application programmers while the underlying infrastructure management is done by a cloud provider. This is the latest state of the distributed systems evolution, and it keeps on evolving.



      WSO2 released the latest products with GDPR at forefront

      WSO2 released the latest product versions of its open source middleware stack with the theme of GDPR (General Data Protection Regulation) compliance. You can learn more about GDPR at the link below.
      Here are the new product versions released with this theme.
      • WSO2 API Manager 2.2.0
      • WSO2 Identity Server 5.5.0
      • WSO2 Enterprise Integrator 6.2.0 (formerly known as ESB)
      • WSO2 Stream Processor 4.1.0 (formerly known as DAS/BAM)
      In addition to these open source products, WSO2 also released its very own solution for Open Banking:
      • WSO2 Open Banking 1.0
      This release includes the necessary features to make the WSO2 platform GDPR compliant in almost all aspects. On top of that, the different products have pretty cool new features which customers have been asking about for quite some time. You can learn about the new features of each product by referring to the following release note links.
      Try out these latest products and keep transforming digitally with open source !!!

      WSO2 MB vs Apache Kafka comparison

      When it comes to message broker solutions, they can be categorized into 2 main types:
      • Standards based traditional message brokers (e.g. Apache ActiveMQ, RabbitMQ, WSO2 MB, Apache Qpid)
      • Proprietary modern message brokers (e.g. Apache Kafka, Kestrel)
      Based on your requirements, you need to select the best category and then go for a specific vendor based on your needs, IT capacity and financial capabilities. In this post, I’m comparing 2 popular message brokers (WSO2 MB and Apache Kafka), one from each category. Even though it discusses 2 specific brokers, you can use this comparison when evaluating these 2 categories of message brokers.




      Feature comparison: WSO2 MB vs Apache Kafka

      • Messaging semantics: WSO2 MB provides rich messaging semantics with features such as transactions and persistent/in-memory delivery; Apache Kafka semantics are relaxed and proprietary.
      • Supported protocols: WSO2 MB supports JMS 1.0/1.1, AMQP, STOMP and MQTT; Apache Kafka uses a proprietary protocol written over TCP.
      • Standard messaging patterns: WSO2 MB supports well known standards based messaging (pub/sub, request/reply, point to point); Apache Kafka supports only pub/sub over its proprietary protocol.
      • Clustering: WSO2 MB has in built clustering without any third party components; Apache Kafka requires ZooKeeper to form a cluster.
      • Performance: WSO2 MB performance is considerably lower due to the standards based messaging semantics and multiple protocol support; Apache Kafka has higher performance due to simpler messaging semantics and a proprietary protocol.
      • Deployment complexity: WSO2 MB offers a simpler deployment, with a 2 node cluster capable of handling a considerable load; Apache Kafka requires a more complex deployment with multiple components in a cluster, and is not suitable for moderate messaging requirements.
      • Message flow control: WSO2 MB supports it at the broker itself through dead letter channels, QoS and delivery guarantees; with Apache Kafka it needs to be done at the client side.
      • Management console: WSO2 MB ships with full management capabilities for queues/topics and other server runtime monitoring; Apache Kafka has no native management console.
      • Client support: WSO2 MB supports Java, .NET, C++ and other AMQP clients; Apache Kafka has clients for Java, C++, Python, Go and others.
      • Compatibility: WSO2 MB is compatible with existing messaging systems within the enterprise due to standards based communication; Apache Kafka needs special connectors to be written for existing systems to connect with it.
      • Security: WSO2 MB security can be configured through SSL at the transport layer, user based access control for queues and role based access control for topics; Apache Kafka has no SSL support.







Based on the above comparison, if you are looking for a high performance, large scale message storage platform which works in isolation without much connectivity to existing systems, you can select Apache Kafka. But if your requirement is to build a messaging system which interconnects with existing systems seamlessly, offers moderate performance, an easily manageable deployment and rich messaging features, you can select WSO2 MB.
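
To illustrate the protocol difference in code, the sketch below contrasts publishing a message through the standards-based JMS API (as a WSO2 MB client would, typically over AMQP resolved via JNDI) with Kafka's own producer API. The JNDI names, broker address and topic names are placeholders, not working configuration.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class BrokerComparisonSketch {

    // Standards-based path: any JMS 1.1 broker (WSO2 MB among them) can sit
    // behind this code; the JNDI names below are placeholders resolved from a
    // hypothetical jndi.properties file.
    static void publishOverJms() throws Exception {
        InitialContext ctx = new InitialContext();   // reads jndi.properties
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("topic/orders");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = session.createTextMessage("order-created");
            session.createProducer(topic).send(message);   // standards-based pub/sub
        } finally {
            connection.close();
        }
    }

    // Kafka path: the client speaks Kafka's own wire protocol over TCP; the
    // broker address and topic name are again placeholders.
    static void publishOverKafka() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-created"));
        }
    }
}

The JMS code works unchanged against any standards-compliant broker, while the Kafka code ties the application to Kafka's client library and protocol; that is the trade-off the table above summarizes.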
      Cheers !!!

      How to select a programming model for your enterprise software development

The microservices hype is not over yet. Technologies like serverless computing and multi-cloud platforms are coming up, and they will eventually increase the need for microservices-style programming and application development. When it comes to enterprise software development, we can see that it is a mixture of traditional SOA-style systems, API-driven systems backed by microservices or the above-mentioned SOA systems, cloud-based SaaS applications, as well as proprietary software systems running on premise. The enterprise is going to look like this for the foreseeable future.
It is quite important that, as an architect or a senior leader responsible for an enterprise software architecture, you understand the proper approach to building your software ecosystem, which will be an ever more important factor for your business. When it comes to building such a complex system, 3 main approaches stand out:
      • Polyglot programming model
      • Platform based programming model (open source)
      • Vendor specific programming model (proprietary software)
      Let’s talk about each of these approaches in detail.
      Polyglot programming model
The rationale behind this model is that you allow your development teams to choose the best programming language or technology for the systems they are building. In a microservices-type architecture, this freedom is automatically there since each microservice is independent of the others. It provides more agility and maximum usage of internal talent. With cross-platform tools like Docker, you can bundle your applications in a common wrapper and deploy them as if they had been developed with the same technology. The main disadvantage of this model is that scattered knowledge and too many tools and techniques make the platform complex to manage as a single whole. But it gives maximum flexibility when it comes to selecting the best technology for a given task. Many architects and senior leaders are afraid to go with this approach because they cannot build enough expertise in all the different technologies.
      Platform based programming model (open source)
      Instead of selecting a technology per each and every application, you can select a technology platform which will cater most of your application development requirements. When it comes to selecting such a platform, we need to consider the following capabilities of the platform.
      • Stability
      • Extendibility
      • Performance
      • Maintainability
      • Transparency
Open source technology platforms have become the de facto choice for platform-driven use cases. All the above-mentioned factors are well covered by a proper open source technology platform. Some of the most popular open source technology platforms for building enterprise applications are:
      • Spring framework
      • Kubernetes
      • Docker
      • Netflix OSS
      • WSO2 Open source platform
This model provides advantages like easy maintenance due to a common technology platform, easier in-house expertise building, less complexity and more stability. Based on the platform you choose to build your system on, you can also achieve goals like agility through this model.
      Vendor specific programming model
This model of programming was the default choice in the past, when there were only a few companies building enterprise software platforms. These large corporations built their own proprietary platforms and provided easy-to-use, user-interface-driven tools to program their systems. No one knew how to debug these systems or what was happening under the hood, and there was no openly available information about the future road maps of these platforms.
      Even though these platforms are proprietary and hard to debug, they have their own advantages.
      • Ease of use
      • Better stability
      • Purposely built software with quality features
One of the main disadvantages of this model is the lack of flexibility and vendor lock-in. Once you are locked in, you have no choice but to pay whatever the vendor demands when you want to add a new feature. Sometimes, they will not add the relevant feature at all.
      Conclusion
      The 3 programming models described above have their own advantages and disadvantages based on your requirements and capabilities. Therefore, you need to weigh in all the factors before deciding on a specific approach.

      Handling special characters in URLs with WSO2 ESB

Let’s say you have a URL which needs to send an email address as a query parameter. If you create a standard REST API within WSO2 ESB (or EI) and try to match the query parameter in the uri-template, it will fail at runtime. As an example, consider the below API definition.

<api xmlns="http://ws.apache.org/ns/synapse" name="EmailAPI" context="/voice">
   <resource methods="POST GET" uri-template="/details?email={email}">
      <inSequence>
         <log level="full">
            <property name="EMAIL" expression="$url:email"/>
         </log>
         <respond/>
      </inSequence>
   </resource>
   <resource methods="POST GET" uri-template="/phone/{number}">
      <inSequence>
         <log level="full">
            <property name="PHONE" expression="$ctx:uri.var.number"/>
         </log>
         <respond/>
      </inSequence>
   </resource>
</api>

You would expect the following request to work properly.

      http://localhost:8280/voice/details?email=chanaka@gmail.com

If you hit the above URL, you will get a “404 Not Found” error. The reason is that the ESB cannot dispatch the request to the matching resource because the URL contains the special character “@”. You can find more information about these special characters at the link below.

      https://secure.n-able.com/webhelp/NC_9-1-0_SO_en/Content/SA_docs/API_Level_Integration/API_Integration_URLEncoding.html
       
      In this post, I’m going to explain how you can resolve this special character issue using 2 approaches.

      Solution 1


      The first solution is to URL encode the special character when sending the request to this API. Now the request URL should look like below.

      http://localhost:8280/voice/details?email=chanaka%40gmail.com

When you send the request with the above URL, it will be dispatched to the correct resource and you will see a log message with the proper email address, similar to the log entry below.

      [2018-05-04 13:35:12,660] [EI-Core]  INFO - LogMediator To: /voice/details?email=chanaka%40gmail.com, MessageID: urn:uuid:281dcd97-2043-45a3-8842-d3a0b27f5efa, Direction: request, EMAIL = chanaka@gmail.com, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><name>chanaka</name></soapenv:Body></soapenv:Envelope>

Even though the incoming URL parameter is encoded, once it is accessed within the mediation sequence using the $url syntax, it gives the properly decoded value.

      This solution is easier for the developers, but harder for the users of this API.
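
For reference, this is roughly what the client-side change in Solution 1 looks like in Java, assuming the same sample host, port and email address used above.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodedRequestUrl {

    public static void main(String[] args) throws Exception {
        String email = "chanaka@gmail.com";

        // Percent-encode the query parameter value so that "@" becomes "%40"
        // before it is appended to the request URL.
        String encoded = URLEncoder.encode(email, StandardCharsets.UTF_8.name());

        String url = "http://localhost:8280/voice/details?email=" + encoded;
        System.out.println(url);   // http://localhost:8280/voice/details?email=chanaka%40gmail.com
    }
}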

      Solution 2


Instead of asking your users to change the client-side implementation, we can handle this within the ESB itself using a simple trick. The trick is to use “*” when defining the URL path of the API. You can create an API similar to the below configuration.

<api xmlns="http://ws.apache.org/ns/synapse" name="AsterixAPI" context="/voice">
   <resource methods="POST GET" uri-template="/details*">
      <inSequence>
         <log level="full">
            <property name="EMAIL" expression="$url:email"/>
            <property name="PHONE" expression="$url:phone"/>
         </log>
         <respond/>
      </inSequence>
   </resource>
   <resource methods="POST GET" uri-template="/phone*">
      <inSequence>
         <log level="full">
            <property name="PHONE" expression="$url:phone"/>
         </log>
         <respond/>
      </inSequence>
   </resource>
</api>

Here we have defined the matching uri-template with “*” so that it will capture all the requests coming into the context “/voice/details*”. Now send a request to the following URL.

      http://localhost:8280/voice/details?email=chanaka@wso2.com&phone=(077)-3337238

This URL contains 3 special characters: “@”, “(” and “)”. But we are not asking the client to URL encode those parameters; users can send them as they are to the API, and within the ESB mediation runtime these query parameters are accessed using the $url syntax. By excluding the query string from template matching, we avoid the impact of the special characters. Now you will see a log similar to the one below when you send a request to the above URL.

      [2018-05-04 14:20:57,841] [EI-Core]  INFO - LogMediator To: /test2/email?email=chanaka@gmail.com&phone=(077)-3337238, MessageID: urn:uuid:6926aa13-8f1d-40ef-afb9-02021d2d908a, Direction: request, EMAIL = chanaka@gmail.com, PHONE = (077)-3337238, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><email>chanaka@gmail.com</email></soapenv:Body></soapenv:Envelope>

In the above log, you can clearly see that the email address and phone number query parameters are accessed within the mediation sequence in the proper format.

These are two solutions for handling special characters in URLs within WSO2 ESB or EI.

      WSO2 Identity Server 5.5.0 - Cheat Sheet


      WSO2 Identity Server Architecture

WSO2 Identity Server, a.k.a. WSO2 IS, is a fully fledged Identity and Access Management (IAM) solution which provides the capabilities your enterprise needs to secure its resources. The product is built with a simple but powerful architecture to support a variety of identity and access management requirements. The figure below explains the high-level architecture of the WSO2 Identity Server.








      Figure 1: WSO2 Identity Server Architecture

      The main components included in the above figure are explained below.

• Service Providers - These are the applications which need to be secured using WSO2 IS as the authentication and authorization framework. This includes your mobile applications, web applications, SaaS applications, etc.
• Inbound Authentication - This component deals with various protocols like SAML, OIDC, WS-Federation, OAuth, etc. which the service providers (applications) use to authenticate users.
• Authentication Framework - This component handles claim mapping across service providers, local and federated identity providers, so that incoming data is correctly mapped to the claims stored in the user stores within the system.
• Local Authenticators - If the users are stored in user stores connected to the Identity Server, the actual authentication of the user occurs within this component.
• Federated Authenticators - If the actual user resides in another Identity Provider or in an external SaaS application’s user store (e.g. Google, Facebook, etc.), the actual authentication occurs within that external system, and the relevant federated authenticator connects with the external system and brings the result back to IS.
• Identity Providers - These are the systems or applications which are capable of providing authentication capabilities, so that IS can delegate its authentication tasks to them.
• Provisioning Framework - This is the component which handles all sorts of user provisioning into WSO2 IS and the connected external IDPs.

      Main features

The following are the most powerful features of WSO2 IS:
      • Single Sign On (SSO) - Ability to access multiple applications by signing into one application. Different applications can be implemented with different protocols like SAML2, OIDC, WS-Federation, WS-Trust but WSO2 IS is capable of supporting SSO across those applications
      • Identity Federation - Ability to access totally different applications using the same credentials can be defined as identity federation. Your users may need to access multiple applications with their social media logins. WSO2 IS provides the capabilities to connect with well known identity providers to federate the user authentication (e.g. Google, Yahoo, Facebook, Salesforce, etc.)
      • Identity Provisioning - Ability to provision users into your system is another key capability of WSO2 IS. Users can be provisioned into primary/secondary user stores or external applications through WSO2 IS
• Access Control and Entitlement Management - Access to applications needs to be controlled through proper access control levels for different users. WSO2 IS supports Role Based Access Control (RBAC) as well as Attribute Based Access Control (ABAC) to authorize users. Through its XACML support, it also provides fine-grained, policy-based access control.
• Access Delegation through OAuth2 - Your applications need access to resources on behalf of the users. OAuth2 has become the de facto standard for access delegation, and WSO2 IS can act as the authorization server for OAuth2 applications.
• Authenticators and Connectors - If your users come from applications which require an authentication mechanism specific to that application, WSO2 IS provides authenticators and provisioning connectors to interact with them. These authenticators and connectors can be freely downloaded from the WSO2 Store.
• GDPR Compliance - The General Data Protection Regulation comes into effect on 25 May 2018 and can affect any organization that processes Personally Identifiable Information (PII) of individuals who live in Europe. WSO2 IS complies with the GDPR and can be used to build systems which are GDPR compliant.

      Other features

On top of the above-mentioned main features, WSO2 IS has many other features which make it an easy-to-use, easy-to-configure product for all your identity and access management requirements. Here are the bonus features.

      User Account management
      • Account recovery
      • Self sign up and email confirmation
      • Ask password for new users
      • Forced password reset
      • Account suspension
      Password policy management
      • Password history validation
      • Password patterns
      User login management
      • Account locking and disabling
      • Adding ReCaptcha
      Workflow management
      • User administration workflow
      • Custom workflow
      Customizing email templates
       
      Notifications for user operations
       
      Associating user accounts
       
      Use email address as the username
       
      Analytics on user activities
      • monitor login activities
      • managing real time alerts

      Single Sign On (SSO) Feature

WSO2 IS supports SSO across applications which use different protocols. Here are the supported protocols:
      • SAML 2.0 Web SSO
      • WS-Trust
      • WS-Federation
      • Integrated Windows Authentication
      • OAuth2-OpenID Connect

      Identity Federation

Users coming into WSO2 IS for authentication can be federated to an external Identity Provider using the federated authenticator capability. WSO2 IS supports the below-mentioned federated authenticators out of the box (OOTB).
      • SAML 2.0 Web SSO
      • OAuth2-OpenID Connect
      • WS-Federation
      • Facebook
      • Yahoo
      • Google
      • Microsoft Windows Live
      • IWA on Linux
      • AD FS as a Federated Authenticator
      • Twitter
      • SMS OTP
      • Email OTP

In addition to the above-mentioned authenticators, it supports many other applications through authenticators which can be downloaded from the WSO2 Store.

      Identity Provisioning

Provisioning users into your system is another key feature of WSO2 IS. It supports 3 main types of provisioning capabilities.

Inbound provisioning - Provisions users or groups into the WSO2 Identity Server from an external application (service provider). WSO2 IS supports the SCIM API and SOAP-based Web service inbound provisioning (a sample SCIM request is sketched after this list). Once the users are provisioned, IS can
      • Persist the users or groups within the Identity Server.
      • Persist the users or groups to the Identity Server and provision them to external applications using outbound provisioning.
• Provision the users or groups to the external applications using outbound provisioning, without persisting them internally.
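
As a rough illustration of inbound provisioning over SCIM, the Java snippet below POSTs a minimal user payload to what is assumed to be the SCIM 2.0 user endpoint of a local IS instance. The endpoint path, admin credentials and payload are assumptions for illustration, and against a default install the client would also need to trust the server's self-signed certificate.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ScimProvisioningSketch {

    public static void main(String[] args) throws Exception {
        // Assumed SCIM 2.0 endpoint and admin credentials of a local IS instance.
        String endpoint = "https://localhost:9443/scim2/Users";
        String basicAuth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes());

        // Minimal SCIM user payload; the attribute set is illustrative only.
        String payload = "{\"schemas\":[\"urn:ietf:params:scim:schemas:core:2.0:User\"],"
                + "\"userName\":\"chanaka\",\"password\":\"Password1!\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());   // 201 Created on success
        System.out.println(response.body());
    }
}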
Outbound provisioning - Provisions users from WSO2 IS to a trusted identity provider. It can be Google, Salesforce, another Identity Server, etc. Outbound provisioning involves sending provisioning requests from the Identity Server to other external applications, and is supported via the SCIM or SPML standards. Outbound provisioning connectors for Google and Salesforce are available by default in the Identity Server, and additional connectors are available in the WSO2 Store. WSO2 IS supports
      • Role Based Provisioning
      • Rule Based Provisioning
      • Provisioning Patterns

      Just In Time (JIT) provisioning - Provisions users to the WSO2 IS at the time of federated authentication. When WSO2 IS is used for federated authentication, it redirects the user to an external Identity Provider for authentication. JIT provisioning is triggered when the Identity Server receives a positive authentication response from the external Identity Provider. The Identity Server provisions the user to its internal user store using the user claims of the authentication response. Using JIT provisioning you can:
      • Persist users within the Identity Server.
      • Persist users to the Identity Server and provision them to the external system using outbound provisioning.

      Access Control and Entitlement management

      Access control defines the rules for “who accesses what” in your enterprise. WSO2 IS supports
      • Role Based Access Control (RBAC)
• Attribute Based Access Control (ABAC)
      • Policy Based Access Control (PBAC) using XACML as policy language

      Access delegation and OAuth2

OAuth2 has become the de facto standard for access delegation. Based on the different types of applications, OAuth2 defines different grant types. WSO2 IS supports the following OAuth2 grant types (a minimal token request using the Client Credentials grant is sketched after the list):
      • Authorization Code Grant
      • Implicit Grant
      • Resource Owner Password Credentials Grant
      • Client Credentials Grant
      • Refresh Token Grant
      • Kerberos Grant
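
As a minimal sketch of access delegation with the Client Credentials grant, the Java snippet below requests a token from what is assumed to be the IS token endpoint. The endpoint URL and the client key/secret are placeholder values for a hypothetical registered OAuth2 application, and the same certificate-trust caveat as in the SCIM example applies.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ClientCredentialsGrantSketch {

    public static void main(String[] args) throws Exception {
        // Assumed token endpoint of a local WSO2 IS instance and a registered
        // OAuth2 application's key/secret (placeholders only).
        String tokenEndpoint = "https://localhost:9443/oauth2/token";
        String clientAuth = Base64.getEncoder()
                .encodeToString("myClientKey:myClientSecret".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Authorization", "Basic " + clientAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On success the body is a JSON document containing access_token,
        // token_type and expires_in.
        System.out.println(response.body());
    }
}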


      GDPR Compliance

      WSO2 IS has the following features in relation to GDPR compliance
      • Privacy by design and privacy by default
      • Consent identity management
      • Consent life cycle management
      • Right to be forgotten
      • Exercising individual rights
      • Personal data portability 
      • Personal data protection

      Extension points

As a final note to this cheat sheet, WSO2 IS is a configuration-driven solution where you configure different components based on the use case. In most cases, users don't need to change their applications or write code to fulfill their security requirements. There can be situations which WSO2 IS cannot handle with its OOTB features; in such cases, it provides extension mechanisms to implement the functionality with some custom Java code.

      Extending Access Control
      • Writing a Custom Policy Info Point

      Extending Identity Federation
      • Writing a Custom Federated Authenticator
      • Writing a Custom Local Authenticator

      Extending Provisioning
      • Extensible SCIM User Schemas With WSO2 Identity Server
      • Writing an Outbound Provisioning Connector

      Extending User Management
      • Carbon Remote User Store Manager
      • User Store Listeners
      • Using the User Management Errors Event Listener
      • Working with Errors
      • Writing a Custom Password Validator

      Extending Workflow Management
      • Writing a Custom Event Handler
      • Writing a Custom Workflow Template
