
Building Micro architectures with Micro API Gateway

Microservices Architecture (MSA) is becoming the SOA of the modern era. Just as SOA improved enterprise software architecture with new patterns and architectures built around it, MSA has triggered several new architectural styles and concepts around how people build enterprise software. Some of them are:
  • Service Mesh — Technique to communicate amongst microservices
  • Serverless — Running your code as functions on the cloud
  • Micro Integration — Running your integrations as microservices
  • Micro Gateway — Running your API gateways in a microservices-compatible manner
All these architectures can be categorized under the common umbrella of “microservices” and can collectively be called “micro-architecture”. In this post, I’m going to introduce the micro-architecture and show how a Micro API Gateway can be utilized in such an architecture.
Figure 1: Micro Architecture
As depicted in the above figure, the micro architecture is independent of any particular infrastructure, vendor or technology. It is an open architecture which can be implemented using whichever technology or vendor suits a specific enterprise best. Let’s understand the micro-architecture a bit more with respect to the above figure.
We have 3 groups of microservices in the figure, shown in 3 different colors. The microservices prefixed with MS are the real back-end business logic implementations. MS-X and MS-Y depict 2 groups of microservices (e.g. lending and deposits microservices groups in a banking system). Each hexagon depicts a load-balanced, highly available microservice (e.g. a Kubernetes service). The hexagons marked with MI are integration microservices, which integrate existing microservices (MS type) to provide more complex, higher-level functionality.
The arrows connecting microservices with each other depict the service mesh functionality; internally, it may use a sidecar proxy (or not, depending on the selected technology stack). This component provides functionality like timeouts, retries, circuit breakers, service discovery and load balancing at the transport layer (L3/L4). The service mesh is configured through its control plane.
Then we have the 3 diamonds which represent the API micro gateway functionality, where these gateways offer capabilities like security, caching, throttling, rate limiting and analytics to the upstream microservices layer. In this diagram, we have used 3 different micro gateways for the 3 groups of microservices. This can be extended so that each MS or MI has its own micro gateway, as the hypothetical example below illustrates.
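As a purely hypothetical illustration, an application could reach each microservice group through that group’s own micro gateway. The host names and resource paths below are assumptions made for illustration only, not part of the reference architecture.

curl -H "Authorization: Bearer <access-token>" https://gw-lending.example.com/loans/v1/applications
curl -H "Authorization: Bearer <access-token>" https://gw-deposits.example.com/accounts/v1/balances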
The Micro API Gateway is a special component in this architecture since it has some cross-cutting features which are already available in other components. If we take the functionality of the service mesh, it has capabilities like load balancing, service discovery and circuit breaking which are also available in the micro gateway. It is important to understand that the service mesh provides these functionalities for internal, inter-microservice communication, while the micro gateway uses them to expose services externally. This means that we cannot dismiss the necessity of an API gateway within a service mesh type architecture.
The other cross-cutting component which overlaps with the micro API gateway is the micro integration layer, where capabilities like service orchestration, transformation and composition are available. Here also we need to clearly understand that the micro integration layer offers these capabilities for internal services and at the developer level, while the capabilities available at the micro gateway are geared more towards the external user interaction layer, and sometimes users can directly use features like API composition to build their own APIs.
On the other hand, using the Micro API Gateway as a replacement for the service mesh or the micro integration layer is not recommended, even though it can serve the purpose in some cases. That approach will introduce a lot more complexity as your system grows in the future.
The next advancement which can be offered through this micro-architecture is the serverless (or Function as a Service — FaaS) capability for developers. A technology vendor can combine the infrastructure layer with the micro gateway and micro integration capabilities hosted in its data centers to offer a serverless service, so that customers can write their implementations in their preferred programming language and run them as microservices under the hood within that infrastructure. In a serverless world, MS type implementations are done by the users, while all the other components are deployed, hosted and maintained by the cloud provider.
Finally, applications consume the relevant APIs by contacting the relevant micro gateways. Based on the application type and the API requirements, the same application can use all the micro gateways as well.
As the last piece of this article, I’ll share some of the existing technologies which can be used to realize this micro-architecture.
Microservices
  • Java (Spring Boot, DropWizard)
  • JavaScript (Node.js)
  • Go
Micro Integrations
  • Ballerina
  • Java (Spring Boot)
Service Mesh
  • Linkerd
  • Istio/Envoy
  • Nginx
Micro Gateway
  • WSO2 APIM
  • Apigee
  • Kong
Infrastructure
  • IaaS (GCP, AWS, Azure)
  • VM (VMware)
  • Physical (bare metal)
Containerization
  • Docker
  • rkt (Rocket)
Orchestration
  • Kubernetes
  • Docker Swarm
  • Mesos DC/OS

Understanding WSO2 Stream Processor — Part 1

Streaming analytics has been one of the trending topics in the software industry for some time. With billions of events produced through various sources, analyzing these events provides a competitive advantage for any business. The process of streaming analytics can be divided into 3 main steps:
  1. Collect — collecting events from various sources
  2. Analyze — analyzing the events and deriving meaningful insights
  3. Act — taking action on the results
WSO2 Stream Processor (WSO2 SP) is an intuitive approach to stream processing. It provides the necessary capabilities to process events and derive meaningful insights with its state-of-the-art “Siddhi” stream processing runtime. The below figure showcases how WSO2 SP acts as a stream processing engine for various events.
Source: https://docs.wso2.com/display/SP410
WSO2 SP can receive events generated from various sources such as devices, sensors, applications and services. The received events are processed in real time using the “Siddhi” streaming SQL language. Once the results are derived, they can be published through APIs, alerts or visualizations so that business users can act on them accordingly.
Users of WSO2 SP need to understand a set of basic concepts around the product. Let’s identify the main components which a user needs to interact with.
WSO2 Stream processor comes with built-in components to configure, run and monitor the product. Here are the main components.
  • WSO2 SP runtime (worker) — Executes the real-time processing logic which is implemented using Siddhi streaming SQL
  • Editor — Allows users (developers) to implement their logic using Siddhi streaming SQL and to debug, deploy and run their implementations, similar to an IDE
  • Business Rules — Allows business users to change the processing logic by simply modifying a few values stored in a simple form
  • Job Manager — Allows users to deploy and manage Siddhi applications across multiple worker nodes
  • Portal — Provides the ability to visualize the results generated from the implemented processing logic
  • Status Dashboard — Monitors multiple worker nodes in a cluster and showcases information about those nodes and the Siddhi applications deployed on them
In addition to the above components, the diagram includes
  • Source — Devices, apps and services which generate events
  • Sink — Results of the processing logic are passed into various sinks like APIs, dashboards and notifications
With these components, users can implement a plethora of use cases around streaming analytics (or stream processing, whichever you prefer to call it). The next thing you need to understand about WSO2 SP is the “Siddhi” streaming SQL language and its high-level concepts. Let’s take a look at those concepts as well.
Figure: Siddhi high level concepts in a nutshell
The above figure depicts the concepts which need to be understood by WSO2 SP users. Except for the source and sink, which we looked at in the previous section, all the other concepts are new. Let’s go through these concepts one by one.
  • Event — The actual data coming from sources, formatted according to the schema
  • Schema — Defines the format of the data which comes with events
  • Stream — A running (continuous) series of incoming events
  • Window — A set of events selected based on a number of events (length) or a time period (duration)
  • Partition — A set of events selected based on a specific condition of the data (e.g. events with the same “name” field)
  • Table — A static set of events selected based on a defined schema, which can be stored in a data store
  • Query — The processing logic which uses streams, tables, windows and partitions to derive meaningful data out of the incoming events
  • Store — A table stored in a persistent database for later consumption through queries, for further processing or to take actions (visualizations)
  • Aggregation — A (pre-defined) function applied on events which produces outputs for further processing or as final results
  • Triggers — Used to inject events according to a given schema so that processing logic executes periodically through these events
Now we have a basic understanding of WSO2 SP and its main concepts. Let’s try to do a real streaming analysis using the product. Before doing that, we need to understand the main building block of the WSO2 SP runtime, the “Siddhi Application”. It is where users configure the WSO2 SP runtime.
Figure: Siddhi application components
Within a Siddhi application, we have 3 main sections.
  • Source definition — This is the place to define incoming event sources and their schemas. Users can configure different transport protocols, message formats, etc.
  • Sink definition — This section defines where to emit the results of the processing. Users can choose to store the events in tables, output them to log files, etc.
  • Processing logic — This section implements the actual business logic for data processing using the Siddhi streaming SQL language
Now you have a basic understanding of WSO2 SP and its main concepts. The next thing you can do is get your hands dirty by trying out a few examples. The tutorials section of the documentation is a good place to start.

Understanding WSO2 Stream Processor - Part 2

In the first part of this tutorial, I explained the concepts around WSO2 Stream Processor, how they relate to each other and which components users can use to implement their streaming analytics requirements. It laid the groundwork for this tutorial (part 2), where we get our hands dirty with WSO2 SP.
The first thing you have to do is download the WSO2 SP runtime from the WSO2 website.

https://wso2.com/analytics/install

Once you download the product distribution, you can extract it into a directory and run the product from the bin directory. You need to set the “JAVA_HOME” environment variable to your Java installation (1.8 or higher) before starting the product. In this part of the tutorial, we are going to implement some streaming analytics use cases with WSO2 SP. Hence we need to start the SP in “editor” mode using the following command (for Linux).

$ sh bin/editor.sh

This command will start the editor profile of WSO2 SP and print the URL of the editor in the console, similar to below.


Now you can click on the above link and it will open up the editor in a browser window.



This is your playground, where you can implement your streaming analytics use cases and test, debug and deploy them into the runtime. All these activities can be done without moving away from the editor. The editor comes with many samples which are self-explanatory and easy to execute. Let’s open up an existing sample to get things going.

Let’s start with the sample “ReceiveAndCount” by clicking on it. This will open the source file of this Siddhi application. If you ignore the comments section, the code looks like below. You can save this file with the name “ReceiveAndCount.siddhi”.

@App:name("ReceiveAndCount")

@App:description('Receive events via HTTP transport and view the output on the console')

@Source(type = 'http',
       receiver.url='http://localhost:8006/productionStream',
       basic.auth.enabled='false',
       @map(type='json'))
define stream SweetProductionStream (name string, amount double);

@sink(type='log')
define stream TotalCountStream (totalCount long);

-- Count the incoming events
@info(name='query1')
from SweetProductionStream
select count() as totalCount
insert into TotalCountStream;

Let’s go through this code and understand what we are doing here. First we define the name of the siddhi application and a description about the use case.

@App:name("ReceiveAndCount")

@App:description('Receive events via HTTP transport and view the output on the console')

Then we define the source of the events with the following code segment. Here we are specifying the protocol as “http” and the data type as “json”. We also specify the URL of the exposed service and the format (schema) of the incoming data.

@Source(type = 'http',
       receiver.url='http://localhost:8006/productionStream',
       basic.auth.enabled='false',
       @map(type='json'))
define stream SweetProductionStream (name string, amount double);
After that we define the sink, where we specify the action on the output and the format of the output stream. Here we are pushing the result to the log.

@sink(type='log')
define stream TotalCountStream (totalCount long);

Finally we have the processing logic, where we give the name “query1” through the @info annotation to identify this query. Here we take events from the input stream defined in the source section, use the “count()” function to count the number of events, and push the result into the output stream defined in the sink section.

-- Count the incoming events
@info(name='query1')
from SweetProductionStream
select count() as totalCount
insert into TotalCountStream;

With this understanding, let’s run this Siddhi application from the editor by saving the file and clicking on the “Run” button or selecting the relevant menu item. If it is deployed and started successfully, you will see the below log message in the editor console.

ReceiveAndCount.siddhi - Started Successfully!

Now let’s send some events to this Siddhi application. You can either use a tool like Postman/SoapUI or the built-in event simulation feature of the editor. Here I’m using the event simulator which comes with the editor. You can click on the “event simulator” icon on the left side panel (second icon); it will expand that panel and open the event simulation section.



Here you need to select the following values.
  • Siddhi App Name - ReceiveAndCount
  • Stream Name - SweetProductionStream
  • name (STRING) - Flour (sample value)
  • amount (DOUBLE) - 23 (sample value)

Once you select those values, you can click on the “Send” button and it will send an event with the following JSON format.

{"name": "Flour", "amount": 23}
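Alternatively, since the source is an HTTP endpoint, you can post an event from the command line instead of using the simulator. This is only a sketch which assumes the Siddhi application is running locally; with the default JSON mapper the payload typically needs to be wrapped in an “event” element.

curl -X POST http://localhost:8006/productionStream -H "Content-Type: application/json" -d '{"event": {"name": "Flour", "amount": 23.0}}'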

If you observe the console in which you started the editor at the beginning, you will see the following line getting printed.
[2018-06-01 10:57:01,776] INFO {org.wso2.siddhi.core.stream.output.sink.LogSink} - ReceiveAndCount : TotalCountStream : Event{timestamp=1527830821771, data=[1], isExpired=false}

If you click on Send 2 more times, you will see that the “data” element of the above log line increments with the number of events you have sent.

[2018-06-01 10:58:51,500] INFO {org.wso2.siddhi.core.stream.output.sink.LogSink} - ReceiveAndCount : TotalCountStream : Event{timestamp=1527830931494, data=[2], isExpired=false}

[2018-06-01 10:58:52,846] INFO {org.wso2.siddhi.core.stream.output.sink.LogSink} - ReceiveAndCount : TotalCountStream : Event{timestamp=1527830932845, data=[3], isExpired=false}


Congratulations! You have run your first Siddhi application with WSO2 SP, which counts the number of events received by a given HTTP service.

Let’s do something meaningful with the next sample. Say we want to implement a fraud detection use case: if someone spends more than 100K within a 10-minute time interval from one credit card, that should be treated as a red flag and an email sent to the user. We can implement this use case with the following Siddhi application.


@App:name("AlertsAndThresholds")

@App:description('Simulate a single event and receive alerts as e-mail when a predefined threshold value is exceeded')

define stream TransactionStream(creditCardNo string, country string, item string, transaction double);

@sink(type='email',
     username ='sender.username',
     address ='sender.email',
     password= 'XXXXXXX',
     subject='Alert for large value transaction: cardNo:{{creditCardNo}}',
     to='email.address.to.be.sent',
     port = '465',
     host = 'smtp.gmail.com',
     ssl.enable = 'true',
     auth = 'true',
     @map(type='text'))
define stream AlertStream(creditCardNo string, country string, item string, lastTransaction double);

@info(name='query1')
partition with(creditCardNo of TransactionStream)
begin
from TransactionStream#window.time(10 minute)[sum(transaction) > 100000]
select creditCardNo, country, item, transaction as lastTransaction
insert into AlertStream;
end;

The above application sends an email when a fraudulent event occurs. The execution flow and the application logic can be explained using the below figure.



Here we create a partition of the event stream using a given credit card number. Within that partition, we apply a 10-minute time window, and within that period we do an aggregation and check whether the value is greater than 100K. If all those conditions are satisfied, we choose the last-arrived event and send those details through an email to the relevant user.

You can save the above Siddhi application as the “AlertsAndThresholds.siddhi” file within the editor, then send a series of events from the event simulation section and observe that when the transactions for a given credit card number sum up to more than 100K, it will send an email to the configured email address. The email will look similar to below.

Alert for large value transaction: cardNo:444444

creditCardNo:"444444",
country:"lk",
item:"test",
lastTransaction:50000.0

That’s it. You just wrote a siddhi application to detect fraudulent activities. You can extend this application based on your conditions.

How to protect your APIs with self contained access token (JWT) using WSO2 API Manager and WSO2 Identity Server


In a typical enterprise information system, there is a high chance that people will use different types of systems, built by different vendors, to implement certain types of functionality. The APIs might be hosted in an API Manager developed by vendor A while user management is implemented using a product from a different vendor (vendor B). In this type of situation, one system may not be able to directly contact the other system, yet both need to be used in tandem.

Self-contained access tokens are used in these types of situations, where applications can get the token from one system and use it in another system to access protected resources. In this scenario, the second system does not need to contact the first system over the network to validate the user information, since the token is self-contained and carries the relevant details about the user. This improves token processing time significantly since it completely removes the network interaction.
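To see why no network call is needed, you can decode the token locally; a JWT is simply three base64url-encoded parts separated by dots. A minimal sketch is shown below, where <jwt> is a placeholder for an actual token and base64 -d may need extra "=" padding characters depending on the token length.

# Print the claims (second part) of a JWT without contacting the issuing server
echo "<jwt>" | cut -d '.' -f 2 | base64 -d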

The below figure showcases a scenario where the client application receives a JWT (self-contained token) from the WSO2 Identity Server and then uses that token to consume an API protected by WSO2 API Manager.




To implement the above use case, first, we need to download WSO2 Identity Server and WSO2 API Manager from the WSO2 website.



Configure WSO2 Identity Server to issue JWT self-contained tokens

Once you download and extract the WSO2 Identity Server, you need to configure it to generate JWT tokens. Follow the steps mentioned below.

  • Open the <IS_HOME>/repository/conf/identity/identity.xml file and set the <Enabled> element (found under the <OAuth>,<AuthorizationContextTokenGeneration> elements) to true as shown in the code block below.


<AuthorizationContextTokenGeneration>
    <Enabled>true</Enabled>
    <TokenGeneratorImplClass>org.wso2.carbon.identity.oauth2.authcontext.JWTTokenGenerator</TokenGeneratorImplClass>
    <ClaimsRetrieverImplClass>org.wso2.carbon.identity.oauth2.authcontext.DefaultClaimsRetriever</ClaimsRetrieverImplClass>
    <ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI>
    <SignatureAlgorithm>SHA256withRSA</SignatureAlgorithm>
    <AuthorizationContextTTL>15</AuthorizationContextTTL>
</AuthorizationContextTokenGeneration>


Note: By default, the user claims are retrieved as an array. To retrieve the claims as a string instead of an array, add the following property under the <AuthorizationContextTokenGeneration> tag in the identity.xml file.

<UseMultiValueSeparator>false</UseMultiValueSeparator>
  • Add the following property under <OAUTH> section to use the JWT Token Builder instead of the default Token Builder.
<IdentityOAuthTokenGenerator>org.wso2.carbon.identity.oauth2.token.JWTTokenIssuer</IdentityOAuthTokenGenerator>

  • Configure the “audiences” parameter as mentioned below so that the token includes information about the intended audiences who can use the generated token for authenticating the user.


<EnableAudiences>true</EnableAudiences>
<!-- Comment out to add Audience values to the JWT token (id_token) -->
<Audiences>
    <Audience>${carbon.protocol}://${carbon.host}:${carbon.management.port}/oauth2/token</Audience>
</Audiences>


  • Configure a meaningful value to the <IDTokenIssuerID> parameter in the identity.xml file

<IDTokenIssuerID>apim-idp</IDTokenIssuerID>
Now you can start the WSO2 Identity Server by executing the following command within the IS_HOME/bin directory.

$ sh wso2server.sh

Configure WSO2 API Manager to work with JWT token issued by WSO2 Identity Server

  • Since we are running both servers on the same machine, we need to change the port offset value to 1 in the carbon.xml file located in the APIM_HOME/repository/conf directory
<Offset>1</Offset>
Now you can start the WSO2 API Manager by executing the following command within the APIM_HOME/bin directory.

$ sh wso2server.sh



Configure service provider within the WSO2 Identity Server

Now we need to configure a service provider which is going to get JWT tokens on behalf of the users. You can log in to the WSO2 IS management console at the following URL with the default admin:admin username/password pair.


Go into Service Providers -> Add, give a name for the service provider as depicted below and click on Register.


Then configure the service provider by clicking on the edit button of the service provider you just created. Select the Inbound Authentication Configuration -> OAuth/OpenID Connect Configuration section and click on the “Configure” button. Fill the callback URL with some dummy value as depicted below and click Update.


That’s all you have to do in the WSO2 Identity Server. Take note of the client key and client secret of this service provider, which are now shown under the OAuth/OpenID Connect configuration section as depicted below.



Configure Identity Provider within WSO2 API Manager

Now you need to log in to the WSO2 API Manager console and configure the identity provider which is issuing the JWT tokens. In this case, it is WSO2 Identity Server. You can log in with default username/password pair using the below URL.


Go into Identity -> Identity Providers and click on “Add”. You will get a window similar to the below figure. You need to configure the identity provider with the below-mentioned values.


When you are configuring this section, you need to give values which are compatible with the identity provider, which in this case is WSO2 Identity Server.
  • Identity Provider Name - This needs to be the same value as the <IDTokenIssuerID> value you configured in the identity.xml file, since this value will be the issuer ID of the JWT token. Here the value is given as “apim-idp” to match the above-mentioned parameter.
  • Identity Provider Public Certificate - Here you need to upload the public certificate of the WSO2 Identity Server in PEM file format. The Identity Provider Public Certificate is the public certificate belonging to the identity provider. Uploading this is necessary to authenticate the response from the identity provider. Since we are using WSO2 Identity Server as the IdP, we can generate the certificate using the below-mentioned commands.

To create the identity provider certificate from the wso2carbon.jks file, follow the steps below.



1. Open your command line interface and go to the <IS_HOME>/repository/resources/security/ directory. Run the following command.

keytool -export -alias wso2carbon -file wso2.crt -keystore wso2carbon.jks -storepass wso2carbon

2. Once you run this command, the wso2.crt file is generated and can be found in the <IS_HOME>/repository/resources/security/ directory.

Click Choose File and navigate to this location in order to select and upload this file.
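Optionally, you can inspect the exported certificate before uploading it; this is just a quick sanity check using the same keytool utility.

keytool -printcert -file wso2.crt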

  • Alias - You need to give the client ID (client key) of the service provider which you configured in the WSO2 Identity Server. This will be checked when verifying the JWT token within the WSO2 API Manager.

Now you are all set to access the API using a JWT token which is issued by WSO2 Identity Server.


Access the API using the JWT token issued by WSO2 Identity Server

Now let’s log in to the API Manager publisher portal and create a sample API which connects to a simple hello world service running on the local machine.


The created API looks similar to below.


Once you create the API, you can log in to the API Manager store portal and sign up as a new user (testuser) using the self-sign-up option.



This username and password will be used to get the JWT token for this user.

After that, you need to log in to the store portal and subscribe to the API using the default application.



Then you need to get the client id and the client secret of the default application.



Now you are all set.

  • You need to first get a JWT token from the WSO2 identity server by using the token endpoint with the password grant type. You can use the below mentioned curl command to get a JWT token
curl -u <clientID>:<clientSecret> -k -d "grant_type=password&username=testuser&password=testuser" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token


Here you need to replace the <clientID>:<clientSecret> with the relevant values of the service provider which is configured at WSO2 Identity Server. This will return the JWT token with a response similar to below.
{"access_token":"eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJqb2huZG9lQGNhcmJvbi5zdXBlciIsImF1ZCI6WyJpcGtXTnlGMWZYdTRNYlNoRTZ2YUpHTkdrRElhIl0sImF6cCI6Imlwa1dOeUYxZlh1NE1iU2hFNnZhSkdOR2tESWEiLCJpc3MiOiJhcGltLWlkcCIsImV4cCI6MTUyODM2ODEwMCwiaWF0IjoxNTI4MzY0NTAwLCJqdGkiOiIxOTQxYmY5YS1jMTJkLTQ3NjYtOTMzMi02ZTg1YTNlNzI2MTIifQ.MiAZkGcOrog6KKYs5V1zED_ojQVs0vxZyFjPVjk29CPATaAEgpmH2Rq56kHJqhE3uQk4oSgMDJzp-Zk2CNPIRJYzy8pJaeP-gEE54NvRfDe1WHZJl72AAtEz9wEIQiKxkI4ZFdMlsnqmIdv8c0_lEfU4BXpH8Uho_Vatsvklv54WLEbSvHzf3M-0dioRnBDEf7xsImkcTGEsbulcKMNw9DOQFxlGLUv7r-qJIh9NUNlf0V7vXE9lVPaBSS8YDGKsjOV-PqnMAtmF6uL4eN36vcqMT5QP0C0s3pFJdz_YxEoN8xnrEn8_UNiJlZ-IxWooRFqQxFJri7fd4hlveoAKIQ","refresh_token":"f723c75a-dd06-3b5e-99a6-b5291f3cab28","token_type":"Bearer","expires_in":3600}

  • Now with this JWT token, we can call the WSO2 API Manager with the JWT grant type to get an access token. You need to copy the JWT token and use that within this request.
curl -i -X POST -u <clientID>:<clientSecret> -k -d 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJqb2huZG9lQGNhcmJvbi5zdXBlciIsImF1ZCI6WyJpcGtXTnlGMWZYdTRNYlNoRTZ2YUpHTkdrRElhIl0sImF6cCI6Imlwa1dOeUYxZlh1NE1iU2hFNnZhSkdOR2tESWEiLCJpc3MiOiJhcGltLWlkcCIsImV4cCI6MTUyODM2ODEwMCwiaWF0IjoxNTI4MzY0NTAwLCJqdGkiOiIxOTQxYmY5YS1jMTJkLTQ3NjYtOTMzMi02ZTg1YTNlNzI2MTIifQ.MiAZkGcOrog6KKYs5V1zED_ojQVs0vxZyFjPVjk29CPATaAEgpmH2Rq56kHJqhE3uQk4oSgMDJzp-Zk2CNPIRJYzy8pJaeP-gEE54NvRfDe1WHZJl72AAtEz9wEIQiKxkI4ZFdMlsnqmIdv8c0_lEfU4BXpH8Uho_Vatsvklv54WLEbSvHzf3M-0dioRnBDEf7xsImkcTGEsbulcKMNw9DOQFxlGLUv7r-qJIh9NUNlf0V7vXE9lVPaBSS8YDGKsjOV-PqnMAtmF6uL4eN36vcqMT5QP0C0s3pFJdz_YxEoN8xnrEn8_UNiJlZ-IxWooRFqQxFJri7fd4hlveoAKIQ' -H 'Content-Type: application/x-www-form-urlencoded' https://localhost:9444/oauth2/token


Here you need to replace the <clientID>:<clientSecret> with the values related to the “default application” which you have used to subscribe to the API within the store. This will return the access token which you can use to access the API. A sample response is given below.

{"access_token":"400f2a54-53d8-3146-88e3-be1bf5e7450d","refresh_token":"c2656286-449f-369f-9793-2cee9132de9f","scope":"default","token_type":"Bearer","expires_in":3600}

  • You can use this access token to consume the API with the below mentioned curl request.

curl -v -H "Authorization: Bearer 400f2a54-53d8-3146-88e3-be1bf5e7450d" http://172.18.0.1:8281/jwt/1.0.0

Implementing a service mashup with WSO2 API Manager

WSO2 API Manager is one of the leading open source API management platforms available in the market. According to recent Gartner research (2018), it has been identified as the best “visionary” type vendor in the market. It comes with support for full API lifecycle management, horizontal and vertical scalability, and deployment options of on-premise, public cloud (SaaS) and managed (private) cloud. In this article, I’m going to discuss how you can implement a service mashup (or service orchestration) with WSO2 API Manager within 10 minutes.
Let’s get started by downloading the WSO2 API Manager from the following link.

https://wso2.com/api-management/install/

Once you have downloaded the product, you can install it to the desired location. Let’s refer to the directory in which WSO2 API Manager is installed as “APIM_HOME”. You can start the product using the following command from the APIM_HOME directory.

$ sh bin/wso2server.sh
Before implementing the use case, let’s understand the scenario with the below image.



In this scenario, we have 2 backend microservices called “Service1” and “Service2” and they produce the following results.

Service1 = {"id":200,"name":"IBM","price":234.34}

Service2 = {"name":"IBM","industry":"technology","CEO":"John Doe","Revenue":"23.5 billion USD"}

Now we need to mashup these two responses to produce a result similar to below.

{results: [{"id":200,"name":"IBM","price":234.34}, {"name":"IBM","industry":"technology","CEO":"John Doe","Revenue":"23.5 billion USD"}]}


Let’s see how we can achieve this requirement with WSO2 API Manager. You can log in to the publisher portal and start creating an API as depicted below.
  • Create a new API by clicking on the “Add API” button and then selecting option 3, “Design a new REST API”. That will bring up the below-mentioned interface, where you need to configure the API definition.

  • Once the above interface is filled with the values depicted above, click on “Next: Implement”. This will bring you to the interface where you need to configure the API implementation logic. Here you need to configure the URL of “service1” as the endpoint URL and then select the “Enable Message Mediation” option to upload the service mashup logic, which is implemented as a custom mediation policy.

We are going to implement the service mashup logic within a custom mediation policy, written in the “Synapse” mediation language (XML). The mashup logic is shown below.


mashupSeq.xml



<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="mashupSeq">
    <log level="full">
        <property name="STATUS" value="RESP-1"/>
    </log>
    <enrich>
        <source type="body" clone="true"/>
        <target type="property" property="response1"/>
    </enrich>
    <call>
        <endpoint>
            <http method="POST" uri-template="http://localhost:9091/service2"/>
        </endpoint>
    </call>
    <log level="full">
        <property name="STATUS" value="RESP-2"/>
    </log>
    <enrich>
        <source type="body" clone="true"/>
        <target type="property" property="response2"/>
    </enrich>
    <payloadFactory media-type="json">
        <format>{results: [result1:$1, result2:$2]}</format>
        <args>
            <arg xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:ns3="http://org.apache.synapse/xsd" evaluator="xml" expression="$ctx:response1"/>
            <arg xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:ns3="http://org.apache.synapse/xsd" evaluator="xml" expression="$ctx:response2"/>
        </args>
    </payloadFactory>
    <respond/>
</sequence>

In the above sequence, we save the response of the first endpoint call to a property called “response1”, then call the second endpoint, “service2”, and save the result in another property called “response2”. After that, using a payload factory mediator, we mash up the 2 responses and create the final response. Then, using the <respond> mediator, we send back the response.

You need to upload this custom sequence as the “outFlow” of this API as depicted in the above image.
  • Once the above mediation sequence is uploaded, click on “Next: Manage” button and select the “Unlimited” subscription tier and click “Save and Publish” button to publish the API as depicted in the below image.



Now the API is created and published to the API store. Next, let’s start the backend services. These services are implemented in the Ballerina programming language. Here is the source code of these 2 services.

https://gist.github.com/chanakaudaya/bf38a28a6b6b43911d7f2b1a2c65951a

https://gist.github.com/chanakaudaya/714a1dac25beff34733fac4e1d86f3cf


Once these 2 services are started, they will be running on the below URLs.

http://localhost:9090/service1

http://localhost:9091/service2
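Before invoking the mashup API, you can optionally verify that the two backends respond on their own. The payloads below are assumptions based on the request used later in this article.

curl -X POST http://localhost:9090/service1 -H "Content-Type: application/json" -d '{"name":"WSO2"}'
curl -X POST http://localhost:9091/service2 -H "Content-Type: application/json" -d '{"name":"WSO2"}'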
  • Let’s log in to the API Store and subscribe to this API using the default application and execute the API using the generated access token.



Click on the “Applications” tab and generate an access token to consume the API.




Now we have the access token to execute the API. Let’s send a CURL request to get the result.

curl -d "{\"name\":\"WSO2\"}" -H "Content-Type: application/json" -X POST -H "Authorization: Bearer d30d7e25-bb40-3e70-86b0-714f86784cd2" http://localhost:8281/mashup/v1

You will get the below result.

{results: [result1:{"id":200,"name":"IBM","price":234.34}, result2:{"name":"IBM","industry":"technology","CEO":"John Doe","Revenue":"23.5 billion USD"}]}


That’s all. You can mash up the results based on your requirements by modifying the payload factory mediator.

Building a fully automated CI/CD process for API development with WSO2 API Manager

I wrote a Medium post on how to build a fully automated CI/CD process to develop APIs with WSO2 API Manager. The original post can be found at the link below. In the post I discuss how to utilize WSO2 API Manager product-level APIs to implement the continuous integration (CI) and continuous deployment (CD) aspects within your enterprise ecosystem, and how that can be fully automated with tools like GitHub and Travis CI.

https://medium.com/wso2-learning/building-a-fully-automated-ci-cd-process-for-api-development-with-wso2-api-manager-d787431110aa

Understanding the modern enterprise integration requirements

Enterprise Application Integration (EAI) is a complex problem to solve, and different software vendors have produced different types of software products such as ESBs, application servers, message brokers, API gateways, load balancers, proxy servers and many other forms. These products have evolved from monolithic, heavyweight, high-performing runtimes to lean, modularized micro-runtimes. Microservices Architecture (MSA) is having a major impact on the way architects design their enterprise software systems. The requirements which existed 10 years ago have changed drastically due to the modern advancements of MSA, containers, DevOps, agility and, mainly, ever-growing customer demands.
 
The below post discusses the requirements which need to be fulfilled by modern enterprise application integration projects.
 
 
 

Understanding WSO2 Product updates and open source release model


How to build a CI/CD pipeline for WSO2 ESB (WSO2 EI)

Continuous Integration and Continuous Deployment (CI/CD) has become a core requirement within the enterprise IT ecosystem. It does not matter what type of software you develop (e.g. back-end services, UI/UX, APIs, middleware); you should be able to automate your development, test and deployment process. WSO2 ESB is one of the popular open source integration solutions available for implementing middleware services. The below-mentioned Medium post discusses how to implement a CI/CD pipeline with WSO2 ESB.



WSO2 API Manager and open source

WSO2 API Manager has become a leader in the API management space according to the latest Forrester Wave report, published on 30th October 2018 (Q4 2018). This announcement marks a significant milestone for open source development, since all the other vendors in the Leaders section are proprietary vendors. In the below Medium post, I discuss the important aspects of this announcement for the enterprise software world and the open source community.

https://medium.com/wso2-learning/how-wso2-api-manager-is-leading-the-open-source-pack-34277ad1a80f


How to expose your database as a managed API with WSO2 in 10 minutes

Database technologies are growing at a somewhat slower rate than other technology areas. Still, databases remain a major component of any enterprise software architecture. The lack of innovation on the user experience side of database technologies, and the amount of “fear” people have about exposing databases to end users, have pushed databases into a corner within enterprise software systems. If you need to access a particular data set from a database table, you need to go through several layers and go after DB admins to get your work done. Databases have become the “precious” things in the enterprise. But does it need to be that way forever?

The below Medium post explains how databases can be exposed through a managed API, which will democratize database access within your enterprise.

https://medium.com/wso2-learning/how-to-expose-your-database-as-a-managed-api-with-wso2-in-10-minutes-c9ac2595738b

Kubernetes Deployment Pattern

Kubernetes has become the de facto standard for managing container-based deployments. It is a container orchestration platform which provides capabilities like the following (a short kubectl sketch follows the list):

  • Automatic scaling
  • Self-healing
  • Load balancing 
  • Monitoring
  • Service discovery
  • Storage sharing
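Most of these capabilities can be exercised directly with kubectl. The following is a minimal sketch using hypothetical names and images, not a production configuration.

# Declare a workload and scale it manually and automatically
kubectl create deployment orders --image=myrepo/orders:1.0
kubectl scale deployment orders --replicas=5
kubectl autoscale deployment orders --min=2 --max=10 --cpu-percent=80
# Expose it as a load-balanced, discoverable service and watch self-healing replace failed pods
kubectl expose deployment orders --port=80 --target-port=8080
kubectl get pods -l app=orders -w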
In a real enterprise deployment, it is essential to understand how Kubernetes can be used. The following Medium post explains how to use Kubernetes in an enterprise context.



Istio Service Mesh Pattern

Microservices architectures are becoming more and more complex and challenging to maintain. Given the advantages they bring to the enterprise IT ecosystem, however, the complexity is outweighed by the benefits. A service mesh allows enterprises to control communication amongst microservices as well as communication from external clients to the microservices. Istio is becoming more and more popular as a service mesh because it provides both the data plane and control plane capabilities of a service mesh.

The following medium post explains the Istio Service Mesh along with its usage.
https://medium.com/microservices-learning/istio-service-mesh-pattern-61a2848704bf


Building an effective microservices strategy with Service Mesh and API Management

Microservices architecture is not a silver bullet for all your enterprise IT system requirements. Some requirements only surface after implementing a microservices architecture. Service mesh and API management are mechanisms which work alongside microservices to build a comprehensive microservices architecture. The below post explains how these 3 pillars can help you build an effective microservices platform.

https://medium.com/microservices-learning/microservices-service-mesh-and-api-management-7408c001fb31

Understanding Multi Cloud Enterprise Deployment architecture

If your enterprise is thinking about moving to the cloud, you should be thinking about moving to multi-cloud, because you can achieve the below-mentioned advantages.

  • Provides much better service availability — Given that your applications are running across multiple cloud environments, your services can run without much interruption even if an entire cloud system goes down.
  • Frees enterprises from vendor lock-in — Cloud vendors always try to pull as many applications as possible into their own cloud, which locks you into a single vendor so that you cannot get away from them even when you need to. With the multi-cloud approach, you are not locked into any vendor and can move away from them at any time.
  • Lets you negotiate better deals — Once you are in multi-cloud, you have more power to demand discounts since every vendor wants to increase their share of your account.
  • Lets you use the best technology for each application — Different cloud vendors are strong in different areas. When you have a multi-cloud strategy, you can select the best technology for your task from the respective cloud vendor.
  • Provides better performance to consumers — Your applications can run on multiple clouds and expose their functionality through cloud gateways which are close to the consumers in the relevant cloud deployments, based on location.
The below Medium post discusses the multi-cloud enterprise deployment pattern in detail.



Architecting a modern digital platform with Open Source Software

The digital business landscape is helping businesses grow beyond geographical boundaries. Transforming your business into a digital business is no longer optional; it has become a necessity. Early adopters, late bloomers, methodical players: every enterprise is trying to modernize its IT ecosystem to improve efficiency and become a leader in its respective domain. If you are an enterprise architect responsible for building a digital platform from scratch, modernizing an existing IT platform or lifting and shifting an existing deployment into the cloud, there are hundreds of different software and technology vendors available to support your effort. The days of proprietary software are long gone, and people are increasingly migrating towards open source software (OSS). One of the major challenges of adopting OSS is the maintenance overhead, but that challenge is now absorbed by the mega-cloud vendors as well as by the cloud services offered by the vendors who created the OSS in the first place.

In this article, I discuss building a modern digital platform with open source software.
https://medium.com/@chanakaudaya/architecting-a-modern-digital-platform-with-open-source-software-25098933813e


Understanding Apache Kafka architecture

Apache Kafka has evolved into the de facto standard for building reliable event-based systems at ultra-high volumes. Its unique yet simple architecture has made Kafka an easy-to-use component which integrates well with existing enterprise architectures. At a very high level, it is a messaging platform which decouples message producers from message consumers while providing reliable message delivery to consumers at scale. Producers send messages (events) to Kafka, which stores them in an entity called a topic and delivers them to one or more consumers in a reliable manner. There can be different types of producers and consumers depending on the use case.
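As a minimal illustration of that flow, the standard Kafka CLI tools can be used to create a topic, produce messages to it and consume them. This sketch assumes a local broker on localhost:9092; older distributions use --zookeeper/--broker-list instead of --bootstrap-server.

# Create a topic, then produce and consume messages through it
bin/kafka-topics.sh --create --topic orders --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092
bin/kafka-console-producer.sh --topic orders --bootstrap-server localhost:9092
bin/kafka-console-consumer.sh --topic orders --from-beginning --bootstrap-server localhost:9092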

In the below post, I explain how the Kafka architecture works and integrates with other systems.

https://medium.com/@chanakaudaya/understanding-apache-kafka-the-messaging-technology-for-modern-applications-4fbc18f220d3


Understanding distributed systems messaging styles

Distributed systems have come a long way from where they started, and they still have a long way to go. With more and more people connected through telecommunication technology over the largest distributed system on the planet, the internet, the demand for efficient distributed architectures is more important than ever. This is not a tutorial on distributed systems, though; rather, it shares some thoughts on building a futuristic distributed system which addresses the challenges of modern needs, keeping the focus on messaging styles.
In a distributed system, messaging plays a pivotal role. Data flows from one system to another through messages. Different protocols and formats are used to share data within a distributed system as messages or events. Another key aspect of messaging is the nature of the communication.
The below post explains different messaging styles and how those can be used to build a future proof distributed system for the enterprise.

Understanding Pivotal Cloud Foundry - moving your enterprise back to on-premise

WSO2 API Microgateway deployment patterns

With this new microgateway, WSO2 allows enterprises to deploy API gateways with a whole new set of deployment patterns. In this article, I will be talking about a few possible deployment patterns with WSO2 API Microgateway. Though this article is focused on WSO2 API Microgateway, some patterns can be used independently of it.
Here is the link to the original medium post.
