
WSO2 ESB 4.9.0 Released !!!

It has been months since I wrote a blog post. One of the main reasons for that is the subject of this post. We have been working on the ESB 4.9.0 release day in, day out, and most of the time beyond midnight. With all that effort, we have released the most awaited ESB release yet. You can download the binary from the following location:

http://wso2.com/products/enterprise-service-bus/

What is special about WSO2 ESB 4.9.0

WSO2 ESB has been well known for the near-zero latency and ultra-high performance of its mediation engine. We have made this mediation engine faster and more feature rich than ever.


  • Inbound endpoints make the ESB the ultimate integration engine, with more dynamism.
  • Coordination support for scheduled tasks, message processors and inbound endpoints makes sure that you are safe as long as there is at least one node up and running.
  • Integration with different MQ protocols has been extended with RabbitMQ, MQTT and Kafka.
  • Message storing capabilities have been extended with JDBC and RabbitMQ message stores, in addition to the JMS and in-memory message stores.
  • Connecting to external APIs has been improved with the 100+ connectors available in the connector store.
  • File handling capabilities have been improved with the file inbound endpoint, improved FTP and SFTP support, and distributed locking through coordination.
  • 100+ improvements and 600+ bug fixes.

Inbound endpoints

WSO2 ESB has supported HTTP/S, JMS, File, Mail, RabbitMQ, SMS and many other transport mechanisms from its early days. These were the interfaces for connecting with the ESB. Of the supported transports, only HTTP/S was supported in a multi-tenanted environment. This has been addressed with the concept of inbound endpoints.

  • All the OOTB inbound endpoints, which include HTTP/S, MQTT, HL7, Kafka, File, JMS and RabbitMQ, have multi-tenant support.
  • One major advantage of using an inbound endpoint over a traditional axis2 transport is that you can dynamically configure inbound endpoints without restarting the server.
  • You can create different interfaces for different ports and route incoming requests to different proxies/APIs/sequences by applying filters.
  • You can configure dedicated thread pools per inbound endpoint.
  • The inbound framework allows you to extend and write your own inbound implementations of type polling or listening.
You can get more detailed information from the following blog posts.
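To make the configuration model concrete, here is a rough sketch of a file inbound endpoint (the parameter names follow the WSO2 ESB 4.9.0 inbound endpoint documentation as I recall it; the sequence names, polling interval and file URIs are placeholders):

<!-- Sketch only: sequence names, interval and file URIs are placeholders -->
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                 name="fileInbound"
                 protocol="file"
                 sequence="fileProcessSeq"
                 onError="errorSeq"
                 suspend="false">
    <parameters>
        <parameter name="interval">5000</parameter>
        <parameter name="sequential">true</parameter>
        <parameter name="coordination">true</parameter>
        <parameter name="transport.vfs.FileURI">file:///home/user/in</parameter>
        <parameter name="transport.vfs.ContentType">text/xml</parameter>
        <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
        <parameter name="transport.vfs.MoveAfterProcess">file:///home/user/done</parameter>
    </parameters>
</inboundEndpoint>

Because the endpoint is a deployable artifact rather than an axis2 transport, it can be added, updated or removed at runtime without a server restart.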


Coordination support

One of the major limitations of ESB 4.8.1 and prior versions was the lack of coordination support when it comes to message processors, scheduled tasks, and JMS and File (VFS) use cases. In a production setup, we had to resort to tricks like pinned servers or separate listeners for separate file extensions. With the task coordination support of the Carbon framework, we have addressed all the issues that existed in earlier versions with a much cleaner approach.

  • Message processors can be deployed to a cluster of nodes; they will coordinate with the other nodes and make sure that there is always a message processor running in the cluster as long as at least one node is up and running. Scheduled tasks have the same behavior.
  • VFS (File) processing is coordinated across the cluster, so no file locking issues or duplicate file processing will occur.
  • All the inbound endpoints are written on top of the task framework, which makes sure your inbound endpoints are always running somewhere in the cluster, without manual intervention, even when one or two nodes are down.
You can find a more detailed description of the message store/message processor (MSMP) use cases in the following blog post.


Integration with MQ protocols

Message queueing has been a heavily used enterprise integration pattern since the beginning of the SOA world. We have extended the MQ support with improved transports and new inbound endpoints for the following transports:
  • RabbitMQ (improved)
  • MQTT (new)
  • JMS (improved)
  • Kafka (new)

Improved Message Storing capabilities

Although the ESB is a stateless mediation engine that routes/transforms incoming messages without keeping any state, message stores let you persist messages for patterns like store-and-forward. We have improved the message storing capability of the ESB with new implementations:
  • JMS store (improved)
  • RabbitMQ store (new)
  • JDBC store (new)
In addition to the above improvements, we have introduced a guaranteed message delivery pattern with a failover message store and message processor implementation. If your original message store is not available, you can configure a secondary failover store and processor to make sure that you do not lose any messages; a sketch of the wiring appears below.
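As a rough sketch of how the pieces fit together (the class and parameter names are as I recall them from the ESB 4.9.0 guaranteed-delivery documentation; the store names, JMS destination and polling interval are placeholders, and the JMS connection parameters are omitted):

<!-- Sketch only: an in-memory store acting as the failover for a JMS store -->
<messageStore name="failoverStore"
              class="org.apache.synapse.message.store.impl.memory.InMemoryStore"/>

<messageStore name="originalStore"
              class="org.apache.synapse.message.store.impl.jms.JmsStore">
    <!-- JMS connection parameters omitted for brevity -->
    <parameter name="store.jms.destination">ordersQueue</parameter>
    <parameter name="store.failover.message.store.name">failoverStore</parameter>
</messageStore>

<!-- Forwards messages from the failover store back to the original store once it recovers -->
<messageProcessor name="failoverProcessor"
                  class="org.apache.synapse.message.processor.impl.failover.FailoverScheduledMessageForwardingProcessor"
                  messageStore="failoverStore">
    <parameter name="message.target.store.name">originalStore</parameter>
    <parameter name="interval">1000</parameter>
</messageProcessor>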

You can find more information on the JDBC message store implementation in the post below.
https://buddhimawijeweera.wordpress.com/2015/06/06/wso2-esb-jdbc-message-store/


Connector store with 100+ connectors

With the new connector store, you can connect to external APIs provided by different organizations through WSO2 ESB connectors. There are more than 100 connectors available for download, free of charge. These connectors cover most enterprise-level APIs, such as Salesforce, Google, Amazon, GitHub, PayPal and many more. You can download and try out connectors from the connector store.



Stability and improvements

We have made 100+ improvements to existing features while adding all of the new features mentioned (and not mentioned) above. We have also fixed 600+ bugs that existed in previous versions of the ESB, making the product more stable than ever. In addition, we have updated most of the third-party dependencies to their latest versions.

That's all that comes to mind at the moment; there are more exciting features in the product which I have left for you to discover. :)

Keep Winning !!!


HTTP/2 tutorial for beginners

HTTP is the most widely used application layer protocol in the world; the entire Web runs on top of it. HTTP 1.1 was introduced in 1999 and is still the de-facto standard for web communication. As the web and the way people interact with it evolved (mobile devices, laptops, etc.), the protocol was hacked again and again to provide new functionality. This hacking is no longer sustainable, and the internet needed a new protocol version. That is why the IETF developed the HTTP/2 protocol to address the challenges faced by the web community. You can find the latest draft of the protocol here.

Why we need HTTP/2

  • In the early days, bandwidth was the limiting factor. Today, the average internet user in the US has a bandwidth of about 11 Mbit/s.
  • Latency is the new bandwidth. End users will not worry about bandwidth as long as they get more responsive applications.
HTTP 0.9 - The initial version of the protocol, introduced in 1991. Required a new TCP connection per request. Only the GET method was supported.

HTTP 1.0 - An improved version with the POST and HEAD methods for transferring richer content. New header fields were introduced to describe the request (e.g., Content-Length). Still uses a connection per request.

HTTP 1.1 - New methods like PUT, DELETE and OPTIONS were added. Keep-alive (persistent) connections became the default. Improved latency.

Challenges with HTTP 1.1
When loading web pages with multiple resources, browsers send parallel requests to reduce latency. But this needs more resources, since new connections have to be created, which itself hurts latency. There are hacks like HTTP pipelining, which sends multiple requests through the same TCP connection asynchronously; but the server must respond to them in order, so a single slow resource increases latency and blocks the requests behind it (head-of-line blocking).

How HTTP/2 addresses the challenges of HTTP/1.1

The major design goals of the HTTP/2 protocol were to address the issues present in the HTTP/1.1 protocol:
  • Reduce the latency
  • Reduce total number of open sockets (TCP connections)
  • Maintain high level compatibility with HTTP/1.1
In addition to addressing these challenges, HTTP/2 introduces several new features which were not in HTTP/1.1 and which improve the performance of the web.

What is new in HTTP/2

  • Multiplexing - Multiple requests can be sent over a single TCP connection asynchronously.
  • Server push - The server can asynchronously send resources to the client's cache for future use.
  • Header compression - Clients do not need to send the same headers with each and every request; only new or changed headers need to be sent.
  • Request prioritization - Some requests can be given more memory, CPU and bandwidth within the same TCP connection.
  • Binary protocol - Data is transmitted as binary frames, not in text form as in HTTP/1.1.

How HTTP/2 works

  1. The client sends an upgrade request over HTTP/1.1, and if the server supports HTTP/2, it responds with a 101 (Switching Protocols) response. The client can then send HTTP/2 requests over the same connection (see the example exchange after this list).
  2. Every request and response is given a unique ID (stream ID) and divided into frames. The stream ID is used to identify the frames belonging to a given request/response. A single TCP connection is used to connect to a single origin only.
  3. A stream can have a priority value, and based on it the server decides how much memory, CPU and bandwidth to allocate to the request.
  4. The SETTINGS frame is used to apply HTTP-level flow control (number of parallel requests per connection, data transmission rate, number of bytes for a stream).
  5. Header compression makes sure that headers are not transmitted redundantly with every request. Both client and server maintain a header table containing the last request and response headers and their values. When sending a new request, the client sends only the additional headers.
  6. Server push enables developers to load contained or linked resources in an efficient way. The server proactively sends resources to the client's cache for future use. This is somewhat different from the server push concept of the WebSocket protocol, where the server can send events or data to clients at any time, even without a request from the client; HTTP/2 server push still complies with the request/response pattern.
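For illustration, a cleartext upgrade (h2c) looks roughly like this on the wire (per the HTTP/2 specification; the host and path are placeholders):

GET /index.html HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded HTTP/2 SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[from here on, the same TCP connection carries binary HTTP/2 frames]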
I would like to thank the authors of the blog posts I referred to when writing this post.


How to select an integration solution (ESB) for your enterprise IT

If you are an IT architect responsible for designing your enterprise IT system, you have probably already encountered this topic. Modern IT systems consist of many different systems developed by different vendors with different technology stacks. Even though they are developed independently, those systems should interact with each other to provide a seamless business experience to your stakeholders. This is where the challenge of integration comes into the picture. Integrating heterogeneous systems is a very complex task, and you need to consider many things before choosing a proper solution. In this blog post, I am going to discuss the factors you need to consider when choosing an integration solution for your enterprise. I have borrowed some ideas from the following blog post [1], which covers a similar topic.

[1] http://www.infoq.com/articles/ESB-Integration

Mainly, there are two streams of product categories available in the market:


  1. Open source software
  2. Proprietary software
The following table describes the capabilities of these two streams of products with respect to the factors you need to consider when choosing an integration solution.




Fact to be considered | Open source | Proprietary
Usability | Easy to install in a few minutes; supports most operating systems; easy to play around with and explore features; low memory footprint | Complex installations that sometimes require consultants; good documentation; high memory footprint and a considerable learning curve
Maintainability and monitoring | Administration and monitoring capabilities are not up to the level of proprietary solutions | Powerful visual tools for administration and monitoring, integrated into the solution
Community | Open source community around the respective projects; own community with a lot of free information in the form of blogs, articles and tutorials | Little or no free community; buy support and you get access to forums and other information
Enterprise support | Most open source companies earn money by providing support; support is provided by the engineers who develop the code (WSO2); quick and customer-friendly support | Pay more and get good support; if you pay less, you have to do most things yourself by going through forums
Flexibility | Provides customizations pretty quickly; quick bug fixes | Hard to get new features, and they take more time (even years); if you pay more, you have a chance of getting them earlier
Functionality | May lack some features; always improving, and improving quickly | Concrete set of features; stable releases with fixed road maps
Extensibility | Easy to extend with clear extension points; your custom code has the same privileges as internal code (WSO2) | Do it yourself, or pay more and get new components/products
Connectors | Fully or partially free connectors; supports most of the important applications | Full set of connectors which you need to buy ($$$)
Costs | Low and decent pricing | High, and may increase as the project goes on
Licensing | Business-friendly (Apache 2) and transparent licensing; pay for what you want; no hidden fees | Complex price list which is hard to understand until you are charged
Vendors | WSO2, Fuse ESB, Mule ESB, Talend ESB, AdroitLogic ESB, Apache ServiceMix | Oracle ESB, IBM WebSphere, TIBCO, Microsoft BizTalk, SAP NetWeaver PI, Progress Sonic

According to your requirements and budget, you can go for either open source or proprietary solutions. 

What is different in WSO2 ESB compared to other integration solutions

In my previous post, I discussed the factors you need to consider when selecting an integration solution for your enterprise IT infrastructure. I didn't give any hints about which vendor is best, since that depends on your requirements, budget and many other things. In this post, I will compare several open source and proprietary solutions and differentiate the features of WSO2 ESB (my employer's product) from them.

Proprietary ESB 


Pros 
  • A suite of solutions that work smoothly with each other (e.g., CEP, BPEL, MB, Registry)
  • Powerful and stable tooling and monitoring capabilities
  • Excellent support for the price
Cons
  • High price, high complexity
  • Licensing and non-transparent pricing model
  • Different components may come from different code bases acquired from different companies
  • Heavy products
  • Installation may need consultants and take more time

Open source ESB


Pros
  • Simple installation and intuitive tooling
  • Some vendors offer a free version and a paid commercial version with more features
  • Customer-friendly licensing and pricing
  • Connectors available for B2B integrations
  • Great community support thanks to well-known open source projects (e.g., Camel, CXF, Synapse)
  • Common code base for all the components (Talend, WSO2)
  • Zero coding, plus custom Java code and scripting support
Cons
  • Some vendors are not fully open source; the commercial version's source code is proprietary.
  • Tooling support may not be up to the same level as proprietary offerings.
  • Product documentation is not as comprehensive as that of proprietary products.
Most of the advantages of open source ESBs are present in WSO2 ESB. Here is the full set of advantages you get with WSO2 ESB.

WSO2 ESB


Pros

  • Easy to use, lightweight, lean product which requires minimal resources and is easy to install
  • Open source and free. There is no separate closed-source commercial version; all features are available in the free version
  • Provides an entire range of products to cover enterprise needs, such as ESB, Business Process Server, an analytics platform, Message Broker, a governance solution, and mobile and security solutions, all of which interact smoothly
  • All the products are based on a single code base, built on a componentized architecture with a common kernel (Carbon)
  • Easy to extend, with custom extension points that give custom code the same first-class privileges as internal code
  • 150+ free and open source connectors to connect with B2B applications and third-party cloud APIs
  • Customer-friendly, transparent pricing model and different levels of customer support
  • Zero coding, fully configuration based, with ever-improving visual tooling based on the Eclipse platform. No programming skills required
  • Cloud-native implementations make it easy to move your enterprise solutions to the cloud without any hassle
  • A lot of community support from Apache projects and WSO2's own community through blog posts and articles
  • Support is provided by the same engineers who develop the code, and hence better response times
Cons

  • Visual tooling is not yet up to the level of proprietary solutions
  • Product documentation is not as comprehensive as that of proprietary solutions

WSO2 ESB Performance tuning for XSLT transformations

This blog post is a continuation of the performance tuning article series on this blog. Today I will be talking about one of the most heavily used components of WSO2 ESB and how to tune the ESB's performance for transformation use cases.

When doing transformations within the ESB, we can use the XSLT or FastXSLT mediators for complex transformations. The FastXSLT mediator is similar to the XSLT mediator, but it uses the streaming XPath parser and applies the XSLT transformation to the message stream instead of to the XML message payload. The result is a faster transformation, but you cannot specify the source, properties, features, or resources as you can with the XSLT mediator. Therefore, the FastXSLT mediator is intended to be used to gain performance in cases where the original message remains unmodified. Any pre-processing performed on the message payload will not be visible to the FastXSLT mediator, because the transformation logic is applied to the original message stream instead of the message payload. In cases where the message payload needs to be pre-processed, use the XSLT mediator instead of the FastXSLT mediator.

Here is a comparison of the performance of the XSLT and FastXSLT mediators.


Message size | Response time XSLT (ms) | Response time FastXSLT (ms) | TPS XSLT | TPS FastXSLT | CPU % XSLT | CPU % FastXSLT | Concurrency
8k | 14.76 | 6.34 | 550 | 610 | 70 | 60 | 50
8k | 52.74 | 20.68 | 775 | 1035 | 75 | 75 | 100
8k | 104.16 | 50.37 | 825 | 1177 | 90 | 85 | 150
105k | 285.97 | 25.78 | 14 | 49 | 90 | 50 | 5
105k | 611.74 | 36.26 | 15 | 88 | 100 | 70 | 10
105k | 946.63 | 48.37 | 15 | 121 | 100 | 80 | 15

From the above results, you can clearly see that there is a huge performance improvement with the FastXSLT mediator. You can enable the FastXSLT mediator by adding the following configuration to your ESB.

Add the following parameters to the <ESB_HOME>/repository/conf/synapse.properties file to enable streaming XPath:
synapse.streaming.xpath.enabled=true
synapse.temp_data.chunk.size=3072 


To enable the FastXSLT mediator, your XSLT script must include the following attribute in the XSL output.
omit-xml-declaration="yes"

e.g.: <xsl:output method="xml" omit-xml-declaration="yes" encoding="UTF-8" indent="yes"/>

All is good if you are not doing any payload processing before the XSLT transformation. But in reality, you may find use cases where you need to pre-process the payload before doing the actual message transformation with the XSLT mediator.

Looking at the above results, it is a bit worrying if you cannot use the FastXSLT mediator, because the XSLT mediator's numbers are very low when the message size is large (105k). In the next section, I will talk about how to tune the performance of the XSLT mediator itself.

In the XSLT mediator, WSO2 ESB uses two parameters to control the memory usage of the server. These two parameters can be configured in the synapse.properties file as shown below.

synapse.temp_data.chunk.size=3072
synapse.temp_data.chunk.threshold=8

These parameters decide when to write to the file system as the message size grows. The default values allow WSO2 ESB to process messages up to 3072*8 = 24K in size with the XSLT mediator without writing to the file system. This means there will be no performance drop in the XSLT mediator when the message size is less than 24K. But when the message size is 105K, you can see a clear performance degradation:

105k | 285.97 (XSLT) | 25.78 (FastXSLT)

We can improve the performance of the XSLT mediator for large messages by allowing it to use more memory, configuring the parameters as shown below.

synapse.temp_data.chunk.size=16384
synapse.temp_data.chunk.threshold=32

With the above parameters, the XSLT mediator can process messages up to 16K*32 = 512K without writing to the file system. With this configuration in place, here are the results.


Message size | Response time (ms): XSLT | XSLT-32k-Buffer | FastXSLT | TPS: XSLT | XSLT-32k-Buffer | FastXSLT | CPU %: XSLT | XSLT-32k-Buffer | FastXSLT
105k | 285.97 | 53.7 | 25.78 | 14 | 39 | 49 | 90 | 50 | 50
105k | 611.74 | 75.59 | 36.26 | 15 | 66 | 88 | 100 | 70 | 70
105k | 946.63 | 115.98 | 48.37 | 15 | 78 | 121 | 100 | 80 | 80
From these results, we can clearly see that the performance of the XSLT mediator has improved by more than 500%.

You can tune these parameters according to your message sizes and observe massive performance improvements with the XSLT mediator (and the FastXSLT mediator).


Why Micro Services Architecture (MSA) is nothing but SOA in a proper, evolved state

If you are in the enterprise IT domain, chances are you have heard the term "microservices". Many people in your organization will be talking about it, and you may already have read a lot of material on the term. First of all, it is a great idea, and if you can use these concepts in your enterprise, that is pretty good. So then, what is this post all about?

Let me explain a bit about the topic and the message I want to spread. If you have been an IT professional long enough, you may have gone through the hype of SOA and might have adopted it in your enterprise. After spending millions of dollars and years of engineering time, you now have a solid SOA adoption and everything is running well (not perfectly). As you know, the technology industry does not allow you to settle down. It does not care about your money or time; it keeps throwing new concepts and jargon into the picture. Microservices is the kind of thing you may have come across recently. With this blog post, I want to show people who have spent most of their budget on SOA adoption that you don't need to worry about the MSA hype. You are already doing it, and it is a seamless transition from where you are to where you need to go with MSA (if you want to go that way).

I will start with a list of complaints people make about existing SOA architectures, which are presented as advantages of MSA.


  • Applications are developed as single monoliths and deployed as a single unit
  • Applications are utterly complex due to the many components and their interactions
  • It is hard to practice agile development strategies due to tight coupling
  • It is hard to scale parts of the application, since the entire application needs to be scaled
  • Reliability issues: the failure of one part of the application may cause the entire application to stop functioning
  • Startup time is minimal (in MSA), since we don't need to start up fully fledged servers
Well, that is a set of claims that should alarm you if your system suffers from them. Does that mean you need to scramble and pull together all your resources to learn about MSA? Before doing that, let me explain how you can improve your existing system to fulfill these requirements without knowing anything about MSA (I'm not saying you shouldn't learn about it).



Applications are developed as single monoliths and deployed as a single unit

If you followed the SOA principles in the first place, you should not encounter this issue. The fundamental concept of SOA is to modularize your systems into independent services that cater to specific requirements. If you have developed a single monolith with all the capabilities, then go and fix it. This is nothing new from MSA; the idea was already there, and it just was not executed properly. If you needed to deploy these services on separate servers, you could have done that. But there were no concepts like containers back then, and you didn't want to waste one server on one service. Container-based deployment does not come from MSA either; it is already here, and you can utilize it with your existing SOA services.

Utterly complex applications due to many components and their interaction

This is something you cannot get rid of even if you adopt MSA. It really depends on the capabilities of your application and the way you have developed and wired the different components. You can revisit your application and design it properly, but that is independent of SOA or MSA.

Even hard to practice agile development strategies due to tight coupling

Coupling between different services is ultimately a design choice, and it will be there whether you are using MSA or not. If you design your services properly, you can work in an agile way.

Hard to scale parts of the application since entire application needs to be scaled

This is again a design choice you made in the past when you coupled different services and deployed them on the same server. If you had designed the system according to SOA principles, and if you had container-based deployments, you would not have encountered this. Nothing here comes from MSA.

Reliability issues with one part of the application failure may cause the entire application to stop functioning

Once again, container-based deployments and a proper design of your services would have fixed this kind of issue.

Startup time is minimal, since we don't need to startup fully fledged servers

Nothing specific to MSA. Container-based deployments and serverless applications could have fulfilled this requirement.

All in all, considering the above facts, we can see that there is not much new coming from microservices architecture: it is a set of things that were already there in SOA, plus newer ideas like container-based deployments, repackaged under a special word. I don't have any intention of criticizing the term or its importance. What I want to tell you is that there is not much you need to change if you are already doing SOA in your enterprise and are willing to adopt MSA.

One last thing I want to mention: sometimes people think that they don't need the integration layer once they have MSA in place. That is one of the worst conclusions you could make, and it is not going to work in your enterprise. If you need further information on that, you can read the following links.

References:









Microservices, Integration and building your digital enterprise

It's time to rethink your enterprise IT ecosystem. The technology space is going through a period of major revamp, and whether you accept it or not, it is changing the way people do business. You may be a software architect at a multi-billion-dollar enterprise, or the only software engineer at a small startup trying to figure out its way in the business world. Either way, it is essential to know the direction of the technology space and make your moves accordingly. From 33,000 feet, enterprises throughout the world are moving (most have already moved) towards digital technology. You may have already brought several third-party systems into your IT ecosystem, and they are functioning well within your requirements. All is running well and your organization is profitable.
All is well; why bother with these hypes? Let me tell you one thing. The world of business is moving so fast that a billion-dollar company today can become an organization in debt within a very short period. There will be a new startup offering some cool ideas, and they will grab all your customers if you don't provide the innovation the world is demanding. It is hard to innovate without a proper infrastructure to deliver innovations to your customers. That is why you need to plan your IT ecosystem thinking not only about today but about the next few years.
Having said all that, there is always one thing stopping you from bringing these innovations to your organization: none other than the budget. Your boss might say, "Well, that is a cool idea. Can you do it for free?" Well, you can, to some extent. There are several open source solutions you can use to bring innovation to your enterprise. Let's focus more on the methods rather than the budget.
Let's consider a scenario where your organization is going to expose new business functionality to your customers through APIs, such that web clients and mobile applications can consume your services. To provide this new functionality, you need to integrate different internal systems, and you are going to develop a new set of services to cater to the business requirements. You have the following requirements for delivering the new business functionality to your customers:
  1. Providing APIs
  2. Integrate different systems
  3. Develop new services
There can be more requirements, like monitoring, etc., but let's focus on the major ones and start building your system. API management has been around for some time, and there are many open source and proprietary vendors to choose an offering from. For the integration of systems there are also many. The really interesting question is: how do I develop my services? As you may have heard, there is a new concept looming within the software industry for developing services: Microservices Architecture (MSA). You can read about MSA and its pros and cons almost everywhere. The idea of MSA is that you develop your services in a way that they can be developed, deployed and maintained in a loosely coupled, self-contained manner. Each service provides a real business functionality as a self-contained unit, so you can take a specific service down without shutting down your entire system. There are several microservices frameworks available as open source offerings; here is a list of promising MS frameworks.
  1. WSO2 MSF4J - http://wso2.com/products/microservices-framework-for-java/
  2. Spring Boot - https://spring.io/blog/2015/07/14/microservices-with-spring
  3. Spark - http://sparkjava.com/
  4. RestExpress - https://github.com/RestExpress
You can use any of the above-mentioned frameworks for developing your new services; a minimal example follows below. These services might expect messages in different formats, and you need an integration layer to deal with these different types of messages. The picture below shows the architecture of your digital enterprise, which consists of the previously mentioned key components (API, Integration, Services).
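As a quick illustration of how lightweight such a service can be, here is a minimal "hello world" sketch in the style of WSO2 MSF4J's annotation-driven API (the service name, path and response text are placeholders; check the MSF4J documentation for the exact API):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.wso2.msf4j.MicroservicesRunner;

// One business capability, one self-contained deployable unit
@Path("/hello")
public class HelloService {

    @GET
    public String hello() {
        return "Hello from a microservice!";
    }

    public static void main(String[] args) {
        // Embedded runtime: no application server to install or restart
        new MicroservicesRunner().deploy(new HelloService()).start();
    }
}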


Sometimes there is a misconception about MSA that it throws away the integration layer and builds on top of "dumb pipes" for message exchange. This is not correct. Especially when you have more and more microservices, you need an integration layer to deal with different message types and protocol types. You need to keep this in mind and plan for future requirements, rather than thinking about a simple service composition scenario where you can achieve all the communication using "dumb pipes".
One of the main areas of interest in MSA is the deployment strategy and the involvement of DevOps. You can deploy your microservices in containers so that they can be brought up and down whenever necessary without affecting other components. Integration solutions, in contrast, are more solid components in your architecture which do not need to be brought up and down so frequently. You can deploy them on typical hardware or virtual infrastructure without worrying about containerization.
Once you have this kind of architecture, you can achieve the following key benefits, which are critical in the modern business ecosystem:
  1. Ability to expose your services to multiple consumers (through APIs) in a rapid manner
  2. Ability to roll out new services quickly (microservices deployment is rapid)
  3. Ability to connect with third-party systems without worrying about their protocols or message types (integration layer)
Finally, you can add analytics and monitoring into the picture and make your system fault tolerant and well monitored for failures. That is a subject for a separate article, so I will stop right here.

Building Integration Solutions : A Rethink


History of Enterprise Integration

The history of enterprise integration goes back to the early computer era, when only large enterprises had computers. The early requirements came from Material Requirements Planning (MRP), which called for a system to plan and control production and material flows. With the growth of businesses and the interactions among third-party organizations, MRP evolved into Enterprise Resource Planning (ERP) systems, which are responsible for much more functionality, bridging the different departments of the enterprise like accounting, finance, engineering, product management and many more. Proprietary ERP solutions dealt with many complex use cases and, in some cases, failed really big. With these lessons, people realized that there should be a better way to build enterprise IT infrastructure beyond ERP systems.

Integration and SOA

Service Oriented Architecture (SOA) came into the picture at a time when the world was searching for a proper way to handle complex enterprise IT requirements. The Wikipedia definition of SOA is as follows:
“A service-oriented architecture (SOA) is an architectural pattern in computer software design in which application components provide services to other components via a communications protocol, typically over a network. The principles of service-orientation are independent of any vendor, product or technology”


Figure: SOA architecture


Rather than having one proprietary system in your enterprise, SOA builds a set of loosely coupled, independent services that interact with each other and provide business functionality to other systems and users. With loosely coupled services came the concept of integration, where we need to connect to other services to provide the business functionality. In the early stages, this was only peer-to-peer communication between services, which led to the complex "spaghetti" integration pattern.


Figure: Spaghetti integration

If you have 10 services in your system, you may need 45 point-to-point connections for all of them to communicate with each other. Rather than connecting the services point to point, we can connect them to a central "bus" and communicate over that.

The Integration Era

Once people realized the value of SOA and integration, enterprises started moving into that space rather than into ERP systems, and it became a common architectural pattern in most enterprises. Then came the Enterprise Service Bus (ESB) concept, where you connect all your disparate systems to a central bus that makes interaction possible across the different services.


Figure: Bus integration


The same type of service was provided by many different vendors, and standards around SOA emerged. People started thinking about common standards more seriously, and the monopolies that existed in the software world converged, little by little, onto common standards. Innovative ideas came into the picture and became standards, and the integration space emerged as a challenging technology domain. Different wire-level protocols, message formats and enterprise messaging patterns evolved with the heavy usage of SOA and integration in enterprises. Almost all the big software vendors released their own products for application integration, and this became a billion-dollar business.

Beyond Integration

The technology industry has been a moving target since its inception, though the pace of movement has varied from time to time. At the moment, that pace has increased, and a lot of new concepts are taking over the technology industry. Integration has been pushed to the backyard, and new technology concepts like Microservices Architecture (MSA), the Internet of Things (IoT), Big Data and analytics have been taking over the world of technology. But none of these concepts fills the same bucket as integration; they are independent concepts which have surfaced with the increased usage of technology in people's day-to-day activities. The important thing is that integration cannot live without thinking about these trends. The diagram below depicts the interaction between MSA and an integration bus in a real enterprise IT system. It was captured from the blog post written by Kasun Indrasiri at [1].


Figure 4: MSA and Integration Server in modern enterprise


Integration for the future

Integration has been a complex subject from the beginning, and it has been able to tackle most of the integration requirements that have popped up in enterprise IT infrastructures. But with the advancement of other areas, integration solutions need to pay more attention to the following emerging concepts and become more and more "lean".


  1. Enterprise architects are looking for vendor-neutral solutions - Integration has been an area where you need not only domain experts but vendor experts to succeed. But the world is moving more and more towards domain expertise and vendor neutrality, which means that enterprise architects are always looking for solutions which can easily be replaced with a different vendor's.
  2. Integration solutions need to be more user-friendly - Architects want to see their integrations clearly, in a visually pleasing manner. They don't want to read through thousands of XML files to understand a simple integration flow.
  3. The Internet of Things (IoT) will hit you very soon - Your solution needs to accommodate IoT protocols and concepts as first-class features.
  4. No longer sitting inside the enterprise boundary - Enterprises are moving more and more towards cloud-based solutions, and your solution needs to run in the cloud while interacting with other cloud services.
  5. The ability to divide will matter - Users will want to replace parts of your system with other components which they have been using for a long time and which have worked for them. Your system should be decomposable into independent components that can work in tandem with other systems.
  6. There will be more than "systems" to integrate - Integration has dealt with different systems in the past, but the future will be much different with the concepts of MSA, where business functions are exposed as services, and where there are other things to integrate, like data, IoT gateways and smart cars. Better to prepare as early as possible.
  7. Make room to inject "intelligence" into your solution - Enterprises will want to inject intelligence, through concepts like analytics and prediction, into the integration solution that sits at the core of their enterprise IT infrastructure.


References:




Comparison of asynchronous messaging technologies with JMS, AMQP and MQTT

Messaging has been the fundamental communication mechanism everywhere in the world. Whether it is human to human, machine to human or machine to machine, messaging has been the single common method of communication. There are two fundamental mechanisms for exchanging messages between two (or more) parties:

  • Synchronous messaging
  • Asynchronous messaging

Synchronous messaging is used when the message sender expects a response within a specified time period and waits for that response before carrying out its next task. Basically, it "blocks" until it receives the response.

Asynchronous messaging means that the sender does not expect an immediate response and does not "block" waiting for one. There may or may not be a response, but the sender carries on with its remaining tasks.

Of these two mechanisms, asynchronous messaging has been the widely used one for machine-to-machine communication, where two computer programs talk to each other. With the hype around microservices architecture, it is quite evident that we need an asynchronous messaging model to build our services.

This has been a fundamental problem in software engineering, and different people and organizations have come up with different approaches. I will describe three of the most successful asynchronous messaging technologies, which are widely used in enterprise IT systems.

Java Messaging Service (JMS)

JMS has been one of the most successful asynchronous messaging technologies available. With the growth of Java adoption in large enterprise applications, JMS became the first choice for enterprise systems. It defines an API for building messaging systems (a minimal sender sketch follows the list of characteristics below).



Here are the main characteristics of JMS.
  • Standard messaging API for the Java platform
  • Interoperability is only within Java and JVM languages like Scala and Groovy
  • Does not define a wire-level protocol
  • Supports two messaging models: queues and topics
  • Supports transactions
  • Defines the message format (headers, properties and body)
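To give a feel for the API, here is a minimal JMS 1.1 sender sketch (the JNDI names "ConnectionFactory" and "ordersQueue" are placeholders that come from your broker's configuration):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class JmsSender {
    public static void main(String[] args) throws Exception {
        // Broker-specific objects are obtained via JNDI (names are placeholders)
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("ordersQueue");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Fire-and-forget: the sender does not block waiting for a consumer
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("order-001"));

        connection.close();
    }
}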


Advanced Message Queueing Protocol (AMQP)

JMS was awesome, and people were happy with it. Microsoft came up with NMS (.NET Messaging Service) to support its platform and programming languages, and that worked fine too. But then came the problem of interoperability: how can two programs written in two different programming languages communicate with each other over asynchronous messaging? Hence the requirement to define a common standard for asynchronous messaging. There was no standard wire-level protocol with JMS or NMS; they could run on any wire-level protocol, but the API was bound to the programming language. AMQP addressed this issue with a standard wire-level protocol and many other features to support interoperability and the rich messaging needs of modern applications.



Here are the main features of AMQP (a minimal publisher sketch follows the list):

  • Platform-independent wire-level messaging protocol
  • Consumer-driven messaging
  • Interoperable across multiple languages and platforms
  • It is a wire-level protocol
  • Has five exchange types: direct, fanout, topic, headers, system
  • Buffer oriented
  • Can achieve high performance
  • Supports long-lived messaging
  • Supports classic message queues, round-robin, store and forward
  • Supports transactions (across message queues)
  • Supports distributed transactions (XA, X/Open, MS DTC)
  • Uses SASL and TLS for security
  • Supports proxy security servers
  • Metadata allows control of the message flow
  • Last Value Queue (LVQ) is not supported
  • Client and server are symmetric
  • Extensible
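As an illustration, here is a minimal publisher sketch using the RabbitMQ Java client, one popular AMQP implementation (the host and queue name are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AmqpPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declare a durable queue and publish via the default (direct) exchange
        channel.queueDeclare("orders", true, false, false, null);
        channel.basicPublish("", "orders", null, "order-001".getBytes("UTF-8"));

        channel.close();
        connection.close();
    }
}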


Message Queueing Telemetry Transport (MQTT)

Now we have JMS for Java-based enterprise applications and AMQP for all other application needs. Why do we need a third technology? It is specifically for the small guys. Devices with less computing power cannot deal with all the complexities of AMQP; they want a simplified but interoperable way to communicate. This was the fundamental requirement behind MQTT, and today MQTT is one of the main components of the Internet of Things (IoT) ecosystem.


Here are the main features of MQTT (a minimal publisher sketch follows the list):

  • Stream oriented, with low memory consumption
  • Designed for small, dumb devices sending small messages over low-bandwidth networks
  • No long-lived store-and-forward support
  • Does not allow fragmented messages (hard to send large messages)
  • Supports publish-subscribe over topics
  • No transaction support (only basic acknowledgements)
  • Messaging is effectively ephemeral (short lived)
  • Simple username/password-based security, without enough entropy
  • No connection security supported
  • The message body is opaque
  • Topics are global (one global namespace)
  • Supports Last Value Queue (LVQ)
  • Client and server are asymmetric
  • Not possible to extend
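And here is a minimal publisher sketch using the Eclipse Paho Java client (the broker URL, client ID and topic are placeholders):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and client ID
        MqttClient client = new MqttClient("tcp://localhost:1883", "sensor-1");
        client.connect();

        MqttMessage message = new MqttMessage("23.5".getBytes("UTF-8"));
        message.setQos(1); // at-least-once delivery

        // Topics live in a single global namespace
        client.publish("home/livingroom/temperature", message);
        client.disconnect();
    }
}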

WSO2 ESB Passthrough Transport in a nutshell

If you have ever used WSO2 ESB, you might already know that it is one of the highest performing open source ESB solutions in the integration space. The secret behind its performance is the so-called Pass Through Transport (PTT) implementation, which handles the HTTP requests. If you are interested in learning about the PTT from scratch, you can refer to the following article series written by Kasun Indrasiri.








If you read through the above-mentioned posts, you can get a good understanding of the concepts and the implementation. But one thing which is harder to do is keep all the diagrams in your memory. It is not impossible, but it is a little bit hard for a person with an average brain. I have tried to draw a single picture that captures all the required information related to the PTT. Here is my drawing of the WSO2 ESB PTT.

WSO2 ESB Passthrough Transport


If you look at the above picture, it contains three main aspects of the PTT.
  • The green boxes at the edges of the middle box contain the different methods invoked by the http-core library on the source handler of the ESB when there are new events.
  • The orange boxes represent the internal state transitions of the PTT, starting from REQUEST_READY up until RESPONSE_DONE.
  • The light blue boxes depict the objects created within the lifecycle of a single message execution flow, how those objects interact, and when they get created.
In addition to these three main aspects, the axis2 engine and synapse engine are also depicted, with purple and shiny purple boxes. These components are shown as black boxes, without considering the actual operations that happen within them.

WSO2 ESB 5.0.0 Beta Released

The WSO2 team is happy to announce the beta release of the latest WSO2 ESB 5.0.0. This version of the ESB has major improvements to its usability, both in real production deployments and in development environments. Here are the main features of the ESB 5.0.0 version.

The mediation debugger provides the capability to debug mediation flows from the WSO2 Developer Studio tooling platform. It allows users to view/edit/delete properties and the payload of the messages passing through each and every mediator.
You can find more information about this feature in the post below.
The data mapper is the long-awaited data transformation tool for WSO2 ESB. It allows users to load input/output message formats from XML files or JSON schemas, or to create them from the data mapper user interface itself. In addition to mapping data from input to output directly, users can apply different functions like split, aggregate, uppercase and lowercase while doing the mapping.
You can find more information about the data mapper in the post below.
Comprehensive statistics/tracing comes with the Analytics for ESB distribution. From ESB 5.0.0 onwards, WSO2 ESB comes as a complete solution: runtime + tooling + analytics. Analytics for ESB is part of this solution, and it comes with pre-installed features and dashboards for ESB-specific analytics. You get a fully fledged analytics dashboard to visualize statistics about your services, plus fine-grained information about internal ESB components (proxies, APIs, endpoints, sequences, mediators). It also allows you to trace messages throughout the mediation flows, detect failed messages, and step through each and every mediator the message passed through.
JMS 2.0 support is coming with this release. JMS has been used in many enterprise integration scenarios, and it has gone through a major revamp with JMS 2.0 in terms of features and developer APIs. WSO2 ESB becomes one of the early adopters of this JMS version.
You can find more information about this feature in the blog post below.
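For a feel of what changed, the new simplified API in JMS 2.0 cuts down the JMS 1.1 boilerplate considerably. A minimal sender sketch (the JNDI names are placeholders, as in any JMS setup):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.naming.InitialContext;

public class Jms20Sender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("ordersQueue");

        // JMSContext merges Connection and Session, and is AutoCloseable
        try (JMSContext context = factory.createContext()) {
            context.createProducer().send(queue, "order-001");
        }
    }
}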
WebSockets can be opened with WSO2 ESB. You are no longer restricted to the capabilities of HTTP 1.1 with WSO2 ESB; we have added support for the high-performing WebSocket protocol.
JMS distributed (XA) transactions are supported in WSO2 ESB 5.0.0. Now you can communicate with multiple distributed JMS endpoints and make the end-to-end message processing transactional.
In addition to the above-mentioned features, we have added many more features, improvements and bug fixes in this release. You can download the complete solution from the links below.
Runtime for WSO2 ESB 5.0.0 (Beta) — https://github.com/wso2/product-esb/releases/tag/v5.0.0-BETA
Analytics for WSO2 ESB 5.0.0 (Beta) — https://github.com/wso2/analytics-esb/releases/tag/v1.0.0-beta

Ballerina, the programming language for geeks, architects, marketers and the rest

We at WSO2 are thrilled to announce our latest innovation at WSO2Con USA 2017. It is a programming language for all: for geeks who like to write scripts for everything they do, for architects who barely speak without diagrams, for marketing folks who have no idea what programming is, and for so-called programmers who crack any kind of programming language you throw at them. Simply put, it is a programming language with both visual and textual representations. You can try out live samples at the ballerinalang web site.
Programming language inventions are not something we see very often. The reason is that when people are happy with a language and get used to it, they are reluctant to move from that ecosystem. Unless the new language is super awesome and they can't live without it, they prefer to hold their position. This is even harder for general-purpose programming languages than for Domain Specific Languages (DSLs).
Integrating systems has been a tedious task from the beginning, and not much has changed even today. While working with our customers, we identified a gap in the integration space: programmers and architects speak different languages, and sometimes this results in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people always prefer diagrams to code, but programmers work the other way around. We thought of filling this gap with a more modern programming language. That was our starting point.
Once we started development, and while designing this programming language, we identified that there are many cool features spread across different programming languages, but no single language with all of them. So we made design changes to make Ballerina a general-purpose language rather than a DSL.
Today, we are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list.
  • Textual, visual and Swagger representations of your code
  • Parallel programming made easier with workers and fork-join
  • XML, JSON and DataTable as built-in data types for easier data handling
  • A packaging and module system to write, share and distribute code in an elegant fashion
  • The Composer (editor) makes it easier to write programs in a more visual manner
  • A built-in debugger and test framework (Testerina) make it easier to develop and test
Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google and many other channels. We are happy to hear from you. Make integration great again!

Ballerina — Why it is different from other programming languages?

In this post, we’re going to talk about special features of the Ballerina language which are unique to itself. These features are specifically designed to address the requirements of the technology domain we are targeting with this new language.

XML, JSON and datatable are native data types

Communication is all about messages and data. XML and JSON are the most common and heavily used data types in any kind of integration ecosystem. In addition to those two types, interaction with databases (SQL, NoSQL) is the next most common use case. We have covered all three scenarios with native data types.
You can define xml and json data types inline and manipulate them easily with the utility methods in the jsons and messages packages.
json j = `{"company":{"name":"wso2", "country":"USA"}}`;
messages:setJsonPayload(m, j);
With the above two lines, you can define your own JSON payload and replace the current message's payload with it. You can do the same thing for XML messages as well.
If you need to extract some data from a message of type application/json, you can easily do that with the following line of code.
json newJson = jsons:getJson(messages:getJsonPayload(m), "$.company");
The above code assigns the following JSON value to the newJson variable.
{"name":"wso2","country":"USA"}
Another cool feature of this inline representation is variable access within template expressions. You can access any in-scope variable when you define your XML/JSON message, like below.
string name = "WSO2";
xml x = `<name>{$name}</name>`;
The above two lines create an xml value with the following content.
<name>WSO2</name>
You can do the same thing for JSON messages in a similar fashion.
A datatable is a pointer to a result set returned from a database query. It works in a streaming manner: the data is consumed as it is used in the program. Here is sample code for reading data within a Ballerina program using the datatable type.
string s;
datatable dt = sql:ClientConnector.select(testDB,
    "SELECT int_type, long_type, float_type, double_type, boolean_type, string_type from DataTable LIMIT 1", parameters);
while (datatables:next(dt)) {
    s = datatables:getString(dt, "string_type");
    // do something with s
}
You can find the complete set of functions in Ballerina API documentation.

Parallel processing is as easy as it can get

The term “parallel processing” scares even experienced programmers. But with Ballerina, you can do parallel processing the way you do any other action. The name “Ballerina” stems from ballet, where many dancers stay synchronized with each other during the act by sending messages to each other. The technical term for this process is “choreography”. Ballerina (the language) brings this concept to programmers with the following two features.

Parallel processing with worker

A worker is an execution flow. By default, execution is carried out by the “default worker”. If a Ballerina programmer wants to delegate work to another worker running in parallel to the default worker, they can create a worker and send a message to it with the following syntax.
worker friend(message m) {
    // Do some work here
    reply m';
}
msg -> friend;
// Do my own work
replyMsg <- friend;
There are a few things special about this task delegation.
  • The worker (friend) runs in parallel to the default worker.
  • The default worker can continue its own work independently.
  • When the default worker wants to get the result from the friend worker, it calls the friend worker and blocks there until it gets the result message, or times out after 1 minute.

Parallel processing with fork-join (multiple workers)

Sometimes users need to send the same message to multiple workers at the same time and process the results in different ways. That is where fork-join comes to the rescue. The Ballerina programmer can define workers and their actions within the fork-join statement and then decide what to do once the workers are done with their work. Given below is sample code for a fork-join.
fork(msg) {
    worker chanaka(message m1) {
        // Do some work here
        reply m1';
    }
    worker sameera(message m2) {
        // Do something else
        reply m2';
    }
    worker isuru(message m3) {
        // Do another thing
        reply m3';
    }
} join (all) (message[] results) {
    // Do something with results message array
} timeout (60) (message[] resultsBeforeTimeout) {
    // Do something after timeout
}
The above sample is a powerful program that would be really hard to implement in many other programming languages (and impossible in some). With Ballerina, you get all that power with simplicity. Here is an explanation of the program.
  • workers “chanaka”, “sameera” and “isuru” execute in parallel to the main “Default worker”
  • the join condition specifies how the user wants to gather the results of the started workers. In this sample, it waits for “all” workers. It is possible to join the workers with one of the following options (see the sketch after this list)
— join all of the 3 workers
— join all of a set of named workers
— join any 1 of all 3 workers
— join any 1 of a set of named workers
  • the timeout condition is coupled with the join block. The user can specify a timeout value in seconds to wait for the join condition to be satisfied. If the join condition is not satisfied within the given time, the timeout block executes with whatever results have been returned by the completed workers
  • once the fork-join statement has started executing, the “default worker” waits until the join block or the timeout block completes. It stays idle during that time (some rest)
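As a sketch of one of the alternative join options listed above, a join clause that continues as soon as either of two named workers replies could look like the following. Note that this variant is assembled from the options described in the prose, and the exact keyword syntax may differ between Ballerina 0.8.x releases.
} join (any 1 chanaka, sameera)(message[] results) {
    //Runs as soon as chanaka or sameera replies (hypothetical variant)
}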
In addition to the features mentioned above, workers can invoke any function declared within the same package or any other package. One limitation of the current worker/fork-join implementation is that workers cannot communicate with any worker other than the “Default worker”.

Comprehensive set of developer tools to make your development experience as easy as it can get

Ballerina is not just the language and the runtime. It comes with a complete set of developer tools to help you start your Ballerina experience as quickly and easily as possible.

Composer

The Composer is the main tool for writing Ballerina programs. Here’s some of what it can do:
  • Source, Design and Swagger view of the same implementation and ability to edit through any interface
  • Run/Debug Ballerina programs directly from the editor
  • Drag/Drop program elements and compose your program

Testerina

This is the unit testing framework for Ballerina programs. Users can write unit tests to test their Ballerina source code with this framework. It allows users to mock Ballerina components and emulate actual Ballerina programs within a unit testing environment. You can find details in this Medium post.

Connectors

These are client connectors written to connect with different cloud APIs and systems. Connectors are one of Ballerina's extension points: users can write their own connectors in the Ballerina language and use them within any other Ballerina program.

Editor plugins

Another important set of tools in the Ballerina tooling distribution is the set of editor plugins for popular source code editors such as IntelliJ IDEA, Atom, VSCode and Vim. If you are a hardcore script-editing person who is not interested in IDEs, these plugins bring the power of the Ballerina language to your favourite editor.
I am only half done with the cool new features of Ballerina, but this is enough for a single post. You can try out these cool features and let us know your experience and thoughts through our Google user group, Twitter, Facebook, Medium or any other channel, or by putting a comment on this post.

Getting started with Ballerina in 10 minutes

Ballerina is the latest revelation in programming languages. It has been built with writing network services and functions in mind. In this post I'm going to describe how to write network services and functions, as a 10-minute tutorial.


First things first: go to the ballerinalang website and download the latest Ballerina tools distribution, which has the runtime and all the tools required for writing Ballerina programs. After downloading, extract the archive into a directory (let's say BALLERINA_HOME) and add the bin directory of BALLERINA_HOME to the PATH environment variable. On Linux, you can do this as shown below.
export PATH=$PATH:$BALLERINA_HOME/bin
e.g. export PATH=$PATH:/Users/chanaka-mac/ballerinalang/Testing/ballerina-tools-0.8.3/bin
Now you have set up Ballerina in your system, and it is time to run the first example of all: the Hello World example. Ballerina can be used to write 2 types of programs.
  • Network services
  • Main functions
Here, network services are long-running services which keep on running after they are started, until the process is killed or stopped by an external party. Main functions are programs which execute a given task and exit by themselves.
Let's run the more familiar main-program-style Hello World example. The only thing you have to do is run the ballerina command pointing at the hello world sample. Change directory to the samples directory within the Ballerina tools distribution ($BALLERINA_HOME/samples) and run the following command from your terminal.
$ ballerina run main helloWorld/helloWorld.bal
Hello, World!
Once you run the above command, you will see the output “Hello, World!” and you are all set (voila!).
Let's go into the file and see what a Ballerina hello world program looks like.
import ballerina.lang.system;

function main(string[] args) {
    system:println("Hello, World!");
}
This small program covers several key concepts:
• the signature of the main function is similar to other programming languages like C and Java
• you need to import native utilities before using them (no auto-import)
• how to run the program using the ballerina run command


Now the basics are covered; let's move on to the next step, which is running a service that says “Hello, World!” and keeps on running. All you have to do is execute the below command in your terminal.
      $ ballerina run service helloWorldService/helloWorldService.bal
      ballerina: deploying service(s) in 'helloWorldService/helloWorldService.bal'
      ballerina: started server connector http-9090
Now things are getting a little bit interesting. You can see 2 lines describing what happened with the above command: the service described in the mentioned file has been deployed, and a port (9090) has been opened for HTTP communication. The service is now started and listening on port 9090, so we need to send a request to get a response out of it. If you browse the README.txt within the helloWorldService sample directory, you can find the below curl command, which can be used to invoke this service. Let's run it from another command window.
      $ curl -v http://localhost:9090/hello
      > GET /hello HTTP/1.1
      > Host: localhost:9090
      > User-Agent: curl/7.51.0
      > Accept: */*
      >
      < HTTP/1.1 200 OK
      < Content-Type: text/plain
      < Content-Length: 13
      <
      * Curl_http_done: called premature == 0
      * Connection #0 to host localhost left intact
      Hello, World!
You can see that we got a response message from the service saying “Hello, World!”. Let's crack open the program that does this. Go to the Ballerina file helloWorldService/helloWorldService.bal.
import ballerina.lang.messages;

@http:BasePath ("/hello")
service helloWorld {

    @http:GET
    resource sayHello (message m) {
        message response = {};
        messages:setStringPayload(response, "Hello, World!");
        reply response;
    }

}
This program covers several important aspects of a Ballerina program:
• annotations are used to define service-related entities. In this sample, “/hello” is the context of the service and “GET” is the HTTP method accepted by this service
• message is the data carrier coming from the client. Users can do whatever they want with the message, create new messages and many other things
• the “reply” statement is used to send a reply back to the service client
In the above example, we created a new message called “response”, set its payload to “Hello, World!” and replied back to the client. The way you executed this service was the curl command shown earlier, in which we specified the port (9090) on which the service was started and the context (/hello) we defined in the code.


We have a few minutes left, so let's go for another sample which is a bit more advanced and completes the set. Execute the following command in your terminal.
        ballerina run service passthroughService/passthroughService.bsz
        ballerina: deploying service(s) in 'passthroughService/passthroughService.bsz'
        ballerina: started server connector http-9090
Here, we have run a file with a different extension (bsz), but the result is similar to the previous section: the file has been deployed and the port is opened. Let's quickly invoke this service with the following command, as mentioned in the README.txt file.
        curl -v http://localhost:9090/passthrough
        > GET /passthrough HTTP/1.1
        > Host: localhost:9090
        > User-Agent: curl/7.51.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < Content-Type: application/json
        < Content-Length: 49
        <
        * Curl_http_done: called premature == 0
        * Connection #0 to host localhost left intact
        {"exchange":"nyse","name":"IBM","value":"127.50"}
Now we got an interesting response. Let's go inside the source and see what we have just executed. This sample is a bit more advanced and covers several important features not mentioned in the previous sections.
• Ballerina programs can be run as a self-contained archive. In this sample, we ran a service archive file (.bsz) which contains all the artifacts required to run this service.
• Ballerina programs can have packages, and the package structure follows the directory structure. In this sample, we have a package called “passthroughservice.samples” and the matching directory structure passthroughservice/samples.
          Here are the contents of this sample.
passthroughService.bal

package passthroughservice.samples;

import ballerina.net.http;

@http:BasePath ("/passthrough")
service passthrough {

    @http:GET
    resource passthrough (message m) {
        http:ClientConnector nyseEP = create http:ClientConnector("http://localhost:9090");
        message response = http:ClientConnector.get(nyseEP, "/nyseStock", m);
        reply response;
    }

}
nyseStockService.bal

package passthroughservice.samples;

import ballerina.lang.messages;

@http:BasePath ("/nyseStock")
service nyseStockQuote {

    @http:GET
    resource stocks (message m) {
        json payload = `{"exchange":"nyse", "name":"IBM", "value":"127.50"}`;
        message response = {};
        messages:setJsonPayload(response, payload);
        reply response;
    }

}
In this sample, we have written a simple integration by connecting to another service which is also written in Ballerina and running on the same runtime. “passthroughService.bal” contains the main Ballerina service logic, in which we:
• create a client connector to the backend service
• send a GET request to a given path with the incoming message
• reply back with the response from the backend service
The back-end service is also written in Ballerina. In “nyseStockService.bal”, we:
• create a json message with the content
• set that message as the payload of a new message
• reply back to the client (which is the passthrough service)
It's done! Now you can run the remainder of the samples or write your own programs using Ballerina.
              Happy Dancing !

              WSO2 ESB usage of get-property function

              What are Properties?

WSO2 ESB has a huge set of mediators, but the property mediator is the most used mediator when writing any proxy service or API.

The property mediator is used to temporarily store a value or an XML fragment during the life cycle of a message flow in a service or API.

We can compare the “Property” mediator to a “Variable” in traditional programming languages (like C, C++, Java, .NET etc.).

A few properties are used/maintained by the ESB itself, while others can be defined by users (programmers). In other words, properties fall into the below 2 categories:

              • ESB Defined Properties
              • User Defined Properties.

              These properties can be stored/defined in different scopes, like:

              • Transport
              • Synapse or Default
              • Axis2
              • Registry
              • System
              • Operation

Generally, these properties are read with the get-property() function, which can be invoked with the below 2 variations.

              • get-property(String propertyName)
              • get-property(String scope, String propertyName)

The first variation doesn't take a scope parameter and always reads properties from the default/synapse scope.

              Performance Hit (Found in WSO2 ESB 4.9.0 and prior versions):

It has been discovered that use of the get-property() function can degrade performance drastically for any service.

Why does it happen?

The get-property() function first does an ESB registry look-up and then searches the different scopes, whereas using scope identifiers limits the search to the relevant scope only.

              Solution (Scope-Style Reading):

Instead of using the get-property() function, these properties can be referenced with the below prefixes, separated from the property name by a colon:

              • $ctx – from Synapse or Default Scope
              • $trp – from Transport scope
              • $axis2 – from Axis2 Scope
              • $Header – Anything from Header
              • $body – for accessing any element in SOAP Body (applicable for SOAP 1.1 and SOAP 1.2)

              Example:

Let's assume that a property has been set with the name “Test-Property”.

From Default Scope

<property name="Read-Property-value" expression="get-property('Test-Property')"/>

<property name="Read-Property-value" expression="$ctx:Test-Property"/>

From Transport Scope

<property name="Read-Property-value" expression="get-property('transport','Test-Property')"/>

<property name="Read-Property-value" expression="$trp:Test-Property"/>

From Axis2 Scope

<property name="Read-Property-value" expression="get-property('axis2','Test-Property')"/>

<property name="Read-Property-value" expression="$axis2:Test-Property"/>
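Putting it together, here is a minimal sample sequence (hypothetical names, for illustration only) that sets a property and then reads it back using the faster scope-prefix syntax:

<sequence name="TestPropertySequence">
    <!-- Set Test-Property in the default (synapse) scope -->
    <property name="Test-Property" value="Hello"/>
    <!-- Read it back with the faster scope-prefix syntax and log it -->
    <log level="custom">
        <property name="Test-Property-value" expression="$ctx:Test-Property"/>
    </log>
</sequence>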

We should prefer the scope-prefix syntax (the second form in each pair above, originally highlighted in blue) when accessing these properties, for better performance.

Please note that this syntax is not applicable for a few ESB-defined properties, like OperationName, MessageID and To. These work as expected with get-property(), but not with $ctx.
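For example, assuming the behavior described above:

<!-- Works: get-property() can read the ESB-defined OperationName property -->
<property name="opName" expression="get-property('OperationName')"/>
<!-- Does not work as expected: $ctx will not resolve OperationName -->
<property name="opName" expression="$ctx:OperationName"/>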

So, please make sure you are using the correct way of accessing ESB-defined properties.


              API management design patterns for Digital Transformation

Digital Transformation (DT) has become the buzzword in the tech industry these days. The meaning of DT can be interpreted in different ways in different places, but simply put, it is the digitization of your business assets with the increased usage of technology. If that definition is not simple enough, think of an example like moving your physical file/folder-based documents to computers, making them accessible instantly rather than by browsing through thousands of files stacked in your office. In a large enterprise, this goes to the level where every asset in the business (from people to vehicles to security cameras) becomes a digital asset that is instantly reachable, with access properly authorized.

Once you have your assets in digitized format, it is essential to expose that digital information to various systems (internal as well as external) through properly managed interfaces. Application Programming Interfaces (APIs) are the de facto standard for exposing your business functionality to internal and external consumers. It is evident that your DT story will not be complete without a proper API management platform in place.

Microservices Architecture (MSA) has evolved from a theory on Martin Fowler's website into the go-to technology for implementing REST services in an organization pursuing DT. Most enterprise developers are moving towards MSA when writing business logic for back-end core services. But in reality, there are many other systems, arriving as Commercial Off The Shelf (COTS) offerings, which do not fit natively into the microservices landscape.

With these basic requirements and the unavoidable circumstances of your organization's IT ecosystem, how are you going to implement an efficient API management strategy? This is a burning problem in most enterprises, and I will touch on possible solution patterns to address it.

              API management for green field MSA

If your organization is just a startup and you don't want high-cost COTS software in your IT ecosystem, you can start things off with a full MSA. These kinds of organizations are called green field ecosystems: you have complete control over what needs to be developed and how those services are going to be developed. Once you have your back-end core services written as microservices, you can expose them as APIs through a proper API management platform.

              Pattern 1 - Central API manager to manage all your micro services

As depicted in the figure below, this design pattern can be applied to a green field MSA where microservice discovery, authentication and management are delegated to the central API management layer. A message broker handles asynchronous inter-service communication.
              Figure 1: Central API management in a green field MSA

              Pattern 2 - Service mesh pattern with side car (micro gateway)

This pattern also applies to a green field MSA where all the back-end systems are implemented as microservices. With slight modifications, it can also be applied to scenarios where you have both microservices and monolithic (COTS) applications.

              Figure 2: API management with service mesh and side car (micro gateway)

API management for practical enterprise architecture

As mentioned at the beginning of this post, most real-world enterprises use COTS software as well as various cloud services to fulfill their day-to-day business requirements. In such an environment, if you are implementing an MSA, you need to accept that the existing systems are there to stay for a long time and that the MSA should be able to live alongside them.

              Pattern 3: API management for modern hybrid eco system

This design pattern is best suited for enterprises which have COTS systems as well as an MSA. It is easy to implement and has been identified as the common pattern to apply to a hybrid microservices ecosystem.

              Figure 3: API management for modern enterprise

The same pattern can be applied to any enterprise which has no microservices at all, only traditional monolithic applications as back-end services. In such scenarios, the microservices are simply replaced by monolithic web applications.

              How to achieve 100% availability on WSO2 product deployments

WSO2 products come with several different components, which can be tuned through different configurations. Once the system is moved into production, it is essential that the system can go through various updates and upgrades during its lifetime. There are 3 main configuration areas related to WSO2 products:
• Database configurations
• Server configurations
• Implementation code
Any one or all of the above configuration components can change during an update/upgrade to the production system. In order to keep the system 100% available, we need to make sure that product update/upgrade processes do not impact the availability of the production system. We can identify different scenarios which can challenge the availability of the system; during these situations, users can follow the guidelines below so that the system runs smoothly without any interruptions.

              During outage of server(s)

• We need redundancy (HA) in the system in active/active mode. In a 2-node setup, if 1 node goes down, the remaining node must be able to hold the traffic of both nodes for some time. Users may experience some slowness, but the system will be available. During capacity planning, we must make sure that at least 75% of the overall load can be handled by 1 active node.
• If we have active/passive mode in a 2-node setup, each node should be capable of handling the load separately, and the passive node should be in hot-standby mode. This means the passive node must keep running even though it does not receive traffic.
• If an entire data center goes down, we should have a Disaster Recovery (DR) setup in a separate data center. This can be in cold-standby mode, since these types of outages are very rare; but if we go with cold standby, there will be a time window of service unavailability.

              Adding a new service (API)

• Database sharing needs to be properly configured through the master-datasources.xml file and through registry sharing
• File system sharing needs to be in place so that deployment happens once and the other nodes get the artifacts through file sharing (see the rsync sketch after this list)
• Service deployment needs to be done from one node (the manager node), and the other nodes need to be configured in read-only mode (to avoid conflicts)
• Use the passive node as the manager node (if you have active/passive mode)
• Once the services are deployed on all the nodes, do a test and expose the service (API) to the consumers
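As a sketch of the file-sharing step (hypothetical paths and host name; adjust to your installation), a one-way rsync from the manager node to a worker node could look like this:

# Sync deployed artifacts from the manager node to a worker node
# (hypothetical paths; typically scheduled, e.g. via cron)
rsync -avz /opt/wso2esb/repository/deployment/server/ worker-node:/opt/wso2esb/repository/deployment/server/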

              Updating an existing service (fixing a bug)

• Bring one additional passive node into the system with the existing version of the services. This covers the case where the active node goes down while you are updating the service on the first passive node (the system becomes 1 active / 2 passive)
• Disable file sharing (rsync) on the passive node
• Deploy the patched version on this passive node and carry out testing
• Once testing passes, allow traffic into the passive node and stop traffic to the active node
• Enable file sharing and allow the active node to sync up with the patched version. If you don't have file sharing, you need to deploy the service manually
• Carry out testing on the other node and, once it passes, allow traffic into the new node (if required)
• Remove the secondary passive node from the system (the system returns to 1 active / 1 passive)

              Applying a patch to the server (needs a restart)

• Bring one additional passive node into the system with the existing version of the services. This covers the case where the active node goes down while you are applying the patch on the first passive node (the system becomes 1 active / 2 passive)
• Apply the patch on the first passive node and carry out testing
• Once testing is done, enable traffic into this node and remove traffic from the active node
• Apply the patch on the active node and carry out testing
• Once testing is done, enable traffic into this node and remove traffic from the previous node (or keep this node as active)
• Remove the secondary passive node from the system (the system returns to 1 active / 1 passive)

              Doing a version upgrade to the server

• Bring one additional passive node into the system with the existing version of the services. This covers the case where the active node goes down while you are upgrading the first passive node (the system becomes 1 active / 2 passive)
• Execute the migration scripts provided in the WSO2 documentation to move the databases to the new version on the passive node
• Deploy the artifacts on the new version on the passive node
• Test this passive node and, once testing passes, expose traffic to this node
• Follow the same steps on the active node
• Once testing is done, direct traffic to this node (if required)

Deployment automation

Instead of maintaining the production system through manual processes, WSO2 provides artifacts which can be used to automate the deployment and scaling of the production system through Docker and Kubernetes.

              What is WSO2 Store and what you can get from it?

WSO2 provides extensions that add functionality not available in the OOTB product offerings. These extensions are hosted in the WSO2 Store, and all of them can be downloaded for free and used without any cost.
              WSO2 Store provides 4 types of extensions to WSO2 platform.
1. ESB Connectors - connectors which can be used to connect WSO2 ESB with popular cloud APIs as well as enterprise systems. Some examples are Salesforce, SAP, PeopleSoft, AS4.
2. IS Connectors - connectors which can be used to connect WSO2 Identity Server (IS) with external identity providers over different protocols. Some examples are OpenID Connect, Mobile Connect, SAML Authenticator, SMSOTP.
3. Analytics Extensions - extensions which can be used to integrate different technologies with the Siddhi query language used in WSO2 Data Analytics Server (Stream Processor). Some examples are the R, Python, JavaScript and PMML extensions.
4. Integration Solutions - pre-built integration templates which can be used to integrate 2 or more different systems. Some examples are the Github to Google Sheets template and the Salesforce to Gmail and Google Sheets template.
All these extensions come with comprehensive documentation. WSO2 provides professional support for customers who want to use these connectors in their enterprise systems.

              A reference architecture for Digital Transformation

Digital Transformation is as real as global warming. It is as real as Donald Trump becoming the US president. It is real, but you might not have taken it seriously. According to a recent survey by Gartner, 42% of CEOs are taking actions to align their organizations with digital transformation.

              What is Digital Transformation?

DT (Digital Transformation) is making your organization's assets (physical, intellectual) digitally accessible to fulfill your business requirements through the engagement of technology. It is not only for Internet companies like Google, Facebook, Apple, Amazon or Microsoft; they have already transformed. The challenge is for organizations which are not high tech. Think about transportation, logistics, pharmaceuticals, real estate. These industries didn't have many technology requirements in the past, but that is not the case anymore.

              Consumer driven business

We have come through different technological advancements. In the industrial age, machines were the main focus. Then came the age of transportation and aviation, where people focused on large aircraft, automobiles, ships and international trade. With the invention of the Personal Computer (PC), focus shifted towards computing. The Internet was born, and information sharing became the focus. Then came the world of digital technology, where people started controlling other people and objects through their mobile or handheld devices. The world has come to people's fingertips, and the advancement of digital technology allows people to consume goods and services from those fingertips. Brick-and-mortar shops are no longer as popular: Amazon, eBay and Alibaba have changed the way people shop.
The power in business has shifted from the producer to the consumer. Consumers don't care about the status quo or your history. What they care about is how easily your products/services can be accessed and how quickly you can deliver. They want to see your products from their home.

              Early adopters of DT

There are some organizations which started just 5 years back and are now reaching the entire world through their technological capabilities. Some of them are
• Airbnb (largest hospitality service without owning any property)
• Uber (largest taxi service without owning any vehicle)
• Netflix (largest media streaming company, which does not produce any media)
              • Alibaba

              Transforming your business into a digital business

First things first! You need to understand the value of digital transformation, and that understanding needs to come from top to bottom, not the other way around. The value of a digital business needs to be well understood before attempting any digital transformation. Think about a sample organization called “MyPharma”, a famous pharmacy chain with hundreds of branches across the United States. The CTO of this organization has decided that they need to offer new services so that consumers will not go away from them. He has identified the following high-level things which need to be done to provide innovative services to their customers.
• Understand the customers who come to the pharmacy
• Interconnect all the pharmacies so that customers get a unified experience whenever they walk into a “MyPharma” shop
• Integrate all the systems into one single platform so that services are provided through standard interfaces
• Expose their data (medicines, offers, reminders to customers) through mobile and web based applications
• Securely engage with premium customers and provide services customized for them

              Reference architecture for DT

Once the requirements are clearly understood, the CTO evangelizes this idea across the senior leadership team through presentations and references to successful digital businesses. Somehow, he convinces the senior leaders to take a shot at DT and gradually move their pharmacy business into a digital business. After scanning through all the available systems in their IT infrastructure, he is fascinated by how many different systems have been squeezed into their ecosystem without him even being aware of them. He sees systems such as
              • Commercial Off The Shelf (COTS),
              • Web Services,
              • Cloud Services,
• Databases
He wants to:
• integrate these systems with each other without resorting to point-to-point connections
• once these systems are integrated, expose the services to different stakeholders like customers, other branches, vendors, partners, etc.
• once the services are exposed, monitor their KPIs and make improvements based on them
• provide secure access to information, since medical information is sensitive to people and their health
He comes up with a reference architecture which fulfills the above requirements as well as his business ambitions.
              Figure: Digital Transformation Reference Architecture
As depicted in the above figure, the CTO of “MyPharma” has identified 4 main capabilities required to build a digital business.
              • API Management — Managing how people interact with the digital services you offer
• Integration — Enabling your disparate systems through a common platform without affecting any of the existing systems
              • Identity and Access Management — Manage the users and their capabilities and avoid unauthorized access to data
              • Analytics — Monitor and analyze your business activities and frequently provide feedback to improve the business
Once this architecture is identified and approved by the senior leaders, the CTO of “MyPharma” needs to select a vendor based on the following factors:
              • Completeness of the solution in terms of implementing the full DT
• Future vision of the vendor and how innovative they are
              • Financial ROI
              • Support for the products and quality of support
Finally, the CTO draws up a set of requirements in the form of an RFI/RFP and contacts vendors to showcase their capabilities. If required, vendors are called in for an onsite/offsite demo before the selection is made.
              Happy DT !!!

              Open APIs and Digital Transformation

Digital transformation is as real as it can get. Based on a recent survey by Gartner, 42% of CEOs have already started working on it. This 42% does not include tech giants like Google, Facebook, Amazon or Microsoft, since they are already beyond digital transformation (and more into the AI side of it). Even though around half of CEOs have not started this journey, they definitely will (hopefully, if they want to stay competitive). While enterprises are moving towards digital transformation, the technology itself has gone way beyond the standard term.

People engaged with the technical aspects of digital transformation have identified that it is not only about an individual enterprise's digital transformation: cooperation across an entire industry can reap more rewards for enterprises as well as customers. That is where the concept of “Open APIs” or “open standards” came to the grandstand. One of the hot topics in the European region is the Payment Services Directive 2 (PSD2) compliance regulation, which requires all banks operating in the EU to be PSD2-compatible by January 2018. Even though this came as a regulation, technically it is a revelation in the way people deal with their banks.

GSMA OneAPI is another set of “Open APIs”, which allows multiple mobile network operators (MNOs) to interconnect with each other and reap the benefits of a much larger customer base than doing business with their own customer bases alone.

In a technical sense, both these “Open API” standards provide a mechanism to interconnect enterprises which offer similar types of services to their customers (PSD2 for financial services, OneAPI for mobile network services) and let them share their customer bases, so that each enterprise benefits from the larger, aggregated customer base. From the customer's perspective, they will also be able to use any of their accounts (or profiles) when they purchase services from 3rd parties.

At an abstract level, the architecture for Open APIs would be as depicted in the picture below.

              Figure 1: Open API architecture

As depicted in figure 1 above, with the concept of Open APIs, different vendors can expose information about their customers, with the customers' consent, in a unified manner. Their internal implementations of these APIs can be different, but the APIs themselves are unified. Using the Open APIs, third-party service providers (e.g. online shopping, merchants, location-based services, etc.) can engage with customers when they are purchasing products or services.

Let's take an example where you want to buy a laptop from Amazon.com and you need to make the payment using your existing bank account rather than a credit card. When you check out your item, the web site gives you the option to select which bank account you are going to pay from. This is achieved through the Open API which all your banks have used to expose your account information. You select Bank A and are redirected to Bank A's web site, where you confirm that you allow Amazon.com to debit the relevant amount for the product you are purchasing. That's all. No credit card. No third-party credit card providers.

The previous example showcases the power of Open APIs in a banking use case, but the same holds for all sorts of industries. GSMA OneAPI is being used in different places across the globe for various use cases like Mobile Connect, mobile ID, etc. It has allowed people who previously had no such facilities to connect with entities like banks using their mobile phones. This has transformed their lives like never before.

As a final thought: with the concepts of Open APIs and digital transformation, the world is becoming a more connected place than ever before, allowing people of all capacities (wealthy as well as poor) to reap the benefits of the digital age.