Wednesday 29 June 2016

Functional Programming with Java 8 and javaslang

In the previous decade the meme was object-oriented programming; now it is functional programming. What is it, really?

Functional programming is built around the idea of expressing behaviour with functions. Since its inception, Java has been verbose. That may be fine at the birth of a programmer, while he is embarking on logical thinking, but it takes too many lines of code to solve a problem.

Ironically, this verbosity has added complexity to Java. But with Java 8 the compiler has become incredibly smarter and can infer certain things, letting the Java coder focus on the problem statement. When I first heard that Java was introducing lambdas, I was excited to try them out, and finally felt that the community was listening and bridging the gap with Scala, its counterpart on the JVM.

Java 8’s lambdas (λ) empower us to create wonderful APIs. They greatly increase the expressiveness of the language.

Let’s try to understand this with an example. We will look at a program that has a list of numeric Strings, converts them into a list of Integer objects, adds one more integer element to the list, and finally prints the even numbers greater than 5. See how verbose legacy Java code can be.
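The code listing that belongs here did not survive in this copy; below is a sketch of what the legacy version might look like, followed by the Java 8 stream equivalent (class, method, and variable names are my own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class EvenNumbers {

    // Legacy style: an explicit loop for every transformation step
    static List<Integer> legacy(List<String> strings) {
        List<Integer> numbers = new ArrayList<>();
        for (String s : strings) {
            numbers.add(Integer.valueOf(s));   // convert each String to Integer
        }
        numbers.add(3);                        // add one more integer element
        List<Integer> result = new ArrayList<>();
        for (Integer n : numbers) {
            if (n > 5 && n % 2 == 0) {         // keep even numbers greater than 5
                result.add(n);
            }
        }
        return result;
    }

    // Java 8 style: the same pipeline as a single stream expression
    static List<Integer> java8(List<String> strings) {
        return Stream.concat(strings.stream().map(Integer::valueOf), Stream.of(3))
                .filter(n -> n > 5 && n % 2 == 0)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("4", "6", "8", "11");
        System.out.println(legacy(input)); // [6, 8]
        System.out.println(java8(input));  // [6, 8]
    }
}
```

Both methods produce the same result, but the stream version states *what* we want rather than *how* to loop.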

When to go Functional?


Side effects


Real-world programs usually have side effects, meaning they modify the state of an object. Functional programming aims for immutability, which avoids state changes such as mutating objects in place, or manipulating data and persisting it to a database. In certain cases mutability has its place and a functional style cannot meet the requirement, for example objects shared among services that are supposed to be mutable. Immutability provides thread safety, reliable hash codes, type safety without unchecked casting, and no need for cloning.
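As an illustration (the class is my own example, not from the original post), an immutable value class in Java makes all fields final, exposes no setters, and returns a new instance instead of mutating itself:

```java
// A hypothetical immutable value class: all fields final, no setters,
// so instances can be shared across threads without synchronization.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutation" returns a new instance; the original is never changed
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Calling `translate` never changes the original point, so any code holding a reference to it can rely on its value.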

Referential transparency


Functions should depend purely on their input. If they depend on state, reusability suffers. Functional style makes them act like shared services that operate on a set of input parameters and behave consistently. A function, or more generally an expression, is called referentially transparent if a call can be replaced by its value without affecting the behaviour of the program.
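The snippets that the next paragraphs refer to are missing from this copy; below is a reconstruction consistent with the discussion (the names plusOne, plusG, and G come from the surrounding text; the class name is my own):

```java
public class Transparency {
    static int G = 10; // mutable external state, can change at any time

    // Referentially transparent: depends only on its input
    static int plusOne(int x) {
        return x + 1;
    }

    // Referentially opaque: result depends on the mutable field G
    static int plusG(int x) {
        return x + G;
    }

    public static void main(String[] args) {
        System.out.println(plusOne(5)); // always 6, no matter what
        System.out.println(plusG(5));   // 15 now, but...
        G = 100;
        System.out.println(plusG(5));   // ...105 after G changes
    }
}
```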


With the plusOne function, the compiler can easily determine that given an input of 5, the expression plusOne(5) will evaluate to 6. It can simply replace the call with 6 and make the program faster at runtime.

In the second example, G can be modified externally, by something outside of the compiler's control, so the compiler can't replace plusG(5) with anything. The expression can only be evaluated at runtime, because the potential for modification makes it referentially “opaque”.

Decoupled


Functions fit well in applications designed as decoupled components, each function having a single responsibility and being as generic as possible.

Exploring Javaslang 

http://www.javaslang.io

Java 8 gives us the opportunity to write precise, decoupled code, and it opened the door for other libraries to provide what the JDK doesn't offer out of the box. Functional programming is all about values and the transformation of values using functions. Javaslang's functional interfaces are Java 8 functional interfaces on steroids. They also provide features like:

  • Composition
  • Lifting
  • Currying
  • Memoization


Composition 


The term "composition of functions" refers to combining functions so that the output of one function becomes the input of the next, creating a third function. You can use compose to apply the argument function first, or andThen to chain in the opposite order.
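The example that belongs here is missing from this copy; a sketch using the JDK's own java.util.function.Function, which offers the same compose and andThen combinators discussed here (Javaslang's Function1 mirrors them):

```java
import java.util.function.Function;

public class Compose {
    static Function<Integer, Integer> plusOne = x -> x + 1;
    static Function<Integer, Integer> times2 = x -> x * 2;

    public static void main(String[] args) {
        // compose: apply the argument function first, then this one
        Function<Integer, Integer> add1ThenDouble = times2.compose(plusOne);
        // andThen: apply this function first, then the argument function
        Function<Integer, Integer> doubleThenAdd1 = times2.andThen(plusOne);

        System.out.println(add1ThenDouble.apply(2)); // (2 + 1) * 2 = 6
        System.out.println(doubleThenAdd1.apply(2)); // (2 * 2) + 1 = 5
    }
}
```

The choice between compose and andThen is purely about reading order; both produce a new function without evaluating anything until apply is called.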

More coming soon...


Wednesday 8 May 2013

Jboss AS 7.x cluster setup

Overview

This blog will help you understand the steps required to configure JBoss AS 7.x in clustered mode. JBoss can be configured in 'standalone' or 'domain' mode. We have to make changes on both the Apache side and the JBoss AS 7 side, so let's look at the configuration one side at a time.

Apache side configuration

Let’s see what configuration has to be made on the Apache side.


Download the required binaries for your OS from the link below; for RHEL x64, for example, it is the [dynamic libraries linux2-x64] bundle: http://www.jboss.org/mod_cluster/downloads/1-1-0.html

Copy following .so files to your “<Apache_Home>/modules” folder.
  • mod_proxy.so
  • mod_proxy_ajp.so
  • mod_slotmem.so
  • mod_manager.so
  • mod_proxy_cluster.so
  • mod_advertise.so
Edit your httpd.conf (i.e. <Apache_Home>/conf/httpd.conf) and add the following lines at the bottom of the file.


############### mod_cluster Setting - STARTED ###############
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
<VirtualHost 127.0.0.1:80>
<Directory />
Order deny,allow
Allow from all
</Directory>
<Location /mod_cluster_manager>
SetHandler mod_cluster-manager
Order deny,allow
Allow from all
</Location>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ManagerBalancerName testcluster
AdvertiseFrequency 5
</VirtualHost>
############### mod_cluster Setting - ENDED ###############

You can use the IP_ADDRESS of the box on which Apache is running in the <VirtualHost> element instead of 127.0.0.1.

NOTE:

You have to COMMENT the following module (i.e. mod_proxy_balancer.so) in the “httpd.conf” file, or else you will get an error while starting Apache. This is done because we are now using mod_proxy_cluster.so instead of mod_proxy_balancer.so:

#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

You also have to do the following before starting Apache:

$ setenforce 0

The above command disables SELinux for your current running session. However, if you want to disable it permanently, follow the steps below.

$ vi /etc/selinux/config

#SELINUX=enforcing (comment this line and add the below line)
SELINUX=disabled

This may require a system restart; that way you can be sure the changes have taken effect.


JBoss side configuration

Standalone mode


JBoss AS 7 has two modes by default: domain mode and standalone mode. Here we will use standalone mode. Within standalone mode there are several xml files under the configuration folder; clustering is enabled in standalone-ha.xml and standalone-full-ha.xml, so make sure you use one of those and not the other xml files.

We will look at two scenarios: creating a cluster on the same box, and creating a cluster across different boxes.

Scenario 1: Cluster on same box

Once you have unzipped jboss-as-7.1.1.Final.zip, create two copies of the standalone folder and rename them standalone-node1 and standalone-node2, as shown below.

/home/user/jboss-as-7.1.1.Final/standalone-node1
/home/user/jboss-as-7.1.1.Final/standalone-node2

Note: Make sure you keep the original copy of the standalone folder as-is for future use.
Give a unique name in the server element of each node, as shown below.

standalone-node1

<server name="standalone-node1" xmlns="urn:jboss:domain:1.2">

standalone-node2

<server name="standalone-node2" xmlns="urn:jboss:domain:1.2">


We have to add the instance-id attribute to the web subsystem, as shown below, in both standalone nodes.


<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" instance-id="${jboss.node.name}" native="false">
<connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
<connector name="ajp" protocol="AJP/1.3" scheme="http" socket-binding="ajp"/>
.
.
.
</subsystem>

Lastly, add the proxy-list attribute to the mod-cluster-config of the modcluster subsystem in both standalone nodes. It holds the IP address and port on which your Apache server is running, so that the JBoss server can communicate with it:

<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
<mod-cluster-config advertise-socket="modcluster" proxy-list="1.1.1.1:80">
.
.
.
</mod-cluster-config>
</subsystem>
Now run the commands below to start both JBoss nodes in a cluster.

Node1

./standalone.sh -c standalone-ha.xml -b 127.0.0.1 -u 230.0.0.4 -Djboss.server.base.dir=../standalone-node1 -Djboss.node.name=node1 -Djboss.socket.binding.port-offset=100

Node2

./standalone.sh -c standalone-ha.xml -b 127.0.0.1 -u 230.0.0.4 -Djboss.server.base.dir=../standalone-node2 -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=200


Where:
-c = server configuration file to be used
-b = binding address
-u = multicast address
-Djboss.server.base.dir = path where the node is present
-Djboss.node.name = name of the node
-Djboss.socket.binding.port-offset = port offset on which the node will run

Note: Keep the following things in mind:
  • Both nodes should have the same multicast address
  • Both nodes should have different node names
  • Both nodes should have different socket binding port-offsets


Once both nodes come up properly you will not yet see them in a cluster. To verify that the nodes really form a cluster, you need to deploy an application that has the distributable tag in its web.xml. You can download one of our sample clustered applications by clicking here.
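For reference, marking a web application as distributable is a one-line entry in web.xml (a minimal sketch; the rest of the descriptor is omitted):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- tells the container to replicate HTTP sessions across the cluster -->
    <distributable/>
</web-app>
```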

After downloading ClusterWebApp.war, place it in the deployments folder of both nodes (/home/user/jboss-as-7.1.1.Final/standalone-nodeX/deployments). Shortly after, you will see messages similar to the ones below in both node prompts, with both node names in their cluster view.

17:31:06,600 INFO  [stdout] (pool-14-thread-1)
17:31:06,601 INFO  [stdout] (pool-14-thread-1) -------------------------------------------------------------------
17:31:06,601 INFO  [stdout] (pool-14-thread-1) GMS: address=standalone-node1/web, cluster=web, physical address=127.0.0.1:55300
17:31:06,602 INFO  [stdout] (pool-14-thread-1) -------------------------------------------------------------------
17:31:08,791 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-2) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated.
17:31:08,796 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-5) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated.
17:31:08,839 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-15-thread-1) ISPN000078: Starting JGroups Channel
17:31:08,844 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-15-thread-1) ISPN000094: Received new cluster view: [standalone-node1/web|0] [standalone-node1/web]
17:31:08,845 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-15-thread-1) ISPN000079: Cache local address is standalone-node1/web, physical addresses are [127.0.0.1:55300]

Assuming everything went well, you should be able to view the cluster by hitting the URL http://localhost/mod_cluster_manager. The image below shows the cluster created on the same box using standalone mode.


Scenario 2: Cluster on different boxes

After unzipping JBoss AS 7 on both boxes [i.e. box-1=10.10.10.10 and box-2=20.20.20.20], create a single copy of the standalone folder on each box.

Box-1 : 10.10.10.10
/home/user/jboss-as-7.1.1.Final/standalone-node1

Box-2 : 20.20.20.20
/home/user/jboss-as-7.1.1.Final/standalone-node2
Now run the commands below to start both JBoss nodes in a cluster.

Note: Keep the following things in mind:
  • Both nodes should have the same multicast address
  • Both nodes should have different node names
  • Each node should be bound to the IP_ADDRESS or HOST_NAME of its own box

Node1 on Box-1 [10.10.10.10]
./standalone.sh -c standalone-ha.xml -b 10.10.10.10 -u 230.0.0.4 -Djboss.server.base.dir=../standalone-node1 -Djboss.node.name=node1

Node2 on Box-2 [20.20.20.20]
./standalone.sh -c standalone-ha.xml -b 20.20.20.20 -u 230.0.0.4 -Djboss.server.base.dir=../standalone-node2 -Djboss.node.name=node2
Here we do not have to worry about port conflicts, since the nodes run on different boxes with different binding addresses.

Repeat steps 3 and 4 of Scenario 1 and you will see the same cluster view in each running node's prompt.
If you want to run multiple clusters, make sure you give each cluster a different multicast address (i.e. the -u option).

Domain mode


Scenario 1: Cluster on same box

Now in the “/home/user/jboss-as-7.1.1.Final/domain/configuration/domain.xml” file make the changes below, which add a new server group (i.e. ha-server-group) that uses the ha profile and the ha-sockets socket binding group, where ha means cluster-enabled.

<server-groups>
<server-group name="ha-server-group" profile="ha">
<jvm name="default">
<heap size="64m" max-size="512m"/>
</jvm>
<socket-binding-group ref="ha-sockets"/>
</server-group>
.
.
</server-groups>

Where:
  • profile: tells which type of profile is used (i.e. web, messaging, cluster, full)
  • socket-binding-group: tells which protocols are used (i.e. web [http, ajp], messaging, jgroups [udp, tcp], full)
  • server-group: tells which profile and which type of sockets are used

You just have to add the proxy-list attribute to the mod-cluster-config of the modcluster subsystem in the ha profile element of domain.xml. It holds the IP address and port on which your Apache server is running, so that the JBoss server can communicate with it:

<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
<mod-cluster-config advertise-socket="modcluster" proxy-list="1.1.1.1:80">
.
.
.
</mod-cluster-config>
</subsystem>

After that, make the changes below in the “/home/user/jboss-as-7.1.1.Final/domain/configuration/host.xml” file. They add two new JBoss nodes with the names ha-server-1 and ha-server-2, which use the ha-server-group server group created in the previous step, making these servers cluster-enabled.

<servers>
<server name="ha-server-1" group="ha-server-group" auto-start="true">
<socket-bindings port-offset="100"/>
</server>
<server name="ha-server-2" group="ha-server-group" auto-start="true">
<socket-bindings port-offset="200"/>
</server>
.
.
</servers>

Note: Give a unique name and port offset to each of these servers, as both run on the same box.

Create a Management User using the add-user.sh script as shown below. This is done so that we can access the admin console.

bin]$ ./add-user.sh 

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): a

Enter the details of the new user to add.
Realm (ManagementRealm) :
Username : testuser
Password : testpassword
Re-enter Password : testpassword
About to add user 'testuser' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'testuser' to file '/home/user/jboss-as-7.1.1.Final/standalone/configuration/mgmt-users.properties'
Added user 'testuser' to file '/home/user/jboss-as-7.1.1.Final/domain/configuration/mgmt-users.properties'

Once everything is done, start your server using the command below. You will not yet see nodes ha-server-1 and ha-server-2 in a cluster; for that you have to deploy an application that has the distributable tag in its web.xml.
bin]$ ./domain.sh

Now download one of our sample clustered applications by clicking here and deploy it from the admin console at the URL “http://localhost:9990/console”.


Just after deploying application and adding it to ha-server-group you would see the below cluster view in the prompt in which the domain is running.

[Server:ha-server-2] 15:12:33,971 INFO  [org.jboss.web] (MSC service thread 1-1) JBAS018210: Registering web context: /ClusterWebApp
[Server:ha-server-2] 15:12:34,239 INFO  [org.jboss.as.clustering.impl.CoreGroupCommunicationService.lifecycle.web] (Incoming-1,null) JBAS010247: New cluster view for partition web (id: 1, delta: 1, merge: false) : [master:ha-server-2/web, master:ha-server-1/web]
[Server:ha-server-2] 15:12:34,242 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,null) ISPN000094: Received new cluster view: [master:ha-server-2/web|1] [master:ha-server-2/web, master:ha-server-1/web]
.
.
[Server:ha-server-1] 15:12:34,377 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-14-thread-3) ISPN000078: Starting JGroups Channel
[Server:ha-server-1] 15:12:34,378 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-14-thread-3) ISPN000094: Received new cluster view: [master:ha-server-2/web|1] [master:ha-server-2/web, master:ha-server-1/web]

Troubleshooting for SELinux

Adding policy for mod_cluster under SELinux

mod_cluster needs to open ports and create shared memory and files, therefore some permissions have to be added; you need to configure something like:

policy_module(mod_cluster, 1.0)
require {
        type unconfined_java_t;
        type httpd_log_t;
        type httpd_t;
        type http_port_t;
        class udp_socket node_bind;
        class file write;
}
#============= httpd_t ==============
allow httpd_t httpd_log_t:file write;
corenet_tcp_bind_generic_port(httpd_t)
corenet_tcp_bind_soundd_port(httpd_t)
corenet_udp_bind_generic_port(httpd_t)
corenet_udp_bind_http_port(httpd_t)
#============= unconfined_java_t ==============
allow unconfined_java_t http_port_t:udp_socket node_bind;


Put the above in a file, for example mod_cluster.te, and generate the mod_cluster.pp file:

# make -f /usr/share/selinux/devel/Makefile
Compiling targeted mod_cluster module
/usr/bin/checkmodule:  loading policy configuration from tmp/mod_cluster.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 14) to tmp/mod_cluster.mod
Creating targeted mod_cluster.pp policy package
rm tmp/mod_cluster.mod.fc tmp/mod_cluster.mod

The mod_cluster.pp file should then be processed by semodule as root:
# semodule -i mod_cluster.pp
You can confirm that the policy has been added:
# semodule -l

Adding AJP port using semanage

The default port for AJP is 8009. When we configure a number of cluster nodes using mod_cluster in JBoss, each node gets its own AJP port, usually derived from the offset value. For example, if you are running cluster node 1 with offset 100, then your AJP port will be 8009 + 100 = 8109.
SELinux will not allow Apache to connect to this AJP port, so we need to add the port using the following command.

Port contexts
Allow Apache to listen on tcp port 8109
#  semanage port -a -t http_port_t -p tcp 8109
See the newly added port with the following command:
#  semanage port -l | grep http
You will need to add all the AJP ports, based on the number of cluster nodes you have configured.



Monday 6 May 2013

Nagios monitoring for Jboss AS 7.x - jboss2nagios


jboss2nagios


Integrate JBoss into Nagios monitoring through a small MBean and a Perl-based Nagios plugin. It lets you read and monitor JMX values from JBoss servers very efficiently. No JDK or JBoss installation is needed on the Nagios server. This JBoss SAR MBean and Perl plugin are compatible with JBoss 7.1.1.Final. They allow you to monitor the following:


  1. HeapMemoryUsage
  2. Non heap memory usage
  3. CPU usage
  4. Take heap dump



Installation:


  • Copy the collector.sar (from the mbean/ directory) to your JBoss deploy directory. Port 5566 is then open for the plugin to access it.
  • Copy the plugin check_mbean_collector (from the plugin/ directory) to the Nagios plugin directory on the Nagios server.
  • Edit your Nagios config to use the check_mbean_collector to monitor any attributes of any MBean.

You can check that the plugin and the MBean are working properly by doing a test run on the Nagios server:

./check_mbean_collector -H jbossserver -p 5566 -m java.lang:type=Memory -a HeapMemoryUsage -w 70 -c 90

Please note that you need the nagios-plugins package installed, and of course replace "jbossserver" above with your server name.

Plugin usage:

Retrieve some MBean attribute value from a JBoss server through the collector MBean: check_mbean_collector -H host -p port -m mbean_name -a attribute_name -w warning_level -c critical_level


Usage:

check_mbean_collector -H host[,host,..] -p port -m mbean-name -a attribute-name -w warning-level -c critical-level
check_mbean_collector [-h | --help]
check_mbean_collector [-V | --version]


  • [host] The server running JBoss. Giving a comma separated list of hosts switches to a check for a singleton in a cluster.
  • [port] The port the deployed collector MBean is listening to
  • [mbean_name] The JMX name of the MBean that includes the attribute, e.g. java.lang:type=Memory. Use the ${some.env} notation to refer to a JVM environment variable on the server. In Nagios config files this must be escaped like this: $$\{some.env}
  • [attribute_name] The name of the MBean attribute to retrieve, e.g. HeapMemoryUsage Prefix with * to get the difference between two calls (delta). ${...} can be used.
  • [warning_level] The level as a number from which on the WARNING status should be set
  • [critical_level] The level as a number from which on the CRITICAL status should be set

Commands

$cd /usr/local/nagios/libexec/


  1. HeapMemoryUsage : ./check_mbean_collector -H jbossserver -p 5566 -m java.lang:type=Memory -a HeapMemoryUsage -w 70 -c 90
  2. Non heap memory usage : ./check_mbean_collector -H jbossserver -p 5566 -m java.lang:type=Memory -a NonHeapMemoryUsage -w 70 -c 90
  3. CPU usage : ./check_mbean_collector -H jbossserver -p 5566 -m java.lang:type=OperatingSystem -a CPUUsage -w 70 -c 90
  4. Take heap dump : ./heap_util -H jbossserver -p 5566 -m com.sun.management:type=HotSpotDiagnostic

Source

The original project has been modified and made compatible with JBoss AS 7.x. It has also been tested with JBoss EAP 6. https://github.com/manishdevraj/jboss2nagios/


SourceForge Project

The project is hosted on SourceForge and has its own project homepage:
https://sourceforge.net/projects/jboss2nagios/

Reference : http://sourceforge.net/projects/jboss2nagios

Friday 4 January 2013

My Fellow Coders

Do you remember the last time you had to clean up someone else’s CRAPPY code? I am sure at least once in our lifetime we have had to deal with a mess so grave that it took weeks to do what should have taken hours. Have you seen what should have been a one-line change made instead in hundreds of different modules? These symptoms are all too common in the life of a coder.

Ever wonder why this happens to code? Why does good code turn into bad code? We have lots of explanations for it.

Obvious as it may sound, we complain that the requirements changed so quickly that the original design did not comply. We blame the schedules that were too tight to do things right.

We blather about stupid managers and intolerant customers and useless marketing people.  But the fault, my dear friends, is not in our stars, but in us.

“We are unprofessional!!”

This may be a bitter pill to swallow. How could this mess be our fault? What about the requirements? What about the schedule? What about the stupid managers and the useless marketing people? Don’t they share some of the blame?

NO. The managers and marketing people look to us for the information they can use to make promises and commitments to clients; and even when they don’t look to us, we should not be shy about telling them what we think. The end users and clients look to us to validate the requirements and how they will fit into the system. The project managers look to us to help work out the schedule and project plan. We are deeply involved in the planning of the project and share a great deal of the responsibility for any failures; especially if those failures have to do with bad code!

“But wait!” you say. “If I don’t do what my manager says, I’ll be fired.” Probably not.

Most managers want the truth, even when they don’t act like it. Most managers want good code, even when they are obsessing about the schedule.

They may defend the schedule and requirements with passion; but that’s their job. It’s your job to defend the code with equal passion.

To drive this point home, what if you were a doctor and had a patient who demanded that you stop all the silly hand-washing in preparation for surgery because it was taking too much time?  Clearly the patient is the boss; and yet the doctor should absolutely refuse to comply. Why? Because the doctor knows more than the patient about the risks of disease and infection. It would be unprofessional (never mind criminal) for the doctor to comply with the patient.

So too it is unprofessional for programmers to bend to the will of managers who don’t understand the risks of making messes.

Tuesday 2 October 2012

Key to successful Software Development

One fine day, quite bored with my regular work, I went across to my colleague Niraj Khatmode’s desk to discuss the paper he was going to present at a conference, but I was distracted by a book he had been reading on his PC before I got there. I asked what the book was about, and as he explained, it fascinated me enough to read it. The book was "Extreme Programming Explained" by Kent Beck. As a programmer, its title really impressed me.
I have been a programmer for 7 years now, and all these years I have been evolving as a software developer; in this evolution I have gone through some good, bad, and worse phases of the software development life cycle. All these experiences have taught me certain things about the software development life cycle (SDLC). A chapter from the book, “Four variables”, talks about those elements.
The elements are straightforward and very much dependent on each other. The right dose of each of them leads to successful and satisfying software development.
  • Cost
  • Time
  • Quality
  • Scope
When I look back and relate these elements to my past projects, I can see the reasons behind the good, bad, and worse situations that happened. Software development is a collaborative effort between the customer, the manager, and the executing team, and usually the customer and the manager are the forces that manipulate the above elements. However, it’s difficult to control all four elements together, because if one increases or decreases, the other elements are adversely affected. In practice, customers and managers get to control any three of the elements, while the development team has to deal with the resultant value of the fourth. To prove this, let’s say the customer and manager try to control all four elements by deciding that "you are going to get all these requirements done by the first of next month, with exactly this team, at a quality that is up to the usual standard". When this happens, quality always goes down, since nobody does good work under too much stress. Time is also likely to go out of control. At the end, what you get is crappy software, late.
The solution is to make these four elements visible to everyone (customer, manager, and team). This way all can decide and can consciously choose which three elements to control. If they don't like the result implied for the fourth element, they can always change the combination of these elements.
There is not a simple relationship between the four elements. For example, you can't just get software faster by spending more money. As the saying goes, "Nine women cannot make a baby in one month." and on the contrary, eighteen women still can't make a baby in one month.
In many ways, cost is the most constrained variable. You can't just spend your way to quality, or scope, or short release cycles. In fact, at the beginning of a project, you can't spend much at all. The investment has to start small and grow over time. After a while, you can productively spend more and more money. The time element is often out of the hands of the project manager and in the hands of the customer. Quality is another strange element. Often, by insisting on better quality you can get projects done sooner, or you can get more done in a given amount of time. There is a human effect from quality. Everybody wants to do a good job, and they work much better if they feel they are doing good work.
Now I have started believing that the one element that drives control over cost, time, and quality is scope. As a software developer, I should give more importance to the scope element, because if I change, eliminate, or manage the scope, the managers and customers get better control over cost, quality, and time. Developers often complain about requirements not being clear, and about a discrepancy between what we gave customers and what they actually wanted; this is an absolute truth of software development. On the other hand, if as a software developer you start thinking from the customer's side, you will agree that developing a piece of software changes its own requirements: when customers see the first release, they learn what they want in the second release, or what they really wanted in the first. It’s a learning curve, and it’s valuable learning, because it couldn't possibly have taken place based on speculation. It is learning that can only come from experience. But customers can't get there alone. They need people who can program, not as guides, but as companions.
What if as a software developer I see the "softness" of requirements as an opportunity, not a problem?  Then I can choose to see scope as the easiest of the four elements to control because, I can shape it, a little this way, a little that way. If time gets tight toward a release date, there is always something that can be deferred to the next release.
All these controlling elements would make up-to great software and I believe all this can be possible through the agile way of development.

Tuesday 26 June 2012

Portal Development Basics Part 1

Portal Development Basics Part 1


If you have been part of application development, or even just an application user, for quite a long time now, then you must have sensed that the heart of any application is “content”. As things advanced we realized that “user experience” is important as well. The lack of a standard approach and technology to address user-experience requirements, such as personalization, customization, and content aggregation in web applications, led to ad hoc ways of implementing these features. The result was maintenance nightmares, lost developer productivity, and longer turnaround time for incorporating new features.

Evolving Java portlet technology allows us to quickly develop web applications that provide service orchestration. Java portlet technology isn’t a standalone technique; it uses traditional JSPs and servlets to build applications alongside portlets. On top of that, MVC frameworks like Spring MVC Portlet allow you to develop individual portlets using the Spring technology. This blog should help you understand the fundamental need for portal development and get familiar with its different aspects. The next blog will talk more technically about the different standards of Java portlet technology: JSR 168 (Java Portlet Specification V1.0) and JSR 286 (Java Portlet Specification V2.0).

Portal Overview


A portal is a web-based application that comprises many small web applications called “portlets”. The primary goal of a portal system is to facilitate personalization, authentication, and content aggregation from different sources, and to host the presentation layer of information systems. Aggregation is the action of integrating content from different sources within a web page. A portal may have sophisticated personalization features to provide customized content to users.
To get a feel for portals, you can visit the iGoogle portal (http://www.google.com/ig).  In the iGoogle portal you can see portlets showing emails from Gmail, headlines from CNN, content from YouTube, and so on.

Portlets are not widgets


Looking at the iGoogle example, you might wonder what makes portlets different from the web widgets we know today. A developer can quickly build a widget with some knowledge of JavaScript and XML, whereas Java portlets have a steeper learning curve. Portlets are well suited to the medium-to-complex requirements found in enterprise applications.

Why use a portal?


The quick answer: when you want to build a service-oriented architecture (SOA). Because portlets represent services and are pluggable components, you get plug-and-play behavior with them. And because portlets can interact with each other at the user-interface layer (a process referred to as inter-portlet communication), they play a major role in developing SOA applications. The portlet container is responsible for handling the communication between portlets.
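As a rough illustration of container-mediated inter-portlet communication, here is a self-contained sketch in which portlets exchange events only through the container and never call each other directly. The types below are hypothetical simplifications invented for this example; in JSR 286 the equivalent roles are played by event declarations in `portlet.xml` and the `processEvent()` callback.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical event-receiving portlet; loosely mirrors JSR 286's processEvent().
interface EventPortlet {
    void processEvent(String name, Object value);
}

// Simplified container: portlets register interest in events, and the
// container (not the portlets) handles delivery between them.
class PortletContainer {
    private final Map<String, List<EventPortlet>> subscribers = new HashMap<>();

    void subscribe(String eventName, EventPortlet p) {
        subscribers.computeIfAbsent(eventName, k -> new ArrayList<>()).add(p);
    }

    void publish(String eventName, Object value) {
        for (EventPortlet p : subscribers.getOrDefault(eventName, List.of())) {
            p.processEvent(eventName, value);
        }
    }
}

// Example consumer: a weather portlet that reacts when another portlet
// (say, a map portlet) publishes a city selection.
class WeatherPortlet implements EventPortlet {
    String currentCity = "none";

    public void processEvent(String name, Object value) {
        if ("citySelected".equals(name)) {
            currentCity = (String) value;
        }
    }
}

public class IpcDemo {
    public static void main(String[] args) {
        PortletContainer container = new PortletContainer();
        WeatherPortlet weather = new WeatherPortlet();
        container.subscribe("citySelected", weather);
        container.publish("citySelected", "Pune"); // e.g. fired by a map portlet
        System.out.println(weather.currentCity);
    }
}
```

The design point is the indirection: because the container brokers the events, portlets remain pluggable services that can be added or removed without rewiring their neighbors.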

The portal infrastructure, which includes a portlet container and portal server, provides portlet instance lifecycle management, instance pooling, content caching, security, a consistent look and feel, and single sign-on features to your web portal.
Let's take the example of the iGoogle portal we talked about. As a user you frequently need to check your email, browse videos on YouTube, glance at the news, and keep up with technology trends. Each of these web applications has its own business requirements and data sources, and you would normally have to visit each one individually. A single point of entry with single sign-on, giving access to all of these applications, would be a better solution.

Let’s say that as an employee of an organization you need to frequently access organization-specific business-to-employee (B2E) applications (like time card, help desk, knowledge management, and service request applications) so you can keep track of missing time cards, recently published articles, closed help desk tickets, and so on. These different web applications have their own data sources, and you’d usually need to go to each of these different applications to access this information. Certainly this isn’t an optimal way of accessing information and services, because you need to go to different web applications and authenticate each time. An intranet site that provides a single sign-on feature and access to all these different applications would be a better solution.

Even with single sign-on providing easy access to the B2E applications, you still need to filter for the information that interests you. Suppose you want to follow Java technology trends only: you would have to search for Java articles in each application separately. Individual applications may manage some level of user preference as personalization, but the ideal scenario is for the user to have a unified view of all services in a single application. This is what portal development achieves, as seen in iGoogle today.

Users can personalize the portlets to change the number of emails displayed, the number of CNN headlines they want to view, the location they receive RSS feeds for, and so on. They can also drag and drop the portlet windows on the portal page to customize its layout.

Where doesn't a portal work?


Portals aren't the answer to every business requirement; organizations should consider carefully whether there is a business case for developing a portal. If the requirement doesn't involve gathering content from distinct information systems or loosely integrating disparate systems, the business should consider developing independent web applications instead. The personalization and customization features of portals are important from the user's perspective; from the business's perspective, the most important requirement to consider is content aggregation.

Reference


Portlet In Action

Sunday 2 October 2011

Kindle Fire opened an Open source model for mobile



With the rise of open source software platforms, everyone was convinced that the open source model would never go down. But at the start of 2007, Apple announced its innovative new mobile technology: the iPhone. Apple came in with an appealing and winning model, most agreed.
What it also brought was a closed, proprietary model that seemed to leave no room for an open source mobile platform. People said mobile and open source would not work together. That this market was too closed. That the carriers would not allow any openness.


It turned out to be false, because Google made Android open source and it became the fastest-growing OS of all time, passing Apple at a speed nobody expected. Here was a ray of hope for mobile to go open source again, though not everyone was optimistic about Android at the time.
Today we hear about the Kindle Fire, and I am confident it is going to be Apple's biggest competitor in the mobile space. I wouldn't use the term "kill", as it is not going to suddenly erase Apple's market; the two will live side by side for a while. The good question to ask is: how did Amazon manage to challenge Apple in a jiffy?


Open Source.


Think about it. Amazon took an open source mobile operating system (Android), forked it, changed the UI a little and added a few apps. With that, it is going to build the most successful tablet of our times.


Amazon wouldn't have stood a chance without open source. It gave them speed and time to market. It gave them a stable, high-quality platform, and the ability to compete and innovate. And maybe the biggest advantage of all: an enormous developer community. All things we always said open source would bring to the table, making a huge difference.


The Kindle Fire is yet another demonstration of the power of open source. The innovation is not going to stop here; expect even greater things in the future. With the market moving toward HTML5 and the open web, we'll be talking about the dominance of open source and openness again and again.


And yes, open source in mobile is doing way better than in the PC world. And yes, it is clearly showing to be the winning model.