Tuesday, 29 October 2013

More maps and reduction.

Maps and reductions are useful in a variety of situations beyond just simple math. After all, in any case where a collection of objects can be transformed into different objects (or values) and then collected into a single value, map and reduce operations work. The map operation, for example, can serve as an extraction or projection operation that takes an object and pulls out portions of it, such as extracting the last name out of a Person object. Once the last names have been retrieved from the Person stream, the reduction can concatenate the strings together, for example to build an XML representation of the data:

String xml =
    "<people data='lastname'>" +
    people.stream()
          .map(it -> "<person>" + it.getLastName() + "</person>")
          .reduce("", String::concat) +
    "</people>";
System.out.println(xml);

And, naturally, if different formats are required, different operations can be used to control the contents of each one, supplied either ad hoc or from methods defined on other classes, such as the Person class itself. Such a method can then be used as part of the map() operation to transform the stream of Person objects into, say, a JSON array of object elements. In that version, a ternary operation in the middle of the reduce operation is needed to avoid putting a comma in front of the first Person serialized to JSON. Some JSON parsers might accept that leading comma, but that is not guaranteed, and it looks ugly to have it there. It is ugly enough, in fact, to be worth fixing. The code is actually a lot easier to write if we use the built-in Collector interface and its partner Collectors, which do exactly this kind of mutable-reduction operation. This has the added benefit of being much faster than the versions using the explicit reduce and String::concat from the earlier examples, so it’s generally a better bet. Oh, and lest we forget our old friend Comparator, note that Stream also has an operation to sort a stream in-flight, so a sorted JSON representation of the Person list is only one more call away, along the lines of the sketch below. This is powerful stuff.
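As a rough sketch of that idea (not the original article’s listing—the JSON field layout and the use of Collectors.joining are my own illustration, and it assumes people is a List<Person> plus the usual java.util.Comparator and java.util.stream.Collectors imports):

// Sketch: build a sorted JSON array of Person objects using sorted(),
// map(), and a joining Collector (a mutable reduction) instead of
// reduce("", String::concat).
String json =
    people.stream()
          .sorted(Comparator.comparing(Person::getLastName))   // in-flight sort
          .map(p -> "{\"lastName\":\"" + p.getLastName() + "\"}")
          .collect(Collectors.joining(",", "[", "]"));          // no leading comma to worry about
System.out.println(json);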
Parallelization.

What’s even more powerful is that these operations are entirely independent of the logic necessary to pull each object through the Stream and act on each one. The traditional for loop breaks down when attempting to iterate, map, or reduce a large collection by splitting the collection into segments that are each processed by a separate thread. The Stream API, however, already has that covered, making the parallel version of the XML or JSON map() and reduce() operations shown earlier only a slightly different operation—instead of calling stream() to obtain a Stream from the collection, use parallelStream() instead. For a collection of at least a dozen items, at least on my laptop, two threads are used to process the collection: the thread named main, which is the traditional one used to invoke the main() method of a Java class, and another thread named ForkJoinPool.commonPool-worker-1, which is obviously not of our creation.
Obviously, for a collection of a dozen items, this parallelism is hideously unnecessary, but for several hundred or more, it can be the difference between “good enough” and “needs to go faster.” Without these new methods and approaches, you would be staring at some significant code and algorithmic study. With them, you can write parallelized code literally by adding eight keystrokes (nine if you count the Shift key required to capitalize the s in stream) to the previously sequential processing. And, where necessary, a parallel Stream can be brought back to a sequential one by calling—you can probably guess—sequential() on it.
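As a sketch under the same assumptions as the earlier XML example (people being a List<Person>), the only change is how the Stream is obtained:

String xml =
    "<people data='lastname'>" +
    people.parallelStream()                      // was people.stream()
          .map(it -> "<person>" + it.getLastName() + "</person>")
          .reduce("", String::concat) +          // safe in parallel: concat is associative
    "</people>";
System.out.println(xml);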
The important thing to note is that regardless of whether the processing is better done sequentially or in parallel, the same Stream interface is used for both. The sequential or parallel implementation becomes entirely an implementation detail, which is exactly where we want it to be when working on code that focuses on business needs (and value); we don’t want to focus on the low-level details of firing up threads in thread pools and synchronizing across them.

Friday, 25 October 2013

Additional changes in Collections in Java 8

With some additional APIs on the Collection classes themselves, a variety of new and more powerful approaches and techniques open up, most often leveraging techniques drawn from the world of functional programming. No knowledge of functional programming is necessary to use them, fortunately, as long as you can open your mind to the idea that functions are just as valuable to manipulate and reuse as classes and objects are.
Comparisons. One of the drawbacks to the Comparator approach shown earlier is hidden inside the Comparator implementation. The code is actually doing two comparisons, one acting as the “dominant” comparison over the other, meaning that last names are compared first, and age is compared only if the last names are identical. If project requirements later demand that sorting be done by age first and by last names second, a new Comparator must be written—no parts of compareLastAndAge can be reused. This is where taking a more functional approach can add some powerful benefits. If we look at that comparison as two entirely separate Comparator instances, we can combine them to create the precise kind of comparison needed. Historically, writing the combination by hand has been less productive, because by the time you write the code to do the combination, it would be just as fast (if not faster) to write the multistage comparison by hand. As a matter of fact, this “I want to compare these two X things by comparing values returned to me by a method on each X” approach is such a common thing that the platform gives us the functionality out of the box. On the Comparator interface, the static comparing() method takes a function (a lambda) that extracts a comparison key out of the object and returns a Comparator that sorts based on that key. The lambda is no longer about sorting, but just about extracting the key by which the sort should be done. This is a good thing—Person shouldn’t have to think about how to sort; Person should just focus on being a Person. It gets better, though, particularly when we want to compare based on two or more of those values.
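Before that, here is a minimal sketch of the single-key case (not the article’s own listing; it assumes the Person class and people list from the earlier post):

// Sketch: Comparator.comparing() turns a key-extraction lambda into a Comparator.
Comparator<Person> byLastName = Comparator.comparing(Person::getLastName);
Collections.sort(people, byLastName);   // sorts by last name only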
Composition. As of Java 8, the Comparator interface comes with several methods to combine Comparator instances in various ways by stringing them together. For example, the Comparator.thenComparing() method takes a Comparator to use for comparison after the first one compares. So the “last name then age” comparison can now be re-created in terms of two Comparator instances, LAST and AGE (sketched after the listing below), or, if you prefer, in terms of methods rather than Comparator instances. By the way, for those who didn’t grow up using Collections.sort(), there’s now a sort() method directly on List. This is one of the neat things about the introduction of interface default methods: where we used to have to put that kind of noninheritance-based reusable behavior in static methods, now it can be hoisted up into interfaces. Similarly, if the code needs to sort the collection of Person objects by last name and then by first name, no new Comparator needs to be written, because this comparison can, again, be made of the two particular atomic comparisons, as shown below.
Collections.sort(people,
    Comparator.comparing(Person::getLastName)
              .thenComparing(Person::getFirstName));
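And the “last name then age” composition mentioned above might look something like the following sketch, assuming LAST and AGE are Comparator<Person> constants (the article’s own listing isn’t reproduced here):

// Sketch: two atomic comparators (in real code these might be constants on Person)...
Comparator<Person> LAST = Comparator.comparing(Person::getLastName);
Comparator<Person> AGE  = Comparator.comparingInt(Person::getAge);

// ...composed into "last name, then age" without writing a new Comparator by hand.
Collections.sort(people, LAST.thenComparing(AGE));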
This combinatory “connection” of methods, known as functional composition, is common in functional programming and at the heart of why functional programming is as powerful as it is. It’s important to understand that the real benefit here isn’t just in the APIs that enable us to do comparisons, but in the ability to pass bits of executable code (and then combine them in new and interesting ways) to create opportunities
for reuse and design. Comparator is just the tip of the iceberg. Lots of things can be made more flexible and powerful, particularly when combining and composing them. 

Monday, 21 October 2013

Collections and Algorithms

The Collections API has been with us since JDK 1.2, but not all parts of it have received equal attention or love from the developer community. Algorithms, a more functional-centric way of interacting with collections,
have been a part of the Collections API since its initial release, but they often get little attention, despite their usefulness. For example, the Collections class sports a dozen or so methods all designed to take a collection as a parameter and perform some operation against the collection or its contents. Consider, for example, the Person class

public class Person {
    // fields backing the getters below (omitted in the original listing)
    private final String firstName;
    private final String lastName;
    private final int age;

    public Person(String fn, String ln, int a) {
        this.firstName = fn; this.lastName = ln; this.age = a;
    }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }
}

which in turn is used by a List that holds a dozen or so Person objects. Now, assuming we want to examine or sort this list by last name and then by age, a naive approach is to write a for loop (in other words, implement the sort by hand each time we need to sort). The problem with this, of course, is that it violates DRY (the Don’t Repeat Yourself principle) and, worse, we have to reimplement it each time, because for loops are not reusable. The Collections API has a better approach: the Collections class sports a sort method that will sort the contents of the List. However, using it requires either that the Person class implement the Comparable interface (which defines a natural ordering, a default ordering for all Person instances) or that you pass in a Comparator instance to define how Person objects should be sorted. So, if we want to sort first by last name and then by age (in the event the last names are the same), we have to write a Comparator that does exactly that. But that’s a lot of work to do something as simple as sort by last name and then by age. This is exactly where the new closures feature will be of help, making it easier to write the Comparator. The Comparator is a prime example of the need for lambdas in the language: it’s one of the dozens of places where a one-off anonymous method is useful. (Bear in mind, this is probably the easiest—and weakest—benefit of lambdas. We’re essentially trading one syntax for another, admittedly terser, syntax, but even if you put this article down and walk away right now, a significant amount of code will be saved just from that terseness.) If this particular comparison is something that we use over time, we can always capture the lambda as a Comparator instance, because that is the signature of the method—in this case, "int compare(Person, Person)"—that the lambda fits, and store it on the Person class directly, making the implementation of the lambda and its use even more readable. Storing a Comparator<Person> instance on the Person class is a bit odd, though. It would make more sense to define a method that does the comparison and use that instead of a Comparator instance. Fortunately, Java will allow any method that satisfies the same signature as the method on Comparator to be used, so it’s equally possible to write the BY_LAST_AND_AGE Comparator as a standard instance or static method on Person and use it instead. Thus, even without any changes to the Collections API, lambdas are already helpful and useful. Again, if you walk away from this article right here, things are pretty good. But they’re about to get a lot better.
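As a rough sketch of the kind of code being described (the article’s own listings aren’t reproduced here, and the static method shown is an assumed addition to Person):

// Sketch: a one-off lambda Comparator that sorts by last name, then by age.
Collections.sort(people, (lhs, rhs) -> {
    int result = lhs.getLastName().compareTo(rhs.getLastName());
    return (result != 0) ? result : Integer.compare(lhs.getAge(), rhs.getAge());
});

// Or capture the same logic as a static method on Person (for example, a
// compareLastAndAge(Person, Person) method) and pass it by method reference:
Collections.sort(people, Person::compareLastAndAge);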

Tuesday, 15 October 2013

Applications for WebSocket

The Java EE 7 specification adds new functionality to the Java EE platform. One capability many developers were asking for is support for WebSockets. Real-time bidirectional traffic on the internet is growing. Different actors are involved in this area, including content providers, broadcasters, software developers, and telecom operators. Because of the different characteristics of the actors involved, the success of WebSockets required standardization. The Internet Engineering Task Force (IETF) defined a standard for the WebSocket protocol (RFC 6455). This standard defines the low-level protocol that technologies implementing WebSockets should adhere to. For example, it defines how an HTTP connection should be upgraded to a full-duplex bidirectional WebSocket connection. The importance of this standard cannot be overstated. Different languages and platforms are used to develop applications relying on WebSockets, and they should all rely on the same protocol in order to be interoperable. On top of the WebSocket protocol, a number of technologies and implementations exist that facilitate the use of WebSockets in a specific language or platform. For example, the W3C has a working draft describing how to leverage WebSockets from a web page, and as of Java EE 7, a similar specification exists for dealing with WebSockets in Java. This specification is defined in JSR 356 and was approved to be part of the Java EE 7 specification. The Java API for WebSocket contains a server API and a client API. Containers that claim to be Java EE 7–compliant implement the server API. The only real difference between a server container and a client container is that a server container provides the infrastructure for registering WebSocket endpoints—it will listen for incoming requests on specific endpoints. The client API is, therefore, a subset of the server API. The Reference Implementation for JSR 356, named Tyrus, is included in GlassFish 4 and contains client modules as well as server modules. The client modules can be used in any Java application and allow developers to connect to any WebSocket endpoint, as long as the endpoint adheres to the IETF RFC 6455 standard. On top of the client modules, the server modules allow applications to register endpoints that will handle all WebSocket communications, as long as the other peer (the client) adheres to the IETF RFC 6455 standard.
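As a minimal sketch of the annotation-driven server API (the /echo path and class name are illustrative assumptions, not from the original article), a JSR 356 server endpoint can be as small as:

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Sketch: a server endpoint registered at ws://<host>:<port>/<context>/echo.
// The container discovers the annotation and handles the HTTP upgrade handshake.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    // The return value is sent back to the peer as a text message.
    @OnMessage
    public String onMessage(String message) {
        return "Echo: " + message;
    }
}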

Monday, 14 October 2013

How developers are using the cloud

Several improvements in Java EE 7 are relevant. Obviously, the new batch specification, JSR 352, is tremendously valuable, and not only for cloud computing. It’s able to take significant amounts of work that might otherwise have to be done synchronously, break it down into smaller units, and perform those asynchronously. In addition, the Java EE role definitions have been updated to better map to the security requirements we see in cloud environments, such as the difference between someone administering the platform as a service [PaaS], on which the application is running, versus administering the application itself that is being hosted. Software also works better in a cloud environment when we get rid of certain assumptions. One way to do this is through dependency injection, which was introduced in Java EE 5 and, as of Java EE 7, now applies across the entire set of Java enterprise specifications and is called Contexts and Dependency Injection [CDI]. Instead of going out and finding what you need as a component, you simply declare what you need. If you need to connect to a database, the component declares that it needs to be connected to a database and then allows whatever environment it’s running in to provide it with a connection to that database based on how that environment is configured (see the sketch at the end of this post). Infrastructure as a service [IaaS] manages the requisite networking and server hardware, and it typically hosts virtual machines, each of which is running an operating system—all of which is below the level of Java EE. We imagine that every data center, every private cloud, and every public cloud will be providing IaaS: infrastructure services that platforms and applications can take advantage of through standardized IaaS APIs. Generally speaking, what developers are looking for is PaaS. Developers need to be able to describe an application hosting environment: an application server or a cluster of application servers, which are often combined with a database. And they need to have their application exposed to the real world, protected by a firewall, and automatically balanced by a load balancer. In other words, they’re not interested in building or installing their own firewall, load balancer, or database. They want to be able to take an application, including necessary components and their database design, and deploy it in a way that is accessible and secure. So, think of an application that you’re deploying for a mobile platform; it’s probably going to consist of a significant number of RESTful interfaces that are exposed to a mobile platform. Those RESTful interfaces will be exposed through a URL that will come to a load balancer that spreads requests across an elastically scaled application server infrastructure, providing both high availability and elastic scalability. The application servers will be running the application logic, which will be transforming information from a database or other data services into RESTful responses that are sent back to the mobile application itself. All of the complexity of provisioning, configuring, monitoring, and scaling the application infrastructure is provided and managed by PaaS.
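A minimal sketch of the “declare what you need” idea described above, using plain Java EE resource injection of a DataSource; the class, the query, and the JNDI name jdbc/inventoryDB are assumptions for illustration, not anything from the original post:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class InventoryService {

    // The component declares that it needs a database connection;
    // the container supplies one based on how the environment is configured.
    @Resource(lookup = "jdbc/inventoryDB")
    private DataSource dataSource;

    public int countItems() throws SQLException {
        try (Connection c = dataSource.getConnection();
             Statement s = c.createStatement();
             ResultSet r = s.executeQuery("SELECT COUNT(*) FROM items")) {
            r.next();
            return r.getInt(1);
        }
    }
}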

Sunday, 13 October 2013

More focus on Clouds for Java developers

Let us take the discussion started in the previous article to another level and try to understand exactly why Java developers should focus on the cloud. First, it’s important to appreciate the wide scope of the term cloud computing. Obviously, we talk about the public cloud, including Amazon EC2 or Microsoft Azure, not to mention Oracle’s own public cloud. But cloud computing is also changing the way that we manage internal data centers—what we refer to as the private cloud. In other words, the very same concepts, efficiencies, and capabilities that the public cloud brought us are now available inside the data center—for example, basic tenets such as self-service and being able to get a development, staging, or production environment for an application in minutes—without going through any significant paperwork or human workflow. On top of that, the elasticity of the cloud means that we’re no longer constrained by a
one-size-fits-all model. Applications must be built so that they can dynamically scale out and dynamically balance the load across multiple servers. This means that when servers are added while an application is
running, there’s no interruption of service. To accomplish this, it’s good programming practice to always assume that an application or the components of an application are running in a scaled environment, which means many previous assumptions no longer apply. I’m talking about things such as file systems—which could be either local or shared and could trip up developers either way—or global Java objects on the heap, which won’t actually be “global” when the application is scaled out. Developers must also understand whether the addition of resources will actually enable an application to scale out effectively. In other words, just because an application runs correctly when you add additional servers, that doesn’t mean it will actually support more transactions or users. So, you need to explicitly design into the scale-out model the ability for the application to scale as close to linearly as possible. Obviously, data caching is a huge part of this, as is minimizing the amount of information that has to be shared across servers, as is minimizing contention
on shared resources such as database systems. Finally, the application needs to be understood in terms of the metrics of elasticity: when are additional resources actually needed, and when is it safe for resources to be taken away? So, you need to know when you need to scale out the application, which involves knowing what to monitor.

Saturday, 12 October 2013

LimeLight on Embedded Java

This article throws more light on the embedded side of Java. If you are a passionate Java developer, I’m sure this will help you understand embedded concepts better. So you’re a regular Java developer. You have done years of service developing apps on the server side and get your kicks with any number of server- and client-side APIs. You might have even developed some MIDlets for the Java ME environment. You think you’ve done it all. But suddenly, everyone seems to be talking about a new technology: embedded Java. Can Java really go so small? You thought that the era of resource-constrained programming was over. With the advances in the architecture of Android, iPhone, and Java ME devices, lack of memory was no longer an issue. Suddenly, lack of resources is a reason to celebrate. As you might have guessed, we are not talking about consumer devices (at least, not direct-to-consumer devices). We are talking about devices such as the Raspberry Pi and other microcontrollers with which you can manipulate circuit boards and small resource systems. Such microdevices allow you to manipulate and work directly with the onboard circuitry. Embedded versions of Java use the same Java technology that you work with now—except the embedded versions are bite size. For example, Oracle Java ME Embedded has a footprint smaller than that of Java ME, and it is targeted at devices that power set-top boxes, vending machines, sensors, or, well, microcontrollers. Java can be defined and adapted to different devices using either Oracle Java ME Embedded or Oracle Java SE Embedded. In this article, we look at how to do that and at how Oracle Java ME Embedded technologies are adapted to the embedded environment. Oracle Java ME Embedded is defined by the Information Module Profile-Next Generation (IMP-NG) specification (JSR 228). (There is a separate specification for Oracle Java SE Embedded.) As you might have guessed, this JSR is an extension of the really old JSR 195, which was—not surprisingly—called the IMP specification. That JSR never got off the ground, but adding the “next generation” bit has done wonders. Or it could be that the time is right for the new specification. An Information Module Profile is a strict subset of the Mobile Information Device Profile (MIDP), which you are probably well acquainted with. So, just as we created MIDlets using MIDP, we need a new name for the applications that we create with IMP-NG.

Thursday, 10 October 2013

Java developers in need of cloud computing

Cloud computing is an emerging field, so let us discuss why Java developers should focus on it. First, it’s important to appreciate the wide scope of the term cloud computing. Obviously, we talk about the public cloud, including Amazon EC2 or Microsoft Azure, not to mention Oracle’s own public cloud. But cloud computing is also changing the way that we manage internal data centers—what we refer to as the private cloud. In other words, the very same concepts, efficiencies, and capabilities that the public cloud brought us are now available inside the data center—for example, basic tenets such as self-service and being able to get a development, staging, or production environment for an application in minutes—without going through any significant paperwork or human workflow. On top of that, the elasticity of the cloud means that we’re no longer constrained by a one-size-fits-all model. Applications must be built so that they can dynamically scale out and dynamically balance the load across multiple servers. This means that when servers are added while an application is running, there’s no interruption of service. To accomplish this, it’s good programming practice to always assume that an application or the components of an application are running in a scaled environment, which means many previous assumptions no longer apply. I’m talking about things such as file systems—which could be either local or shared and could trip up developers either way—or global Java objects on the heap, which won’t actually be “global” when the application is scaled out. Developers must also understand whether the addition of resources will actually enable an application to scale out effectively. In other words, just because an application runs correctly when you add additional servers, that doesn’t mean it will actually support more transactions or users. So, you need to explicitly design into the scale-out model the ability for the application to scale as close to linearly as possible. Obviously, data caching is a huge part of this, as is minimizing the amount of information that has to be shared across servers, as is minimizing contention on shared resources such as database systems. Finally, the application needs to be understood in terms of the metrics of elasticity: when are additional resources actually needed, and when is it safe for resources to be taken away? So, we need to know when we need to scale out the application, which involves knowing what to monitor.

Wednesday, 9 October 2013

Oracle Java Cloud Service

Let us discuss some more of the features that draw users toward Oracle’s platform as a service. A simplistic explanation of Oracle Java Cloud Service is that it’s Oracle WebLogic Server integrated with Oracle Database. So developing and deploying on Oracle Java Cloud Service is akin to developing and deploying on Oracle WebLogic Server and using Oracle Database for persistence. Oracle Java Cloud Service runs Oracle WebLogic Server Release 10.3.6, which is the latest version in the Oracle WebLogic Server 11g line. Oracle Java Cloud Service drastically reduces the complexity associated with the deployment and maintenance of enterprise Java applications. Oracle Java Cloud Service supports a mix of Java EE 5, Java EE 6, and Oracle WebLogic Server capabilities. It supports all the commonly used Java technologies, such as Servlets, JSP, JavaServer Faces (JSF), Enterprise JavaBeans (EJB), Java Persistence API (JPA), JAX-RS, JAX-WS, and more. However, while Servlet version 2.5 (Java EE 5) is supported, there’s support for JSF 2 (Java EE 6). Also, note that today Oracle Java Cloud Service supports Java 6 APIs but not Java 7 APIs. So Oracle Java Cloud Service is more of a Java EE mix-and-match and not the same as having a full Java EE 5 or Java EE 6 server on the cloud. Beyond the standard Java EE specifications, Oracle Java Cloud Service supports the deployment of applications that make use of Oracle WebLogic Server–specific extensions as well as Oracle Application Development Framework constructs. One of the highlights of Oracle Java Cloud Service is that, unlike some other Java PaaS vendors, it puts great emphasis on being a standards-based solution. So Oracle Java Cloud Service does not force users to use any proprietary APIs. You can develop and deploy on Oracle Java Cloud Service while sticking purely to the relevant Java EE specification. Oracle Java Cloud Service is easy to get started with for anyone with a Java EE background who is familiar with deploying applications on an application server. Considering that it is a Java-only cloud setup, there’s also nothing that would seem strange or confusing to a Java EE developer. I also like the fact that, unlike with some vendors, Oracle Java Cloud Service does not have its own jargon for how it is priced and packaged. The Oracle Java Cloud Service Software Development Kit (SDK) provides tools to help you develop, deploy, and manage your applications. Note that this SDK is not meant to provide classes and libraries that you have to use in your applications. It merely consists of tools and plug-ins that you can choose to use or not use. Oracle Java Cloud Service offers rich integration with popular Java IDEs, such as NetBeans, Eclipse, and Oracle JDeveloper, all of which leverage the SDK “under the hood” to interact with Oracle Java Cloud Service. The SDK also includes Ant tasks and Maven plug-ins for interacting with your Oracle Java Cloud Service instances.

Friday, 4 October 2013

Oracle's new cloud services.

Oracle has started several new cloud services today, including Java as a service. Oracle’s cloud push began in 2011, and since then Oracle has launched several cloud solutions that support more than 25 million cloud users worldwide. Oracle Java Cloud Service and Oracle Database Cloud Service have been Oracle’s most visible PaaS solutions so far. Oracle’s other PaaS offerings are Oracle Developer Cloud Service, Oracle
Storage Cloud Service, and Oracle Messaging Cloud Service. Oracle Developer Cloud Service simplifies development with an automatically provisioned development platform that supports the complete
development lifecycle. Oracle Storage Cloud Service enables businesses to store and manage digital content in the cloud. Oracle Messaging Cloud Service provides an infrastructure that enables communication between software components by sending and receiving messages via a single messaging API, establishing a dynamic, automated business workflow environment.

Thursday, 3 October 2013

Key Considerations for platform as a service.

Platform as a service is an emerging area, and below are the key considerations when choosing Java as a software development platform delivered as a service. There’s a fair bit of overlap between the features that available cloud services offer, but the key points to consider from a software development platform point of view are as follows:
■ Costs and pricing strategies vary widely across vendors. Some charge based on fine-grained usage details, while others provide duration-based subscriptions. You need to evaluate whether you would like to go with a subscription or with a pay-as-you-go model.
■ Are the supported technologies and features in line with your requirements? Is your chosen framework officially supported by the cloud vendor? Which version is supported?
■ Is the vendor sticking to standard technologies, or would you need to write custom, vendor-specific code?
■ Considering that you are putting your precious application and data on the vendor’s hardware, you want to be sure about the vendor’s credentials and ability to be up and running, say, 10 years from now.
■ With PaaS, you do not have access to the actual hardware setup or micro details about hardware performance. So you need to evaluate the administration dashboard carefully, because it’s the primary source of information about the service and about how an application is performing.
■ Is the PaaS solution integrated with your favorite Java integrated development environments (IDEs)?
■ Ease of use is quite important because some cloud services can be rather confusing and, at times, even intimidating. I found this especially true with services that support not just Java but many other technologies as well.
■ Most cloud vendors support at least one SQL data store and, in some cases, a NoSQL data store as well. You need to examine whether these work for you.
■ Is the vendor offering a closed stack that would lock you in? Would it be possible for you to migrate to a new vendor if the need arises?
■ While some vendors are focused Java cloud players, there are others that support multiple technologies. This seems to affect the features, the documentation, the ease of use, and the overall priority areas for the service. Some clouds offer Java support, but they just don’t come across like they are talking about Java.
■ How difficult would it be to build a team capable of developing and deploying for a particular PaaS?

Java PaaS

PaaS is about renting a software platform and running a custom business application on it. The promise of PaaS is to let developers focus on the business application and not have to worry about the hardware or the core software platform. Gartner famously stated that 2011 would be the year of PaaS. However, my many unscientific surveys at conferences in 2011 and 2012 showed that while there was great interest and experimentation happening with PaaS, actual adoption was pretty low. This was due to multiple factors, such as PaaS offerings not being mature enough, developer uncertainty, and managers being unwilling to change to a new shared model. Having said that, the various PaaS offerings have matured rapidly over the past year or so. Many now provide services comparable to what developers have been used to in on-premises Java EE environments. Now, some vendors don’t just provide a deployment environment; they even provide a rich development environment on the cloud. There’s healthy competition building up in this space, which should be good news for developers and customers. So while Java PaaS might not yet be the norm, adoption looks certain to keep rising at an ever-increasing pace.

Tuesday, 1 October 2013

Java - Platform as a Service

Java EE has been the primary software platform for enterprise and server-side development for more than a decade, and it is increasingly the platform of choice on the cloud. In this article, we will look at the Java cloud space and how you can go about choosing a Java platform-as-a-service (PaaS) provider, and then take a closer look at Oracle Java Cloud Service. Only a few years back, when someone discussed a Java EE project, it was presumed that the project would also require setting up the requisite hardware infrastructure and having a team to manage and monitor the setup. Java EE was never great at shared hosting, but nobody seemed to care much about that, because shared hosting was almost considered below the dignity of Java EE. It used to be blasphemous for an architect to suggest that a Java EE application could be run in a shared environment. The cloud wave turned this approach on its head. Not only is a shared environment now being considered, it is actually fashionable to be talking about running an application in a shared environment on the cloud. Java as yet has no cloud-centric specifications in place, so the Java cloud space isn’t as standardized as developers might expect. Yet we find that Java is being used in all kinds of cloud deployments, especially PaaS offerings and software-as-a-service (SaaS) solutions built with Java.