Let us take the discussion started in the previous article to the next level and try to understand exactly why Java developers focus on the cloud. First, it’s important to
appreciate the wide scope of the term cloud
computing. Obviously, we talk about the public
cloud, including Amazon EC2 or Microsoft Azure, not to mention Oracle’s own
public cloud. But cloud computing is also changing the way that we manage
internal data centers, what we refer to as the private cloud. In other words, the very same
concepts, efficiencies, and capabilities that the public cloud brought us are
now available inside the data center—for example, basic tenets such as
self-service and being able to get a development, staging, or production
environment for an application in minutes—without going through any significant
paperwork or human workflow. On top of that, the elasticity of the cloud means that
we’re no longer constrained by a
one-size-fits-all
model. Applications must be built so that they can dynamically scale out and
dynamically balance the load across multiple servers.
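To make this concrete, here is a minimal sketch of round-robin load balancing across a server list that can grow while requests are being routed. The class and server names are mine, purely for illustration, not from any particular framework:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin balancer: servers can be added or removed
// while requests are in flight, with no interruption of service.
public class RoundRobinBalancer {
    private final List<String> servers = new CopyOnWriteArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    // CopyOnWriteArrayList lets readers proceed without locking
    // while the membership of the cluster changes underneath them.
    public void addServer(String server) { servers.add(server); }
    public void removeServer(String server) { servers.remove(server); }

    public String pickServer() {
        int size = servers.size();
        if (size == 0) throw new IllegalStateException("no servers available");
        int index = Math.floorMod(next.getAndIncrement(), size);
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer();
        lb.addServer("app-1");
        lb.addServer("app-2");
        System.out.println(lb.pickServer()); // app-1
        System.out.println(lb.pickServer()); // app-2
        lb.addServer("app-3"); // added while "running": routing keeps working
        System.out.println(lb.pickServer()); // app-3
    }
}
```

The design point, not the code, is what matters: because no request handler holds state that pins it to one server, membership can change at any moment without breaking anything.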
This means that when servers are added while an application is
running,
there’s no interruption of service. To accomplish this, it’s good programming
practice to always assume that an application or the components of an
application are running in a scaled environment, which means
many previous assumptions no longer apply. I’m talking about things such as
file systems—which could be either local or shared and could trip up developers
either way—or global Java objects on the heap, which won’t actually be “global”
when the application is scaled out. Developers
must also understand whether the addition of resources will actually
enable an application to scale out effectively. In other words, just because
an application runs correctly when you add additional servers, that doesn’t
mean it will actually support more transactions or users. So you need
to explicitly design the scale-out model so that the application
scales as close to linearly as possible. Obviously, data caching is a huge part
of this, as is minimizing the amount of information that has to be shared
across servers, as is minimizing contention
on
shared resources such as database systems. Finally, the application needs
to be understood in terms of the metrics of elasticity: when are additional resources
actually needed, and when is it safe for resources to be taken away? In other
words, you need to know when to scale the application out or back in,
which means knowing what to monitor.
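As a rough single-JVM illustration, here is a sketch of turning one such metric, system load per CPU, into a scaling decision. The class name and thresholds are invented for this example; a real elastic system would aggregate metrics across the whole cluster rather than trust one JVM's local view:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Illustrative sketch only: thresholds are made-up numbers,
// and production systems decide on cluster-wide metrics.
public class ElasticityMonitor {
    static final double SCALE_OUT_LOAD = 0.75;
    static final double SCALE_IN_LOAD  = 0.25;

    enum Decision { SCALE_OUT, SCALE_IN, HOLD }

    static Decision decide(double loadPerCpu) {
        if (loadPerCpu > SCALE_OUT_LOAD) return Decision.SCALE_OUT;
        if (loadPerCpu < SCALE_IN_LOAD)  return Decision.SCALE_IN;
        return Decision.HOLD;
    }

    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // One-minute system load average, normalized per CPU; this call
        // returns a negative value on platforms that don't support it.
        double load = os.getSystemLoadAverage() / os.getAvailableProcessors();
        System.out.println("load/cpu = " + load + " -> " + decide(load));
    }
}
```

Even a toy like this makes the two questions from the text explicit: one threshold answers "when do we need more resources?" and the other answers "when is it safe to give them back?"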