Friday 1 October 2010

The Charity Cloud

ThinkGrid and appiChar are aiming to offer cloud services such as SaaS and Hosted Virtual Desktops to charitable organisations. The cloud suits such organisations, as they operate with variable headcounts and low budgets.

Read more here.

How is Uncle Sam using the cloud?

We heard Ian Osborne revealing the latest on the UK’s G-Cloud project. 

This made me curious to check out how Uncle Sam is using the cloud. The US is said to have saved $1.7m by moving its services site onto the cloud. Its army has adopted a cloud-based CRM tool, saving 90% of the cost, and its Department of the Interior (DoI) is on track to save 67% of its costs. And there is more.

Lack of cloud computing vision is hurting most enterprises

This is the view expressed by David Linthicum in his cloud computing blog. So, it is really back to basics then. Enterprises must examine their business problems and look for technology options that solve them, instead of jumping on the private cloud bandwagon.

We agree. In fact, we are trying to formulate a step-by-step process for creating a cloud computing strategy. We are sharing this message in our regional cloud workshops and webinars.

Do join us to learn more about this procedural approach on October 6th, between 2:00 and 2:30 PM BST.

Cloud Services with End To End Service Levels

Orange Business Services, Cisco, EMC and VMware have formed an alliance, called Flexible 4 Business. Their plan is to offer four cloud services with tiered end-to-end service-level agreements. 

The services being offered include: 
(1) Private cloud that can be deployed either on customers' sites or on Orange Business Services’ (OBS) network 
(2) data backup 
(3) security-as-a-service and 
(4) unified communications services.  

Good for businesses – I think. 

What attracted us is the phrase “end to end service levels”. 
But there are some immediate questions that come to my mind: 
(1) Are all of the four services fully ready and available in one go? 
(2) Are the SLAs limited to availability and data location? 
(3) How would the SLAs vary if the private cloud is deployed on the customer's site rather than on the OBS network? 

These questions need detailed investigation.

Wednesday 29 September 2010

Book Review: The Art of Scalability


The Scalable Computing Programme examines several dimensions and delivery models of scalable digital systems. Cloud computing, multi-core processors and Large Scale Complex IT Systems are the channels we have chosen to examine the process, people and technology challenges associated with truly scalable, reliable digital systems. But addressing scalability – across technologies, architectures, delivery models and organisations – is quite a challenge.
I found a book on scalability that covers the technical, human, managerial, procedural, practical and theoretical dimensions of scalability. Here is a quick overview.
The book is the first of its kind, well rounded in its approach. It is written by two of the brightest minds who have actually worked with challenging enterprise architecture models. They have built reliable, round-the-clock scalable applications at eBay, PayPal, Quigo and many more.
The book has four major sections:
  1. Staffing a Scalable Organisation: In this section, the book examines the impact of people and leadership on scalability and suggests a set of roles for a scalable technology organisation. Starting with basic definitions, the authors analyse how various elements of organisation type, management and leadership impact scalability. The section also contains practical advice on building a strong business case for scale.
  2. Building Processes For Scale: The authors emphasise that processes are critical to scale, critically examining the role of well-designed processes in managing incidents, problems, crises and escalations. They also examine how changes in production environments affect scale and need to be managed.
Having laid a strong process foundation for scale, the authors present twelve architectural principles (N+1 Design, Stateless Systems, Scale Out Not Up etc.) and six scalability principles (design to be monitored, design at least two axes of scale etc.). The true experience and maturity of the authors is visible in these chapters.
There is also an in-depth discussion on typical trade-offs – such as Build or Buy, Fast or Right and impact of each choice on scalability.
The authors provide four handles – organisational tools – to manage scalability: JAD (Joint Architecture Design), ARB (Architecture Review Board), PST (Performance and Stress Testing for Scalability) and Barrier Conditions & Rollback. There is very practical advice on using these tools and integrating them into the organisation’s process framework to achieve scale and performance.
  3. Architecting Scalable Solutions: This section begins by refreshing design principles. Here, the authors introduce concepts such as Technology Agnostic Architecture (TAA) and Technology Agnostic Design (TAD). But aren’t architecture and design technology agnostic, by definition?
The authors provide AKF Scale Cube – a three axes method to model scalability.
·    X-Axis: Represents cloning of services or data such that work can be distributed across instances. Implementation of X-axis is said to be relatively easy and inexpensive.
·    Y-Axis: Represents separation of work by responsibility, action or data. Going along Y-axis helps scaling transactions.
·    Z-Axis: Represents separation of work by customer or requestor. It is expensive to go along Z-axis.
The authors apply scale cube to explain how applications and data can be split to achieve scale. Synchronisation issues and impact of various types of caching methods on scalability are also discussed.
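As a toy illustration (my own sketch, not from the book; names such as `Request` and `route_x` are invented), the three axes can be thought of as three routing decisions:

```python
# Hypothetical sketch of routing along the three AKF Scale Cube axes.
import hashlib
from dataclasses import dataclass

@dataclass
class Request:
    customer_id: str   # used for Z-axis splits
    action: str        # used for Y-axis splits
    payload: str

def route_x(request: Request, clones: list, counter: list) -> str:
    """X-axis: clone identical instances and spread work across them (round robin)."""
    instance = clones[counter[0] % len(clones)]
    counter[0] += 1
    return instance

def route_y(request: Request, services: dict) -> str:
    """Y-axis: split by responsibility - each action type has its own service."""
    return services[request.action]

def route_z(request: Request, shards: list) -> str:
    """Z-axis: split by customer - hash the customer id onto a shard."""
    digest = int(hashlib.md5(request.customer_id.encode()).hexdigest(), 16)
    return shards[digest % len(shards)]
```

The sketch mirrors the cost ordering the authors describe: X-axis routing needs no knowledge of the request at all, Y-axis needs a service catalogue, and Z-axis needs a customer-to-shard mapping that is expensive to change later.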
  4. Solving Other Issues and Challenges: This section discusses the problems associated with the proliferation of data and how they can be handled. Also included is a detailed comparative discussion of clouds and grids. There is also a valuable chapter on designing data centres based on the Three Magic Rules of Three. Interestingly, three is a magic number for data centres!
This section, and everything else explained in this book, culminates in three case studies: eBay, Quigo and ShareThis.
The appendix consists of formulae to calculate availability, capacity, load and performance.
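The book's own formulae are not reproduced here, but as an illustration, standard availability arithmetic of this flavour looks roughly like the following (a sketch of well-known formulas; the function names are mine):

```python
# Steady-state availability from MTBF/MTTR, plus composition rules
# for serial chains and redundant (parallel) components.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a single component is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def serial(*avails: float) -> float:
    """A chain of components: all must be up, so availabilities multiply."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(*avails: float) -> float:
    """Redundant components: the system is down only if all are down."""
    down = 1.0
    for a in avails:
        down *= (1.0 - a)
    return 1.0 - down
```

This is why N+1 design (one of the book's architectural principles) pays off: two 90%-available components in parallel give 99%, while chaining them in series gives only 81%.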
It is quite an interesting book, worthy of being a text on scalable computing.
Book Details:
Full Title: The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise
Authors: Martin L. Abbott; Michael T. Fisher
Edition: 1st Edition
Publisher: Addison-Wesley Professional
Pub. Date: December 16, 2009
ISBNs: Print 0-13-703042-8 / 978-0-13-703042-2; Web 0-13-703143-2 / 978-0-13-703143-6
Pages: 592 in print edition
Rating: 4.5 out of 5 [12 ratings]
Price: Around £20 to £24

Sustainable IT: EC Code of Conduct for Data Centres

This article examines the European Commission’s Code of Conduct on Data Centres Energy Efficiency.

What is it? It is a voluntary initiative that aims to inform and stimulate data centre operators and owners to reduce energy consumption in a cost-effective manner without hampering the mission critical function of data centres.


Why do we need it? Many data centres are poorly designed, with large tolerances allowing for capacity changes and possible future expansion. They thus end up using significant power, the majority of which is consumed by redundant power supplies and cooling systems. Hence, selectively switching off IT systems does not result in significant energy savings.

Power used by data centres contributes substantially to the overall electricity consumed in the European Union commercial sector. Also, power costs are rising as a percentage of overall IT costs. Hence there is an urgent need to encourage data centres to take remedial action.

But don't we already have such measures within the industry? Yes, we do. But there is a risk of confusion, mixed messages and uncoordinated activities. A central, EC-wide Code of Conduct helps the cause.

Who should take note of this Code of Conduct? Data centre owners and operators, data centre equipment and component manufacturers, service providers, and other large procurers of such equipment.

The Code of Conduct classifies organisations as Participants (Data Centre Owners and Operators) and Endorsers (supply chain and service providers including vendors, consultancies, utilities, Government, Industry Associations/Standards bodies, educational institutions).

How does it aim to achieve its objective? By proposing general principles and practical actions to be followed by all parties involved in data centres, operating in the EU.

The Code of Conduct considers the entire data centre as a complete system. It provides guidelines and best practices for existing and new data centres covering IT Load and Facilities Load at Equipment Level and System Level.

What are the components of the Code of Conduct? It has a Secretariat and three Working Groups to establish and monitor commitments, and oversee the Code. 

The three Working Groups are:
  • Best practices. Chair: Liam Newcombe (BCS Data Centre Specialist Group). Objective: to explore and exploit energy saving opportunities in the data centre.
  • Energy efficiency metrics and measurements. Chair: Jan Viegand (Danish Energy Agency and the Danish Electricity Saving Trust). Objective: to develop a method to measure the electricity consumption and energy efficiency of data centres and server rooms.
  • Data collection and analysis. Chair: Anson Wu (UK Department for Environment, Food and Rural Affairs, DEFRA). Objective: to measure the energy consumption, calculate the energy efficiency of data centres and establish performance benchmarking.

The Secretariat is composed of representatives of the European Commission DG JRC and the chairpersons of the three working groups.

What could we do next?
  • Initially: to qualify as a Participant, submit an initial report describing the simple physical and operational characteristics of your data centre, along with the most recent month's facility and IT energy consumption details.
  • Regularly provide energy usage details to the designated working group of the Code of Conduct.
  • Within 3 years plan to meet Expected Minimum Levels of energy saving. These levels are identified out of best practices documented by the Code of Conduct.
Who is already part of the Code of Conduct? Some of the participants include: Fujitsu, HP, Intel, the Met Office, Microsoft, Business&Decision, evoswitch, Lamda Hellix Datacentres, Memset, Petroleum Geo-Services, reed.co.uk, TCN, Telecity Group, Telekom Austria, VCD and Vodafone

DSKTN View: The EC Code of Conduct provides a framework which helps data centres to target energy reductions in a more structured manner, leading to financial, environmental and infrastructure benefits. Also, flexibility and continuous improvement methods built into the code make it easier to adapt to a variety of national efficiency programmes, climates and energy infrastructures.

We suggest participants consider the following to help comply with the code:
  • Modern processor architectures such as multi-core
  • Virtualisation to increase hardware utilisation
  • Carbon costs associated with software enhancements, replacements and retirement
  • The value of IT in minimising/avoiding travel and delivering services
  • Embracing cloud based services where possible – even the futuristic Data Centre as a Service
We will endeavour to bring experts associated with the Code of Conduct to you as soon as possible. We encourage you to join our Sustainable IT Special Interest Group on our website.

Tuesday 28 September 2010

Exciting webinars on fresh topics

Currently we are running a new webinar series on fresh topics related to Scalable Computing, Cloud Computing and Sustainable IT. Each webinar features an industry expert and his/her area of work.

Our idea is to deliver around 13 webinars over 15 weeks. So far, we have delivered 2 webinars, and 6 more are already scheduled.


  • Sharing Research Assets Over The Cloud: 13th Oct 2010 1400-1430 GMT.
Listen to Prof. Jim Austin - Lead, Advanced Computer Architectures Group, University of York about various challenges associated with sharing research assets over the cloud.

  • A Cloud Platform for R&D: 3rd Nov 2010 1400-1430 GMT.
Prof. Paul Watson - Director of North East Regional e-Science Centre, Newcastle University will introduce a cloud based platform for managing research and development activities.
 
  • HPC Scalability: Multicore to Exascale: 10th Nov 2010 1400-1430 GMT.
John Barr, Research Director, Financial Markets, and Head of EU Research at The 451 Group, will examine the nature of HPC and the challenges in managing its scalability, highlighting multi-core processors, their impact on HPC provisioning and other associated challenges.
 
  • Processing and communications challenges in building the world’s largest radio telescope: 17th Nov 2010 1400-1430 GMT.
Learn more about the Square Kilometre Array (SKA) programme, a €1.5 billion international project to develop the world’s largest radio telescope, and the UK's contribution to it. Specifically, learn about the data processing and communication challenges associated with this telescope, and about potential opportunities for UK businesses.
Speaker: Andrew Faulkner, Project Engineer - European SKA Design Studies with University of Cambridge
 
  • Elastic Cloud Services using Amazon EC2: 24th Nov 2010 1400-1430 GMT.
Matt Wood, EMEA Evangelist with Amazon UK, will talk about cloud services and the advantages that elastic infrastructure and on-demand provisioning can have for capacity planning, agility and bringing new ideas to market quickly.

You are welcome to participate in these webinars.

Tuesday 13 July 2010

Cloud Computing and Corporate Karma

There are so many definitions of the word Karma. And, there are several types of karma – the good, the bad and ... wait a minute ... I haven’t heard about the ‘ugly’ karma.
The phrase Corporate Karma is also not new. There are numerous interpretations of it. There is even a movie by that name, I think. To me, the majority of definitions of Corporate Karma seem to have a slightly negative overtone.
For the sake of this article, let’s assume Corporate Karma to be just a metric that tracks the current activities and strategies of an enterprise and somehow impacts the future of the business. And let’s confine our focus to the Corporate Karma accumulating in the IT department.
There are two important questions now:
First how does one accumulate the bad Karma?
Any book on eastern philosophy will answer this question. Overconsumption, accumulation, waste, controlling others, glorifying oneself and not serving others.
Applying this to business, let’s see how a business accumulates karma. It does so by:
- accumulating assets
- over-consuming and wasting resources, energy and money
- establishing control over methods, tools, people and processes
- creating an illusion of a mystic aura around nerdy services
- abusing the word ‘service’
Now, “how does one redeem the bad Karma?” would be our second question.
There are several methods to do so. Not surprisingly, the method that works for you is always unknown and seems like a moving target. So, every approach is equally good until you find something that actually works. Hence, it is not surprising to see many people offering different approaches. Even I want to slip in one from my side: cloud computing.
How so?
Through virtualisation, you can do more with less and hence won’t need many assets. Through SaaS you can rent software and avoid expensive recurring licenses. Through new pricing models, you will buy only what you actually need. As a result, you could reduce accumulation, spend and wastage.
That’s not all – cloud computing helps further. With cloud technologies, you could transfer some of the control of your mystic IT to external service providers by aligning your methods and processes with theirs. Thus, you become unified with the global whole – you are no longer an island – your dot is connected with those of others ...
With cloud technologies, you open up to your enterprise. IT is no longer a mystic department hidden behind complex equipment. You become an interesting element of every department. You are no longer a nerd or a geek, but just a buyer of services. And you can manage more than an awkward smile.
You will not be busy looking complex or fighting complex code or beastly machines. And slowly the word ‘service’ becomes your mantra.
Oy ... please come down to earth – you seem to be floating! Floating is prohibited. Health and Safety you know.
So, there you go. Cloud computing helps you not to waste resources, not to accumulate assets, to give up control and serve others.
If these won’t help redeem bad karma, what else would?
Think ... think ... think ...
(Needless to say that this article should be taken lightly and not lightly at the same time!)

AppStores: Future of Enterprise Computing

Smart mobile phones such as the BlackBerry and iPhone introduced a user-friendly approach to browsing, selecting and downloading a variety of useful applications – free or otherwise. Now a similar model seems to be on the way for large enterprises, thanks to recent developments in cloud computing, especially in the SaaS segment.
What is an Appstore?
In simple terms, an appstore is like a supermarket for applications. You enter one, browse around intuitively stacked-up apps, search for apps, or perhaps seek the help of online staff to pick an application that suits your needs.
Just like in a supermarket, you can expect to see promotions of “featured” applications, bargains, buyer guides and maybe, in future, price comparisons. That is perhaps a far-fetched idea for now – in IT, comparing any two products seems similar to comparing apples with oranges.
Just like in the real world, there are likely to be “buy ready-made” and DIY appstores. As the names imply, you buy what is on offer in the former and build your own in the latter. ASDA and M&S are analogous to the former; B&Q and IKEA to the latter.
What types of appstores can we expect?
Several types, actually. But I would say there would be one appstore within each ‘buying environment’. So, within a large enterprise, there would be global/regional appstores. Special consortiums of companies such as EADS, and mega brands such as Virgin, could have their own internal appstores. The Government will have its own appstores – some of them, such as defence, ring-fenced for extra security. And there will be many in the public domain. Each major SaaS provider is likely to have one for each industry vertical.
How does an Appstore application work?
There could be a mixture of methods in the way these appstore applications work. Many are likely to run in a cloud based environment – such as a private cloud within the enterprise – or on trusted clouds hosted by SaaS providers.
As the inter-cloud-interfacing APIs and application interoperability matures, there will be applications hosted in one environment or cloud safely and reliably working with applications on other environments or clouds.
It is also possible to imagine meta-appstores which provide just an outer wrapper to other appstores.
Who fills an Appstore, and how?
This is an ordinary-looking but extraordinary question. In large enterprises, the procurement department would control the appstore from a commercial perspective. There is likely to be a “selection committee” which validates application characteristics in test environments against arguably tough entry criteria, involving non-functional characteristics such as reliability, scalability, availability etc.
Vendor maintained appstores will be filled by vendors and their partners.
The process of stacking an appstore in this case is likely to be haphazard, with only a few applications being strategic from the vendor's perspective.
Enterprise appstores are filled largely based on business requirements.
How to choose applications from appstores?
It is neither an art nor a science, but careful consideration of the techno-commercial properties of an application is needed. Certain applications may need a ‘subscription’ to a service that runs elsewhere; an application, especially one chosen from a public source, may be unreliable and potentially dangerous. Common sense and consultation are the two key things that should guide a user.
DIY Appstores
Some SaaS providers have developed appstores from which you can pick and choose software and service components and build your own situational business application – just like the way you work with Lego pieces. Some service providers like Cordys claim that a simple situational application could be assembled and published in under ten minutes! There would be no need for traditional software design or programming, but thorough testing is always a necessity.
Such applications could help meet dynamic business requirements and promote agility and innovation at unprecedentedly low cost.
On the flipside, it could lead to proliferation of uncontrolled business applications over time within the enterprise. Staff turnover, poor configuration management and knowledge management practices will only add to the problem. To remedy this situation, the enterprise must strengthen application reuse and knowledge management practices.

In summary, appstores bring a new dimension to enterprise computing. They provide intuitive, controlled, on-demand provisioning of IT. Customisable, componentised DIY appstores bring agile, code-free computing to the enterprise, at the risk of application proliferation. They don’t simplify enterprise IT architecture just yet, but they provide an interesting twist to IT delivery.

Monday 3 May 2010

Impact Of Scalable Computing On Software Engineering Processes


Of late, scalability has become a key quality characteristic demanding increased attention from the developer community. There are many reasons for this, including the evolution of cloud computing and advances in hardware such as multi-core processors.
Software engineering processes, if followed true to their spirit, easily accommodate the scalability requirements of the software being developed. But the width of the gap between process definition and practice is left to anyone’s imagination.
So, we need to mind the gap, eh?
In this post I have listed a few key elements that deserve increased attention from software designers and developers, for a few of the software engineering processes. My intention is to link industry best practices and key concerns to software engineering processes.
If your organisation is using review checklists, perhaps you could consider including appropriate points to check these elements.

Requirements Gathering and Management
  • Difficulty in
    • visualising requirements from a wide base of subscribers for a SaaS application.
    • capturing service management related requirements such as metering, billing etc. due to either inexperience or lack of a strong business process backing it.
  • Increased focus is needed in capturing and managing
    • Interoperability requirements, as future SaaS applications will increasingly inter-operate with each other. As a result, the APIs that need to be used and exposed to other products must be known at this stage.
    • Security, Privacy, Trust requirements
    • User interface requirements, as cloud applications are likely to be accessed from a variety of mobile devices.
  • Need to separate requirements into two groups – common features available to all SaaS tenants, and potential customisations

Software Design
  • Good assumptions to keep in mind include:
    • Design must be akin to that of SOA elements – that is – loosely coupled service components
    • The applications must work in a clustered environment
    • Database servers on the cloud are virtualised and more susceptible to failure than their physical counterparts.
    • You have to design the application for performance, which is likely to be bound by SLAs with commercial and/or legal consequences.
    • The underlying infrastructure in which your application runs may be of lower quality and lower bandwidth.
    • In almost all cases, you have to design your application to be remotely managed in real time.
    • Stateless architecture is preferred. In this architecture you do not get rid of state data; you keep it on a peer buddy server or on the network, but never on the server serving your application.
  • You have to consciously design
    • Security and privacy provisions. You need to be aware of not only standard SSL but also OpenID (Open source solution for unique username and password), PCI DSS (Payment Card Industry Data Security Standards) etc.
    • Scalability – including providing inputs for capacity planning, dynamic scalability and load balancing – these factors could be manually controlled or automatic.
      • To design for scalability, you may want to use newer modelling methods such as cube-based models.
  • Be aware of new protocols, standards and when to use them. For example:
    • XMPP – Extensible Messaging and Presence Protocol – a communications protocol between the client and the web server designed to eliminate polling/pinging the host.
    • JSON – JavaScript Object Notation – which is a low resource consuming alternative to XML when sending/receiving data using JavaScript
    • REST (Representational State Transfer) versus SOAP (Simple Object Access Protocol)
  • You should be aware of open source tools available which you can make use of.
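To illustrate the JSON-versus-XML point above: here is the same record serialised both ways using only the standard library (a sketch; the record itself is made up), with JSON typically being the lighter payload and directly parseable by JavaScript clients:

```python
# The same record serialised as JSON and as XML.
import json
import xml.etree.ElementTree as ET

record = {"id": "42", "name": "widget", "price": "9.99"}

# JSON: one call produces the payload.
json_payload = json.dumps(record)

# XML: build an element tree, then serialise it.
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = value
xml_payload = ET.tostring(root, encoding="unicode")
```

For this record the JSON string is shorter than the XML one, since XML repeats every field name in a closing tag; over thousands of requests per second that difference adds up in bandwidth and parsing cost.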

Coding
  • Be ready to learn or re-learn
    • new programming languages such as APEX or Python.
      • They could be new to you not necessarily to the IT industry.
    • distributed transaction management
    • new models for state management
    • multi-threading
  • Your aim should be to develop lean code that uses as few resources (mainly memory and CPU) as possible.
  • It is safe to assume that bad performance of the code will ultimately lead to a commercial impact on your company’s services.
  • Forget stored procedures! They aren’t easily portable across databases.
  • You are likely to use a development infrastructure provided by a “Platform as a Service” provider or an “Infrastructure as a Service” provider. Hence your development practices and review methods are likely to be influenced by these providers’ facilities and practices.
  • In order to achieve scalability, you may have to split your application's work across threads in real time, based on functionality and demand.
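On new models for state management: here is a minimal sketch of the stateless idea mentioned under design, with an in-memory dict standing in for an external store such as a distributed cache (all names are invented for illustration):

```python
# The serving node keeps no state between requests; a shared store does.
import json

class SessionStore:
    """Stand-in for an external, network-reachable state store."""
    def __init__(self):
        self._data = {}

    def save(self, session_id: str, state: dict) -> None:
        self._data[session_id] = json.dumps(state)

    def load(self, session_id: str) -> dict:
        return json.loads(self._data.get(session_id, "{}"))

def handle_request(store: SessionStore, session_id: str, item: str) -> dict:
    # Any clone can serve this request: state is fetched, mutated and
    # written back, never held on the application server itself.
    state = store.load(session_id)
    state.setdefault("basket", []).append(item)
    store.save(session_id, state)
    return state
```

Because the handler holds nothing between calls, requests for the same session can land on any clone of the application, which is what makes X-axis scaling and real-time thread splitting safe.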

Testing
  • You need to test your applications in an environment similar to a commercial cloud in which you will not have visibility or control. This is likely to be a challenge. You are likely to outsource this to a “Testing as a Service” provider.
  • Troubleshooting on the cloud is not easy.
  • You should be in a position to test and certify application performance.

Release
  • You need to re-design your deployment processes in light of virtualised servers.
  • You may have to optimise your releases to minimise the impact on a large user community.

Change Management
  • If you are developing a SaaS application, expect some of your software subscribers to come back to you with change requests. Compared to traditional software development, implementing change requests in a cloud environment is not easy. You might want to keep your basic software offering stable, yet you will be compelled to honour a variety of change requests submitted by your tenants.
Of course, this is only a partial list of the impact on processes. I will update this post when I collect a few more relevant points.
I appreciate your feedback on this post.