Friday, August 20, 2010

Interesting articles on Virtualization and Cloud Computing

In this article, I shall point you to some really interesting articles on Virtualization and Cloud Computing that I came across during the literature survey for my research. These articles are very stimulating and have been useful to my research in one way or another. The texts mentioned here are categorized into areas such as resources for beginners, Physical-to-Virtual (P2V) conversions, server consolidation, and the economics of cloud computing.

Stuff for beginners

P2V conversions/migrations

Economics of Cloud Computing

Saturday, July 3, 2010

Virtualization Lifecycle in the Context of Cloud Computing

Virtualization is a key technology for enabling cloud computing. As we have read before, virtualizing applications and consolidating hardware reduces IT infrastructure costs (which include purchasing and maintaining hardware), allows easier management of resources, and enables on-demand provisioning of resources in the data center. The Infrastructure as a Service (IaaS) model of the cloud deals with provisioning hardware, storage and networking resources. Virtualizing applications via virtual machines holds great significance in this model for delivering and managing IT services. The virtualization lifecycle comprises a set of technical assessment activities governed by business and operational decisions. Technical assessment of virtualization candidates revolves around meeting end-user Service Level Agreements (SLAs), reducing IT costs, and designing an optimized data center. Every phase in the virtualization lifecycle for cloud computing is highly challenging, with a wide variety of complex open problems currently being tackled.

Analysis & Discovery: For the process of moving from physical environments to virtualized environments (P2V), a solid analysis of the virtualization candidates must be performed. This stage involves discovering the data center entities (servers, networks, storage devices) and collecting utilization profile data for those entities along different dimensions (CPU, memory, network I/O, disk I/O). The main theme of P2V is to move applications from an under-utilized bare-metal environment to a virtualized/hypervisor environment to enable optimum utilization of hardware. In addition to discovering the heavy artillery in the data centers, it is important to assess the applications deployed on them: the OS characteristics (scheduling policies, caching strategies, etc.) and the application characteristics and configurations (e.g. Tomcat's number of starting threads). Once the performance and application characteristics are assessed, capacity management models then need to be developed to host those applications in the virtual environments. For more information on the technical assessment for virtualization, see 'Conduct a Technical Assessment for Server Virtualization'.

Implementing Models: Developing capacity models for a virtual environment is a tricky task, since it is governed by other business and operational factors. Target SLAs (performance, availability) and power consumption levels have to be kept in mind, along with the possible impacts of virtualization (hardware normalization, hypervisor overheads). The idea is to come up with a 'pre-VM placement' strategy that describes the 'footprints' of VMs.
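To illustrate the 'footprint' idea, here is a minimal sketch (not any particular vendor's method) that sizes a VM to the 95th percentile of its observed utilization plus a safety headroom; the sample data, the percentile choice and the 20% headroom factor are all illustrative assumptions:

```python
def percentile(samples, pct):
    """Return the pct-th percentile of a list of samples (nearest-rank)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100.0 * len(ordered)) - 1))
    return ordered[rank]

def vm_footprint(cpu_samples_mhz, mem_samples_mb, headroom=0.2):
    """Estimate the CPU/memory 'footprint' a VM needs on its target host."""
    return {
        "cpu_mhz": percentile(cpu_samples_mhz, 95) * (1 + headroom),
        "mem_mb": percentile(mem_samples_mb, 95) * (1 + headroom),
    }

# Example: a lightly loaded web server with an occasional spike.
profile = vm_footprint(
    cpu_samples_mhz=[200, 250, 300, 220, 900, 260, 240, 280, 310, 230],
    mem_samples_mb=[512, 520, 530, 515, 700, 525, 518, 522, 540, 519],
)
print(profile)
```

Sizing to a high percentile rather than the peak is one common trade-off: it avoids reserving capacity for rare spikes, at the cost of occasionally breaching the SLA.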

VM Placement & Management: Allocating virtual machines to physical machines dynamically and optimally is the goal of IT enterprises. VMs can be scaled out/up on demand. Provisioning of VMs goes hand-in-hand with capacity management and monitoring to track the desired service levels of applications. Server consolidation is an important and inherent part of this phase, as optimal placement of VMs across physical architectures is the key to meeting the business goals. Re-shaping and re-sizing the footprints of VMs dynamically in real time is also a hot research topic. Management of VMs also involves migrating them to other physical hosts and monitoring their performance, which can be done centrally. Many open issues also exist in VM migration, such as synchronization problems and security issues.
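To make the consolidation step concrete, here is a minimal sketch of static VM placement treated as one-dimensional bin packing, using the classic first-fit-decreasing heuristic. The CPU-only model, the host capacity and the VM demands are illustrative assumptions; real placement also weighs memory, I/O, affinity and SLAs:

```python
def first_fit_decreasing(vm_cpu_demands, host_cpu_capacity):
    """Place VMs (by CPU demand) onto as few equal-capacity hosts as we can."""
    hosts = []  # each host is the list of VM demands placed on it
    for demand in sorted(vm_cpu_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_cpu_capacity:
                host.append(demand)  # fits on an existing host
                break
        else:  # no existing host fits: provision a new one
            hosts.append([demand])
    return hosts

# Six VMs consolidated onto hosts with 10 CPU units each.
placement = first_fit_decreasing([6, 5, 4, 3, 2, 2], host_cpu_capacity=10)
print(placement)  # -> [[6, 4], [5, 3, 2], [2]]
```

First-fit decreasing is only a heuristic (bin packing is NP-hard), but it is a common baseline in the consolidation literature because it is fast and usually close to optimal.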

Thus, the virtualization life cycle poses many challenges, and many groups in industry and academia are grappling with these issues. If executed well, it has a promising future in the context of cloud computing. For more resources on Virtualization and Cloud Computing, see 'Virtualization / Cloud Computing Blogs, Websites, Resources, Articles'.

Thursday, April 29, 2010

Resources for Starting with Ruby on Rails 3

Ruby and Rails (which together form the well-known Ruby on Rails, or RoR, combination) have become a de facto standard in the industry for the rapid development (RAD) of small and not-so-small web applications.

The Rails framework has contributed directly to the increasing popularity of the Ruby programming language. Rails is based on the MVC design pattern and allows developers to efficiently implement web applications that incorporate many of the recommendations and best practices of agilists. Rails 3 is the result of the fusion of Merb (another famous MVC framework for Ruby) with the previous Rails version.

In this post I include some links to resources related to Rails 3, the forthcoming version of Rails currently in Beta:
  • This post includes a compendium of links to blog posts, tutorials, presentations and conference talks on Rails 3.
  • This tutorial allows the reader to learn how to use the Git version control system and deploy Rails applications to the cloud using Heroku, all while learning Rails 3.
  • This, this, and this resource will teach you how to implement and manage associations between the models of Rails applications.
  • This is a very useful site for learning Rails through screencasts, including lots of examples, tricks and useful libraries/add-ons/plugins (e.g. Formtastic, RSpec, etc.).
  • Finally, this book in the Pragmatic Programmer series (currently in beta) will be one of the main references for exploiting all the new features of Rails 3.

Hope you find these resources useful for getting your hands on Rails 3 :-)

Thursday, April 22, 2010

Two Upcoming Books on Programming Languages

Recently, I've been paying attention to the following books on programming languages, which are going to appear in the coming months:
The first one is by Bruce A. Tate (a well-known author in the Java and Ruby communities) and introduces the reader to the most important features of 7 different programming languages that are relevant today or will be in the next few years: Ruby, Io, Prolog, Scala, Erlang, Clojure and Haskell. The second one is being written by the omnipresent Martin Fowler and talks about DSLs, which are gaining a lot of attention in the development communities.

Tuesday, April 20, 2010

Useful Commands for PostgreSQL DBMS

The following commands are very useful for getting some meta-information about the data repository in PostgreSQL. At the psql command line, type...

# select datname from pg_database;

to get a list of the databases in the current repository (i.e. the one pointed to by the $PGDATA environment variable), ...

# select tablename from pg_tables;

to get a list of the tables of the current database, and...

# select pg_size_pretty(pg_database_size('DBNAME'));

in order to get the size of the database in a human-readable format.

Wednesday, April 7, 2010

Technical Bookshops on Computer Science

Today I want to write a post about technical bookshops (both physical and online) and book sites on the web.

Although everyone knows Amazon as a very good bookshop and source of information about computer science books, there is another online bookshop that looks very interesting:
With regard to free online technical books, this is a very good option.

Moreover, I like handling physical books (I like their format and appearance), so from time to time I visit (some of the few) physical bookshops in Madrid. These are the ones I like to go to:
  • Cocodrilo Libros (Madrid): This is my favorite bookshop. They have a good and extensive catalogue of technical books in English and, most importantly, they give you very good personal attention.
  • Librería Diaz de Santos (Madrid): They also have a good catalogue of English books, but it is not close to the city center.
  • Casa del Libro (Madrid): They used to have books in English, but for the last three years they have only had books in Spanish.
  • FNAC (Madrid): Just books in Spanish, mainly for beginners.
Can you recommend any other bookshop/resource you know of in your town or on the web?

Tuesday, March 23, 2010

To Err is Human

To err is inherent to human beings; the human being might even be considered an error in itself. Unfortunately, these days nobody wants to acknowledge their errors. We just have to look at politicians making wrong decisions in government, managers losing money due to bad strategic policies, referees in sports, our relatives and friends in their personal decisions, etc. And, of course, ourselves in our daily context. The reason is that errors are usually perceived by most people as a weakness of the person who made them, and... everybody wants to be (or at least appear) flawless.

But, IMHO, that attitude is also an error, because in every role we play in our lives (as managers, students, engineers, doctors, programmers, architects, etc.) we are bound to err sometimes. No exceptions. So, to me it is very important to recognize the errors we make, and in this post I'm going to share some references to resources related to errors in software.

First of all, it is good to know what the most frequent errors we can make when developing software are. This web page describes a catalogue of software weaknesses in source code and operational systems. It also offers the current list of the 25 most common errors in software, which lead to most of the vulnerabilities in programs.

Once we are aware of the errors we can make, the next step is to try to avoid them while developing. Test-Driven Development (TDD) is a well-known technique to follow when reaching the development phase of software construction. It is encouraged by agilists, but of course it can be applied in any software development process. Basically, it consists of repeating the following development cycle:
  1. Write the test cases that define a new piece of functionality to be added to the software in development;
  2. Implement the required functionality;
  3. Run the test cases developed in the first step until they pass;
  4. Refactor the code to bring it in line with code quality standards.
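As a toy illustration of the cycle above (the `word_count` function and its tests are invented for this example), here are the tests of step 1 together with the implementation of step 2 that makes them pass in step 3:

```python
import unittest

def word_count(text):
    """Count the words in a string, treating any whitespace as a separator."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # Step 1: these tests are written first and fail until the
    # implementation above (step 2) makes them pass (step 3).
    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("to err  is human"), 4)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Step 4 would then be refactoring `word_count` (renaming, extracting helpers, etc.) while re-running the same tests to make sure the behavior is unchanged.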

As we have seen in the fourth step of TDD, when trying to avoid errors in the development phase, it is essential to program in a professional way. Books such as Code Complete, The Pragmatic Programmer or Clean Code can teach you how to program properly and produce quality code.

OK, after a lot of effort, our program compiles and all the test cases pass. So, it is ready to be deployed in (pre)production environments... However, after some use of the program, several anomalies tend to arise in the form of unknown/unpredictable behaviors. These are, of course, our well-known "bugs". This book guides the practitioner in the art of debugging code, introducing techniques and tools used in academia and industry.

Finally, even when we think we have smashed all the bugs in our programs, we have to deal with other kinds of errors (e.g. hardware failures, such as power outages, hard disk failures, etc.). That's the reason why we design fault-tolerant systems. However, I'll talk about those in other posts. In the meantime, I'll try not to make so many errors... But, you know... that's life!

P.S. How many errors are in this post?