Tuesday, March 31, 2009

Virtualization of SharePoint

I read an interesting article this evening from the April edition of Windows IT Pro magazine. The article "The Essential Guide to Deploying MOSS" by Michael Noel outlined the architecture of Microsoft Office SharePoint Server (MOSS). What I found interesting was the discussion of the scalability options (horizontal or vertical) for the various components (Web, Query, Index, Database, and Application roles) in both physical and virtual deployments.

The article was based on HP hardware (and sponsored by HP) and took the reader through configurations ranging from a small farm supporting fewer than 400 users all the way up to a "Highly Available Farm" supporting 1,000+ users.

The article finishes up with an entire section on "Virtualization of MOSS," where the recommendations for virtualization start with the Web role and move on to the Query role. The issue is which roles fit best in a virtualized environment to make the best use of the hardware and to provide scalability as the site grows. As with most three-tiered applications, the queuing should begin at the outside of the application and gradually narrow down to the database layer. Having more Web role servers queuing up requests for the Index or Query roles makes sense; these roles require fewer resources (CPU, memory, I/O, and network) than the database servers on the back end do.

My perspective on this article, and on the need for virtualization performance and capacity planning, is this: as your site grows, the ability to go back and fix architectural details, such as the clustering of your database or the number of servers and their utilization, becomes more and more difficult.

I found it interesting that the article stated, "SQL Servers that are heavily utilized may not be the best candidates for virtualization, because their heavy I/O load can cause some contention and they may require a large amount of the resources from the host, which reduces the efficacy of the setup."

Contention for resources in a virtualized world should be the number one point of monitoring and testing for new applications. IOPS, memory, and CPU are the most contended of the core four resources, with networking added to that list in the case of iSCSI and NAS. Proper segmentation of the network load can prevent network contention (for example, VLANs on separate physical NICs, or on HBAs). That leaves memory and CPU as points of possible contention. Understanding the OS requirements for cores and addressable memory, as well as the application load, can help properly size these resources per virtual machine and per host. Monitoring the key applications (remember the 80/20 rule) provides awareness of potential problems as usage grows.
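
To make the monitoring point a little more concrete, here is a minimal sketch in Python; the host names, thresholds, and sample numbers are made up for illustration, not recommendations. The idea is simply to compare measured utilization of the core four resources against per-host contention thresholds and flag anything running hot.

# Minimal sketch: flag hosts where any of the "core four" resources
# (CPU, memory, IOPS, network) exceeds a contention threshold.
# Thresholds and sample figures below are illustrative only.

THRESHOLDS = {
    "cpu_pct": 75.0,      # sustained CPU utilization
    "mem_pct": 80.0,      # active memory utilization
    "iops": 5000,         # storage I/O operations per second
    "net_mbps": 800,      # network throughput on a 1 Gb link
}

def contention_report(measurements):
    """Return the resources on a host that exceed their threshold."""
    return [
        resource
        for resource, limit in THRESHOLDS.items()
        if measurements.get(resource, 0) > limit
    ]

# Example: measurements gathered from whatever monitoring tool you use.
hosts = {
    "esx01": {"cpu_pct": 62.0, "mem_pct": 85.0, "iops": 3200, "net_mbps": 400},
    "esx02": {"cpu_pct": 91.0, "mem_pct": 70.0, "iops": 6100, "net_mbps": 950},
}

for name, stats in hosts.items():
    hot = contention_report(stats)
    if hot:
        print(f"{name}: potential contention on {', '.join(hot)}")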

In the case of web applications, a close tie to the business side of the house is also important. If a new web page or process is deployed to a web server, or any number of usage-increasing changes happen without proper sizing, the whole site can come down.

Happy reading and remember to plan for and monitor your vital applications.

Source: http://www.hp.com/solutions/activeanswers/sharepoint and http://www.hp.com/go/sharepoint

Saturday, March 28, 2009

Cisco UCS Announcement - Chargeback

This past week (Mar 17), Cisco announced the release of their Unified Computing System (UCS), which marks their entry into the server market. From what I have read, these are some very beefy boxes, including half- or full-width blades (dual or quad socket) with up to 384 GB of memory per blade. The networking components supplied with the product include iSCSI and FC connectivity for up to 320 servers.

I see this entry as a great solution for a hosting company focused on offering virtual solutions, where the density, networking, and security (segmentation) provide the best possible cost points (CapEx and OpEx).

This brings me to the point of this entry. How are customers dealing with chargeback for their virtual infrastructures? This is primarily an ESX-world question today; however, I have a hunch that the Hyper-V world will be coming on strong with its latest release. In the grand scheme of charging customers for infrastructure (intra-enterprise, hosting model, or the cloud), there are micro and macro ways of doing things. In the physical world, the customer is charged for the whole box, the setup and administrative time, and the storage. They pay for the whole box and can use as much or as little of the computing resources as their application needs. In the virtual world, this changes dramatically because VMs vary in size and, more importantly, vary in the amount of resources they actually use. One cannot and should not place all virtual servers of the same size on a given box or data center and assume all VMs of the same size will perform together nicely!

From the VMware perspective, the memory management algorithms allow VM memory to be paged depending on usage, so a VM can use the resources defined for it, or less, depending on the needs of the application within the virtual machine itself.

So my question is: how important is chargeback becoming in the new virtualized world?

Let's flash back about 25 years to the mainframe days, when the mainframe was divided up into partitions (sound familiar? :-)) and multiple applications (CICS, IMS, DB2, batch workloads, etc.) were using the compute resources all at the same time. Each of these applications (i.e., business units) was measured to determine how much of the mainframe's resources it consumed. This is where capacity planning, chargeback, and performance and tuning all come together: monitoring, from the business application perspective, what resources are used and how many resources will be needed going forward to maintain SLAs and performance expectations.

Now, let's branch over to an analogy from phone company billing systems. Remember when phone call rates were lower at night and on weekends? The phone companies did this to entice users (via pricing) to move their resource requirements off prime time. Now let's take another analogy from the mainframe days: something called workload manager. This was a process of assigning a priority to a workload (batch jobs were less important than online transactions, and financial reporting was more important than inventory at the gym) and controlling which workload received more of the available resources at any given time.
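
Just to illustrate the workload manager idea, here is a minimal sketch in Python; the workload names and priority weights are invented for illustration. Whatever capacity is available at a given moment gets divided among the workloads in proportion to their assigned priorities.

# Minimal sketch of the workload-manager idea: divide whatever capacity is
# available right now among workloads in proportion to assigned priorities.
# Workload names and weights are made up for illustration.

PRIORITIES = {              # higher weight = more important workload
    "online_transactions": 5,
    "financial_reporting": 3,
    "batch_jobs": 1,
}

def allocate(available_cpu_ghz):
    """Split the available CPU among workloads by priority weight."""
    total_weight = sum(PRIORITIES.values())
    return {name: available_cpu_ghz * weight / total_weight
            for name, weight in PRIORITIES.items()}

print(allocate(available_cpu_ghz=18.0))
# e.g. {'online_transactions': 10.0, 'financial_reporting': 6.0, 'batch_jobs': 2.0}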

Now, let's bring it all back together. Measuring resource consumption gives business units and infrastructure managers the ability to know what resources are required and how they are being used. Starting with measurement, the collected data allows companies to charge for the resources actually consumed, rather than taking the macro view of billing a flat rate whether the customer actually uses the resources or not.
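
As a rough illustration of what consumption-based chargeback could look like, here is a minimal Python sketch. The rates, the peak window, and the sample data are all invented, and the off-peak discount is just a nod to the phone-company analogy above.

# Minimal sketch of consumption-based chargeback, assuming hourly samples of
# CPU and memory use per VM. Rates and sample data are invented for illustration.

PEAK_HOURS = range(8, 18)          # 08:00-17:59 local time
RATES = {
    "cpu_ghz_hour": {"peak": 0.05, "off_peak": 0.02},   # $ per GHz-hour
    "mem_gb_hour":  {"peak": 0.03, "off_peak": 0.01},   # $ per GB-hour
}

def hourly_charge(hour, cpu_ghz, mem_gb):
    """Charge for one hour of measured consumption by one VM."""
    period = "peak" if hour in PEAK_HOURS else "off_peak"
    return (cpu_ghz * RATES["cpu_ghz_hour"][period]
            + mem_gb * RATES["mem_gb_hour"][period])

# Example: 24 hourly samples (hour, GHz used, GB used) for one business unit's VM.
samples = [(h, 1.2 if h in PEAK_HOURS else 0.3, 4.0) for h in range(24)]
monthly_estimate = sum(hourly_charge(h, cpu, mem) for h, cpu, mem in samples) * 30

print(f"Estimated monthly charge: ${monthly_estimate:.2f}")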

There are some products out there that are breaking into this market space, including VAlign, VKernel and others.

More on the importance of CMDB

I was reviewing some other blogs this morning and came upon the announcement from Microsoft that Windows Server 2003 SP0 is no longer supported. Now, I may have heard a chuckle from some of you along the lines of "who in the world would still be running Win2003 SP0?" Well, let me share a story from an assessment I worked on prior to a virtualization project. The goal of the assessment was to prepare the organization for virtualization. The first step was to determine how many physical servers they had, measure their performance, and, through the wonderful world of capacity planning, determine how many ESX hosts they would need (loaded to a predetermined level).
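
For the curious, the arithmetic behind that last step is straightforward. Here is a minimal Python sketch, assuming you have already measured each physical server's peak CPU (GHz) and memory (GB); the host capacity and the predetermined load ceiling are illustrative numbers, not recommendations.

import math

# Minimal sketch of the "how many ESX hosts do we need" arithmetic, assuming the
# measured peak CPU (GHz) and memory (GB) of each physical server is already known.
# Host capacity and the load ceiling below are illustrative.

HOST_CPU_GHZ = 2.66 * 8      # e.g. two quad-core 2.66 GHz sockets per host
HOST_MEM_GB = 64
TARGET_UTILIZATION = 0.65    # load hosts to 65%, leaving headroom

def hosts_needed(measured_servers):
    """measured_servers: list of (peak_cpu_ghz, peak_mem_gb), one per physical box."""
    total_cpu = sum(cpu for cpu, _ in measured_servers)
    total_mem = sum(mem for _, mem in measured_servers)
    by_cpu = total_cpu / (HOST_CPU_GHZ * TARGET_UTILIZATION)
    by_mem = total_mem / (HOST_MEM_GB * TARGET_UTILIZATION)
    return math.ceil(max(by_cpu, by_mem))   # the tighter resource decides

servers = [(1.1, 2.0)] * 80 + [(3.5, 8.0)] * 10   # 90 measured physical servers
print(f"ESX hosts required: {hosts_needed(servers)}")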

Well, the assessment went off without a hitch and the results were presented to the project manager. He went through the roof at the results! The problem was not the report, but what the report told him. You see, they had just finished a rather nasty project of upgrading all their servers from Win2000 to Win2003. The project was wrapped up, and they had reported its completion to the CIO. The assessment report showed the existence of 12 more Win2000 servers that had not been converted.

The point being: without a complete inventory of computing resources (CPU, memory, network, and storage), both physical and virtual, the fires just keep being lit like trick birthday candles. Put one fire out and it simply starts back up. You quickly run out of breath or remove the candle, right? To turn your organization around from a tactical mode to a strategic mode, consider completing a thorough inventory of your organization.

Now, let's move on to how to conduct the inventory. There are usually three different means of conducting a computing resource inventory.
1. Have your agents (Tivoli, HP OpenView, BMC Patrol, and many others on the market) tell you what you have. The problem is that you most likely do not have agents on every machine in your environment (think test/dev/QA), so this cannot possibly be complete.
2. Send out an email to all business managers asking them to tell you what they have. Um, think about that. What would you report, and how would you collect the information?
3. Utilize an agentless tool that can scan LanMan directories, perform an IP scan of your subnets, AND interview business units for possible hidden assets behind firewalls or on standalone networks (you know they exist out there); a minimal sketch of the IP-scan portion follows this list.
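
Here is that minimal Python sketch of the IP-scan portion of option 3, using only the standard library. The subnet and port list are examples, and it does not cover the LanMan directory scan or the business unit interviews.

# Minimal sketch of an agentless subnet sweep: check whether anything answers
# on a few common Windows ports. Subnet and ports are examples only.

import ipaddress
import socket

SUBNET = "192.168.1.0/28"          # example subnet; widen for a real sweep
PORTS = [135, 139, 445, 3389]      # RPC, NetBIOS, SMB, RDP

def host_is_alive(ip, ports, timeout=0.5):
    """Return True if any of the given TCP ports accepts a connection."""
    for port in ports:
        try:
            with socket.create_connection((str(ip), port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

discovered = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()
              if host_is_alive(ip, PORTS)]
print(f"Responding hosts on {SUBNET}: {discovered}")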

The inventory should then be validated against procurement records and any provisioning/software distribution processes to ensure a complete listing. Only then can an organization start down the strategic road of knowing what it has and where it is going.

Keep in mind, an inventory can and most likely should also include software, security patches, and services, installed or not.