Friday, August 13, 2010
"VMware ESX. VMware vSphere 4.1 and its subsequent update and patch releases are the last releases to include both ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere will include only the VMware ESXi architecture.
- VMware recommends that customers start transitioning to the ESXi architecture when deploying VMware vSphere 4.1."
Within ESX, the Service Console handles the management connection points into the VMkernel (SSH, CIM, vCenter agents, etc.). As many of you know, ESX is about a 2GB install.
Within ESXi, the Service Console is no longer used for management; integrated VMkernel agents are used instead (VMware APIs such as the Management Framework and CIM). Unlike competing hypervisors, ESXi and ESX are fully interoperable within the VMware environment - each can use the same exact tools and advanced features. Compared to ESX's 2GB install, ESXi comes in at around 70MB.
The majority of our customers are comfortable with ESX, largely because of the advanced troubleshooting available within the Service Console. Previous versions of ESXi had limited functionality within their "Tech Support Mode" (an emulated, stripped-down *nix shell). So what's new within 4.1 for advanced logging and troubleshooting? (from VMware)
"- vCLI Enhancements. vCLI adds options for SCSI, VAAI, network, and virtual machine control, including the ability to terminate an unresponsive virtual machine. In addition, vSphere 4.1 provides controls that allow you to log vCLI activity. See the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere Command-Line Interface Reference.
- Lockdown Mode Enhancements. VMware ESXi 4.1 lockdown mode allows the administrator to tightly restrict access to the ESXi Direct Console User Interface (DCUI) and Tech Support Mode (TSM). When lockdown mode is enabled, DCUI access is restricted to the root user, while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter Server. Direct access to the host using the vSphere Client is not permitted. See the ESXi Configuration Guide.
- Tech Support Mode Enhancements. In ESXi 4.1, Tech Support Mode is fully supported, and is enhanced in several ways. In addition to being available on the local console of a host, it can also be accessed remotely through SSH. Access to Tech Support Mode is controlled in the following ways:
- Both local and remote Tech Support Mode can be enabled and disabled separately in both the DCUI as well as vCenter Server.
- Tech Support Mode may be used by any authorized user, not just root. Users become authorized when they are granted the Administrator role on a host (including through AD membership in a privileged group).
- All commands issued in Tech Support Mode are logged, allowing for a full audit trail. If a syslog server is configured, then this audit trail is automatically included in the remote logging.
- A timeout can be configured for Tech Support Mode (both local and remote), so that after being enabled, it will automatically be disabled after the configured time."
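The vCLI controls quoted above - including the new ability to terminate an unresponsive VM - can be exercised from any workstation with the vCLI package (or the vMA appliance) installed. A minimal sketch, run against a live host; the hostname and world ID below are placeholders, not real values:

```shell
# List running VMs on the host to find the world ID of the hung guest
esxcli --server esxi01.example.com vms vm list

# Ask the VMkernel to stop it; escalate from soft to hard/force if it ignores the request
esxcli --server esxi01.example.com vms vm kill --type soft --world-id 1268395
```

These commands prompt for credentials and require a reachable ESXi 4.1 host, so treat them as a procedure outline rather than something to paste blindly.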
In my opinion, the biggest feature here is the advancement within Tech Support Mode - we are now able to log in remotely via SSH and perform advanced operations. Granted, the command set will not be as robust as ESX's, but the core functionality is still there.
Given this transition, I would suggest everyone start baking ESXi into their environments. One other suggestion is to start getting used to the vCLI/vMA tools from VMware. Each of these tools will make your life a little easier for advanced diagnostics and configuration within ESXi. Some other thoughts:
- Ease of install: No need to lay out disk slices for the Service Console. I've seen issues caused by misconfiguration of the /var or /home partitions when administrators have used them to store VM files or patches. Furthermore, ESXi installs much faster than ESX.
- Security: I think this will be a driving factor in the security community. Without the Service Console, there are fewer entry points into the hypervisor. ESX is still secure, but its Service Console is based on a Linux distro - people can exploit certain vulnerabilities to gain unauthorized entry.
- No need for a RAID 1 local SC install: Given ESXi's small footprint, why not use an internal USB key? Most blade systems and servers now come with internal USB dongle functionality. Rather than shell out funding for a pair of 36GB or 72GB drives with a RAID controller, let's use something small and efficient.
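Since every command issued in Tech Support Mode lands in the audit trail, and that trail rides along with remote logging, pointing each host at a central syslog box is worth scripting early. A sketch using the vCLI's vicfg-syslog command; hostnames and the port are placeholder assumptions:

```shell
# Forward the host's logs (including the TSM audit trail) to a central syslog server
vicfg-syslog --server esxi01.example.com --setserver syslog.example.com --setport 514

# Confirm the setting took
vicfg-syslog --server esxi01.example.com --show
```

As with the other vCLI examples, this runs from a management workstation or the vMA appliance against a live host, so it is a procedure outline rather than a testable script.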
Wednesday, August 4, 2010
I am sure that everyone reading this is being bombarded with the term VDI or Virtual Desktop as much as they are with the buzzword "Green". The two are similar in that they are often used together, and both are concepts that are hard to define precisely but easy to throw around. When I use the term Virtual Desktop, I use it as a concept. I see it as a decoupling of the major pieces of the desktop experience stack. That is to say, I see the OS as an engine or platform that simply facilitates the overlay of applications and personality (profiles). So if we look at a desktop as a stack of transparencies, we have the OS at the bottom, on top of that the personality and settings, and then we overlay the applications. When layered correctly, these parts look like a traditional desktop, but they have the advantage of being swapped in and out independently without affecting the other layers. This is a basic tenet of VDI. Many people do it, and do it well. What is normally the last piece to be discussed is the physical user interface. Most solutions use either a traditional desktop linking to a virtual desktop or virtual app pool, a thin client, or a zero client.
Tangent alert! Remember my aversion to certain technologies mentioned in the previous section? Well, the same thing happened with Apple products, but for a different reason. I always thought that Wintel products were for productivity, and Apple products were for people who had nothing better to do with their time than ogle the great graphics and play with a device devoid of customization. This was short-sighted thinking on my part.
What is wrong with the thin client and similar solutions? Nothing - they work great; there are even thin laptops. So where is the world going today with personal computing? Either we have giant quad-core beasts that can heat an office in winter, or we have netbooks, smartphones, or tablets. Those of you who use a smartphone or iPad for business know that you can do 90% of what you want to do, just not efficiently. Those of us with netbooks can surf the web or read email, but not at the same time. The horsepower is not there for your daily task worker.
This is when it clicked for me. The physical user interface of the future is going to be multi-purpose and severely underpowered for daily business use; it is going to be the cable box to the VDI broadcast center in your organization. Think about it: you sit at home or in a meeting with your iPad (or Android tablet for you anti-Apple folks) and have full access to all your business applications via a broadcast desktop (VDI). Then you move to your desk, log in to your wireless thin client, and get the same desktop. For an IT professional this could mean we can get to our desktops to troubleshoot issues via our cell phones while yachting (everyone in IT has a yacht, right?).
So in closing (finally, right?), I think most technologies will eventually embrace the fact that we all have multiple communication devices today that can be used to access a corporate virtual desktop infrastructure; right now, Citrix is the one I see already having this capability. One day we will all be using low-power devices of our choosing to access the "Man Behind the Curtain" - our datacenter-based virtual desktops.
Quote and Tip from Daniel:
"Cisco Nexus 1000v brings value-add to any VMware virtualized environment while providing optimal performance. Using the Nexus, network teams can use technologies such as EtherChannel or LACP (Link Aggregation Control Protocol) to provide guaranteed performance to virtual machines. Using DynTek's methodology, our team can assist any environment from design concepts to full implementation and cutover of all guest virtual machines. Our technique allows us to cut over guest virtual machines to the Nexus 1000v environment while missing less than one ping."
"Setting up system uplink traffic is sometimes tricky. When establishing the uplink port profile, verify that only Control and Packet traffic are allowed through this port profile. If other port profiles allow Control and Packet traffic, this can potentially cause port flapping issues since these heartbeat networks have multiple routes."
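Daniel's tip about keeping Control and Packet traffic on a single uplink port profile might look like the following NX-OS sketch on the VSM. The profile name and VLAN IDs are made-up placeholders; the key lines are the trunk restriction and the system vlan statement, which keeps the heartbeat VLANs forwarding even before the VEM is fully programmed:

```
! Uplink profile dedicated to VSM-to-VEM Control/Packet traffic.
! VLANs 900 (control) and 901 (packet) are placeholder IDs - use your own.
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 900-901
  system vlan 900-901
  channel-group auto mode on
  no shutdown
  state enabled
```

Keeping these two VLANs out of every other uplink profile is what avoids the port-flapping scenario he describes, since the heartbeat traffic then has exactly one path.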
KACE isn't the first solution to do Inventory, Asset Management, Software Distribution, Patch Management, Service Desk, Network OS Install, Disk Imaging, Centralized Deployment Library, and Inventory Assessment; however, it is the first one that does it all in one set of appliances (physical or virtual) without breaking the bank. The thing that surprised me the most was the cost. Now, every sales book you read (don't you read those?) says to sell on value, not on cost, but I know that when you go to your CIO/CFO, they look at cost no matter what the value is. That being said, the value is ridiculous compared to a software-only package (think Unicenter), and KACE comes with hardware. This is a very tight package designed to fit into an organization under 10,000 seats and give a single administrator (or group) the ability to manage, maintain, deploy, report, patch, and monitor. It is designed to remedy the "Too Many Hats" syndrome that plagues many IT specialists in the age of "Do More with Less". What I found ingenious was that they offer this as a set of virtual machines or a set of physical appliances. Either way it is the same features, functionality, price, etc., with no add-on costs; everything is included out of the box. I highly recommend viewing one of the three demos they put on every week. Click to Sign Up for Demo
In the beginning... Most organizations have tackled desktop virtualization the same way as server virtualization, in that they treat the virtual desktop the same as the physical. They give the desktop the same amount of RAM, disk space, applications, etc. Then they start creating virtual desktops from templates, deploy them manually as needed, and have users connect via RDP. This works surprisingly well for up to about 25 users. Users like the portability, the techs like not having to run to desktops for a broken coffee cup holder (aka DVD-ROM drive), and everyone is happy.
And then... What seems to happen anywhere from virtual desktop number 10 to 50 is that everything breaks. The blame gets passed around like a hot potato: the server guys blame the network team, the network team blames the storage guy, the storage guy says it's the applications, and if you wear all those hats, it's just confusion.
Why... Well, virtual desktops aren't like servers; they act like them and boot like them, but the major differentiator is the human factor! What we generally see is that servers perform totally different tasks at different times, so when virtualized and sharing resources, things tend to even out on the disk/network/CPU/RAM side. With desktops, we have higher densities of machines, with users doing the same thing at the same time (i.e., watching the oil well not be capped, checking the Gators, Tigers, and Noles preseason news, using Excel to fill out their timesheets). These synchronized events compound resource contention and impact the environment. Add to this human factor the fact that these initial virtual desktops are floating around with servers, so they are impacted by that contention as well.
Why are we so slow? That is what you will hear right before things get worse. Why does this happen? The major reason for slowdown in the virtual desktop space comes down to one major culprit... disk I/O. Desktops are very chatty when it comes to the disk. Compare that to a server, even a database server: servers almost always do one or two things the same way all the time, whereas a user is usually running 10 different apps while the OS is constantly reading and writing (surfing the web, typing a report, watching video). Therefore, the inexpensive SATA disks you bought for your VDI SAN because everyone needed 80GB of space are now limiting how many virtual desktops you can support.
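The spindle-count ceiling above is easy to put numbers on. A back-of-the-envelope sketch; the per-disk and per-desktop IOPS figures are illustrative assumptions, not vendor specs, so plug in measurements from your own assessment:

```python
# Rough sizing: how many virtual desktops a disk group can sustain before
# I/O, rather than capacity, becomes the bottleneck.
SATA_IOPS_PER_DISK = 80    # assumed steady-state IOPS for a 7.2K SATA spindle
DESKTOP_STEADY_IOPS = 15   # assumed average IOPS per active virtual desktop

def max_desktops(num_disks,
                 iops_per_disk=SATA_IOPS_PER_DISK,
                 iops_per_desktop=DESKTOP_STEADY_IOPS):
    """Desktops the spindle count can sustain on I/O alone (ignores RAID penalty)."""
    return (num_disks * iops_per_disk) // iops_per_desktop

# Eight 1TB SATA spindles look huge on capacity (8TB raw, ~100 desktops at
# 80GB each) but thin on I/O: 8 * 80 = 640 IOPS -> about 42 desktops.
print(max_desktops(8))
```

Note this deliberately ignores RAID write penalties and boot storms, both of which only make the I/O picture worse; the point is that capacity math and I/O math give very different answers.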
How do we fix this? The best way is to follow the 5 P's (depending on where you grew up, it could be 6): Proper Planning Prevents Poor Performance. We use tools, knowledge, and methodologies taken from previous endeavors to assess the environment and design affordable, high-performance, self-service VDI environments.