Sunday, February 26, 2012

Client4Cloud: New Markets, Viewpoints, and Requirements

Embrace the Shift: From Machines & Systems to Users and Services
Client4Cloud is all about the four vectors in our world that are converging:

1) Users & Devices
2) Apps & Data
3) Management
4) Cloud (infrastructure)

Binding Layers
Within each layer of the stack there are parallel processes, skills, and technologies that need to be in place to bind the layers together, similar to how eggs can bind the layers of a cake, either as part of the frosting or the actual cake itself.

These binding layers will in their own right create new market opportunities and challenges across Software Producers, Enterprises, Cloud Providers, and Service Brokers (aka managed service providers) to fill the gaps.

What are some of the binding layers that need to be addressed and why?
  • Licensing as a Service - automating license tracking, usage, and authorization among cloud service providers, cloud service brokers, software producers, and enterprise customers, spanning heterogeneous clouds, environments, and models. The electronic "postal service" for the Cloud.

  • Transformation Automation from Client to Cloud - planning, migration, orchestration, and analysis to enable visibility, clarity, and control in identifying, onboarding, and maintaining bifurcated systems during the transition from the current machine-centric paradigm to the app/user-centric one.

  • Process Transformation & Automation - convergence requires shifting current legacy product and technology thinking to Users and the Services they consume. As an industry we will have to rethink how we package, create, price, and deliver "Services Not Systems" (ITPI, 2011) across Users, Apps, and Data.

Evolution Not Revolution

Throughout my books and blogs I have reiterated that the convergence process is an evolution, not a revolution. Many companies are jaded because they were caught in the hype cycle of either Desktop Virtualization or Cloud Clutter, where many jumped on the bandwagon without understanding the layers, how they impact customers, and the requirements for proper convergence.

Like all evolutions, this one will take time to embrace, understand, educate for, and develop for. So where does one start? The first step to success is understanding the layers in the stack and the interdependency each has on the success of the others. Convergence means that the siloed desktop, server, network, and mobile approaches will finally need to be torn down, like the Berlin Wall, to truly have the freedom, automation, and control required to realize the advantages of the user-centric paradigm while mitigating risks.

The biggest gaps today are in understanding. The first round of 1.0 products should be carefully evaluated for their ability to adapt and expand with the market as it matures. The market is still fairly nascent, and the requirements are evolving (new regulations, business directives, licensing paradigms).

The first book in the Client4Cloud series focuses on process automation and best practices for shifting from a machine-centric to a user-centric approach. The print copy will be available by April, and the electronic copy is currently available on Amazon.com.


Saturday, August 13, 2011

Client4Cloud: From Vision to Reality

I apologize for not posting for a short while. I have been heads down working on the Client4Cloud (I speak client, cloud, virtualization) series in my free time, and hard at work creating a new market niche with my company (Flexera Software) and industry thought leaders: Licensing as a Service. More to come on this front. Our first webinar will be in September...

After several weekends working to collate and update the information, I am pleased that the project is moving along... stay tuned for review and post dates. Client4Cloud: Desktop Transformation to Universal Clients is well under way.

The timing is perfect given the current buzz around user virtualization. Client4Cloud is all about the paradigm shift from the machine to the user. Truly a different way of "rethinking" and hopefully retooling IT to adjust to the Digital Native revolution. More to come soon... Citrix's acquisition of RingCube is truly a step in the universal client direction...

The Visible Ops Private Cloud series has had quite a bit of positive feedback and traction - thank you all. I have appreciated the time that a few individuals have taken to review, comment, and provide feedback. My co-authors and I truly appreciate all the kindness from those that contributed and those that have read our book. We will be hosting a book signing at the CA booth at VMworld on Wednesday at 11:30 - I would love to meet you and introduce you to my co-authors, Andi Mann & Kurt Milne.

Stay tuned more to come....

Regards,
Jeanne


Monday, April 11, 2011

Journey to Client, Cloud, and Virtualization

Like a canvas, the blank page stares back, waiting to be filled with vibrant colors to create a picture-perfect view of how the artist sees their world. Although many pictures of Client, Cloud, and Virtualization have been painted with vibrant variation, they are still a bit blurred by vendor bias.

The journey I set out on a year ago was to create the picture of Client, Cloud, and Virtualization painted through the clarity of the customer's eyes. Those that truly know me will tell you I am always the first to admit that although my opinions may be interesting, they are not as relevant as those of the customers I serve, for they are the true unsung heroes: the architects, IT admins, CIOs, CTOs, FTEs, and audit and security teams that keep our technology-dependent world humming.

Those of us that have been around as our society became more dependent on technology have seen the sleepless nights, long weekend upgrades, countless hours missed with our families, and problems solved. For those that think it is not critical - think about it the next time you are in the hospital. Look around and see how dependent we are on technology, and not just on its inventors but on those supporting what we create.

Like so many journeys, mine has been filled with surprises, twists, turns, and mini-destinations along the way. I originally set out to publish a vendor neutral guide for Universal Clients (connecting Client, Cloud, and Virtualization) for and by customers.

Detour: Visible Ops Private Cloud
Suffice it to say, while I was busy on my journey I was asked to take a slight detour to work with some old friends and colleagues on creating the next book in the Visible Ops series. Being a big fan of Visible Ops and the IT Process Institute, Andi Mann, and Kurt Milne (my co-authors), it was a worthwhile detour to explore.

I have rather enjoyed working with Andi and Kurt on creating what we hope is a culmination of collective customer and implementer feedback on the challenges and lessons learned from some of the leading IT people we know. Visible Ops Private Cloud: From Virtualization to Private Cloud in 4 Practical Steps was a joy to work on because it was for customers, by customers. We stopped at Private Cloud because there appears to be a bifurcation in the market currently: public clouds are largely being consumed by small to medium businesses that lack internal IT and are less regulated and less complicated than their larger Enterprise counterparts. SMBs have gone full force on public clouds because they provide agility and the ability to implement services at a fraction of the cost.

However, the Enterprise is proceeding with caution at the moment because neither the Software Producers nor the Cloud Providers have all the kinks worked out to provide compliant, cost-effective systems for Cloud Bursting and utilization of Public Clouds in highly regulated environments. That is not to say they are not testing the waters with test and development. They are just not jumping in full force...yet...

Visible Ops Private Cloud captures the much needed foundation of people, processes and technology to enable IT to build to the next level: Cloud Bursting/Hybrid Cloud and eventually Universal Clients.

Final Destination: I Speak Client, Cloud, & Virtualization Series Coming Soon...
Suffice it to say, prior to writing Visible Ops Private Cloud, I logged quite a bit of time with customers, conducting interviews, doing research, and writing about the next major paradigm shift that is currently under way: Desktop Transformation to Universal Clients (Client, Cloud, and Virtualization). Although the heavy lifting is done... there are still promises to keep and miles to go before it comes out into the wild... but rest assured it is not too far off.

With a little help from former colleagues and family, expect to see the "I Speak Client, Cloud, and Virtualization" series coming soon. We are working on the details of publication. The "I Speak" series is created for customers, by customers. The series will provide small "bytes" of clarity around people, processes, and technology to help IT cut through the chaos surrounding Desktop Transformation from static systems to Universal Clients in the Cloud. Stay Tuned....

Saturday, March 26, 2011

Creating Clarity out of Cloud Fog

For my readers, I apologize for not posting more frequently. For the past 6+ months I have been finalizing a book with the IT Process Institute on Private Cloud Computing. It will be coming out mid-April. The final copy was just sent to edit YESTERDAY!!! Check out www.itpi.org to order in a couple of weeks. It will also be on Amazon.

First and foremost, I would like to thank all 50+ participants, from both Vendors and Customers, in the interviews, reviews, and the overall edit and review process. Most importantly, we want to thank the 30+ customers that did qualitative interviews. The inspiration and purpose of the book was to create something with customers and for customers to reduce the clutter and confusion that has come to be known as the "Cloud".

Customers struggle, and frankly have had enough of vendors being so introspective that we lose sight of what is really important: them. Those that sign the checks and use the products in production are sophisticated and smart, and they really know what they need. It is refreshing to know that, as with Vista, customers just are not buying it. They are looking at what the benefits are for their business and taking baby steps to figure out how to get there. They are NOT buying into any platforms or solutions that promote vendor lock-in, leaving them without control over their destiny or the costs associated with hosting a Web 2.0 solution. They do have options and they know it.

How does one find Clarity in the "Foggy" Cloud?
It is people and processes, not technology, that at this point will assist in understanding the various layers and what is truly involved. When there are so many different architectures, committees, limited standards across solutions, and opinions that of course point back to the technology of the day, it is best to start with requirements and determine the right blueprint from there.

  • Juice Worth the Squeeze? You need to understand the true TCO (OpEx and CapEx) to determine if the value is really there at this point in time. Will it give you a competitive advantage and help transform IT from a cost center to a profit center by extending services, or will you sink millions of man-hours and dollars without ever realizing results?

  • Betamax or VHS? There are so many different architectures and new layers, from the platform and OS stack to the virtual app, that it is really hard at this point to determine the best way to go. Most top performers do pilots with their ISP to achieve at least a Software as a Service offering while trying to determine how this will all shake out.

  • Public or Private? The real answer for most is BOTH. Security and compliance are making it harder to burst into the Public Cloud, but not impossible. There are some really interesting technologies out there that enable encryption of the application and data, or signing a digital fingerprint to the OS, but it doesn't come without some work and lots of planning.

  • Hip or Hype? Part of the reason for the "Fog" is way too much hype and over-marketing around cloud. It is hard to buy into one vendor's vision or another when they are attaching themselves to the cloud for the coolness factor and, in some cases, may not understand the business problems to be solved. Come armed with a list of requirements and demand to talk to architects who can paint a clear picture before even doing a pilot.

  • Vision or Hallucination? Some areas have GREAT ideas, but without execution they are merely hallucinations. How experienced are these visionaries? Have they actually done a deployment of your magnitude, with the battle scars and lumps to make sure they are not taking your business so far past the bleeding edge that you hemorrhage? There is no crystal ball, but when going into new terrain it is always better to bring an experienced guide that has been there and done that before.

Start Where You ARE, Know Where You Want to Be, and Then Jump... It is much easier to course-correct when you know your destination than to get lost in the fog of marketing, new products, different platforms, and vendor lock-in. There are A LOT of new standards coming - stay tuned to www.dmtf.org... understanding the standards will help you shape both custom and out-of-the-box decisions.

Good Luck!

Regards,
Jeanne Morain

Wednesday, September 8, 2010

Citrix & Cisco UCS - Makes Perfect Sense

One of the best posts I have seen coming out of VMworld is from Harry Labana, CTO of Citrix. What a few may not know is that prior to joining Citrix, Harry was a critical contributor to the overall desktop architecture of Goldman Sachs (nearly 80,000 desktops at one time). To start, I must admit that as a long-time Marimba customer I have admired Goldman Sachs for their top-notch architecture and innovation. They were one of the first to do a virtual desktop solution before it was the in thing to do.

His latest blog post articulating the value of the Cisco & Citrix relationship is refreshing because it brings the customer experience and reality into the overall virtualization media hype. For those that have not read it yet: http://community.citrix.com/display/ocb/2010/09/07/Cisco+and+Citrix+partner+to+further+enhance+the+desktop+virtualization+ecosystem

Of course it has the "Vendor's" glasses on, as we would all expect, but to further Harry's point, Cisco UCS & Citrix combined with NetApp do make perfect sense for bringing Desktop Virtualization from a point solution to the mainstream. Why?

UCS brings a host of different technologies (not just the network and the routers), including key integrations with existing management frameworks (like BMC, HP, MS, and others) to help simplify the transition from legacy hardware/software to virtual environments and the cloud.

UCS was designed around Policy Orchestration, Templates, and ITIL - key success factors in Systems Management (and BSM) that the biggest and best datacenters have recognized and adopted over the years.

NetApp has been a key VMware partner for years and has built up the credibility and IP around optimizing storage access and control not only in virtual server environments but also virtual desktop environments almost since their inception.

Starting with Presentation Server, Citrix has tried for years to solve "problem" application issues for desktops delivered from the server realm.

Many Johnny-come-lately vendors speak of revolution, suggesting a rip-and-replace of existing systems without understanding the true implications from a customer perspective, or in some cases without understanding what it takes to build out and maintain the infrastructure needed to manage a large number of distributed endpoints - not just with regard to technology but also the overall business impact.

Why is this important?
  • Customers don't have a greenfield - they have legacy systems management, hardware, OSes, and apps that in some cases they cannot easily swap out. But where they can, it makes sense to look at the whole solution (network, storage, desktop experience). Their existing systems help them report on and prove compliance (HIPAA, PCI, SOX, etc.). Whatever they add needs to work with what they have today, not just where they will eventually be in a few years.

  • Desktops have a bigger impact on the business - many companies are starting with niche deployments because they cannot afford a minute (or hour, or week) of downtime across their entire call center, work group, or other key individuals that rely on the system to sustain their primary job function. This will be even more prevalent as EMR and other regulatory requirements around technology become more mainstream. For example, a doctor without patient history (meds, ailments) is like a fish out of water. The same rings true for a marketer without PowerPoint, a lawyer without case law and contracts, etc. It is an even bigger issue for the small to medium business owner obtaining services from the Cloud without on-site IT.

  • Rising Energy Costs & Impact on the Datacenter - many companies I have worked with over the years moved to server virtualization because they were running out of power in the datacenter. Energy costs have continued to climb (particularly in areas like Phoenix that have a high number of data centers), and in the down economy many IT shops cannot justify adding another POP or expanding power consumption. Remember, IT usually is not their primary business but a means to do business...

The key takeaway that I saw in this article is that they are looking at it from a different perspective (the one that counts): the customer's.

Monday, August 23, 2010

Client Virtualization - Hard Look at ROI

Client Virtualization at first blush can have a compelling impact on ROI when you view it through the eyes of the vendor. The real truth lies in the details and the overall impact on your install base. When trying to determine the true ROI and/or TCO from both a CapEx and OpEx perspective, it is best to understand the total impact of the solution selected.


How do you realistically calculate TCO or ROI?
People often ask me whether to start with CapEx or OpEx. The real answer lies in both. Depending on the type of client virtualization being implemented, you will want to look at the entire lifecycle of the client, the applications, and the overall business directives. Remember that sometimes TCO/ROI is not enough to build a case. This particularly rings true when, for example, the implementation would have a significant impact on end users' ability to perform their job function (road warriors, doctors, teachers) and their level of connectivity.

Watch Out for Shifting Costs
Virtual desktops and applications do have significant value in certain situations to aid with compliance, reduce application lifecycle costs, and eliminate downtime. However, there are additional costs that must be considered when building the case to determine whether a particular solution or architecture is right for your business.

Each vendor will provide a nice TCO or ROI calculator based on what is "known" today for a typical desktop deployment. However, virtualization of desktops and/or applications is anything but typical. It adds overhead and complexity that must be factored into the calculations, such as:

  • Reduces Systems Management's Ability to Diff (Byte-Level) Updates of Applications - this increases the application data load on networks. The impact varies per virtual application and systems management vendor and needs to be included in your selection testing.

  • Increases Storage Requirements both in the data center and on the endpoint - this requires additional hard drive/NAS/SAN capacity and processing power. For example, prior to application or client virtualization, the user typically had only a single copy of the OS or application on the endpoint. Now they can have multiple copies of the OS, different versions of the same application, and/or programming frameworks (.NET or JVM). This in turn increases storage requirements in the data center for storing those multiple copies, along with the impact on the network for download, patch, and update.

  • Increases Management Overhead in Other Areas - while virtualization decreases some areas of management overhead (packaging/repackaging and test), it increases the complexity of the overall management burden. Why? Because before, you had only a single application to patch, update, inventory, and manage on a single OS. Now there are multiple OSes for single users and multiple apps. Each application will need the same level of care for patch, update, inventory, and management in order to ensure compliance with regulatory, business, and security directives. Part of the ROI exercise should be calculating the maximum number of applications and/or OSes you will support in your client environment and what the costs (with the new virtualization factors) will really be.

  • Increases Operational Expenses in Other Areas - before, the line between server and desktop was very clearly drawn, with the exception of Citrix Presentation Server (now XenApp). With the introduction of desktop virtualization, many companies are coming to realize that they will need more seasoned experts in the troubleshooting cycle to assist the help desk (solution centers). These individuals have to be virtual host experts, able to determine which server originated which version of which application and OS, in order to troubleshoot individual issues, audit application access, and understand total impact. Network, database, and server virtualization experts will all need to be part of level 2 support (not just escalations any more), and/or the service desk will need more of these experts. This in turn drives up operational costs.

  • Impact on End User Productivity - depending on the application, this one can have mixed results. It is critical to understand who the target users are and the overall impact this type of technology will have on their job function prior to deciding to virtualize their desktops or applications. For example, there is a big push in healthcare to provide clinical desktops for physicians, nurses, and RTs as either virtual desktops or applications. This has had mixed success depending on the implementation and the stability of the technology. For high-bandwidth, high-throughput scenarios it typically works great, until the network or electricity is down and/or the physician tries to access the application remotely from a clinic or home office that has low bandwidth. Backup procedures and access need to be built into the equation for critical applications as part of the overall DR plan for each user. AND the cost of downtime needs to be calculated not just as an hourly dollar amount but as overall business impact (liability, customer care, and employee satisfaction). Don't forget, it was the users, not IT, that killed Vista deployments due to the overall impact on their job performance...
There is more to reviewing the overall ROI/TCO than what you can extract from a vendor's calculator. Remember, vendors are not going to build in any factors that reflect their solution in anything but a positive light (they want your business). It is up to YOU, the customer, to determine what the hidden factors are and calculate them into the overall equation. I have seen customers that, depending on their business model, elected to move only a subset to virtualization, or selected just a component, once they realized that the traditional model was still less expensive from a people, processes, and technology perspective (for both CapEx and OpEx).
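To make the shifting-cost idea concrete, here is a minimal sketch of the kind of per-seat calculation described above. Every dollar figure is a made-up placeholder (not from any vendor or real deployment); the point is only the shape of the math: amortized CapEx plus annual OpEx, with virtualization's shifted costs (extra storage, senior support) broken out as separate line items you must discover yourself.

```python
# Hypothetical, illustrative TCO comparison for a desktop fleet.
# All figures are placeholders; substitute your own vendor quotes,
# labor rates, and measured storage/network impact.

def annual_tco_per_seat(capex, capex_years, opex_per_year):
    """Amortize capital cost over its useful life and add yearly opex."""
    return capex / capex_years + opex_per_year

# Traditional desktop: one OS image, one copy of each app per endpoint.
traditional = annual_tco_per_seat(
    capex=900,            # PC hardware + licenses (placeholder)
    capex_years=4,
    opex_per_year=350,    # imaging, patching, help desk (placeholder)
)

# Virtualized desktop: lower endpoint capex, but shifted costs appear:
# datacenter storage for multiple OS/app copies and level-2 experts.
virtualized = annual_tco_per_seat(
    capex=400 + 250,      # thin client + datacenter share (placeholder)
    capex_years=4,
    opex_per_year=250 + 80 + 60,  # base ops + extra storage + senior support
)

print(f"traditional: ${traditional:,.2f}/seat/year")
print(f"virtualized: ${virtualized:,.2f}/seat/year")
print(f"delta:       ${virtualized - traditional:,.2f}/seat/year")
```

With these invented numbers virtualization comes out slightly ahead, but flip any one placeholder (say, the senior-support line) and the answer reverses, which is exactly why the hidden factors matter more than the vendor's calculator.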

Regards,
Jeanne Morain
jmorain@yahoo.com

Tuesday, July 20, 2010

Malware Attacking SCADA Systems - from USB Device

A really interesting article that I think we should all be aware of, "Microsoft Investigating Windows Zero-Day Trojan," brings to light an even bigger threat to our overall ecosystem and economy from cyber terrorism.

For those that may not be aware of the importance of SCADA systems, you may want to recall the blackout a few years ago that took out the electrical grid from Ohio to New York. Many do not know that it was believed to have been caused by a virus infecting the reporting system. These systems power nuclear plants, electrical grids, oil pipelines, etc.

This article makes very clear that as a global economy we have to think about the technologies we put in place and their impact. These types of viruses should be a concern not only for USB devices on SCADA systems but also for those embarking on their journey into client virtualization.

Why worry? Virtualization exponentially increases the security risks to companies and our underlying infrastructure. How? VM sprawl and undetected/unregistered virtual applications that have security holes in their virtual operating systems. While SCADA systems are pretty locked down, if a USB device can communicate with a rootkit in the underlying operating system, what about virtual operating systems that can go undetected by traditional inventory programs?

VMs in the wild may not have inventory agents installed and may not be accessible on client systems when the VMs are offline (unlike vSphere in the datacenter). Application virtualization poses an even greater threat here.

Typically, inventory tools search the registry for key elements that identify an installed application, and patch management tools apply patches to the underlying OS. But if the OS is virtual, unless the tool is specifically integrated or programmed to do so, traditional tools will not see the virtual OS or be able to patch it. If the person using the virtual application has administrative rights to their machine, then a virus can continue to exploit the vulnerability within the virtual operating system and pass through to the underlying PC.

What are ways around this?
  1. Lock down the PC - disallow administrative rights. This is of course hard to do for some organizations, as many legacy applications still require administrative rights to function.

  2. Register the virtual application - ensure the virtual application technology allows you to register it with the underlying operating system (for example, ThinApp uses ThinReg). Do not use technology from vendors that do not provide some mechanism for alerting the physical system that the application is there.

  3. Ask your inventory and patch management vendors if they support that application type - some vendors do have integration with traditional tools such as SCCM or BMC. Tools like BMC BladeLogic for Clients (Marimba) can provide inventory for applications deployed through their system, which is useful for at least base inventory when there is no clear out-of-the-box integration. I would also recommend requesting support from systems management patch vendors for some type of hook into these solutions to quickly patch them without repackaging. This last part is one of the biggest inhibitors to broad-scale adoption of application virtualization beyond just a handful of applications.

  4. Create processes with service level agreements to patch the virtual OS - many companies I have worked with over the years have set SLAs to quickly apply patches across their many computers. How do they do it across dozens of virtual applications? It depends on the architecture of the virtual application. Make sure you work with your vendor's services team to create a disaster recovery plan for zero-day viruses such as this, to ensure the virtual OSes receive the same patches on a monthly basis as part of your overall patch process.

  5. Only run virtual applications in user mode - when possible, eliminate administrative rights. Most SCADA systems are pretty locked down, which makes the USB trojan even more worrisome. Companies that choose to leverage application virtualization should take their overall imaging and rights management processes to the next level. Now that you have technology that can lock down access rights - use it.
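As a rough illustration of the blind spot behind steps 2 and 3, the sketch below reconciles what a registry-based inventory scan "sees" against a catalog of virtual applications known to the packaging team. The tool behavior is simulated and all application names are invented stand-ins; a real implementation would pull the scan from your systems management tool and the catalog from your packaging repository.

```python
# Illustrative sketch (not a real inventory tool): any virtual app
# absent from the registry-based scan is an unpatched blind spot.

def find_unregistered(registry_scan, virtual_app_catalog):
    """Return virtual apps a traditional registry scan cannot see."""
    seen = set(registry_scan)
    return sorted(app for app in virtual_app_catalog if app not in seen)

# Stand-in data: the Firefox package was registered with the physical
# OS (e.g., via a ThinReg-style mechanism), so the scan sees it; the
# other two virtual apps were never registered.
registry_scan = ["Office 2007", "Acrobat 9", "ThinApp: Firefox 3.6"]
virtual_app_catalog = [
    "ThinApp: Firefox 3.6",
    "ThinApp: Java App X",
    "Streamed: CRM Client",
]

for app in find_unregistered(registry_scan, virtual_app_catalog):
    print(f"NOT visible to inventory/patch tools: {app}")
```

Running a reconciliation like this against your packaging records is one cheap way to measure how many virtual OSes are falling outside your patch SLAs before an auditor, or a trojan, finds them for you.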

Some virtualization vendors will claim anti-injection protection, etc., which is great, but you are only as strong as your weakest link. It is important to really think through the security ramifications prior to deploying virtualization technology (virtual machines or applications) on clients. Make sure they fit into your existing SLAs and don't put your company at risk.

Regards,
Jeanne Morain
jmorain@yahoo.com