Cameron van Orman writes for the SD Times
When asked to write a Guest View article for the SD Times, Cameron van Orman knew exactly which subject he would tackle. Recent application launches have “forced [the relationship between Dev and Ops] to the forefront of our IT discussions. And it’s about time,” he says. Cameron writes [...]
Ditch the spreadsheets for a more reliable capacity planning process
Capacity planning matters more than ever in today’s fast-paced mobile age, where being first to market with business services can determine a company’s success or failure. When infrastructure bottlenecks wreak havoc with a Day 1 application launch, the repercussions of rolling back [...]
I’d like to start off by highlighting two recent IT mishaps to really set the tone here. I’m sure you have heard of one, but perhaps not the other.
The less publicized event was the recent SimCity game launch. On Day 1, users could only play the game online while connected to host servers; playing offline was not an option. What wasn’t expected was the number of users buying and downloading the game at launch, driving up utilization on the SimCity host servers along with customer wait times. Players couldn’t sign in or were booted out mid-game, losing their progress, and the servers were taken down for maintenance for part of Day 1. Customers didn’t get what they paid for and wanted refunds. You can imagine the negative reviews that followed.
The second IT mishap was Healthcare.gov. With 8.6 million visitors and 250,000 concurrent users within three days of launch, the site suffered slow performance, error messages and, at times, complete outages. Like the SimCity incident, the application was rolled back for maintenance. Key issues identified post-launch included inadequate application testing and insufficient infrastructure scalability to meet the explosive Day 1 demand.
I suspect that anyone reading this blog didn’t win the recent $648 million Powerball jackpot. If you did, then shut down your laptop and head straight to the beach. Unfortunately, for the rest of us, it’s back to reality.
The reality is that data centers all over the world have rows and rows of hardware in them – collecting dust and wasting valuable IT budget. Industry analysts report that average data center utilization is around 20%, which backs up this claim. Long gone are the days of purchasing additional server horsepower as an insurance policy against peaks in workload. Today, IT has to extract the most out of its existing infrastructure while ensuring that the business load can be supported reliably and with solid performance levels.
If that’s the case, why risk not having insight into the current utilization and capacity of the data center infrastructure that supports your critical business initiatives? Do you really need to purchase millions of dollars’ worth of additional hardware when it comes to refresh time? Or can you use the under-utilized resources you already have in your data centers and save some bucks? Some CA Technologies customers call this resource harvesting. Without infrastructure capacity insight, those questions will go unanswered and you are gambling with application performance and customer satisfaction.
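To see why resource harvesting is attractive, a back-of-the-envelope calculation helps. This is a minimal sketch with made-up numbers (500 servers, a 60% consolidation target); only the 20% average-utilization figure comes from the analyst claim above.

```python
# Back-of-the-envelope resource harvesting. The server count and the
# consolidation target below are illustrative assumptions.

def harvestable_servers(total, avg_util, target_util):
    """Servers that could be reclaimed by consolidating workloads
    from avg_util up to a safer target_util."""
    busy_equivalent = total * avg_util        # work expressed in fully busy servers
    needed = busy_equivalent / target_util    # servers required at the target
    return total - needed

# 500 servers at the oft-cited 20% average, consolidated to 60% utilization
print(round(harvestable_servers(500, 0.20, 0.60)))
```

Even with a conservative consolidation target, roughly two-thirds of the estate turns out to be harvestable – which is exactly why buying more hardware at refresh time deserves scrutiny.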
This article originally appeared in Network World.
Capacity management was relatively easy when workloads grew incrementally, but those days are gone. Customer-facing Web and mobility services can spike unpredictably, and Big Data workloads can quickly overwhelm existing capacity. In this new world of wild workload fluctuations, IT has to do a better job of managing capacity.
Today many IT groups are manually collecting performance data from siloed tools and systems. A virtualization administrator will gather metrics from their virtualization vendor’s operations platform, a hardware administrator will leverage a hardware-monitoring platform, and so on. Capacity management tends to be done on an ad hoc, inconsistent basis. As a result, if a CIO is looking to make plans or report on capacity, he or she may have multiple teams delivering many different reports.
Ultimately, without comprehensive, effective capacity management, IT organizations are flying blind and working in reactive mode. This not only makes it difficult to manage current infrastructure and capacity demands, but significantly hinders the organization’s ability to support emerging requirements and initiatives.
Editor’s note: Robert Limbrey originally authored this post for his own Cloud Capacity blog in response to a Wall Street Journal Blogs item. Given the recent news surrounding the not-so-smooth rollout of healthcare.gov, we thought this advice would be applicable.
Before you decide on your strategy for investing in IT assets, whether on-premise or public cloud, there are a couple of important correlations to consider.
Clearly, your decision ultimately will be based on:
- The fit with your longer-term business plans
- The measurable benefit to your business
- The investment needed
Investments in IT capacity enable your business to transact a certain amount of business. As in other areas of your business, investments in capacity should be made to alleviate bottlenecks and increase the ability to transact business. However, in IT there are certain complexities to factor in.
It seems like just yesterday that I was talking about the latest release enhancements in CA Capacity Management, which provide customers advanced scalability and capacity analysis for their business services. Well, maybe it was a couple of months ago, but it’s been a busy summer for our developers, and once again they aim to impress.
Our end of summer service pack release includes new versions of the CA Capacity Management suite as well as solution kits for Microsoft SCOM and VMware.
Scalability and performance improvements include faster data source and navigation tree load times, as well as smoother migration paths when upgrading from previous versions.
Usability enhancements include new wildcard support for faster resource selection when creating data sources, and a streamlined Virtual Placement Manager definition panel that greatly simplifies placement exercises. The new at-a-glance view identifies which options need attention via quick links for easy navigation throughout the placement definition scenario. Online help topics for Capacity Command Center, Data Manager and Virtual Placement Manager are now consolidated into a single point of entry for an improved navigation and search experience.
Wherever you are on your virtualization path – whether just starting out with physical-to-virtual initiatives or considering cheaper alternatives to the high license fees you may be paying now – deciding which platform will best support business success and IT initiatives is a top priority. Any change in IT, especially virtualizing critical business services or migrating applications from one virtualization platform to another, can have catastrophic effects on the customer experience if not planned for appropriately. IT organizations today need to ask themselves questions like “What are the appropriately sized VM templates to build for this workload?”, “What is the effect on host utilization when virtualizing this application?” and “Is it cheaper to host this virtualized environment on platform A or platform B?”
The latest release of CA Capacity Management lets users easily understand the impact of any virtualization effort and its resulting effect on the end-user experience. By combining a deep understanding of the current utilization metrics of the existing workload with the scalability factors of the platform you’re considering migrating to, the solution offers valuable insight, enabling you to make more informed IT decisions that support the business and the customer experience.
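The “effect on host utilization” question above is, at its core, arithmetic over workload demand and host capacity. The sketch below illustrates that core idea with hypothetical numbers; the function name, the 10% overhead factor and the MHz figures are my assumptions, not CA Capacity Management’s model.

```python
# Illustrative placement math: project host CPU utilization after
# virtualizing a set of workloads. All numbers are hypothetical.

def projected_utilization(host_capacity_mhz, workloads_mhz, overhead=0.10):
    """Estimate host CPU utilization after placement.

    overhead approximates per-workload hypervisor/virtualization cost.
    """
    demand = sum(w * (1 + overhead) for w in workloads_mhz)
    return demand / host_capacity_mhz

# A 2-socket, 8-core host at 2,500 MHz per core = 40,000 MHz of capacity,
# hosting three candidate workloads measured at their current peaks.
util = projected_utilization(40_000, [6_000, 9_500, 4_200])
print(f"Projected host utilization: {util:.0%}")
```

A real placement analysis also weighs memory, I/O and platform scalability factors, but even this toy version shows why measuring current workload demand has to come before choosing VM templates.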
If your IT organization is struggling with inadequate capacity management capabilities, it has a lot of company. To stay on top of market trends and customer experiences, CA Technologies recently conducted two surveys – one looking at the enterprise IT market at large, and one specifically polling CA Capacity Management customers – and found some very interesting results. In fact, 72% of respondents indicated they didn’t have the capabilities they needed to make accurate forecasts. Further, 62% reported that they didn’t have sufficient visibility into their data center assets.
Survey respondents leave no doubt why the status quo in many organizations isn’t cutting it. While 85% indicated that rationalizing investments was critical, 40% use no formal tools for optimizing resource allocation of their physical and virtual servers—and that’s not including the 27% that use Excel spreadsheets for this effort.
We’ve all done it: tried to get that window or emergency-exit-row seat on a long flight, or requested the ocean-view suite in our favorite hotel. Usually I’m the one who’s procrastinated way too long and gets stuck with whatever’s left – leaving me with that “I should have planned better” feeling.
The same goes for successful IT initiatives like application rollouts or upgrades. A well-oiled capacity planning process can always help you understand the current capacity levels of the supporting infrastructure, as well as future needs to support the business. But many times, by the time additional capacity is predicted and then needed, that infrastructure has been gobbled up by another project. The early bird gets the worm, right? You don’t want to be stuck with that middle seat in the last row of your flight – so you need to plan better.
The latest release of CA Capacity Management lets users efficiently reserve capacity for upcoming projects, application rollouts, and users or companies being added to the environment. The reservation system is extremely useful for planning requirements when deploying large tiered applications or staging rollouts of large-scale production systems.
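The idea behind a reservation system is simple: claim capacity when the project is approved, not when the servers are needed, so another team can’t gobble it up first. Here is a minimal sketch of that logic; the class, project names and vCPU figures are all hypothetical and not part of any CA product API.

```python
# Illustrative capacity-reservation check; identifiers are hypothetical.
from datetime import date

class CapacityPool:
    def __init__(self, total_vcpus):
        self.total_vcpus = total_vcpus
        self.reservations = []  # (project, vcpus, needed_by)

    def reserve(self, project, vcpus, needed_by):
        """Reserve vCPUs for a future rollout, or fail fast if the
        pool is already committed elsewhere."""
        committed = sum(r[1] for r in self.reservations)
        if committed + vcpus > self.total_vcpus:
            raise RuntimeError(
                f"only {self.total_vcpus - committed} vCPUs uncommitted; "
                f"{project} needs {vcpus}")
        self.reservations.append((project, vcpus, needed_by))

pool = CapacityPool(total_vcpus=64)
pool.reserve("CRM upgrade", 24, date(2014, 3, 1))
pool.reserve("mobile rollout", 32, date(2014, 4, 15))
# A third project asking for 16 vCPUs would now be rejected up front,
# instead of discovering at rollout time that the early bird got the worm.
```

The fail-fast check is the point: the conflict surfaces at planning time, when you can still buy or harvest capacity, rather than on Day 1.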
As organizations depend more on IT to run their business, it’s critical to deliver on the performance of key business services, especially those that are customer or partner facing. The pressure is intense, since the quality of business service delivery determines how customers and other constituents perceive the organization.
IT organizations are also anxious to reap the tremendous savings promised by hybrid cloud hosting environments. However, there are real and perceived risks to putting mission-critical business services into an environment that isn’t fully understood. You wouldn’t hand over your baby to a stranger, would you? Then why gamble with your applications by considering cloud solutions without doing the proper planning and research?
The latest release of CA Capacity Management provides comprehensive analysis of public cloud vendor offerings, letting customers preview how their applications will perform – and how much they will cost – when considering hybrid cloud migration initiatives. Not all public hosting platforms are architected the same. While hosting your applications at Provider A will give you certain performance levels at a certain price, hosting at Provider B may deliver the same or even better performance at a lower price.
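The Provider A vs. Provider B comparison comes down to price-performance, not price alone. The sketch below shows one way to normalize the two – cost per million requests served – using entirely made-up prices and throughput figures; it is not how any particular vendor or product models cost.

```python
# Hypothetical price-performance comparison of two cloud providers.
# Prices and throughput figures are invented for illustration.

providers = {
    "Provider A": {"usd_per_hour": 0.48, "req_per_sec": 900},
    "Provider B": {"usd_per_hour": 0.40, "req_per_sec": 950},
}

def cost_per_million_requests(p):
    # hours needed to serve one million requests, times the hourly price
    hours = 1_000_000 / (p["req_per_sec"] * 3600)
    return hours * p["usd_per_hour"]

for name, p in providers.items():
    print(f"{name}: ${cost_per_million_requests(p):.3f} per 1M requests")

best = min(providers, key=lambda n: cost_per_million_requests(providers[n]))
print("Cheapest for this workload:", best)
```

With these numbers the nominally cheaper provider also happens to be faster, but the two axes can just as easily point in opposite directions – which is exactly why the comparison has to be run per workload.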
Now, more than ever, IT shops need to deliver on the promise of business innovation and an exceptional end-user experience. The race to deliver this experience to customers is a never-ending marathon of who can be the first out of the gate while meeting the ever-demanding expectations of today’s “I want it now” user.
And not making it any easier is the fact that today’s business services have evolved into very complex environments. In the old world, applications ran within an enterprise, often within the same rack of servers. Today, a single business service can span hardware platforms, hypervisors and geographical locations – leaving IT shops to wonder, “What went where, when, and what happened?”
What is needed is one end-to-end capacity planning solution that understands the scalability of your supporting infrastructure – from the mainframe to distributed systems and even into the cloud. You can’t be expected to deliver on service levels for an application that spans all these platforms without first understanding their current utilization levels as well as their capacity limitations. This understanding provides the insight needed to predict the future behavior of that application when change occurs – demand increases, hardware upgrades or new application rollouts.
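Predicting behavior as demand grows can be illustrated with the simplest possible model: project utilization forward and ask when it crosses a safe ceiling. The sketch below assumes utilization scales roughly linearly with transaction volume and uses invented numbers; real capacity tools model this with far more nuance.

```python
# Minimal demand-growth projection, assuming utilization scales
# roughly linearly with transaction volume. Numbers are illustrative.

def months_until_saturation(current_util, monthly_growth, ceiling=0.80):
    """Months until projected utilization crosses the safe ceiling."""
    util, months = current_util, 0
    while util < ceiling:
        util *= 1 + monthly_growth
        months += 1
    return months

# A server at 45% utilization whose workload grows 8% month over month
print(months_until_saturation(0.45, 0.08))
```

Even a crude projection like this answers the question that matters for planning: not “are we okay today?” but “how long until we aren’t?”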