Data center infrastructure management
Data center infrastructure management (DCIM) is an emerging (as of 2012) form of data center management that extends traditional systems and network management approaches to include a data center's physical and asset-level components. DCIM integrates the information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. In essence, it provides a significantly more comprehensive view of all of the resources within the data center.
A successful DCIM deployment combines specialized software, hardware and sensors. The promise of DCIM is a common, real-time monitoring and management platform for all interdependent systems across the IT and facility infrastructures. Over the longer term, intelligence and highly specialized automation capabilities are expected to be layered onto this foundation, creating a dynamic infrastructure that can adjust or tune itself to more closely match data center resource supply with workload demand. With over 100 vendors now claiming to offer components that fit within the DCIM landscape, the rapid evolution of the category is spawning many associated data center performance management and measurement capabilities, including Data Center Energy Productivity (DCeP) and Data Center Predictive Modeling (DCPM), intended to provide increasingly cost-effective operations support for certain aspects of the data center.
Since its identification as a missing component of optimized data center management, the broad DCIM category has been flooded with a wide range of point solutions and hardware-vendor offerings intended to fill this void. The analyst firm Gartner has introduced a set of terms to segment this population of DCIM vendors. DCIM Suite vendors, numbering around a dozen in 2012, offer software that is comprehensive and integrated in nature; these suites handle lifecycle asset management and touch on both IT and facilities. A second term, DCIM Specialists, describes the remaining DCIM vendors, whose products can generally be viewed as enhancements to the DCIM Suite offerings. (The term DCIM Ready is also used by some to describe this same group of vendors providing enhancement solutions.)
The large framework providers, including Hewlett-Packard, BMC, CA and IBM/Tivoli, are re-tooling their own wares and creating DCIM alliances and partnerships with other DCIM vendors to complete their management pictures; each has promised that DCIM will become part of its overall management structure, and each is pursuing that goal through these in-house and partnership efforts. The inefficiencies previously caused by limited visibility and control at the physical layer of the data center are simply too costly for end users and vendors alike in an energy-conscious world.
While the physical layer of the data center has historically been treated as a hardware exercise, a number of DCIM Suite and DCIM Specialist software vendors, including (alphabetically) Altima, APC by Schneider Electric, Cormant, Emerson, FieldView, Nlyte, Rackwise, RFcode and Sentilla, offer varied DCIM capabilities, including one or more of the following: capacity planning, high-fidelity visualization, real-time monitoring, environmental/energy sensors, business analytics, process/change management, and integration with various types of external management systems and data sources.
Clearly, data center management domains are converging across the logical and physical layers. This type of converged management environment will allow enterprises to use fewer resources, eliminate stranded capacity, and manage the coordinated operations of these otherwise independent components.
Driving factors 
According to most of the major IT analyst firms, use of some form of DCIM is expected to exceed 60 percent market penetration by 2015, versus less than 10 percent market penetration in 2012. Several trends are driving the adoption of DCIM. These drivers include:
- Increased power and heat density
- Data center consolidation
- Virtualization and cloud computing
- Increased reliance on critical IT systems
- Energy efficiency or Green IT initiatives
At a high level, DCIM can be used to address data center availability and reliability requirements: it can identify and eliminate sources of risk, increasing the availability of critical IT systems, and DCIM tools can map interdependencies between the facility and IT infrastructures to alert the facility manager to gaps in system redundancy.
Worth noting is a segmentation that began to emerge at the end of 2012: in many public forums, the roster of DCIM suppliers is being grouped into two buckets, or segments, in an attempt to reduce customer confusion when researching DCIM solutions. The first bucket comprises the integrated software suites, in which a comprehensive set of lifecycle asset management features is brought together around a common view of the data center. Integrated repositories, reporting and connectivity are all expected within these suites, which share a common look and feel and leverage the underlying asset knowledge where appropriate; a single source of truth exists across the entire suite for any given attribute.
The second group of DCIM suppliers includes all of the remaining 100+ vendors. These vendors enhance the DCIM suites and can also exist as stand-alone solutions; they are also referred to as 'specialists' or 'DCIM-ready' components, and include sensor systems, power management solutions, analytics packages and monitoring tools. One or more of these enhancement solutions will likely be deployed alongside, or coupled with, a single selected DCIM suite.
To reduce energy usage and increase energy efficiency, DCIM enables data center managers to measure energy use, enabling safe operation at higher densities. According to Gartner, DCIM can lead to energy savings that reduce a data center's total operating expenses by up to 20 percent. In addition to measuring energy use, computational fluid dynamics (CFD) modeling is used to create a virtual facility that optimizes airflow, further driving down cooling infrastructure costs.
Certain vendor implementations of DCIM suites allow optimal server placement with regard to power, cooling and space requirements; U.S. Patent 7,765,286 discusses this type of intelligent placement based upon one or more existing data center conditions.
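The placement idea can be sketched as a constraint check followed by a best-fit choice. The sketch below is hypothetical: the rack attributes, thresholds and best-fit rule are illustrative assumptions, not taken from any vendor product or from the cited patent.

```python
# Hypothetical sketch of constraint-based server placement: filter racks
# that satisfy the power, cooling and space requirements, then pick the
# one with the least leftover power headroom (a simple best-fit rule).
from dataclasses import dataclass


@dataclass
class Rack:
    name: str
    power_headroom_kw: float    # unused power budget on the rack's feed
    cooling_headroom_kw: float  # unused cooling capacity at this location
    free_units: int             # empty rack units (U)


def place_server(racks, power_kw, cooling_kw, units):
    """Return the best-fit rack for a server, or None if nothing fits."""
    candidates = [r for r in racks
                  if r.power_headroom_kw >= power_kw
                  and r.cooling_headroom_kw >= cooling_kw
                  and r.free_units >= units]
    if not candidates:
        return None
    # Best fit: the rack left with the least spare power after placement.
    return min(candidates, key=lambda r: r.power_headroom_kw - power_kw)


racks = [Rack("A", 2.0, 3.0, 10), Rack("B", 1.0, 1.0, 4)]
print(place_server(racks, 0.8, 0.8, 2).name)  # "B" — the tightest fit
```

A real DCIM engine would weigh many more conditions (redundancy zones, network reach, airflow modeling), but the shape of the decision is the same: filter by constraints, then rank the survivors.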
DCIM software is used to benchmark current power consumption through real-time feeds and equipment ratings, and then to model the effects of "green" initiatives on the data center's power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE) before committing resources to an implementation.
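The PUE metric underlying such benchmarking is a simple ratio: total facility power divided by the power delivered to IT equipment, so it approaches 1.0 as overhead (cooling, lighting, power distribution) shrinks. A minimal sketch of the calculation, with illustrative figures:

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The sample readings below are illustrative, not from any real facility.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for one sampling interval."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw


# Example: 1,500 kW drawn at the utility meter, 1,000 kW reaching IT gear.
print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

DCiE is simply the reciprocal expressed as a percentage (here, 1000/1500 ≈ 67%), so modeling a "green" initiative amounts to re-running these ratios with the projected power figures.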
Evolution of tools 
Traditional approaches to resource provisioning and service requests have proven ill-suited to virtualization and cloud computing. The manual handoffs between technology teams were highly inefficient and poorly documented, which initially led to poor use of system resources and an IT staff that spent much of its time on activities providing little business value. To manage data centers and cloud computing environments efficiently, IT teams need to standardize and automate virtual and physical resource provisioning and develop better insight into real-time resource performance and consumption.
Data center monitoring systems were initially developed to track equipment availability and to manage alarms. While these systems evolved to provide insight into the performance of equipment by capturing real-time data and organizing it into a proprietary user interface, they have lacked the functionality necessary to effectively monitor and make adjustments to interdependent systems across the physical infrastructure to address changing business and technology needs.
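The alarm handling these early systems performed reduces to comparing each real-time reading against a fixed limit. A minimal sketch of that pattern, with illustrative sensor names and thresholds:

```python
# Minimal sketch of the threshold-alarm pattern used by early data center
# monitoring systems. Metric names and limits are illustrative assumptions.
THRESHOLDS = {
    "inlet_temp_c": 27.0,   # rack inlet temperature limit
    "humidity_pct": 60.0,   # relative humidity limit
    "ups_load_pct": 90.0,   # UPS load limit
}


def check_alarms(readings: dict) -> list:
    """Return (metric, value, limit) tuples for readings that breach limits."""
    return [(m, v, THRESHOLDS[m])
            for m, v in readings.items()
            if m in THRESHOLDS and v > THRESHOLDS[m]]


print(check_alarms({"inlet_temp_c": 29.5, "humidity_pct": 45.0}))
# [('inlet_temp_c', 29.5, 27.0)]
```

The limitation the article describes is visible here: each reading is judged in isolation, so the system cannot reason about interdependent facility and IT systems or adjust them in response.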
More sophisticated integrated monitoring and management tools were later developed to connect this equipment and provide a holistic view of the facility's data center infrastructure. In addition to enabling comprehensive real-time monitoring, these tools were equipped with additional modeling and management functionality to facilitate long-term capacity planning; dynamic optimization of critical systems performance and efficiency; and efficient asset utilization.
In response to the rapid growth of business-critical IT applications, server virtualization became a popular method for increasing a data center's IT application capacity without making additional investments in physical infrastructure. Server virtualization also enabled rapid provisioning cycles, as multiple applications could be supported by a single provisioned server.
Modern data centers are challenged with disconnects between the facility and IT infrastructure architectures and processes. These challenges have become more critical as virtualization creates a dynamic environment within a static environment, where rapid changes in compute load translate to increased power consumption and heat dispersal. If unanticipated, rapid increases in heat densities can place additional stress on the data center's physical infrastructure, resulting in a lack of efficiency, as well as an increased risk for overloading and outages. In addition to increasing risks to availability, inefficient allocation of virtualized applications can increase power consumption and concentrate heat densities, causing unanticipated "hot spots" in server racks and areas. These intrinsic risks, as well as the aforementioned drivers, have resulted in an increase in market demand for integrated monitoring and management solutions capable of "bridging the gap between IT and facilities" systems.
In 2010, the analyst firm Gartner, Inc. issued a report on the state of DCIM implementations and speculated on future evolutions of the DCIM approach. According to the report, widespread adoption of DCIM over time will lead to the development of "intelligent capacity planning" solutions that support synchronized monitoring and management of both physical and virtual infrastructures.
Intelligent capacity planning will enable the aggregation and correlation of real-time data from heterogeneous infrastructures to provide data center managers with a common repository of performance and resource utilization information. It also will enable data center managers to automate the management of IT applications based on server capacity—as well as conditions within a data center's physical infrastructure—optimizing the performance, reliability and efficiency of the entire data center infrastructure.
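The aggregation-and-correlation step can be illustrated by joining facility readings with IT readings in a common repository keyed by location. The sketch below is a hypothetical example: the rack names, readings and "stranded capacity" thresholds are invented for illustration.

```python
# Illustrative sketch of correlating facility data (rack power draw) with
# IT data (host CPU utilization) to flag stranded capacity: racks drawing
# real power while doing little useful work. All figures are hypothetical.
facility_kw = {"rack1": 4.1, "rack2": 3.8}    # kW measured at each rack feed
it_util = {"rack1": 0.72, "rack2": 0.08}      # mean CPU utilization per rack


def stranded(power_kw: dict, util: dict,
             min_kw: float = 3.0, max_util: float = 0.15) -> dict:
    """Flag racks that draw significant power but stay nearly idle."""
    return {r: power_kw[r] for r in power_kw
            if power_kw[r] >= min_kw and util.get(r, 0.0) <= max_util}


print(stranded(facility_kw, it_util))  # {'rack2': 3.8}
```

This is the kind of cross-domain correlation that neither a facility monitoring system nor an IT management tool can perform alone, since each holds only half of the data.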
References 
- Cappuccio, David J. (2010-03-29). "DCIM: Going Beyond IT". Gartner, Inc.
- Huff, Lisa (2011-08-18). "The Battle for the Converged Data Center Network". Data Center Knowledge.
- Oestreich, Ken (2011-11-15). "Converged Infrastructure". The CTO Forum.
- "Put DCIM into Your Automation Plans". Forrester Research. December 2009.
- "Infrastructure Monitoring and Management Tops List of Data Center User Issues". Information Management. 2010-06-03.
- US Patent 7,765,286.
- "Improving Datacenter Operational Efficiency Using Self-Service Provisioning and Advanced Performance Analytics".
- "Data Center Management and Efficiency Software". 451 Group.
- Preimesberger, Chris (2010-10-19). "Emerson Power Bringing Its Perspective to Data Center Management". eWeek.
- Marko, Kurt (2010-07-02). "A Look At Data Center Infrastructure Management Software & Its Impact". Processor Magazine 32 (14): 18.
- Harris, Mark (2010-06-08). "Bridging the Gap between IT and Facilities". Data Center Knowledge.
- Cole, Dave (June 2010). "The Infrastructure Management Elephant". PTSDCS.