
Dynamic infrastructure

From Wikipedia, the free encyclopedia

Dynamic infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned (made available) to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.


The basic premise of dynamic infrastructures is to leverage pooled IT resources to provide flexible IT capacity, enabling the allocation of resources in line with demand from business processes. This is achieved by using server virtualization technology to pool computing resources wherever possible, and by allocating these resources on demand using automated tools. This allows for load balancing and is more efficient than keeping massive computing resources in reserve for tasks that occur, for example, only once a month, leaving those resources under-utilized the rest of the time.
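The pooling-and-allocation idea above can be illustrated with a minimal sketch. The class and workload names below are hypothetical, used only to show how a fixed pool of virtualized capacity can be granted to workloads as demand rises and returned when it falls, rather than each workload reserving its own peak capacity:

```python
class ResourcePool:
    """Toy model of a shared pool of virtualized capacity units."""

    def __init__(self, total_units):
        self.total = total_units
        self.allocations = {}          # workload name -> units currently held

    def free(self):
        """Units not currently allocated to any workload."""
        return self.total - sum(self.allocations.values())

    def provision(self, workload, units):
        """Grant units to a workload if the pool can cover the request."""
        if units > self.free():
            return False               # demand exceeds pooled capacity
        self.allocations[workload] = self.allocations.get(workload, 0) + units
        return True

    def release(self, workload, units):
        """Return units to the pool when the workload's demand subsides."""
        held = self.allocations.get(workload, 0)
        self.allocations[workload] = max(0, held - units)


pool = ResourcePool(total_units=10)
pool.provision("monthly-batch", 6)     # surge: a batch job takes spare capacity
pool.provision("web-frontend", 3)
pool.release("monthly-batch", 6)       # batch done: capacity is pooled again
print(pool.free())                     # 7 units available for the next surge
```

In a real dynamic infrastructure, the `provision` and `release` calls would be driven by automated monitoring of demand rather than invoked manually.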

Dynamic infrastructures may also be used to provide security and data protection when workloads are moved during migrations, provisioning,[1] performance enhancement, or the building of co-location facilities.[2]

Dynamic infrastructures were promoted to enhance performance, scalability,[3] system availability and uptime, and server utilization, and to allow routine maintenance on either physical or virtual systems, all while minimizing interruption to business operations and reducing IT costs. Dynamic infrastructures also provide the fundamental business continuity and high availability required to facilitate cloud or grid computing.

For networking companies, infrastructure 2.0 refers to the ability of networks to keep up with the movement and scale requirements of new enterprise IT initiatives, especially virtualization and cloud computing. According to companies like Cisco, F5 Networks and Infoblox, network automation and connectivity intelligence between networks, applications and endpoints will be required to reap the full benefits of virtualization and many types of cloud computing. This will require network management and infrastructure to be consolidated, enabling higher levels of dynamic control and connectivity between networks, systems and endpoints.[citation needed]

Early examples of server-level dynamic infrastructures are FlexFrame for SAP and FlexFrame for Oracle, introduced by Fujitsu Siemens Computers (now Fujitsu) in 2003. The FlexFrame approach was to dynamically assign servers to applications on demand, leveling peaks and enabling organizations to maximize the benefit of their IT investments.[4]
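The on-demand assignment described above can be sketched as a simple scheduling step. This is an illustrative model, not the actual FlexFrame software; the function and application names are hypothetical. Idle servers are handed to whichever application currently has the most unmet demand, which is how peaks get leveled across a shared server pool:

```python
def assign_servers(idle_servers, demand):
    """Map each idle server to the application with the most unmet demand.

    idle_servers: list of server names available for assignment
    demand: dict of application name -> number of servers still needed
    """
    assignments = {}
    remaining = dict(demand)           # copy: servers still needed per app
    for server in idle_servers:
        app = max(remaining, key=remaining.get)
        if remaining[app] <= 0:
            break                      # all demand satisfied; keep rest idle
        assignments[server] = app
        remaining[app] -= 1
    return assignments


# A month-end peak in the SAP workload pulls in the idle servers first.
print(assign_servers(["srv1", "srv2", "srv3"],
                     {"sap-erp": 2, "oracle-db": 1}))
```

A production system would also handle reclaiming servers once a peak passes and re-imaging or re-configuring them for the target application, which this sketch omits.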


Dynamic infrastructures take advantage of intelligence gained across the network. By design, every dynamic infrastructure is service-oriented and focused on supporting and enabling end users in a highly responsive way. It can utilize alternative sourcing approaches, such as cloud computing, to deliver new services with agility and speed.

Global organizations already have the foundation for a dynamic infrastructure that will bring together the business and IT infrastructure to create new possibilities. For example:

  • Transportation companies can optimize their vehicles' routes leveraging GPS and traffic information.
  • Facilities organizations can secure access to locations and track the movement of assets by leveraging RFID technology.
  • Production environments can monitor and manage presses, valves and assembly equipment through embedded electronics.
  • Technology systems can be optimized for energy efficiency, managing spikes in demand, and ensuring disaster recovery readiness.
  • Communications companies can better monitor usage by location, user or function, and optimize routing to enhance user experience.
  • Utility companies can reduce energy usage with a "smart grid."

"Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model." – Source: Gartner – "TCO of Traditional Software Distribution vs. Application Virtualization" / Michael A Silver, Terrence Cosgrove, Mark A Margevicius, Brian Gammage / 16 April 2008

"While green issues are a primary driver in 10% of current data center outsourcing and hosting initiatives, cost reductions initiatives are a driver 47% of the time and are now aligned well with green goals. Combining the two means that at least 57% of data center outsourcing and hosting initiatives are driven by green." – Source: Gartner – "Green IT Services as a Catalyst for Cost Optimization" / Kurt Potter / 4 December 2008

"By 2013, more than 50% of midsize organizations and more than 75% of large enterprises will implement layered recovery architectures." – Source: Gartner – "Predicts 2009: Business Continuity Management Juggles Standardization, Cost and Outsourcing Risk" / Roberta J Witty, John P Morency, Dave Russell, Donna Scott, Robert DeSisto / 28 January 2009


Some vendors promoting dynamic infrastructures include IBM,[5][6] Microsoft,[7] Sun,[8] Fujitsu,[9] HP,[10] and Dell.[11]

References


  1. ^ "Computation on Demand: The Promise of Dynamic Provisioning". 10 December 2007.
  2. ^ "An overview of continuous data protection".
  3. ^ "Amazon Elastic Compute Cloud".
  4. ^ "IDC White Paper – Building the Dynamic DataCenter: FlexFrame for SAP" (PDF). Archived from the original (PDF) on 2017-11-07. Retrieved 2017-10-30.
  5. ^ IBM patent: "Method for Dynamic Information Technology Infrastructure Provisioning".
  6. ^ "IBM's dynamic infrastructure taking shape" at The Register.
  7. ^ Microsoft's view of the Dynamic Datacenter, covered by Network World.
  8. ^ "Dynamic Infrastructure" at Sun.
  9. ^ "Fujitsu Dynamic Infrastructures".
  10. ^ "Dynamic Infrastructure and Blades" at HP. Archived 2011-06-14 at the Wayback Machine.
  11. ^ "Dell Converged Infrastructure".