Cycle Computing

From Wikipedia, the free encyclopedia
Type: Privately held company
Headquarters: United States
Key people: Jason Stowe (CEO)

Cycle Computing is a company that provides software for orchestrating computing and storage resources in cloud environments. Its flagship product, CycleCloud, supports Amazon Web Services, Google Compute Engine, Microsoft Azure, and internal infrastructure. The CycleCloud orchestration suite manages the provisioning of cloud infrastructure, the orchestration of workflow execution and job queues, automated and efficient data placement, and full process monitoring and logging, all within a secure process flow.

History[edit]

Cycle Computing was founded in 2005.[1] Its original offerings were based around the HTCondor scheduler and focused on maximizing the effectiveness of internal resources. Cycle Computing offered support for HTCondor as well as CycleServer, which provided metascheduling, reporting, and management tools for HTCondor resources. Early customers spanned a number of industries, including insurance, pharmaceutical, manufacturing, and academia.

With the advent of large public cloud offerings, Cycle Computing expanded its tools to allow customers to make use of dynamically provisioned cloud environments. Key technologies developed include the ability to validate that resources were correctly added in the cloud (patent awarded in 2015[2]), the ability to easily manage data placement and consistency, the ability to support multiple cloud providers within a single workflow, and other technologies.

On August 15, 2017, Microsoft announced its acquisition of Cycle Computing.[3]

Large runs[edit]

In April 2011, Cycle Computing announced “Tanuki”, a 10,000 core Amazon Web Services cluster used by Genentech.[4]

In September 2011, a Cycle Computing HPC cluster called Nekomata (Japanese for "Monster Cat") was available for rent at $1,279 per hour, offering 30,472 processor cores with 27 TB of memory and 2 PB of storage. An unnamed pharmaceutical company used the cluster for 7 hours of molecular modeling, paying roughly $9,000.[5][6][7]

In April 2012, Cycle Computing announced that, working in collaboration with scientific software-writing company Schrödinger, it had screened 21 million compounds in less than three hours using a 50,000-core cluster.[8]

In November 2013, Cycle Computing announced that, again working with Schrödinger, it had helped Mark Thompson, a professor of chemistry at the University of Southern California, screen about 205,000 compounds in search of materials for a new generation of inexpensive and highly efficient solar panels. The job took less than a day and cost $33,000 in total. The computing cluster used 156,000 cores spread across 8 regions and had a peak capacity of 1.21 petaFLOPS.[9][10][11][12][13]

In November 2014, Cycle Computing worked with a researcher at HGST to run a hard drive simulation workload. The computation would have taken over a month on internal resources, but it completed in 7 hours on 70,000 cores in Amazon Web Services, at a cost of less than $6,000.[14][15]

In September 2015, Cycle Computing and the Broad Institute announced a 50,000 core cluster to run on Google Compute Engine.[16]

Media coverage[edit]

Cycle Computing has been covered by outlets including GigaOm,[8][11] Ars Technica,[7] ExtremeTech,[5] and CNet,[12] among others.[10]

Cycle Computing was also mentioned by Amazon CTO Werner Vogels in the 2013 Day 2 Keynote of AWS re:Invent.[17]

References[edit]

  1. ^ "About Us". Retrieved February 5, 2015.
  2. ^ "Method and system for automatically detecting and resolving infrastructure faults in cloud infrastructure".
  3. ^ "Microsoft acquires Cycle Computing to accelerate Big Computing in the cloud".
  4. ^ "Cycle Computing fires up 10,000-core HPC cloud on EC2".
  5. ^ a b Anthony, Sebastian (September 20, 2011). "Rent the world's 30th-fastest, 30,472-core supercomputer for $1,279 per hour". ExtremeTech. Retrieved January 26, 2014.
  6. ^ "New CycleCloud HPC Cluster Is a Triple Threat: 30000 cores, $1279/Hour, & Grill monitoring GUI for Chef". Cycle Computing. September 19, 2011. Retrieved January 26, 2014.
  7. ^ a b Brodkin, Jon (September 20, 2011). "$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud A supercomputer built on Amazon's cloud is used for pharma research". Ars Technica. Retrieved January 26, 2014.
  8. ^ a b Darrow, Barb (April 19, 2012). "Cycle Computing spins up 50K core Amazon cluster". GigaOm. Retrieved January 26, 2014.
  9. ^ "Back to the Future: 1.21 petaFLOPS(RPeak), 156,000-core CycleCloud HPC runs 264 years of Materials Science". Cycle Computing. November 12, 2013. Archived from the original on February 1, 2014. Retrieved January 26, 2014.
  10. ^ a b Yirka, Bob (November 12, 2013). "Cycle Computing uses Amazon computing services to do work of supercomputer". Retrieved January 26, 2014.
  11. ^ a b Darrow, Barb (November 12, 2013). "Cycle Computing once again showcases Amazon's high-performance computing potential". GigaOm. Retrieved January 26, 2014.
  12. ^ a b Shankland, Stephen (November 12, 2013). "Supercomputing simulation employs 156,000 Amazon processor cores: To simulate 205,000 molecules as quickly as possible for a USC simulation, Cycle Computing fired up a mammoth amount of Amazon servers around the globe". CNet. Retrieved January 26, 2014.
  13. ^ Brueckner, Rich (November 13, 2013). "Slidecast: How Cycle Computing Spun Up a Petascale CycleCloud". Inside HPC. Retrieved January 26, 2014.
  14. ^ "HGST buys 70,000-core cloud HPC Cluster, breaks record, returns it 8 hours later". Retrieved February 5, 2016.
  15. ^ "Cycle Helps HGST Stand Up 70,000 Core AWS Cloud".
  16. ^ "Google, Cycle Computing Pair for Broad Genomics Effort".
  17. ^ Vogels, Werner. "AWS re:Invent 2013 Day 2 Keynote with Werner Vogels". AWS re:Invent 2013. Retrieved January 30, 2014.