Loop scheduling
{{refimprove|date=February 2008}}
In [[parallel computing]], '''loop scheduling''' is the problem of assigning iterations of parallelizable loops among ''n'' processors so as to achieve [[Load balancing (computing)|load balancing]] and maintain [[Locality of reference|data locality]] with minimal dispatch overhead.

Typical loop scheduling methods are:
* static even scheduling: the loop iteration space is divided evenly into ''n'' chunks, and each chunk is assigned to a processor before execution begins.
* dynamic scheduling: chunks of loop iterations are dispatched at runtime, each fetched by the next idle processor. When the chunk size is a single iteration, this is also called self-scheduling.
* guided scheduling: similar to dynamic scheduling, but the chunk size shrinks with each dispatch until it reaches a preset minimum.
== References ==
* {{cite book|author1=Thomas Rauber|author2=Gudula Rünger|title=Parallel Programming: for Multicore and Cluster Systems|url=https://books.google.com/books?id=UbpAAAAAQBAJ&printsec=frontcover#v=onepage&q=%22Loop%20scheduling%22&f=false|date=13 June 2013|publisher=Springer Science & Business Media|isbn=978-3-642-37801-0}} |
== See also ==
* [[OpenMP]]