In electrical engineering the load factor is defined as the average load divided by the peak load in a specified time period.[1] It is a measure of the utilization rate, or efficiency, of electrical energy usage; a high load factor indicates that the load is using the electric system more efficiently, whereas consumers or generators that underutilize the electric distribution system will have a low load factor.

${\displaystyle f_{Load}={\frac {\text{Average Load}}{\text{Maximum load in given time period}}}}$

An example, using a large commercial electrical bill:

• peak demand = 436 kW
• use = 57200 kWh
• number of days in billing cycle = 30 d

Hence:

• load factor = { 57200 kWh / (30 d × 24 hours per day × 436 kW) } × 100% = 18.22%
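The billing-cycle calculation above can be sketched in a few lines of Python, using the example figures from the bill:

```python
# Load factor from a utility bill: average demand over the billing
# period divided by the peak demand recorded in that period.
# The figures below are the example bill values from the text.

peak_demand_kw = 436   # peak demand (kW)
energy_kwh = 57_200    # energy used over the billing cycle (kWh)
days = 30              # days in the billing cycle

hours = days * 24                     # hours in the billing cycle
average_load_kw = energy_kwh / hours  # average demand (kW)
load_factor = average_load_kw / peak_demand_kw

print(f"average load: {average_load_kw:.1f} kW")
print(f"load factor:  {load_factor:.2%}")  # 18.22%
```

Dividing the energy by the hours in the period gives the average load (about 79.4 kW here), and dividing that by the peak demand gives the same 18.22% as the hand calculation.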

It can be derived from the load profile of the specific device or system of devices. Its value is at most one, because maximum demand is never lower than average demand; in practice it is strictly less than one, since facilities rarely operate at full capacity for an entire 24-hour day. A high load factor means power usage is relatively constant. A low load factor means that a high demand is set only occasionally; to serve that peak, capacity sits idle for long periods, imposing higher costs on the system. Electrical rates are therefore designed so that customers with a high load factor are charged less overall per kWh. This process, along with others, is called load balancing or peak shaving.
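Deriving the load factor from a load profile amounts to taking the ratio of the mean to the maximum of a series of demand readings. A minimal sketch, using an illustrative set of hourly readings (not from the article's example):

```python
# Load factor from a load profile: mean demand divided by peak demand.
# The hourly readings below are assumed values for illustration only.

hourly_load_kw = [120, 95, 80, 310, 436, 390, 250, 140]  # sample readings (kW)

average_load = sum(hourly_load_kw) / len(hourly_load_kw)
peak_load = max(hourly_load_kw)
load_factor = average_load / peak_load

print(f"load factor: {load_factor:.2%}")
```

Because the peak of a series can never be below its mean, this ratio is always at most one, matching the bound stated above.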

The load factor is closely related to and often confused with the demand factor.

${\displaystyle f_{Demand}={\frac {\text{Maximum load in given time period}}{\text{Maximum possible load}}}}$

The major difference to note is that the denominator of the demand factor is fixed for a given system: it is the maximum possible (connected) load rather than a value observed in the period. Because of this, the demand factor cannot be derived from the load profile alone; it also requires the full load of the system in question.
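The distinction can be made concrete by computing the demand factor for the earlier example. The peak demand comes from the bill, but the connected load is an assumed figure for illustration, since the article does not give one:

```python
# Demand factor: peak demand observed in a period divided by the total
# connected (maximum possible) load of the system.

peak_demand_kw = 436      # maximum load observed in the period (kW), from the bill
connected_load_kw = 1000  # assumed total connected load (kW) -- illustrative value

demand_factor = peak_demand_kw / connected_load_kw
print(f"demand factor: {demand_factor:.2%}")  # 43.60%
```

Note that `connected_load_kw` is a property of the installation, not of the billing period: changing the period can change the load factor but leaves the demand factor's denominator fixed.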