# Talk:Inexact differential

WikiProject Physics (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of Physics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

## Inexact differential?

What *is* an inexact differential? I believe I was the person who put that redlink up a long while ago in Entropy (thermodynamic views), and I had hoped that someone would fill that space with an explanation. I still have no idea what the inexact differential symbol δ *means*. Does anyone know? Fresheneesz 09:34, 5 May 2006 (UTC)

Yes, loosely it means that the function may not exist; in thermodynamics it also means that the function is not a state function, i.e. that it is path dependent. This is just from memory, I may be off on a few technical points. We are working to build the inexact differential article. Help if you want. Thanks:--Sadi Carnot 13:54, 25 July 2006 (UTC)

I think "inexact differential" just means a differential (i.e. an expression of the form ${\displaystyle Mdx+Ndy}$) which is not exact. For example, ${\displaystyle ydx-xdy}$ is an inexact differential. Line integrals of this will depend on path. Cjao (talk) 03:48, 22 December 2009 (UTC)
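Cjao's example is easy to check numerically. A minimal Python sketch (my own illustration; the two path parametrizations are arbitrary choices) integrating ${\displaystyle ydx-xdy}$ along two different paths between the same endpoints:

```python
# Numeric check (my own illustration) that the line integral of the
# 1-form  y dx - x dy  depends on the path taken between the same
# endpoints (0,0) and (1,1).

def line_integral(path, n=1000):
    """Approximate the integral of y dx - x dy along path(t), t in [0, 1]."""
    total = 0.0
    for i in range(n):
        x0, y0 = path(i / n)
        x1, y1 = path((i + 1) / n)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2   # chord midpoint
        total += ym * (x1 - x0) - xm * (y1 - y0)
    return total

straight = lambda t: (t, t)                                   # straight diagonal
corner = lambda t: (2 * t, 0) if t < 0.5 else (1, 2 * t - 1)  # right, then up

print(line_integral(straight))  # ~ 0
print(line_integral(corner))    # ~ -1: same endpoints, different integral
```

The two answers differ, which is exactly what "line integrals of this will depend on path" means; for the exact form ${\displaystyle ydx+xdy}$ both paths would give 1.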

## No merging

Please do not attempt to merge this article into exact differential. These are separate but related terms. See: Talk:First law of thermodynamics for example. Thanks:--Sadi Carnot 05:48, 25 July 2006 (UTC)

## Inexact differential

I stumbled over the first sentence of Inexact differential, because it does not say at all how the "inexact differential" of a function f is defined, and I could not make any sense of it. Since the rest of the article only says that the properties of an inexact differential are the opposite of the properties of an exact differential, which is obvious, I thought it would be best to merge it with exact differential. However, you reverted this. Why? -- Jitse Niesen (talk) 05:54, 25 July 2006 (UTC)

Just saw your comment on Talk:Inexact differential. Will have a look later. -- Jitse Niesen (talk) 05:57, 25 July 2006 (UTC)
Jitse, I assumed your merge was well intentioned in that you were just trying to help organize the math articles. Next time, however, if you feel a merge is needed please put merge tags on both articles and let your written proposal sit on the talk page for a month or so. For the record, I started the exact differential article last year but someone else started the inexact differential article recently. This latter article has been a redlink on many Wiki pages, such as Entropy (thermodynamic views), for some time now. It has even been a request on talk pages such as Talk:Entropy (thermodynamic views), Talk:entropy, Talk:heat, and Talk:First law of thermodynamics, to name a few. Certainly, the article may need cleaning and smoothing, but not merging. Please help with this improvement if you will. I will try as well. Thanks: --Sadi Carnot 14:22, 25 July 2006 (UTC)
I will also add some external links; I hope this helps?--Sadi Carnot 14:36, 25 July 2006 (UTC)

I will change "mathematics" to "physics" in the intro, as it is not accurate; it is not a math term or object. As the above conversation indicates, a trained mathematician may very well not recognize it. Mct mht 20:29, 19 August 2006 (UTC)

Do we believe an "inexact differential" corresponds to a mathematical object at all? We know that it's mainly thanks to Clausius that thermodynamics as it is traditionally taught employs a mathematics and notation all of its own. But hey, that was in 1880's. Marc Goossens 16:33, 25 October 2006 (UTC)
It is a very common term in thermodynamics; although "exact" probably is used more. Renowned chemical engineer Kenneth Denbigh states in his 1984 The Principles of Chemical Equilibrium, 4th Ed., that "the word 'exact' merely means that the integral is independent of path." (pg. 21). Although, in the notes, he says that this is not the same thing as saying that a differential, or differential expression, is 'complete'. He says, "for example:
${\displaystyle dU=\left({\frac {\partial U}{\partial T}}\right)dT+\left({\frac {\partial U}{\partial p}}\right)dp}$
would be complete only if T and p completely determine the state of the system; in general, U depends also on the size and composition and extra terms must be added." Later: --Sadi Carnot 16:36, 5 November 2006 (UTC)

## Imperfect Differential

I'm not sure how to make a re-direct, so could someone make a redirect from imperfect differential to here? Over in this locale we call it an "imperfect differential". Fephisto 21:36, 24 August 2006 (UTC)

to make a redirect, create a new page, in which you type #REDIRECT [[inexact differential]] . Mct mht 14:08, 26 August 2006 (UTC)

## d-bar or delta?

it says: The symbol đ (d-bar), or δ (in the modern sense)

what is the diff between the two? What is meant by "in the modern sense"? I was taught the d-bar symbol in my physical chemistry course... —Preceding unsigned comment added by Mikejones2255 (talkcontribs) 03:34, 17 December 2009 (UTC)

## Is this an operation?

Would it be correct to say (as I conclude from the scattered comments above) that ${\displaystyle \delta }$ is not a symbol of its own, but just part of a mnemotechnic convention for constructing symbols? It seems that "${\displaystyle \delta Q}$" must be understood as a single atomic symbol; it cannot be understood as some operation applied to ${\displaystyle Q}$. It is implied that we can write ${\displaystyle Q=\int \delta Q\,dt}$, but this defines ${\displaystyle Q}$ in terms of ${\displaystyle \delta Q}$ (and an implied path through the state space), not the other way around. –Henning Makholm (talk) 22:19, 28 March 2010 (UTC)

${\displaystyle \delta Q}$ is an infinitesimally small path-dependent (and thus inexact) change in U (as in internal energy). The total change is ${\displaystyle Q=\int \limits _{S}\delta Q}$. There is no ${\displaystyle \,dt}$. The integral is essentially a sum (along a well defined path ${\displaystyle S}$) of all the small path-dependent changes. IlyaV(talk) 02:08, 31 March 2010 (UTC)

Would it be correct to say (as I conclude from the scattered comments above) that ${\displaystyle \delta }$ is not a symbol of its own, but just part of a mnemotechnic convention for constructing symbols? It seems that "${\displaystyle \delta Q}$" must be understood as a single atomic symbol; it cannot be understood as some operation applied to ${\displaystyle Q}$. (Apparently nitpick-inducing clarification of question omitted)Henning Makholm (talk) 06:19, 3 April 2010 (UTC)

I think "${\displaystyle \delta Q}$" is in fact atomic, however there can be an operator like "${\displaystyle \delta \over \delta \xi }$". See Differential operator IlyaV (talk) 21:38, 3 April 2010 (UTC)
If ${\displaystyle \delta Q}$ is atomic, then "${\displaystyle \int \delta Q}$" is formally equivalent to "${\displaystyle \int X}$" with no integration variable. Whence does such an expression get meaning?
${\displaystyle \delta \over \delta \xi }$ appears to be defined nowhere on differential operator (not to mention the present article). What are its differences from ${\displaystyle d \over dx}$ and ${\displaystyle \partial \over \partial x}$? –Henning Makholm (talk) 22:00, 3 April 2010 (UTC)
Um, ok perhaps I'm not sure what you mean by atomic, as I come from physics background, not math. From what I understand, the "inexact differential" at least as used in thermo (again, I don't know how it is used in rigorous math) simply means a differential for which the integral is path specific. Sort of like ${\displaystyle \mathbf {f} \cdot \mathbf {dl} }$ for a non-conservative vector field ${\displaystyle \mathbf {f} }$. As such the differential behaves similarly to exact ones. Certainly something like ${\displaystyle \delta Q \over dt}$ is well defined. IlyaV (talk) 00:26, 4 April 2010 (UTC)

I'm afraid "a differential for which the integral is path specific" does not really tell me, operationally, what to do with an equation that asks me to construct one. I can grasp the meaning "a function whose path integrals depend on the path", but a free-floating differential that is not being quotiented into a derivative or used to complete an integral makes me queasy enough (being primarily a mathematician) that I'm going to insist on finding an explicit definition what an equation that involves one means. So let me go back a few steps and start anew:

I came here from the Enthalpy article. It presents a long equation with a legend that goes like:

δ represents the inexact differential,
U is the internal energy,
δQ = TdS is the energy added by heating during a reversible process,
bla bla bla

I do not know what an inexact differential is, so I come here and expect to find an explanation. Finding no explanation in the article itself, I go to the talk page and try to piece one together. I reach the tentative conclusion that the Enthalpy article is in error because there is no mathematical object or operation named "the inexact differential" that the symbol ${\displaystyle \delta }$ alone can represent in the equation. And now I'm asking: Is this tentative conclusion correct?

If it is correct, then I'll go and remove the misleading legend from enthalpy. But if it's not correct, then I'm trying to figure out what is, such that I can make this say it.

In other words, my hypothesis was that "${\displaystyle \delta Q}$" is just one compound symbol that cannot be understood as the ${\displaystyle \delta }$ operation applied to the quantity ${\displaystyle Q}$. If this hypothesis is true, if we pick a new letter, say X and replace all instances of "${\displaystyle \delta Q}$" with "X", then the mathematics means exactly what it did before, and the only loss is that it is less intuitive to remember that X and Q are related by a certain equation.

But this cannot be true, because ${\displaystyle \int X}$ is meaningless (it lacks a variable of integration) and you sound like ${\displaystyle \int \delta Q}$ has meaning. So apparently the ${\displaystyle \delta }$ does mean something by itself, and I'm back to trying to understand what its meaning is. And presently it seems that there must be two definitions – first one that applies when ${\displaystyle \delta \Box }$ appears in an ordinary algebraic context, and a definition that says how to compute an integral where the variable is introduced with a ${\displaystyle \delta }$, as in ${\displaystyle \int ...\delta \Box }$.

Hmm... or is ${\displaystyle \int \delta Q}$ just sloppy notation for ${\displaystyle \int {\frac {\delta Q}{dt}}dt}$? That would seem to save the integral from meaninglessness – but then we'd need a definition for ${\displaystyle {\frac {\delta Q}{dt}}}$. How does it differ from plain old ${\displaystyle {\frac {dQ}{dt}}}$?

Does this make my question clearer? –Henning Makholm (talk) 01:48, 4 April 2010 (UTC)
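One concrete way to answer the operational question above: for a closed ideal gas undergoing quasi-static processes, the first law gives ${\displaystyle \delta Q=C_{V}dT+(nRT/V)dV}$, and "integrating ${\displaystyle \delta Q}$" means integrating that 1-form along a chosen path in the ${\displaystyle (T,V)}$ state space. A hedged numeric sketch (the numbers and the monatomic gas are my assumptions, not from the discussion):

```python
from math import log

# Illustrative numbers (assumed): one mole of a monatomic ideal gas,
# quasi-static, so  delta-Q = Cv dT + (R T / V) dV.
R = 8.314                 # J/(mol K)
Cv = 1.5 * R              # monatomic heat capacity at constant volume
T1, V1 = 300.0, 0.010     # initial state (K, m^3)
T2, V2 = 400.0, 0.020     # final state

# Path A: heat at constant volume V1 up to T2, then expand isothermally.
Q_A = Cv * (T2 - T1) + R * T2 * log(V2 / V1)

# Path B: expand isothermally at T1, then heat at constant volume V2.
Q_B = R * T1 * log(V2 / V1) + Cv * (T2 - T1)

dU = Cv * (T2 - T1)       # internal energy change: endpoints only

print(Q_A - Q_B)          # nonzero: Q is path dependent
print(dU)                 # same for both paths: U is a state function
```

The same endpoints give two different values of Q, while ΔU is fixed by the endpoints alone; that asymmetry is the entire content of calling ${\displaystyle \delta Q}$ inexact and ${\displaystyle dU}$ exact.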

A different wild guess: "${\displaystyle \delta }$" means exactly the same as "d"; when understanding an equation it should be read as a purely typographical variant of "d". But as a matter of habit/aesthetics, physicists will prefer to write ${\displaystyle \delta X}$ over ${\displaystyle dX}$ in contexts where the function ${\displaystyle t\mapsto X}$ cannot be factored through the "instant state of the physical system".
This hypothesis would be consistent with most of what is written above, except it seems to imply that ${\displaystyle t}$ itself should take a ${\displaystyle \delta }$ rather than a d. I've been sort of avoiding it, because it is a bit discourteous to physicists to accuse them of such a voodoo notation. But it begins to seem inescapable.
Closer to the truth? Half marks? Not even wrong? –Henning Makholm (talk) 10:00, 4 April 2010 (UTC)
I think that's closer to truth. The ${\displaystyle \delta }$ is just telling you that for this particular differential the fundamental theorem of calculus doesn't hold (i.e, things are path dependent). Also, given all the state variables at some time t, you can't compute what Q is at that time (or what is W for that matter) without knowing how things got to that point. If you're interested in thermo in larger context, why not pick up a book? Or, here is a link to a short (~100 pages) primer on thermo from a grad class I'm taking [1] (pp 13-14 for definition of inexact differentials. pp 17 for what it actually evaluates in terms of exact differentials for a single compartment system) IlyaV (talk) 16:37, 4 April 2010 (UTC)
The fundamental theorem of calculus always holds. That's why it is a theorem. What may not hold is some kind of half-thought-out quasi-implied perhaps-generalization of the theorem, which then turns out not to be a theorem after all.
You say my formulation is "closer to truth". How would it change in order to become actual truth?
No, I'm not going to discard Wikipedia and "pick up a book" instead. This is a Wikipedia talk page. It is here for discussing how we can improve the encyclopedia it attaches to. That there are books to pick up is not an excuse for letting the encyclopedia explain things badly or not at all. The explanations should be fixed, not merely amended with references to outside-Wikipedia material. –Henning Makholm (talk) 21:32, 4 April 2010 (UTC)

Your PDF proposes this definition (on page 9, not 10):

"The differential
${\displaystyle dF=\sum _{i=1}^{k}A_{i}dx_{i}}$
is exact if there is a function ${\displaystyle F(x_{1},...x_{k})}$ whose differential gives the right hand side of (the above equation)."

This seems to me to be mathematically vacuous, because according to it every differential ${\displaystyle dF}$ is exact, because for every choice of F, we can simply take ${\displaystyle k=1}$, ${\displaystyle A_{1}=1}$, ${\displaystyle x_{1}=F}$ and ${\displaystyle F(x_{1})=x_{1}}$.

This is false. dF=y dx + x dy cannot be expressed as a sum of C(x)dx + C(y)dy ...... Also, you cannot say that the integral of dF is F(x2,y2)-F(x1,y1), so the Fundamental Theorem of Calculus is not really applicable here. These two aspects make dF (as I defined it) inexact while dx and dy are exact. IlyaV (talk) 22:35, 4 April 2010 (UTC)
IlyaV, this "counterexample" doesn't make sense in the context of the fundamental theorem of calculus. The aforementioned theorem applies to single-valued functions of a single variable. The theorem that would guarantee that the path integral of your two-dimensional vector field be path-independent is called the Fundamental Theorem of Line Integrals (or some variation), and it states that the line integral (or path integral) of a vector field is independent of path if and only if the curl of said vector field is identically zero. Since any such vector field is the gradient of a scalar field (so-called scalar potential), it makes sense to talk about f(x2,y2)-f(x1,y1). The thing you are calling dF=y dx + x dy should probably be thought of as a vector field whose x-component is y, and whose y-component is x. This vector field certainly has non-zero curl, thus is not the gradient of any scalar field and therefore does not fit the requirements of the Fundamental Theorem of Line Integrals. A great deal of confusion and miscommunication could be avoided in this subject if we all just take a step back and be mathematically precise in what we are talking about.
I truly believe this whole business about exact and inexact differentials merely comes down to shortcuts of notation. Think back to freshman physics - work is defined as the line integral of the path of a particle through a force field. The genius of the concept of "energy" was that it was shown that if the force field has zero curl, you can integrate it to find the (scalar) potential energy field and easily relate work to changes in potential energy. If there's friction, then the force field will have curl and no potential energy field exists, and the work done has to be computed using a line integral. Trying to talk about an infinitesimal amount of friction and calling it an "inexact differential" does nothing more than introduce murkiness and mysticism into the world-altering discoveries of physics. Ryan ChemE (talk) 10:02, 29 January 2015 (UTC)
A correction to my earlier remarks: IlyaV, your function (vector field) actually does have zero curl, and thus is the gradient of a scalar field F = xy. It therefore makes perfect sense to talk about F(x2,y2)-F(x1,y1), so this function is not an example of what you would call an inexact differential. I assume the function you meant to use was dF = y dx - x dy, or in other words f = <y,-x>, which is a classic example of a vector field with nonzero curl.Ryan ChemE (talk) 19:54, 29 January 2015 (UTC)
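Ryan_ChemE's curl criterion can be checked with finite differences. A small sketch (my own illustration; the sample point is arbitrary), treating ${\displaystyle Mdx+Ndy}$ as the vector field ⟨M, N⟩:

```python
# Finite-difference check (my own sketch) of the curl criterion: the form
# M dx + N dy is exact iff dN/dx - dM/dy == 0 (on a simply connected domain).

def curl_z(M, N, x, y, h=1e-6):
    """z-component of curl of <M, N>: dN/dx - dM/dy, by central differences."""
    dNdx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    dMdy = (M(x, y + h) - M(x, y - h)) / (2 * h)
    return dNdx - dMdy

# y dx + x dy  ->  field <y, x>: curl vanishes; exact, with potential F = x*y
print(curl_z(lambda x, y: y, lambda x, y: x, 0.3, 0.7))    # ~ 0

# y dx - x dy  ->  field <y, -x>: curl is -2 everywhere; inexact
print(curl_z(lambda x, y: y, lambda x, y: -x, 0.3, 0.7))   # ~ -2
```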

Is it meant to define a notion of "exactness" only relative to some already (implicitly) given set of ${\displaystyle x_{i}}$'s? If that is so, it seems rather unfair to blame on the innocent ${\displaystyle dF}$ a "failure" that is wholly caused by your choice of ${\displaystyle x_{i}}$'s. –Henning Makholm (talk) 21:56, 4 April 2010 (UTC)

Hi, Henning Makholm
Let me try to explain the maths behind this.
First you're right, ${\displaystyle \delta Q}$ should be seen as an atomic symbol. Sentences like "δ represents the inexact differential," or "But for an inexact differential δf, ${\displaystyle \int _{a}^{b}\delta f\neq f(b)-f(a)}$" are terribly confusing (and to me nonsense). δ cannot be thought of as something on its own.
So yes, mathematically, an expression like ${\displaystyle \int X}$ can perfectly well be meaningful. The whole point is: what is the nature of X here?
Calculus has deeply evolved since Leibniz, and several possible viewpoints on the notion of integration now exist. However I think I'm correct to say that the notion of infinitesimal quantities is now never used in maths (except in Non-Standard Analysis) because it is hard to give rigorous definitions. Several other tools have been developed; for example the most common notion of integration is based on measure theory.
But in the current context it has more to do with Differential geometry and Exterior calculus. In these theories, the objects on which integration is defined are differential forms. If X is a differential form, ${\displaystyle \int X}$ is almost a valid expression. Almost, because you must also indicate the domain where the integration is done. So the complete expression is something like ${\displaystyle \int _{D}^{}X}$.
To understand more about this, you must figure out what a differential form is. I will try not to mention overly complex details: a differential form of degree 1 (or 1-form) is a function from a certain kind of geometrical space ${\displaystyle {\mathcal {M}}}$ to a set of algebraic objects called covectors (or linear maps).
Suppose we have a real function f on ${\displaystyle {\mathcal {M}}}$ (meaning that at every point of the geometrical space ${\displaystyle {\mathcal {M}}}$ we put a real number). As often, it is interesting to study how fast the function varies (= how fast the real quantity f(P) changes when the point P moves in ${\displaystyle {\mathcal {M}}}$). We can clearly see that all this has strong links with the notion of differentiation. However the simple notion of real derivative cannot be applied in this context. No problem: there is still an object that can represent the speed of variation of f, and the exact nature of this object is precisely a differential form. We use the notation ${\displaystyle df}$ and call it the differential of f. The symbol d is an operator called the exterior derivative. It has a role similar to ' in the simple derivative f'.
However, given some differential form ${\displaystyle \omega }$, one may ask "Is ${\displaystyle \omega }$ the differential of some function?", or equivalently "Does ${\displaystyle \omega }$ admit an anti-derivative?", or equivalently "Is the differential form ${\displaystyle \omega }$ exact?". The answer is "not always".
If not, physicists generally like to call it ${\displaystyle \delta Z}$ rather than ${\displaystyle \omega }$, just to keep in mind that this object has the same nature as df, the differential of f. But sometimes it's worse! They call it ${\displaystyle dZ}$, which just botches any attempt to see ${\displaystyle d}$ as an operator. Asking whether something written ${\displaystyle dZ}$ is exact or not is just as consistent as asking whether the number (1/x) is invertible or not.
Now let's come back to integration. Any differential form of degree 1 (exact or not) can be integrated over a path. A path ${\displaystyle \gamma }$ is informally a curve drawn on the geometric space ${\displaystyle {\mathcal {M}}}$ with a starting point and an end point. In thermodynamics this is precisely a thermodynamic transformation. The result ${\displaystyle \int _{\gamma }^{}\omega }$ is a real number. So a differential form allows you to associate a number to any transformation. For example the number ${\displaystyle Q=\int _{\gamma }^{}\delta Q}$ is the heat exchanged during transformation ${\displaystyle \gamma }$. Similarly the number ${\displaystyle \Xi =\int _{\gamma }^{}dU}$ is the variation of internal energy during transformation ${\displaystyle \gamma }$. But as dU is the differential of U, the operators d and ${\displaystyle \int }$ disappear and we can express ${\displaystyle \Xi =U(\gamma _{end})-U(\gamma _{begin})}$. The number I have called ${\displaystyle \Xi }$ has a role equivalent to ${\displaystyle Q}$. But as we can express it with the function U we generally don't mention it (or sometimes it is called ${\displaystyle \Delta U}$ to show it is a variation; here ${\displaystyle \Delta U}$ is also an atomic expression).
To finish, expressions like ${\displaystyle {\frac {\delta Q}{dt}}}$ have no meaning. If you read this somewhere, either it's a mistake or it's a very clumsy way to express something else. I hope my English is readable; it is not my mother tongue. --213.144.210.105 (talk) 13:31, 10 June 2010 (UTC)
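The IP's description of integrating a 1-form over a parametrized path can be written out directly. A minimal sketch (my own illustration; the function U and the curve are arbitrary choices), showing that for an exact form dU the path integral collapses to U(end) − U(start):

```python
import math

# Sketch: integrate the 1-form A dx + B dy over a parametrized path gamma,
# and check that for an exact form dU the answer depends only on endpoints.

def integrate_form(A, B, gamma, n=2000):
    """Midpoint-rule approximation of the path integral of A dx + B dy."""
    total = 0.0
    for i in range(n):
        x0, y0 = gamma(i / n)
        x1, y1 = gamma((i + 1) / n)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        total += A(xm, ym) * (x1 - x0) + B(xm, ym) * (y1 - y0)
    return total

U = lambda x, y: x ** 2 + math.sin(y)             # an arbitrary "state function"
A = lambda x, y: 2 * x                            # dU = 2x dx + cos(y) dy
B = lambda x, y: math.cos(y)

gamma = lambda t: (math.cos(math.pi * t), 2 * t)  # some curve (1,0) -> (-1,2)

print(integrate_form(A, B, gamma))                # ~ U(-1,2) - U(1,0)
print(U(-1, 2) - U(1, 0))
```

The particular curve is irrelevant to the result, which is exactly the sense in which "the operators d and ∫ disappear" for an exact form.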

The only place I've ever seen this talk of "exact" vs "inexact" differentials is in the study of thermodynamics. However, I only saw this way of framing it in my book targeting chemists. In Sandler's "Chemical, Biochemical, and Engineering Thermodynamics," Stanley Sandler dispenses with the notion altogether. Personally the idea of talking about "differentials" makes me uncomfortable and it seems less than rigorous, and frankly in thermodynamics you don't need them, just plain old calculus.

Rather than talking about some infinitesimal change ${\displaystyle dU}$, specify a control volume and talk about the accumulation of internal energy in time, ${\displaystyle {\frac {dU}{dt}}}$. To avoid the added complexity of energy associated with flow of mass, assume it's a closed system, and you can write the first law as ${\displaystyle {\frac {dU}{dt}}=Q-P{\frac {dV}{dt}}}$, where Q represents the rate of heat flowing into the system. Similarly, a closed-system entropy balance is written as ${\displaystyle {\frac {dS}{dt}}={\frac {Q}{T}}+S_{gen}}$, with the second law stating that ${\displaystyle S_{gen}=0}$ in a reversible process, and ${\displaystyle S_{gen}>0}$ in an irreversible process. If the process is reversible, i.e. no entropy is generated, then you can solve for Q and insert into the first law to obtain ${\displaystyle {\frac {dU}{dt}}=T{\frac {dS}{dt}}-P{\frac {dV}{dt}}}$, the familiar fundamental equation for internal energy expressed using derivatives that actually make sense in the context of calculus. Also, you can evaluate the change in internal energy of the system by simply invoking the 2nd fundamental theorem of calculus, so ${\displaystyle \Delta U=\int {\frac {dU}{dt}}dt=\int Qdt-\int P{\frac {dV}{dt}}dt}$

I think this formulation is much better, as you don't have to talk about such things as δQ and δw, everything just makes sense both physically and mathematically. Unfortunately, the nonsense about differentials is as old as the science itself and will probably never go away. Ryan ChemE (talk) 09:12, 29 January 2015 (UTC)
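Ryan_ChemE's rate-based formulation is straightforward to carry out numerically. A hedged sketch (the process and numbers are my assumptions; one mole of ideal gas): prescribe ${\displaystyle T(t)}$ and ${\displaystyle V(t)}$, compute the instantaneous heat flow from the first law, and integrate in ordinary time:

```python
from math import log

# Sketch of the rate-based framing (process and numbers are assumptions):
# prescribe T(t) and V(t) for one mole of ideal gas on a process "clock"
# t in [0, 1], compute Qdot = dU/dt + P dV/dt, and integrate in time.

R = 8.314
Cv = 1.5 * R
T = lambda t: 300.0 + 100.0 * t        # K
V = lambda t: 0.010 * (1.0 + t)        # m^3

def Qdot(t, h=1e-6):
    """Instantaneous heat flow, from the closed-system first law."""
    dTdt = (T(t + h) - T(t - h)) / (2 * h)
    dVdt = (V(t + h) - V(t - h)) / (2 * h)
    P = R * T(t) / V(t)                # ideal-gas law
    return Cv * dTdt + P * dVdt        # dU/dt + P dV/dt

n = 20_000
Q_total = sum(Qdot((i + 0.5) / n) / n for i in range(n))   # midpoint rule

dU = Cv * (T(1.0) - T(0.0))            # state function: endpoints only
W = Q_total - dU                       # energy balance yields the work
print(Q_total, dU, W)
```

Everything here is ordinary calculus: Q_total is a plain time integral of a rate, ΔU comes from the endpoints, and the work falls out of the energy balance with no inexact differentials in sight.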

## Rewrite

I have (almost) rewritten the article to make it more accessible to non-engineers (I am one myself though).

I have shortened the description part (it was way too long and thereby confusing). Also the part with integrals was quite confusing:

${\displaystyle \int _{a}^{b}\delta f\neq f(b)-f(a),}$

This expression really does not make sense ("it is not even wrong"). Assuming that a and b are the initial and final states of the system, the point of the inexact differential is that you cannot get a unique primitive f. Rewriting in the variables of the first law of thermodynamics:

${\displaystyle Q\neq \int _{U_{1}}^{U_{2}}\delta Q}$

The reason this does not make any sense is that we cannot figure out how Q changes with the state U without knowing W. You can phrase it like there is an unsaturated degree of freedom, I suppose.

This concept is really simple, but it seems that thermodynamics textbooks are written by some scribes that love keeping everything difficult to understand. 193.175.53.21 (talk) 11:11, 4 August 2010 (UTC)

## Vanuatuan?

I don't see the relevance of the Vanuatuan movie to this topic. It seems nothing more than a slightly comical (lewd) video, with a post-hoc rationalisation. — Preceding unsigned comment added by 129.67.105.62 (talk) 13:18, 9 June 2011 (UTC)

## Definition?

This article looks weird. What is the exact mathematical definition of an inexact differential? ––虞海 (Yú Hǎi) 17:45, 17 January 2012 (UTC)

Let me have another go at explaining it. In thermodynamics, U is a scalar potential, while Q and W are not. This means that any vector field that is a gradient of U is a conservative vector field, while vector fields that are gradients of Q and W are not conservative. When we call differentials of Q and W inexact, and use a ${\displaystyle \delta }$ or đ symbol for them in place of a d, this expresses the fact that any integral of them will be path dependent because their vector fields are not conservative. Dezaxa (talk) 05:34, 15 April 2014 (UTC)
This isn't quite right. You can't say that "vector fields that are gradients of Q and W are not conservative," because all vector fields that are gradients of anything are conservative. The point is that it doesn't even make sense to talk about taking the gradient of Q or W. And the real point is that it doesn't make sense to talk about differential amounts of heat or work either. So to answer your question, Yu Hai, there is no exact mathematical definition of "inexact differential" (pun intended?), the people who developed thermodynamics invented their own version of mathematics which you can convince yourself kind of makes sense as long as you don't think too hard about it. Ryan ChemE (talk) 17:20, 4 February 2015 (UTC)

## Integrating Factor ?

It seems to me that the temperature as an integrating factor is a neutral factor in this business. What makes the entropy an exact differential is the specification of the path of integration: a reversible path (Q_rev). With the specification of a path all inexact differentials become exact. Am I totally missing the point or am I right? 109.93.162.246 (talk) 11:31, 4 January 2014 (UTC)

I think you are right. While ${\displaystyle \delta Q}$ is inexact, ${\displaystyle dQ_{\text{rev}}}$ is exact, because a reversible path has been specified. Dezaxa (talk) 05:47, 15 April 2014 (UTC)

## With all due respect, this entire page is nonsense

I would like to propose that the very idea of talking about thermodynamics in the context of differential changes makes absolutely no sense. It may very well be how the concepts are discussed in textbooks (some textbooks), but it forces you into twisted contortions of logic and to abandon the rigorous rules of mathematics. That is why there are so many mathematicians on this talk page who have no idea what the heck we are talking about here. When the mathematics we use to try to describe the physics of reality baffles mathematicians, that should be a red flag.

The first problem we need to address is: heat does not have units of energy, it has units of energy/time. Heat is a flow of energy. It doesn't make sense to talk about an infinitesimal amount of heat δQ. We can certainly talk about a heat flux q, which has units of energy/(time*area), which is what appears in Fourier's Law of Heat Conduction ${\displaystyle q=-k{\frac {dT}{dx}}}$. Or the same quantity appears in Newton's Law of Cooling ${\displaystyle q=-h*\Delta T}$. We can then get a total heat flow by multiplying by the area across which heat flows, or if the temperature gradient varies we can chop the area up into small pieces and apply the Riemann sum definition of an integral. The point is it makes no sense to talk about an infinitesimal amount of heat energy which we would somehow "integrate" through state space in a path-dependent way. We should change our perspective and understand that heat is simply a flow of energy into or out of a system.

The people who talk about trying to find the change in energy of a system by integrating the "inexact differential" δQ are taking a convoluted look at the problem. Internal energy and enthalpy are state functions - all you need to know is where you start and where you end. You can then perform an energy balance and recognize that energy is conserved, so if the system has more energy than it started with, it must have come from somewhere. Equivalently, if you have less energy than you started with, it must have gone somewhere. You don't even have to use the normal variables found in the fundamental equations. For example, if I want to know what is the change in internal energy of a material if I take it from one temperature and pressure to another, I can perfectly well do that. ${\displaystyle {\frac {dU}{dt}}=[c_{P}-P({\frac {\partial {V}}{\partial {T}}})_{P}]{\frac {dT}{dt}}+[-T({\frac {\partial {V}}{\partial {T}}})_{P}-P({\frac {\partial {V}}{\partial {P}}})_{T}]{\frac {dP}{dt}}}$

Formally, you can simply integrate the above equation with respect to time to find the change in U from t1 to t2. It would probably be easier, however, to invoke the substitution rule and integrate the first term with respect to temperature and the second with respect to pressure. You can also view this as an example of the fundamental theorem of line integrals, and break this into a path of constant pressure followed by a path of constant temperature for easy computation. (As an aside, the above equation also proves that the internal energy of an ideal gas only depends on temperature) But if an energy balance shows that the change in the internal energy of a system is equal to the heat flow, now we also know what the heat flow has to have been, without having to figure out what in the world it means to integrate an inexact differential. I am confident that any mathematician would look at the above equations and find them perfectly understandable and justified rigorously, and I would definitely welcome any feedback to the contrary! Just my humble opinion, but I think all the thermodynamics pages that talk in terms of exact and inexact differentials should be completely recast in terms of balance equations. This framework is easier to understand physically and mathematically rigorous. Ryan ChemE (talk) 00:06, 3 February 2015 (UTC)

With all due respect, I suspect it is you who is writing nonsense. Heat (and work) very definitely have units of energy. Heat is a quantity of energy transferred into a system that is not work. It has nothing to do with time. Also there is nothing nonsensical about an infinitesimal quantity of heat δQ or a difference ${\displaystyle \Delta Q}$. The article on heat does a pretty good job of explaining it. Can you give any references at all that define heat as energy over time? Dezaxa (talk) 09:40, 26 May 2015 (UTC)
Any references at all? Sure, I've got a few. First of all, you said "energy over time" just now; I'd change that to "energy per time", since the phrase "over time" makes me think of an integral. Anyway - Regina M. Murphy's "Introduction to Chemical Processes," 1st ed., p. 537 - "We denote the rate of heat transfer as ${\displaystyle {\dot {Q}}}$ and the total heat transferred over a specified time interval as Q, where ${\displaystyle Q=\int {\dot {Q}}dt}$"
Stanley I. Sandler's "Chemical, Biochemical, and Engineering Thermodynamics," 4th ed., p. 47 - "We use ${\displaystyle {\dot {Q}}}$ to denote the total rate of flow of heat into the system..." and p. 52 - "${\displaystyle Q=\int \limits _{t_{1}}^{t_{2}}{\dot {Q}}dt}$"
From more of a transport-related point of view, Christie John Geankoplis's "Transport Processes and Separation Process Principles," 4th ed., p. 240 - "When the fluid outside the solid surface is in forced or natural convective motion, we express the rate of heat transfer from the solid to the fluid, or vice versa, by the following equation: ${\displaystyle q=hA(T_{w}-T_{f})}$ where q is the heat transfer rate in W..." Notation is a bit different here than the other two, but same general idea.
These sources support my viewpoint, which I will detail further here. The fundamental point about heat is that it is a rate of energy flow, and thus has dimensions of energy/time, not energy. You can get an accumulated heat by multiplying that rate of heat flow, ${\displaystyle {\dot {Q}}}$, by small time intervals Δt and adding them up; in other words, by performing an integral over time. It makes sense to differentiate this accumulated heat Q, but only in the sense of the first fundamental theorem of calculus, as follows:
${\displaystyle {\frac {d}{dt}}Q={\frac {d}{dt}}\int \limits _{t_{0}}^{t}{\dot {Q}}dt'={\dot {Q}}(t)}$
My issue here is more or less a semantic one, but this inexact differential stuff has bothered me since I took thermodynamics from the chemistry department (which, as I've stated elsewhere on this page, teaches it quite differently from the chemical engineering department). I just think it would be much better not to invent new pseudo-mathematical objects and formalism when calculus can describe thermodynamics perfectly well. I'm not saying you or anyone else here did that; it was done 200 years ago when this material was first being worked out. I just think the discipline is due for a reform of its notation. Ryan ChemE (talk) 17:06, 26 June 2015 (UTC)
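The fundamental-theorem relationship between Q and ${\displaystyle {\dot {Q}}}$ quoted above can be made concrete with a small numerical sketch. The exponentially decaying heat-flow profile below is an arbitrary assumed example, not anything from the thread:

```python
import math

# Sketch of Q as the time integral of the heat-flow rate Qdot(t).
# The decaying profile below is just an assumed example.
def Qdot(t):
    return 500.0 * math.exp(-t / 60.0)   # W

def Q(t, n=20000):
    """Accumulated heat from 0 to t, in J, by the composite trapezoid rule."""
    h = t / n
    return h * (0.5 * Qdot(0.0)
                + sum(Qdot(i * h) for i in range(1, n))
                + 0.5 * Qdot(t))

# Analytically, Q(t) = 500*60*(1 - exp(-t/60)); compare at t = 120 s:
print(Q(120.0), 500.0 * 60.0 * (1.0 - math.exp(-2.0)))

# Differentiating Q recovers Qdot, per the first fundamental theorem:
h = 1e-3
print((Q(120.0 + h) - Q(120.0 - h)) / (2.0 * h), Qdot(120.0))
```

The second pair of printed numbers agreeing is exactly the statement ${\displaystyle {\frac {d}{dt}}Q={\dot {Q}}(t)}$ from the equation above.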
I'll give you another reason why it's better to dispense with the differential formalism and talk about rates and balance equations: they let you solve real practical problems. For example, suppose I want to design a heat exchanger to warm up a process fluid by contacting it with condensing steam. An energy balance on the process fluid gives me the following:
${\displaystyle {\dot {M}}\left({\hat {H}}_{in}-{\hat {H}}_{out}\right)+{\dot {Q}}=0\Rightarrow {\dot {Q}}={\dot {M}}{\hat {c}}_{p}\left(T_{out}-T_{in}\right)}$
So now I know how much heat I need (energy/time). What flow rate of steam is required to achieve this? That's easy, an energy balance on the steam side gives:
${\displaystyle {\dot {Q}}={\dot {M}}_{steam}\left({\hat {H}}^{vapor}-{\hat {H}}^{liquid}\right)}$
So if I know the specific enthalpies of saturated steam and liquid at whatever pressure steam is available (which I can look up), I can calculate the operating cost of heating my process fluid. That's why thermodynamics matters, and that's the utility of formulating it in terms of rates and balance equations. If there's an example of a real calculation a person would do by integrating δQ along a path through the entropy-volume plane or some such thing, I honestly would be interested to see it. Ryan ChemE (talk) 15:58, 27 June 2015 (UTC)
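A worked numerical version of the two balances above may make the procedure clearer. All numbers are illustrative assumptions (a water-like process fluid, and a latent heat for saturated steam near 1 atm of roughly 2257 kJ/kg, a standard steam-table value), not figures from the discussion:

```python
# Process-side balance: Qdot = Mdot * cp * (T_out - T_in).
# All numbers are illustrative assumptions for a water-like fluid.
cp_water = 4184.0       # J/(kg K), assumed constant
M_dot    = 2.0          # kg/s of process fluid
T_in, T_out = 25.0, 80.0   # deg C

Q_dot = M_dot * cp_water * (T_out - T_in)   # W, the required heat duty

# Steam-side balance: Qdot = Mdot_steam * (H_vapor - H_liquid).
# Latent heat of saturated steam near 1 atm, from steam tables: ~2.257e6 J/kg.
H_vap_minus_H_liq = 2.257e6                 # J/kg
M_dot_steam = Q_dot / H_vap_minus_H_liq     # kg/s of condensing steam

print(f"Duty: {Q_dot / 1000:.0f} kW, steam required: {M_dot_steam:.3f} kg/s")
```

With these assumed numbers the duty comes out around 460 kW and the steam requirement around 0.2 kg/s, the kind of result that feeds directly into an operating-cost estimate.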
Not one of your references supports your claim. You said above that heat is a rate of energy flow per time, and that heat does not have units of energy but of energy/time. All of the references you quote correctly distinguish between heat and the rate of flow of heat. As such, they directly contradict you. Confusing heat with a rate of flow of heat is like confusing energy with power. Dezaxa (talk) 09:14, 15 July 2015 (UTC)
Every one of my references supports my claim; I don't see how it could be any clearer. Geankoplis: "q is the heat transfer rate in W..." A watt is a unit of power, i.e. energy/time, as I said. And you are missing the entire point of my argument. I myself distinguished between ${\displaystyle {\dot {Q}}}$, the rate of heat transfer, and ${\displaystyle Q}$, the integral of ${\displaystyle {\dot {Q}}}$ over time. My point is that it makes more sense to think about heat and work as energy flows, and to ask how the total energy of the system changes in time as a function of the material flows and energy flows, which include heat, shaft work, and flow work. That is how heat exchangers, pumping and piping systems, chemical reactors, power generation cycles, refrigeration cycles, distillation columns, and all other continuous flow systems are analyzed in the real world, and I also claim that closed systems can be described easily using energy balances. Not only that, but the energy balance paradigm is superior to "inexact differentials" because you don't have to make up nonsensical mathematics. You can justify every equation in thermodynamics with the material in a textbook covering introductory through multivariable calculus (e.g. Calculus: Early Transcendentals, 5th ed., by James Stewart) if you use energy balances rather than inexact differentials. And as far as I can tell, the question of when a person would actually want to integrate the inexact differential δQ remains unanswered. Should such an example be produced, I'll bet that energy balances can answer the same question, and that every step can be justified by an argument from a mathematics textbook. Please don't get fixated on the difference between ${\displaystyle Q}$ and ${\displaystyle {\dot {Q}}}$; that's not the important issue here. Once more, the quantity we should be interested in is ${\displaystyle {\dot {Q}}}$.
We don't ever even have to talk about ${\displaystyle Q}$, so we definitely don't have to talk about ${\displaystyle \delta Q}$. Just ask how does internal energy or enthalpy change over time, and everything else will fall into place. Ryan ChemE (talk) 23:47, 19 July 2015 (UTC)
Excuse me, I should have said "ask how does internal energy change over time..." Not enthalpy. Ryan ChemE (talk) 02:46, 26 July 2015 (UTC)

## Problems with heat and work example

There are a few issues with the "Heat and work" section under "Examples." The first sentence, that a fire requires heat, fuel, and an oxidizing agent, relates more to the "fire triangle" concept in fire safety, and I think it is off topic here. The discussion of activation energy is also misleading. The activation energy of a reaction has nothing to do with energy flows into or out of the system; it simply relates the speed of a reaction to the system's temperature, and any reaction will occur (however slowly) at any finite temperature, so long as it is thermodynamically favorable. Heat is not consumed in a reaction to overcome the activation energy; heat would only be transferred to maintain the temperature of a reaction with a positive enthalpy change of reaction. The reason you need to add heat (or work, to generate friction) is to raise the temperature high enough for the reaction to proceed fast enough to sustain itself. The physical law relating reaction rates and activation energy to temperature is the Arrhenius equation, ${\displaystyle k=Ae^{-E_{a}/RT}}$, where k is the reaction rate constant. Finally, regarding the last sentence about integration needing to account for the path taken: what integration? What is being integrated here, and with respect to what? Ryan ChemE (talk) 14:00, 27 June 2015 (UTC)
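The Arrhenius point above, that any thermodynamically favorable reaction proceeds at any finite temperature but its rate grows sharply with T, can be illustrated with a tiny sketch. Both the activation energy E_a and the pre-exponential factor A below are hypothetical values chosen purely for illustration:

```python
import math

# Arrhenius equation k = A * exp(-E_a / (R*T)); E_a and A are hypothetical.
R   = 8.314        # J/(mol K)
E_a = 80_000.0     # J/mol, assumed activation energy
A   = 1.0e10       # 1/s, assumed pre-exponential factor

def k(T):
    """Reaction rate constant at absolute temperature T (K)."""
    return A * math.exp(-E_a / (R * T))

# Nonzero at room temperature, but enormously faster when hot:
for T in (298.0, 400.0, 600.0):
    print(f"T = {T:.0f} K: k = {k(T):.3e} 1/s")
```

With these assumed values, k is small but strictly positive at 298 K and roughly seven orders of magnitude larger at 600 K, which is why adding heat (or frictional work) to raise the temperature lets a fire become self-sustaining.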