Moisture analysis


Moisture analysis covers a variety of methods for measuring moisture content in both high level and trace amounts in solids, liquids, or gases. Moisture in percentage amounts is monitored as a specification in commercial food production. There are many applications where trace moisture measurements are necessary for manufacturing and process quality assurance. Trace moisture in solids must be controlled for plastics, pharmaceuticals and heat treatment processes. Gas or liquid measurement applications include dry air, hydrocarbon processing, pure semiconductor gases, bulk pure gases, dielectric gases such as those in transformers and power plants, and natural gas pipeline transport.

Moisture content vs. moisture dew point

Moisture dew point (the temperature at which moisture condenses out of a gas) and moisture content (the fraction of the molecules in a gas that are water) are inherently related: both express the amount of moisture in a gas, and one can be calculated from the other fairly accurately.

Unfortunately, the two terms are sometimes used interchangeably. Water dew point and water content are not the same quantity, although they are related measurements. There are a number of methods for measuring water content, as listed below. To measure the water dew point directly, however, there is only one class of methods: chilled mirrors.
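As a rough illustration of how the two quantities interconvert, the sketch below estimates water content (in ppmv) from a dew point using the Magnus approximation for saturation vapor pressure. The function names, the Magnus constants, and the assumption of ideal-gas behavior at a known total pressure are ours, not from the article:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def dew_point_to_ppmv(dew_point_c, total_pressure_hpa=1013.25):
    """Estimate water content in parts per million by volume from a dew point.

    Assumes ideal-gas behavior: the mole fraction of water equals the ratio of
    its partial pressure (saturation pressure at the dew point) to total pressure.
    """
    return saturation_vapor_pressure_hpa(dew_point_c) / total_pressure_hpa * 1e6
```

A gas with a 0 °C dew point at atmospheric pressure works out to roughly 6,000 ppmv of water; lowering the dew point to -40 °C drops the content below 200 ppmv, which is why trace-moisture work is usually discussed in dew point terms.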

Loss on Drying (LOD)

The classic laboratory method of measuring high-level moisture in solid or semi-solid materials is loss on drying (LOD). In this technique a sample of material is weighed, heated in an oven for an appropriate period, cooled in the dry atmosphere of a desiccator, and then reweighed. If the volatile content of the solid is primarily water, the LOD technique gives a good measure of moisture content. Because the manual laboratory method is relatively slow, automated moisture analyzers have been developed that can reduce the time necessary for a test from a couple of hours to just a few minutes. These analyzers incorporate an electronic balance with a sample tray and surrounding heating element. Under microprocessor control the sample can be heated rapidly and a result computed prior to the completion of the process, based on the moisture loss rate, known as a drying curve.
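The LOD calculation itself is simple: the mass lost on drying, expressed as a fraction of the original (wet) sample mass. A minimal sketch, with an illustrative function name and sample masses:

```python
def loss_on_drying_percent(wet_mass_g, dry_mass_g):
    """Moisture content (wet basis) as a percentage of the original sample mass."""
    if dry_mass_g > wet_mass_g:
        raise ValueError("dry mass cannot exceed wet mass")
    return (wet_mass_g - dry_mass_g) / wet_mass_g * 100.0
```

For example, a 10.00 g sample that reweighs at 9.27 g after drying has lost 0.73 g, i.e. 7.3% moisture on a wet basis.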

Karl Fischer titration

An accurate method for determining the amount of water is the Karl Fischer titration, developed in 1935 by the German chemist whose name it bears. Unlike loss on drying, which detects any volatile substance, this method detects only water.

Techniques used for natural gas

Natural gas poses a unique situation since it can have very high levels of solid and liquid contaminants as well as corrosives in varying concentrations.

Water measurements are expressed in parts per million by volume, pounds of water per million standard cubic feet of gas, mass of water vapor per unit volume, or mass of water vapor per unit mass of dry gas; that is, humidity here means the amount of vapor-phase water in a gas. If liquids are present in the gas, they are usually filtered out before the gas reaches an analyzer, to protect the analyzer from damage.
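These units interconvert under assumed standard conditions. The sketch below uses the common US gas-industry convention of 379.5 standard cubic feet per pound-mole (60 °F, 14.696 psia); the constants and function name are assumptions for illustration, not values given in the article:

```python
SCF_PER_LB_MOL = 379.5        # ideal-gas molar volume at 60 F, 14.696 psia (assumed standard conditions)
LB_PER_LB_MOL_WATER = 18.015  # molar mass of water in lb per lb-mol

def lb_per_mmscf_to_ppmv(lb_water_per_mmscf):
    """Convert pounds of water per million standard cubic feet to ppm by volume."""
    lb_mol_water = lb_water_per_mmscf / LB_PER_LB_MOL_WATER
    lb_mol_gas = 1e6 / SCF_PER_LB_MOL
    return lb_mol_water / lb_mol_gas * 1e6
```

Under these assumptions, 1 lb/MMSCF corresponds to about 21 ppmv, so a typical 7 lb/MMSCF pipeline specification is roughly 147 ppmv of water.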

Measurements of moisture in natural gas are typically performed with one of the techniques described in the sections below.

Other moisture measurement techniques exist but are not used in natural gas applications for various reasons. For example, the gravimetric hygrometer and the "two-pressure" system used by the National Bureau of Standards are precise laboratory techniques but are not practical for industrial applications.

Color indicator tubes

The color indicator tube (also referred to as a Draeger tube or stain tube) is a device many natural gas pipelines use for a quick, rough measurement of moisture. Each tube contains chemicals that react with a specific compound to form a stain or color change as the gas is passed through it. The tubes are used once and discarded. The manufacturer calibrates the tubes, but because the measurement depends directly on exposure time, flow rate, and the extraction technique, it is susceptible to error; in practice, the error can be as high as 25 percent. Color indicator tubes are well suited to infrequent, rough estimates of moisture in natural gas: for example, if a tube indicates 30 pounds of water, there is a high degree of certainty that the true value is over 10 pounds.

Chilled mirrors

When a gas flows over a chilled surface, or chilled mirror, the available moisture will begin to condense on the surface if it is cold enough. The exact temperature at which this condensation first occurs is known as the dew point. All chilled-mirror devices are based on the same basic method: the temperature of the mirror is lowered, and the temperature at which condensation is first observed is reported as the dew point. From the dew point temperature, the moisture content of the gas can be calculated. The mirror temperature can be regulated either by the flow of a refrigerant over the mirror or by a thermoelectric cooler, also known as a Peltier element.

The formation of condensation on the mirror can be registered by either optical or visual means. In both cases, a light source is directed onto the mirror's surface and changes in the reflection of this light due to the formation of condensation can be detected by a sensor or the human eye, respectively. The exact point at which condensation begins to occur is not discernible to the unaided eye, so modern manually operated instruments use a microscope to enhance the accuracy of measurements taken using this method.[1][2]

Since the temperature is passing through the dew point rather than stopping exactly at it, the measurement can be a little low as the mirror will have reached a temperature slightly below the dew point before water condensation starts to form. In order to compensate for this, once condensation is detected, the temperature of the mirror is slowly increased until evaporation is observed to occur. The dew point is then reported as the average of these two temperatures.

Chilled mirror analyzers are subject to the confounding effects of some contaminants, though usually no more so than other types of analyzers. With proper filtration and sample preparation systems, other condensables such as heavy hydrocarbons, alcohols, and glycols need not impair the reliable function of these devices. In the case of natural gas, in which the aforementioned contaminants are an issue, on-line analyzers routinely measure the water dew point at line pressure, which reduces the likelihood that heavy hydrocarbons, for example, will condense before water.

Chilled mirror combined with spectroscopy

This is a more recent method that combines the benefits of a chilled-mirror measurement with the accuracy of spectroscopy. An optical mirror is cooled while its surface is probed with IR radiation. When dew forms on the surface of the mirror, the IR beam shows absorption at the wavelengths that correspond to the molecular structure of the condensate. This allows the unit to distinguish the water dew point from other potential dew points (such as the hydrocarbon dew point in natural gas) and eliminates interference from other potential contaminants. It is particularly useful when measuring dew points in gases with many constituents, such as natural gas.[1]

Electrolytic

The electrolytic sensor uses two closely spaced, parallel windings coated with a thin film of phosphorus pentoxide (P2O5). As this coating absorbs incoming water vapor, an electrical potential applied to the windings electrolyzes the water into hydrogen and oxygen. The current consumed by the electrolysis determines the mass of water vapor entering the sensor. The flow rate and pressure of the incoming sample must be controlled precisely to maintain a standard sample mass flow rate into the sensor.
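Because each water molecule yields two electrons on electrolysis (H2O → H2 + ½O2), Faraday's law relates the steady-state cell current directly to the mass of water consumed. A hedged sketch of that relationship (constants and names are ours, not the article's):

```python
FARADAY = 96485.332        # Faraday constant, coulombs per mole of electrons
M_WATER = 18.015           # molar mass of water, g/mol
ELECTRONS_PER_MOLECULE = 2 # electrons transferred per water molecule electrolyzed

def water_mass_flow_g_per_s(current_amperes):
    """Mass of water electrolyzed per second for a given steady-state cell current."""
    return current_amperes * M_WATER / (ELECTRONS_PER_MOLECULE * FARADAY)
```

At a fixed, known sample mass flow rate this mass flow of water translates directly into a concentration, which is why the flow control mentioned above is critical: a 1 mA current corresponds to only about 93 nanograms of water per second.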

The method is fairly inexpensive and can be used effectively in pure gas streams where response rates are not critical. Contamination of the windings by oils, liquids, or glycols will cause the readings to drift and will damage the sensor. The sensor cannot react to sudden changes in moisture, because the reaction on the windings' surfaces takes some time to stabilize. Large amounts of water in the pipeline (called slugs) will wet the surface, which then requires tens of minutes to hours to "dry down." Effective sample conditioning and removal of liquids are essential when using an electrolytic sensor.

Piezoelectric sorption

The piezoelectric sorption instrument tracks the change in frequency of hygroscopically coated quartz oscillators: as the mass of the crystal changes due to adsorption of water vapor, the frequency of the oscillator changes. The sensor makes a relative measurement, so an integrated calibration system with desiccant dryers, permeation tubes, and sample-line switching is used to calibrate the system at frequent intervals.
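The mass-to-frequency relationship for such a quartz oscillator is commonly described by the Sauerbrey equation. A sketch using typical AT-cut quartz constants; the constants and names are our assumptions, and real instruments rely on empirical calibration rather than this idealized formula:

```python
import math

RHO_QUARTZ = 2.648    # density of quartz, g/cm^3
MU_QUARTZ = 2.947e11  # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_shift_hz(f0_hz, delta_mass_g, area_cm2):
    """Frequency change of a quartz oscillator for a small adsorbed mass.

    Negative result: adding mass lowers the resonant frequency.
    """
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * math.sqrt(RHO_QUARTZ * MU_QUARTZ))
```

For a 5 MHz crystal, one microgram of adsorbed water spread over 1 cm² shifts the frequency by roughly 57 Hz, which illustrates why the technique is sensitive to trace moisture.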

The system has been successful in many applications, including natural gas. Interference from glycol and methanol, and damage from hydrogen sulfide, can result in erratic readings. The sensor itself is relatively inexpensive and very precise; the required calibration system is less precise and adds to the cost and mechanical complexity of the system. The labor required for frequent replacement of desiccant dryers, permeation components, and sensor heads greatly increases the operational costs. Additionally, slugs of water render the system nonfunctional for long periods of time while the sensor head "dries down."

Aluminum oxide and silicon oxide

The oxide sensor is made up of an inert substrate material and two dielectric layers, one of which is sensitive to humidity. Moisture molecules pass through pores in the surface and change a physical property of the layer beneath.

An aluminum oxide sensor has two metal layers that form the electrodes of a capacitor. The number of water molecules adsorbed causes a change in the dielectric constant of the sensor, and the sensor's impedance correlates with the water concentration. A silicon oxide sensor can be either an optical device whose refractive index changes as water is absorbed into the sensitive layer, or an impedance type in which silicon replaces the aluminum.
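To first order, the aluminum oxide element can be modeled as a parallel-plate capacitor whose capacitance rises as adsorbed water (relative permittivity near 80, versus roughly 9 for dry alumina) raises the effective dielectric constant. A simplified sketch; the model, names, and numbers are illustrative assumptions, not from the article:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance_f(eps_r, area_m2, gap_m):
    """Idealized capacitance of the sensor modeled as a parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

# Adsorbed water raises the effective relative permittivity of the dielectric,
# so capacitance (and hence measured impedance) tracks moisture concentration.
c_dry = parallel_plate_capacitance_f(9.0, 1e-6, 1e-6)   # dry alumina layer
c_wet = parallel_plate_capacitance_f(12.0, 1e-6, 1e-6)  # same layer with some adsorbed water
```

In a real sensor the relationship between adsorbed water and effective permittivity is nonlinear and is captured by an empirical calibration curve rather than this idealized geometry.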

In the first (optical) type, when light is reflected through the substrate, a wavelength shift can be detected at the output that correlates precisely with the moisture concentration. A fiber-optic connection can be used to separate the sensor head from the electronics.

This type of sensor is not especially expensive and can be installed at pipeline pressure (in situ). Water molecules take time to enter and exit the pores, so some wet-up and dry-down delay will be observed, especially after a slug. Contaminants and corrosives may damage or clog the pores, causing drift in the calibration; the sensor heads can be refurbished or replaced, and they perform better in very clean gas streams. As with the piezoelectric and electrolytic sensors, the sensor is susceptible to interference from glycol and methanol. The calibration drifts as the sensor's surface becomes inactive through damage or blockage, so it is reliable only at the beginning of the sensor's life.

In the second type (the silicon oxide sensor), the device is often temperature-controlled for improved stability. It is considered chemically more stable than aluminum oxide types and far faster to respond, because it holds less water at equilibrium at an elevated operating temperature.

While most absorption-type devices can be installed at pipeline pressures (up to 130 barg), traceability to international standards is then compromised. Operation at near-atmospheric pressure does provide traceability and offers other significant benefits, such as enabling direct validation against a known moisture content.

Spectroscopy

Absorption spectroscopy is a relatively simple method of passing light through a gas sample and measuring the amount of light absorbed at specific wavelengths. Traditional spectroscopic techniques have not been successful at doing this in natural gas because methane absorbs light in the same wavelength regions as water. But with a very high-resolution spectrometer, it is possible to find some water absorption peaks that are not overlapped by the peaks of other gases.

The tunable laser provides a narrow, tunable-wavelength light source that can be used to analyze these small spectral features. According to the Beer-Lambert law, the amount of light absorbed by the gas is proportional to the amount of gas present in the light's path, so this technique is a direct measurement of moisture. To achieve a sufficiently long optical path, a mirror is used in the instrument. The mirror may become partially blocked by liquid or solid contaminants, but because the measurement is a ratio of absorbed light to total detected light, the calibration is unaffected by a partially blocked mirror (a totally blocked mirror must be cleaned).
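The ratio argument can be made concrete: absorbance depends only on the ratio of transmitted to incident intensity, so a blockage that attenuates both equally cancels out. A minimal sketch, with illustrative numbers:

```python
import math

def absorbance(i_transmitted, i_incident):
    """Base-10 absorbance from the ratio of detected to incident light intensity."""
    return -math.log10(i_transmitted / i_incident)

# A partially blocked mirror attenuates the incident and transmitted beams by
# the same factor, so the intensity ratio -- and hence the computed absorbance,
# which is proportional to water concentration -- is unchanged.
a_clean = absorbance(0.90, 1.00)
a_blocked = absorbance(0.90 * 0.5, 1.00 * 0.5)  # half the mirror blocked
```

Both cases yield the same absorbance, which is why the calibration survives gradual fouling of the mirror.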

The tunable diode laser absorption spectroscopy (TDLAS) analyzer has a higher upfront cost than the analyzers above. However, TDLAS is the only one of these technologies that can satisfy each of the following requirements: an analyzer that does not suffer interference or damage from corrosive gases, liquids, or solids; an analyzer that reacts very quickly to drastic moisture changes; and an analyzer that remains calibrated for very long periods of time.
