
Monday, 28 September 2015

Contraceptive Efficacy and Combined Probability

When sex education sources quote numbers on the efficacy of different forms of contraception, they usually report a number like "98% effective". What they really mean is that, on average, 98 out of 100 women using that method of birth control for a whole year won't get pregnant. However, most people don't understand how a seemingly small annual risk of pregnancy might translate to a significant risk of pregnancy over a period of several years. In this post I show you how the probability of pregnancy grows over time and why it's commonly recommended that couples use two forms of contraception.

The annual risk of unintended pregnancy for several different contraceptive methods is given in the table below, taken from the 20th edition of Contraceptive Technology. The effectiveness typically reported by sex education sources is simply 100% minus the annual risk of pregnancy.

I'm not sure how typical use differs from perfect use when it comes to male sterilization...

It should be noted that "typical use" is perhaps misleading because it is an average for the people reportedly using that contraceptive method. For most contraceptive methods, including the condom and the pill, the most common deviation from "perfect use" is conscious user non-compliance (i.e. knowingly not using the contraception). Not using the contraception shouldn't really count in the typical use, but I guess they just do the best they can with the data they get.

So, suppose you're interested in calculating the risk of pregnancy within a period of several years. From the table above, we can get the probability of not conceiving during the first year of use, equal to 100% minus the number from the table. We'll call this number p. The probability of not conceiving during a period of n years of use can be taken as p raised to the nth power. The risk of pregnancy is then calculated as

risk of pregnancy = 1 − pⁿ

This simply estimates the risk of getting pregnant at least once during an n-year period while using a certain contraceptive method. The graph below compares a few common contraceptive methods for n = 1 to 10 years.
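If you want to run the numbers yourself, the calculation is a one-liner. Here's a quick sketch in Python; the 2% perfect-use condom figure and 9% typical-use pill figure are the annual failure rates from the table above:

```python
def risk_over_years(annual_failure_rate, years):
    """Probability of at least one unintended pregnancy in `years` years,
    assuming the annual failure rate stays constant over time."""
    p = 1.0 - annual_failure_rate        # chance of no pregnancy in one year
    return 1.0 - p ** years              # chance of at least one pregnancy

# Condoms with perfect use (2% annual failure rate) over 10 years:
print(round(risk_over_years(0.02, 10), 3))   # ≈ 0.183, i.e. about 18.3%

# The pill with typical use (9% annual failure rate) over 8 years:
print(round(risk_over_years(0.09, 8), 3))    # ≈ 0.53, i.e. just over 50%
```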

Comparing the risk of pregnancy using different contraceptive methods.

The graph shows that if you are not diligent in using the contraceptive method properly (i.e. you stray from "perfect use"), there's a pretty good chance of unintended pregnancy somewhere down the road. There's greater than 50% chance of unintended pregnancy within 8 years of "typical use" with the pill, within 6 years with the Standard Days method, 4 years with the male condom, and 3 years with the withdrawal method. Even with perfect use, if condoms are the only contraceptive method used, there's still about 18.3% chance of an unintended pregnancy in 10 years. That's 1 in 5.47. If you look at a 10-year period, comparing all the methods from the previous table, you'll find that if you use only a single method, only sterilization or IUDs reduce your risk of unintended pregnancy to less than 1 in 2 (i.e. less than 50%), assuming "typical use".



We can estimate the risk of pregnancy when contraceptive methods are combined by combining probabilities as shown in the following equation:

P(A and B) = P(A) × P(B)

When two events, A and B, are independent, the probability of both A and B occurring is equal to the product of the probability of A and the probability of B. Applied to pregnancy risk, this means the risk of getting pregnant while using both methods A and B is equal to the probability of A failing multiplied by the probability of B failing. This approach to combining probability assumes that the individual probabilities are independent of each other, so before combining probabilities you should think about whether they actually are reasonably independent. For example, birth control pills and male condoms are probably independent (or nearly so) because they are unrelated methods and it is unlikely that one will influence the efficacy of the other. Therefore, the probability of an unintended pregnancy in the first year of use is approximately 0.18 × 0.09 = 1.62% when both male condoms and birth control pills are used (assuming "typical use"). A counterexample is the combination of the Standard Days Method and the TwoDay Method, which are probably not independent because both are methods of achieving "fertility awareness". It's unlikely that one method negatively influences the other, but combining the two probably has little benefit either; the combination is probably only slightly better than either one on its own. That said, I've calculated the risk of pregnancy with some combinations of contraceptive methods that probably satisfy the assumption of independence in order to demonstrate how using two methods simultaneously dramatically reduces your risk of unplanned pregnancy.
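Under the same independence assumption, the combined annual failure rate can be fed back into the multi-year calculation. A small sketch in Python, using the typical-use figures from the table:

```python
def combined_annual_failure(rate_a, rate_b):
    """Annual failure rate of two independent methods used together:
    a pregnancy requires both methods to fail in the same year."""
    return rate_a * rate_b

def risk_over_years(annual_failure_rate, years):
    """Probability of at least one unintended pregnancy in `years` years."""
    return 1.0 - (1.0 - annual_failure_rate) ** years

# Typical-use male condom (18%) combined with typical-use pill (9%):
combined = combined_annual_failure(0.18, 0.09)
print(round(combined, 4))                       # 0.0162, i.e. 1.62% in year one
print(round(risk_over_years(combined, 10), 3))  # ≈ 0.151 over 10 years
```

Compare that roughly 15% ten-year risk with the more than 50% ten-year risk of either method used alone.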


Comparing the risk of pregnancy using different combinations of contraceptive methods.

As you can see, combining contraceptive methods reduces the risk of unintended pregnancy by a pretty significant margin. With a single contraceptive method, assuming "typical use", only IUDs or sterilization were effective enough to keep the risk of unintended pregnancy below 50% over 10 years. Combining two methods very effectively reduces the risk of unintended pregnancy without resorting to invasive or irreversible medical procedures.

Sunday, 16 August 2015

Beer Bongs, Volume, and Fluid Mechanics

A beer bong is a very simple device, composed of a funnel and a tube, designed to quickly get beer into the user. While you could go out and buy one, it'd be a pretty big waste of money considering how cheaply and easily you could build your own. Of course, if you do build your own, you're going to want to know a few specs so you can answer all your friends' questions, like "how much beer does it hold?" and "how fast does the beer come out?"

Though store-bought beer bongs ensure there are two scantily clad ladies for every man in attendance...

Let's start with the volume. For calculation purposes, we'll split the beer bong into three parts: the hose, the conical part of the funnel, and the cylindrical top of the funnel (if you have one). To simplify things, we'll assume that the little exit tube at the bottom of the funnel is just part of the hose. The volumes of beer in the hose and the cylindrical portion of the funnel are calculated using the cylindrical volume formula:

V = (π/4) D² L

where D and L are the diameter and length of the cylinder, respectively. The conical part of the funnel is a cone with the tip cut off, known as a conical frustum. The volume of a conical frustum is:

V = (π/12) L (D1² + D1D2 + D2²)

where L is the height of the frustum and D1 and D2 are the two end diameters. So, defining our beer bong geometry as in the figure below

Notation for beer bong geometry.
the volume of beer in the beer bong is calculated as

V = (π/4)(Dh²Lh + Df²Lfh) + (π/12)Lfc(Df² + DfDh + Dh²)

where the D's and L's are in centimetres and V is in millilitres. If your funnel doesn't have a "hopper" portion, Lfh = 0. Divide V by 341 mL/bottle if you want to know how many bottles the full beer bong is equivalent to.

If you prefer US Customary units, use this formula instead:

V = 0.435(Dh²Lh + Df²Lfh) + 0.145 Lfc(Df² + DfDh + Dh²)

where the D's and L's are in inches and V is in US fluid ounces. 
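The metric version is easy to wrap in a short function, since 1 cm³ = 1 mL. Here's a sketch in Python using the notation above (the example geometry at the end is hypothetical):

```python
import math

def beer_bong_volume(d_h, l_h, d_f, l_fc, l_fh=0.0):
    """Total beer bong volume in mL: hose + funnel cone + hopper.
    All dimensions in cm. Use l_fh=0 if the funnel has no hopper portion."""
    hose = math.pi / 4 * d_h**2 * l_h
    cone = math.pi / 12 * l_fc * (d_f**2 + d_f * d_h + d_h**2)
    hopper = math.pi / 4 * d_f**2 * l_fh
    return hose + cone + hopper

# Hypothetical geometry: 1.6 cm x 90 cm hose, funnel cone 15 cm wide and
# 12 cm tall, topped by a 5 cm tall hopper:
v = beer_bong_volume(1.6, 90, 15, 12, 5)
print(round(v))            # ≈ 1855 mL
print(round(v / 341, 1))   # ≈ 5.4 bottles
```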

So that's the answer to "how much beer does it hold?" Now for some fluid mechanics to calculate the beer velocity. We're just going to calculate the initial velocity, which is when the beer flows fastest. As the beer drains, the flow rate slows down. We start with the energy equation for fluid flow, which is

z1 + p1/ɣ + α1u1²/(2g) = z2 + p2/ɣ + α2u2²/(2g) + hL1-2

where the subscripts 1 and 2 represent two points along the flow path, u is the flow velocity (we already used V for volume), g is the acceleration due to gravity (= 9.807 m/s² = 32.17 ft/s²), z is the height above some arbitrary reference point, p is the static fluid pressure, ɣ is the fluid's specific weight, α is the kinetic energy correction factor, and hL1-2 is the loss of hydraulic head from point 1 to point 2.

For the beer bong, we're interested in the exit velocity, so we'll make the end of the hose point 2. We'll make the surface of the beer in the funnel point 1. Both point 1 and point 2 are exposed to atmosphere, so we can take p1 and p2 to be equal and they cancel out of the equation. Because the reference point for z is arbitrary, we can choose the exit from the beer bong to be the reference point, making z2 equal to zero.


Now the energy equation looks like this:

z1 + α1u1²/(2g) = α2u2²/(2g) + hL1-2

According to the law of conservation of mass for incompressible fluids, the flow rate (unit volume per unit time) of the beer must be constant. We can use this to relate u1 to u2:

u1 = u2 (Dh/Df)²

We can measure z1; it's just how high the surface of the beer is above the end of the hose. If the hose is fully straightened, z1 is maximized, equal to Lfh + Lfc + Lh. Now if you want to use an easy approximation, you can assume that the head losses are negligible and the energy correction factors are both equal to 1.0. With these assumptions we can solve for the exit velocity directly:

u2 = √[ 2g z1 / (1 − (Dh/Df)⁴) ]

where u2 is in m/s if z is in metres and g is 9.807 m/s². You can use any units you want for the D's as long as you use the same units for both (e.g. you can't put one in inches and the other in millimetres). For velocity in cm/s, z is in cm and g = 980.7 cm/s². For velocity in ft/s, z is in ft and g = 32.17 ft/s².

If you want volumetric flow rate, just add the following calculation step:

Q = (π/4) Dh² u2

If you use centimetres for Dh and cm/s for u2 (1 m = 100 cm), the flow rate will work out in mL/s. If you stick with metres for everything you'll get a pretty small number for Q because the units will be in m³/s (1 m³ = 1000 L).

We can make this approximation even easier by assuming the funnel diameter is much larger than the hose diameter (which is probably true), making the denominator of the above velocity expression very nearly 1.0, and the exit velocity is simply

u2 = √(2g z1)

As you'll see later in an example, ignoring energy losses isn't going to give you a very accurate answer, and your friends aren't going to accept some lousy ballpark estimate. So the math's going to get more intense, but I paid good money for my fluid mechanics course in university and I'll be damned if I don't find a way to put that knowledge to use.

It's safe to assume turbulent flow in the hose, and although the beer is moving much more slowly in the funnel, it's probably also in the turbulent flow regime. The kinetic energy correction factor varies depending on fluid velocity, viscosity, and pipe roughness, but a typical number is about 1.05. I'm going to assume α1 = α2 = 1.05.


I'm going to assume hydraulic head losses come from two sources: friction inside the hose, and the flow contraction at the funnel cone. I'm ignoring the effect of the bend in the hose, which should be okay if the bend radius is much larger than the hose diameter, but if you put a tight bend in the hose you are introducing additional losses. For the flow contraction,

hL = k u²/(2g)

where u is the flow velocity leaving the contraction and k is an empirical factor that depends on the shape of the contraction. For typical funnel geometry, k should be around 0.07 to 0.08. I'll use 0.08. We can define a third point on the flow path, the exit of the funnel/entrance to the hose, but we won't need to do much with it. Since we've already assumed the funnel exit has the same diameter as the hose, the velocity at point 3 must be equal to the velocity at point 2 in order to satisfy the mass conservation law. Therefore, the head loss from point 1 to point 3 is

hL1-3 = 0.08 u2²/(2g)

The friction loss in the hose is calculated using the Darcy-Weisbach equation:

hL = f (L/D) u²/(2g)

where f is the Darcy friction factor, L is the pipe length, D is the pipe diameter, and u is the flow velocity in the pipe. For our beer bong, the head loss from point 3 to point 2 is

hL3-2 = f (Lh/Dh) u2²/(2g)

Now things start to get more complicated. The friction factor is an empirical value that's a function of the pipe roughness, diameter, flow velocity, and fluid viscosity. But your beer bong's probably going to be made with smooth plastic or rubber hose, so we can assume a perfectly smooth pipe (i.e. ignore pipe roughness). If you're doing something crazy like using cast iron or bamboo or hollowed out whalebone, you might want to consider the pipe roughness.

The Colebrook equation is typically cited for calculating the friction factor, but it's an implicit equation, meaning you can't solve for the friction factor directly; you need to solve using an iterative process. For a perfectly smooth pipe, the Colebrook equation is

1/√f = −2 log10[ 2.51 / (Re √f) ]

where Re is the Reynolds number, which for the beer bong hose is equal to

Re = u2 Dh / η

where η is the kinematic viscosity of the beer, typically about 1.8×10⁻⁶ m²/s, or 0.018 cm²/s. Thus,

1/√f = −2 log10[ 2.51 η / (u2 Dh √f) ]

Since we don't know what u2 is yet, we have iterations upon iterations on our hands here. Good thing Excel can do all that for you. But if you want to solve it by hand you could cut down on your iterations by using Haaland's approximation of the Colebrook equation, which for a smooth pipe is

1/√f = −1.8 log10( 6.9 / Re )

You could also use the approximate value of u2 to calculate the friction factor. It should get you a friction factor that's fairly close to the "exact" solution, so we can create a beastly-looking equation for u2 that will get pretty close to the same answer as the iterative solution from Excel:

u2 = √{ 2g z1 / ( 1.05[1 − (Dh/Df)⁴] + 0.08 + f Lh/Dh ) },  where f = { −1.8 log10[ 6.9 η / (Dh √(2g z1)) ] }⁻²

Substituting values of 980.7 cm/s² for g and 0.018 cm²/s for η,

u2 = √{ 1961.4 z1 / ( 1.05[1 − (Dh/Df)⁴] + 0.08 + f Lh/Dh ) },  where f = { −1.8 log10[ 0.0028 / (Dh √z1) ] }⁻²

where L, z, and the D's are in centimetres so that u2 comes out in cm/s.
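If you'd rather not build the Excel sheet, the whole iteration is easy to script. Here's a sketch in Python that solves the smooth-pipe Colebrook equation by fixed-point iteration and then iterates on the exit velocity; the geometry at the end is hypothetical, just to show the size of the friction penalty:

```python
import math

G = 980.7    # gravitational acceleration, cm/s^2
ETA = 0.018  # kinematic viscosity of beer, cm^2/s (assumed)

def friction_factor(re):
    """Darcy friction factor for a smooth pipe (Colebrook equation),
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 6.0  # starting guess
    for _ in range(100):
        x = -2.0 * math.log10(2.51 * x / re)
    return 1.0 / x**2

def exit_velocity(z, d_h, d_f, l_h, alpha=1.05, k=0.08):
    """Initial exit velocity (cm/s), iterating between the energy
    equation and the friction factor. All lengths in cm."""
    u = math.sqrt(2 * G * z)  # frictionless starting guess
    for _ in range(100):
        f = friction_factor(u * d_h / ETA)
        denom = alpha * (1 - (d_h / d_f)**4) + k + f * l_h / d_h
        u = math.sqrt(2 * G * z / denom)
    return u

# Hypothetical beer bong: 1.6 cm x 90 cm hose, 15 cm funnel, beer surface
# 107 cm above the hose exit:
u = exit_velocity(107, 1.6, 15, 90)
q = u * math.pi / 4 * 1.6**2
print(round(u), round(q))   # ≈ 290 cm/s and ≈ 583 mL/s
```

For this geometry the frictionless estimate is about 458 cm/s, so hose friction knocks roughly a third off the exit velocity.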

Let's do an example based on a hypothetical beer bong with some clever name like The Brain Cell Slayer, depicted in the figure below.

The Brain Cell Slayer

Volume


Initial velocity and flow rate (approximate solution)


Initial velocity and flow rate (more accurate solution)


Even though it's a smooth hose, friction has a pretty significant effect in this example, reducing the initial velocity of the beer by almost 40%. And just so you can see that our beastly-looking equation really does get you close to the "exact" iterative solution, here's what I get from Excel using the Colebrook equation for the friction factor:


References
Potter, M.C. and Wiggert, D.C. (2002). Mechanics of Fluids, 3rd Edition. Brooks/Cole, Pacific Grove, CA.

Friday, 7 August 2015

Welding, Brazing, and Soldering

Welding, brazing, and soldering are all methods of joining two pieces of metal together, but how do these techniques differ?

Welding involves the use of high temperatures or pressures to cause metals of two distinct parts to coalesce at the joint. A well-executed weld is at least as strong as the surrounding base metal. However, weld processes that are improperly carried out can negatively affect the base metal near the weld site and produce weaker welds. In fusion welding, some of the base metal is melted, often with a filler metal deposited to the pool of molten metal during the process. Fusion welding processes include torch welding, arc welding (comprising several variations, such as SMAW, GMAW, FCAW, and SAW), laser beam welding, and electron beam welding. Fusion welding processes require that the parts be of similar composition. For instance, you can't join copper or aluminum to steel using fusion welding. In solid-state welding, the base metal is not melted and no filler metal is added. Solid-state welding processes include the original welding process, forge welding, as well as several modern techniques, such as magnetic pulse welding, explosion welding, and friction-stir welding. Solid-state welding is much more suitable for joining dissimilar metals than fusion welding.

Simple explanation of the explosion welding process.

Brazing bonds two pieces of metal together with a braze alloy that serves as a filler metal in the joint. The braze alloy is melted during the process and bonds the parts together when it cools. Unlike welding, the base metal of the two parts is not melted or otherwise made to coalesce. Thus, braze alloys must have a lower melting temperature than the parts being joined. Brazing can be used to join different metals together, like aluminum, copper, gold, and nickel. Properly brazed joints can be very strong, though generally not as strong as welded joints.

Video showing a copper pipe being joined to a stainless steel pipe by brazing.

Soldering is similar to brazing, but is performed at lower temperatures. The filler metal used in soldering is known as a solder. The American Welding Society has defined 450 °C (840 °F) as the line between soldering and brazing (below 450 °C is soldering, above 450 °C is brazing). Solders in the past often contained lead, but these have since been mostly replaced with lead-free alternatives due to environmental and health concerns. Soldered joints are not as strong as brazed or welded joints.

Video on how to solder copper plumbing.

In summary, welded joints are strongest and typically require the most heat (except for fancy welding techniques that rely on high pressure). Metal from both parts coalesces at the welded joint. Brazing requires less heat than welding and brazed joints are not as strong as welded joints. Parts are joined together with a filler metal that melts at a lower temperature than the base metal. Soldering is essentially the same as brazing, except soldering is performed using filler metals with melting points below 450 °C and soldered joints aren't as strong.

Tuesday, 13 January 2015

A Brief History of Ground Penetrating Radar

Light as an Electromagnetic Wave

The history of radar begins with the history of our understanding that light is an electromagnetic wave. In 1826, André-Marie Ampère discovered that an electric current generated a magnetic field. Five years later, Michael Faraday discovered that an electric field is produced by a changing magnetic field. In 1855, Wilhelm Weber and Rudolf Kohlrausch conducted an experiment to calculate the ratio of electromagnetic charge to electrostatic charge from direct measurements; the ratio was calculated to be 3.107×10⁸ m/s. Only a few years before, Armand Fizeau and Léon Foucault had devised experiments to measure the speed of light, obtaining values of 3.149×10⁸ m/s and 2.980×10⁸ m/s, respectively. The significance of the discovery by Weber and Kohlrausch was not realized immediately, and for some time physicists believed it to be nothing more than a coincidence that this ratio agreed so closely with the speed of light.

In 1861, James Clerk Maxwell published a correction to Ampère's Law among his set of electrodynamic equations in On Physical Lines of Force. Ampère's Law, in its original form, stated that a magnetic field was generated by an electric current. With Maxwell's correction, it stated that a magnetic field was also generated by a changing electric field – in essence, a corollary to Faraday's Law. Starting from the equations published previously in On Physical Lines of Force, Maxwell published a mathematical derivation of the wave equation in his A Dynamical Theory of the Electromagnetic Field. This derivation proved that a changing electric field would generate a perpendicular magnetic field (and vice versa), the two together comprising an electromagnetic wave that can propagate through empty space. Solving Maxwell's electromagnetic wave equation for the wave speed in vacuum reveals that such a wave would travel at the speed of light. Maxwell commented on the results of his derivations and the experiments of Weber and Kohlrausch, Fizeau, and Foucault, stating:
"The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws" (Maxwell 1865).
Maxwell’s equations are at the foundation of our current understanding of optics, electrodynamics, and electric circuits. Maxwell’s electromagnetic wave theory also explains why light travels fastest in vacuum and must slow down when passing through a medium. The electromagnetic wave theory and Maxwell's equations form the theoretical foundation of all radar applications.


Theory is put into Practice

Maxwell had predicted the existence of electromagnetic waves, but it was Heinrich Hertz who demonstrated that radio waves existed and could be transmitted, refracted, and reflected in the same manner as visible light. Alexander Popov, in 1897, while testing his apparatus to detect lightning strikes, observed interference when a ship passed. Though Popov reported that the phenomenon could possibly be exploited to detect objects, he did not explore this further. In 1904, Christian Hülsmeyer used radio waves to detect the presence of ships, but not their range or bearing. In September of 1922, U.S. Navy researchers Albert Taylor and Leo Young, like Popov before them, observed that a passing ship interrupted their radio communication. Taylor and Young realized the potential application and suggested radio transmitters and receivers be used to detect ships in low visibility. However, it wasn't until Lawrence Hyland observed in 1930 that an airplane flying overhead interrupted radio communication that the U.S. military took serious interest in detecting objects using radio waves. The acronym RADAR, which stands for RAdio Detection And Ranging, was coined by the U.S. Navy in 1940.

Walter Stern, possibly aware of the work of Hülsmeyer, developed the first ground penetrating radar (GPR) and used it to survey a glacier in Austria in 1929. The use of radio waves for subsurface mapping was essentially forgotten for several years, until a few airplanes belonging to the U.S. Air Force gave false altitude readings and the pilots crashed while trying to land on ice in Greenland. The renewed interest sparked investigations into the use of radar to map ice, groundwater tables, and subsoil properties. A GPR system essentially the same as the one used by Stern in 1929 was developed to investigate the lunar subsurface for the Apollo 17 mission. GPR first became commercially available in 1972, and since then there has been much research into the technology and its applications. Today, GPR is used in a wide variety of non-destructive, subsurface mapping applications, including: 

  • detecting buried explosives
  • locating possible archaeological dig sites
  • locating buried pipes
  • locating embedded reinforcing steel
  • inspecting pavements
  • mapping soil strata
  • mapping contaminant plumes
  • mapping groundwater levels
  • mapping ice thicknesses


References

Annan, A. P. (2009). Electromagnetic principles of ground penetrating radar. In Ground Penetrating Radar: Theory and Applications. Edited by Jol, H. M. Elsevier, Amsterdam, Netherlands.

Cassidy, N. J. (2009). Electrical and magnetic properties of rocks, soils and fluids. In Ground Penetrating Radar: Theory and Applications. Edited by Jol, H. M. Elsevier, Amsterdam, Netherlands.


Clarke, G. K. C. (1987). A short history of scientific investigations on glaciers. Journal of Glaciology, special issue: 4-24.


Crease, R. P. (2008). The great equations: breakthroughs in science from Pythagoras to Heisenberg. W. W. Norton & Company, Inc., New York, New York.


Guarnieri, M. (2010). The early history of radar. IEEE Industrial Electronics Magazine, 4(3): 36-42.


Jol, H. M. (2009). Preface. In Ground Penetrating Radar: Theory and Applications. Edited by Jol, H. M. Elsevier, Amsterdam, Netherlands.


Keithley, J. F. (1999). The story of electrical and magnetic measurements: from 500 B.C. to the 1940s. Institute of Electrical and Electronics Engineers, Inc., New York, New York.


Kostenko, A. A., Nosich, A. I., and Tishchenko, I. A. (2001). Radar prehistory, Soviet side: three coordinate L-band pulse radar developed in Ukraine in the late 30's. Proceedings of the IEEE Antennas and Propagation Society International Symposium, Boston, Massachusetts, 8-13 July 2001. Institute of Electrical and Electronics Engineers, New York, New York. 4: pp. 44-47.


Maxwell, J. C. (1861). On physical lines of force [online]. Philosophical Magazine and Journal of Science. Available from http://goo.gl/nfk1Fk [last accessed 13 January 2015].


Maxwell, J. C. (1865). A dynamical theory of the electromagnetic field [online]. Philosophical Transactions of the Royal Society, 155: 459-512. doi: 10.1098/rstl.1865.0008.


Olhoeft, G. R. (1996). Application of ground penetrating radar. Proceedings of the 6th International Conference on Ground Penetrating Radar, Sendai, Japan, 30 September - 3 October 1996. Institute of Electrical and Electronics Engineers, New York, New York. pp. 1-4.


Olhoeft, G. R. (2002). Applications and frustrations in using ground penetrating radar. IEEE AESS Systems Magazine, 17(2): 12-20.


Page, R. M. (1962). The early history of radar. Proceedings of the Institute of Radio Engineers, 50(5): 1232-1236.


Stern, W. (1929). Versuch einer elektrodynamischen Dickenmessung von Gletschereis. Gerlands Beiträge zur Geophysik, 23: 292-333.


Young, H. D. and Freedman, R. A. (2004). Sears and Zemansky's university physics: with modern physics, 11th edition. Pearson Addison Wesley, San Francisco, California.


Thursday, 18 December 2014

Winter Condensation and Frosty Windows

Another winter is upon us, and for many Canadians, that means having to deal with condensation and frosting on windows. Besides being a nuisance, excessive condensation on windows may eventually cause damage to window frames or finishes near the window opening. What causes condensation and frosting, and what can be done to prevent it?

Though occasionally beautiful, frost on a window is often a problem.

Condensation forms on surfaces that are colder than the dew point temperature (DPT) of the air. The maximum amount of water vapour that can mix with air depends on the temperature: warmer air can hold more moisture. Air at 20 °C can have nearly 23 times more water vapour than air at -20 °C! If air cools below its DPT, water vapour will leave the air to form condensation.

In the past, houses tended to stay dry because they were drafty and moisture in the home was quickly carried outdoors. Modern homes are constructed to minimize air leakage. This saves on heating costs, but also traps more moisture inside the home. At the same time, windows are relatively poor thermal insulators. A significant proportion of heat loss through windows is actually resisted by a thin layer of air that clings to the interior side of the window, rather than by the window material. The window material is analogous to the exterior siding of a wall and the air layer is analogous to the wall’s insulation. Anything outside the insulation gets very cold during winter. When warm, moist air from inside the home reaches one of these cold windows, the air cools and vapour condenses or freezes.
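To put numbers on this, the dew point can be estimated from the indoor temperature and relative humidity. The sketch below uses the Magnus approximation over water with one common coefficient set; the room conditions in the example are hypothetical:

```python
import math

def dew_point(temp_c, rh_percent):
    """Approximate dew point temperature (°C) via the Magnus formula
    over water, with coefficients b = 17.62 and c = 243.12 °C."""
    b, c = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# A 21 °C room at 50% relative humidity:
print(round(dew_point(21, 50), 1))   # ≈ 10.2: surfaces below ~10 °C sweat
# Dropping the humidity to 30% lowers the dew point considerably:
print(round(dew_point(21, 30), 1))   # ≈ 2.7: much harder for a window to reach
```

In other words, every bit of moisture you remove from the air lowers the temperature a window surface can safely reach before condensation starts.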

Most approaches to controlling condensation are simply measures to reduce the amount of water vapour in the home, which is equivalent to reducing the DPT. Here are some tips on reducing indoor humidity:

  • Turn off humidifiers.
  • Use a dehumidifier.
  • Use cold water for washing dishes and clothing.
  • Ensure dryers are properly vented to the exterior and that dryer ducts are not leaking.
  • Take quicker showers. Showering produces approximately 2.6 kg of vapour per hour (1 kg of vapour is equivalent to 1 L, or about 34 oz, of liquid water).
  • Use exhaust fans while cooking or showering. Cooking a meal for four people produces about 0.2 to 0.3 kg of water vapour on average.
  • Don’t boil water unnecessarily. Try brewing tea or preparing soup at a few degrees below boiling.
  • Store firewood in the garage or shed. Drying firewood produces 1 to 3 kg of vapour per day per cord of firewood.
  • Reduce the number of plants in the home. A typical house plant releases about 0.05 kg of water per day.
  • Open the windows or doors (at least once in a while) to increase ventilation. This will increase your heating bill, but it will also help remove moisture.
  • Install a heat recovery ventilator (HRV). Direct exchange of warm indoor air with cold outdoor air results in significant heat loss. HRVs perform this air exchange while reducing that heat loss by 75 to 85%.


Guidelines developed at the University of Minnesota recommend indoor humidity below 30% to control condensation when the outdoor temperature is -12 to -18 °C. However, it should be noted that low humidity can pose comfort and even health problems for some people. Complaints of chapped lips, dry skin, and dry nasal passages become increasingly likely as the indoor humidity drops below 30%. Low humidity also causes wood to shrink, which sometimes leads to warping or checking. Most flooring manufacturers recommend keeping humidity between 35% and 55% to protect hardwood floors. Other wooden items in the home like furniture and musical instruments may also be sensitive to low humidity. So if you can’t solve your condensation problems by reducing indoor humidity, there are still some other options that work by stopping moisture from reaching the cold surface or by ensuring the surface temperature doesn’t drop below the DPT. Here are some additional tips for controlling condensation:

  • Open drapes and blinds. This encourages air circulation at the window and keeps the window surface a little bit warmer.
  • Install a plastic film (window insulator kit). It is important to seal the plastic correctly so that air can’t leak around it. If the film leaks air it could actually exacerbate the condensation issues.
  • Install storm windows if not already present.
  • Check the window seals and take corrective action as necessary. In storm window assemblies, the inner window should be as airtight as possible and the outer window should be comparatively leaky, though not so leaky that it allows exterior air to chill the inner window.
  • Replace problem windows with more efficient models. Choose windows with a high Condensation Resistance Factor (CRF).


To summarize, condensation and frost form on windows because the windows are colder than the dew point temperature of the indoor air. Reducing vapour production and removing vapour from the home are often the best ways to eliminate condensation problems. If problems persist, repair or renovation work ranging from weather-stripping windows to replacing windows with more efficient models may need to be considered.

References
ASHRAE. Handbook of Fundamentals. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Atlanta, GA, 2009.

Hutcheon, N.B. and Handegord, G.O.P. Building Science for a Cold Climate. John Wiley & Sons, New York, NY, 1983.

Lohonyai, A.J. Frost fractals on a window, Personal photograph, November 2014.

Straube, J.F. and Burnett, E.F.P. Building Science for Building Enclosures, Building Science Press, Westford, MA, 2005.

TenWolde, A. and Pilon, C.L. “The Effect of Indoor Humidity on Water Vapor Release in Homes” in Thermal Performance of the Exterior Envelopes of Buildings X, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Atlanta, GA, 2007.

Trechsel, H.R. and Bomberg, M.T. (eds.) Moisture Control in Buildings: The Key Factor in Mold Prevention, 2nd edition, American Society for Testing and Materials, West Conshohocken, PA, 2009.

Saturday, 8 November 2014

Effects of Winter on Fuel Economy

If you're pinching pennies, or maybe just nerdy about your fuel consumption, perhaps you've noted that your fuel efficiency gets a fair bit worse during the winter. For example, my classic Oldsmobile averages about 16.0 mpg (14.7 L/100 km) in the summer and only about 12.2 mpg (19.3 L/100 km) in the winter.

There are many factors affecting your fuel economy, like the rolling resistance of the tires and the car's aerodynamic drag. Winter conditions (mostly just the cold temperatures) tend to negatively impact several of these influencing factors, as outlined below. 

Aerodynamic Drag
Aerodynamic drag is the resisting force that air applies on a moving object like a ball, car, or aircraft. Basically, the object crashes into the air molecules in front of it. The air has to be pushed out of the way and the drag force comes from that air pushing back against the object. Large, bluntly shaped objects with rough surfaces experience a lot more drag than smaller objects with smooth surfaces shaped to gently push through the air. And large drag forces mean more energy is needed to overcome them. Think of the difference between slicing through cheese with a brick versus a knife. 

Diagram of the typical aerodynamic forces acting on a moving car

The drag force on an object, like your car, can be quantified using the formula:

F_D = \tfrac{1}{2} \rho v^2 C_D A

where F_D is the drag force, ρ is the air density, v is the velocity, C_D is a dimensionless drag coefficient, and A is the frontal area. So what's the effect of winter on drag force?

Sure, it could be argued that your car has a bit more area if it's got a layer of snow and ice on it. Similarly, the snow and ice make your car's surfaces rougher, so it could be argued that the drag coefficient increases as well. But those are negligible effects and don't explain the loss of fuel economy when it's cold but your car is clean. The real culprit here is air density. Cold air is denser than warm air, so the drag force is simply higher during winter. At -10 °C, the drag force is about 12% larger than at +20 °C. Here's a figure showing the density of air versus its temperature, assuming 50% humidity and 200 m elevation above sea level. The magnitude of the density will decrease at higher humidity and/or elevation, but the trend as a function of temperature is essentially unchanged.

Cold air is denser than warm air and the aerodynamic drag force is proportional to the air density.
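The figure's trend can be reproduced with the ideal gas law. Here's a minimal Python sketch, assuming dry air (the 50% humidity mentioned above shifts the values only slightly) and an illustrative car with C_D ≈ 0.33 and a frontal area of 2.2 m² (assumed values, not measurements):

```python
R_DRY_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(temp_c, pressure_pa=98950.0):
    """Air density (kg/m^3) from the ideal gas law; default pressure ~200 m elevation."""
    return pressure_pa / (R_DRY_AIR * (temp_c + 273.15))

def drag_force(temp_c, speed_kmh, c_d=0.33, area_m2=2.2):
    """Aerodynamic drag F_D = 1/2 * rho * v^2 * C_D * A, in newtons."""
    v = speed_kmh / 3.6  # km/hr to m/s
    return 0.5 * air_density(temp_c) * v**2 * c_d * area_m2

# Drag scales directly with density, so the winter increase is just the density ratio.
increase = air_density(-10.0) / air_density(20.0) - 1.0
print(f"Drag at 100 km/hr, +20 C: {drag_force(20.0, 100.0):.0f} N")
print(f"Winter drag increase (-10 C vs +20 C): {increase * 100:.1f}%")
```

The dry-air estimate gives an increase of about 11%, in the same ballpark as the roughly 12% figure quoted above.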


Rolling Resistance
Your tires aren't perfectly circular: they form a flat area where they contact the road. 

Deformed shape of a tire on a rigid surface during static (constant speed), acceleration, and braking conditions.

As the wheel turns, the tire is continuously being flattened out at the contact point (and returning to circular coming off the contact point). The rubber in tires exhibits an interesting material property known as hysteresis: the material behaves differently depending on whether it is being loaded or unloaded.

Stress-strain relation of a material with hysteresis. The path from A to B depends on whether the stress is increasing or decreasing. The shaded area between the two paths represents the energy lost due to hysteresis.

It takes some input of mechanical energy to cause the tire to go from the natural shape to the deformed shape. If rubber had no hysteresis, all of the mechanical energy would be returned during the reverse process, going from the deformed shape to the natural shape of the tire. But what we really see is that more energy is input during loading than is output during unloading. This loss of energy translates to lost fuel economy because some of the work done by the engine has to feed the energy consumption of the tires. Most of the rolling resistance from your wheels is from this energy absorbing behaviour of the material.

If you're wondering where the extra energy disappears to, the answer is heat. As the rubber keeps absorbing energy, receiving more mechanical energy than it gives back, it starts to heat up. This is why your tires are warm after a long drive. Of course, the temperature of your tires cannot increase indefinitely because eventually a steady-state heat transfer rate is achieved. The rate at which heat dissipates to the air increases with both increasing tire temperature and car velocity.

Okay, so how do winter conditions affect hysteresis? Well, the effect is more pronounced when the rubber is cold. So in the winter, tire rolling resistance is higher, especially if you're only taking short trips (remember that your tires heat up as you drive). There's roughly a 1% increase in rolling resistance for every 1 °C drop in temperature. But there's another cause. The hysteresis effect also increases as your tire pressure decreases: if your tire pressure is low, more deformation occurs in the tire as it rolls. And when the temperature of a gas decreases at constant volume, the gas pressure drops in the same proportion (see Amontons' Law). That means your tire pressure drops when it's cold. Based on typical operating pressures and typical rolling resistance curves for tires on hard surfaces, rolling resistance increases by roughly 4% for every 1 psi drop below the recommended operating pressure, and a typical car tire loses approximately 1.5 psi for every 10 °C drop in temperature. So a 30 °C drop in temperature might increase your rolling resistance by about 50% if you don't remember to keep your tire pressure at optimum!
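The back-of-envelope numbers above can be combined in a short sketch. The rate constants (1% per °C for hysteresis, 4% per psi of underinflation, 1.5 psi lost per 10 °C) are the rough figures quoted in the text, not measured values:

```python
def rolling_resistance_increase(temp_drop_c, reinflated=False):
    """Approximate % increase in tire rolling resistance for a temperature drop."""
    hysteresis = 1.0 * temp_drop_c  # ~1% per degree C of cooling
    if reinflated:
        # Topping up the air removes the underinflation penalty entirely.
        return hysteresis
    pressure_drop_psi = 1.5 * (temp_drop_c / 10.0)  # ~1.5 psi per 10 degrees C
    underinflation = 4.0 * pressure_drop_psi        # ~4% per psi below optimum
    return hysteresis + underinflation

print(rolling_resistance_increase(30.0))                   # 48.0, i.e. "about 50%"
print(rolling_resistance_increase(30.0, reinflated=True))  # 30.0 if you add air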

If your tire looks like this, fuel efficiency maybe isn't your top concern.

However, you can easily counter the effect of temperature on your tire pressure by simply adding a bit of air in the winter. 

A related source of rolling resistance comes from the road itself. The road surface deforms slightly under the pressure from the wheels of your car. Because of that deformation, your wheels are actually always moving out of a small depression as you drive, and so the road is a source of rolling resistance.

The contact surface for a rigid wheel on a deformable road. 

Hard, stiff road surfaces like concrete don't offer as much resistance as, say, gravel or your neighbour's lawn, but there's still some resistance there. Snow and slush on the roads increase rolling resistance to some extent, contributing to further reductions in winter fuel economy.

Aerodynamic drag also contributes to rolling resistance because a layer of air close to the tire has to be dragged around the wheel as it turns. As described in the previous section, drag increases because air density increases as temperature decreases. Therefore, the rolling resistance contributed by aerodynamic drag also increases as the temperature drops in the winter.

Finally, slippage contributes to the total energy losses from rolling. Even on clean, dry roads, your tires experience a bit of slippage as you drive. The applied torque required at the drive wheels to maintain a certain speed increases as slippage increases, wasting energy. Obviously slippage gets worse in winter conditions when roads are wet, snowy, or icy. All of those extra moments where you're spinning the wheels without going anywhere start to add up to a bit of lost fuel economy.

Or maybe a lot of lost fuel economy.

To summarize this section, rolling resistance comes from three main sources: hysteresis, aerodynamic drag, and slippage. Winter conditions tend to increase losses from all three of these sources.

Idling
People idle their cars more in the winter. When it's really cold, people often start their cars and let them run for several minutes to allow the interior to heat up. They also are less likely to shut the car off if they have to wait somewhere briefly (for example, when picking the kids up from school) because they don't want to get cold waiting in the car. For all that extra time your car is idling you're getting 0 miles per gallon.

Lubricant Viscosity
All of the oils and greases keeping your car's various moving parts moving smoothly become more viscous as the temperature drops: engine oil, transmission fluid, differential fluid (if you've got a rear-wheel or all-wheel drive vehicle), etc. The more viscous the fluid, the higher its shear resistance, which is really just a fancy way of saying it gets harder to move things through the fluid. So when the automotive fluids are cold, your engine has to work harder to get all the parts moving, and that means more fuel consumption. You can partially counter the effect by switching to synthetic engine oil, or using a lower viscosity oil in winter (switching to 5W-30 instead of 10W-30, for example, as long as it's still okay for your engine).

A typical grade recommendation chart. Make sure you consult the grade recommendation chart or table for your particular vehicle before switching oils. Check your owner's manual or look it up online.

Higher Electrical Loads
Your car's cabin heater and blower fan require electricity to run. Ultimately, that electricity is produced by burning gasoline. So keeping your car all toasty warm on a cold day takes extra energy, and extra fuel. And that's not the only extra load. You've got the rear defroster, the windshield wipers, and the washer fluid pump too. Perhaps you've got some other fancy doodads, like heated mirrors, heated seats, and heated steering wheel. All the extra stuff you use exclusively (or at least more often) in the winter contributes to your reduced fuel economy.

Lower Engine Temperature
Your engine doesn't run very efficiently when it's cold. And the colder it is outside, the longer it takes for your engine to warm up. This doesn't mean you should let your car idle for 5-10 minutes before you head to work in the winter. Idling wastes more fuel than your cold engine does. Even on really cold days, your engine will be sufficiently warm after just 2 minutes of idling. It is recommended that you only idle for 30 seconds and then drive gently for the next 2 minutes in order to warm up the engine. Driving will warm the engine faster than idling. But even if you're not idling excessively, the cold starting temperature means your average fuel efficiency for a given trip will be lower in the winter. The effect is more pronounced for short trips because the engine is cold for a larger proportion of the total trip. You can counter this problem by parking in a garage or using an engine block heater. Research performed in Canada has shown that for short, simulated trips in an urban setting, using an engine block heater improved fuel efficiency (relative to not using one) by up to 10% at -20 °C and 25% at -25 °C.

Lower Operating Speed
When road conditions start to get slippery due to snow and ice, your available traction for stopping and cornering is reduced. Most drivers are at least competent enough to be cognizant of how ice will adversely affect their safety. Hence, they will slow down to compensate for the reduced traction. A car's optimal speed varies from model to model due to many different factors. However, broadly speaking, a car's fuel efficiency is at or reasonably close to optimum when driven at a steady speed within the range of 50 to 90 km/hr (31 to 56 mph). As you start to deviate from that range, fuel efficiency typically drops off rapidly. So when the snow flies and you're driving only 35 km/hr because you don't much care for car accidents, your fuel efficiency takes a hit.

Winter Fuel
For reasons partly related to safety and partly related to engine operation, gasoline manufacturers sell a different blend of chemicals as gasoline in the winter than in the summer. Gasoline has to be readily vaporized in order to power the engine, but vapor trapped in the fuel delivery system can cause stalling and difficulty starting (known as vapor lock). Accordingly, manufacturers manipulate the gasoline's Reid Vapor Pressure (RVP, a measure of volatility) by changing certain additives, making the fuel more volatile so that your car will still run in the cold. Common winter additives are ethanol and butane. While this improves the volatility of the fuel, it also decreases the fuel's enthalpy of combustion because both butane and ethanol have lower energy content than octane (the main component of gasoline). The overall effect of the winter fuel additives is approximately a 1 to 3% reduction in fuel economy.
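To put a rough number on the energy penalty, here's a hypothetical back-of-envelope calculation for a 10% ethanol blend. The lower-heating-value figures (in MJ per litre) are typical literature values, not from this post, and real winter blends vary; the result is an upper bound on the fuel economy effect, since it ignores how the engine responds to the different fuel:

```python
# Approximate lower heating values per litre (illustrative literature figures)
LHV_MJ_PER_L = {"gasoline": 34.2, "ethanol": 21.2}

def blend_energy(volume_fractions):
    """Volumetric energy content (MJ/L) of a blend, given volume fractions."""
    return sum(LHV_MJ_PER_L[name] * frac for name, frac in volume_fractions.items())

e10 = blend_energy({"gasoline": 0.90, "ethanol": 0.10})
penalty = (1.0 - e10 / LHV_MJ_PER_L["gasoline"]) * 100.0
print(f"E10 energy content: {e10:.1f} MJ/L, penalty vs straight gasoline: {penalty:.1f}%")
```

With these assumed numbers the blend carries roughly 4% less energy per litre, consistent in magnitude with the 1 to 3% fuel economy reduction quoted above.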

Summary
There are many factors that contribute to a reduction in fuel economy in winter, including:
  • Cold temperatures increase aerodynamic drag on the car.
  • Cold temperatures increase hysteresis losses in tires, increasing rolling resistance.
  • Cold temperatures reduce tire pressure, increasing rolling resistance.
  • Cold temperatures increase aerodynamic drag on the tires, increasing rolling resistance.
  • Slush, snow, and ice increase tire slippage.
  • Drivers typically idle longer when it's cold.
  • Automotive fluids are more viscous at cold temperatures, effectively increasing the net friction force to be overcome by the engine.
  • Drivers typically run heaters and defrosters in the winter, increasing electrical loads; hence, fuel consumption.
  • Cold engines are less efficient than warm engines.
  • For safety reasons, drivers typically slow down to speeds well below optimum when road conditions are poor.
  • Winter grade fuel contains additives that increase volatility but reduce energy content of the fuel.

While you can't escape all of winter's detrimental effects on your fuel economy, there are a few things you can do to mitigate them:
  • Use winter tires in the winter.
  • Maintain the manufacturer's recommended tire pressure.
  • Don't spin your wheels even faster when you want the car to go but you have poor traction.
  • Keep idling to an absolute minimum.
  • Use manufacturer approved synthetic or lower viscosity lubricants in the winter.
  • Minimize the use of your heaters and defrosters. Particularly if you don't have any passengers, it's generally more economical to just use a seat warmer than the cabin heater.
  • Where possible, avoid short, intermittent trips by combining them into longer trips. 
  • Use an engine block heater. To minimize your electricity usage, use a timer or only plug it in when necessary. Two hours is enough time for the block heater to do its job.
  • Park inside a garage when possible.

Additional References
Clark, S. K. and Dodge, R. N. (1979). A Handbook for the Rolling Resistance of Pneumatic Tires.
Wong, J. Y. (2001). Theory of Ground Vehicles, 3rd ed.
Fueleconomy.gov