
User:Sbalfour/sandbox

This article is about three interrelated phenomena: leakage flux, mutual inductance, and leakage inductance.

Inductance is a derivative aspect of a vector field, the magnetic field surrounding a current-carrying electric conductor. At a given instant it is a scalar quantity: the magnitude of opposition to the rate of change of current with respect to time. Since the rate of change of direct current is zero, inductance does not oppose direct current. Opposition to current results in a potential (voltage) across the conductor. Opposition to current in the electric circuit is proportional to the magnetic flux flowing in the incident magnetic circuit. Magnetic flux flows in the path of least magnetic reluctance ("resistance") to form the magnetic circuit. That path is a magnetic material if one is present, i.e. the magnetic core of an inductor or transformer. Most, but not all, of the flux flows in that path.

It is an artifact of spatial geometry that some of the flux has a path of lesser reluctance than that of the magnetic core: the magnetic field encircling the electric circuit is three-dimensional, and the magnetic path of the core only partially coincides with the magnetic field lines of force. Where it does, magnetic flux flows in that path forming a magnetic circuit, resulting in the self-inductance of an inductor and the shared mutual inductance of a transformer. Flux which does not flow in the magnetic circuit of the core does not contribute to mutual inductance, and is called leakage flux. This flux flows in a separate, shorter magnetic circuit through the surrounding air. Leakage flux is reactive, not dissipative, so its energy is not lost to the electric circuit: it presents an additive opposition to the flow of current in that circuit, called the leakage inductance. The additive nature of this opposition characterizes series inductance. Therefore the total inductance of the electric circuit (i.e., a winding) is the series composition of the self-inductance and the leakage inductance; their associated inductive reactances, for a sinusoidal alternating current of frequency f, form a voltage divider, and therefore add according to Kirchhoff's voltage law. When current also flows in the secondary, some incremental amount of the magnetic flux of the core will not incite current in that circuit, so a separate leakage inductance is associated with the secondary.
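A minimal worked form of that series composition (the notation is assumed here for illustration: L_m for the core-coupled mutual inductance, L_l for the leakage inductance, f for the frequency of the sinusoidal current):

    L_{total} = L_m + L_l, \qquad
    X_{total} = 2\pi f L_{total} = 2\pi f L_m + 2\pi f L_l = X_m + X_l, \qquad
    \frac{V_m}{V_{total}} = \frac{X_m}{X_m + X_l}

The last ratio is the voltage divider mentioned above: the fraction of the winding voltage that appears across the mutual (coupled) inductance rather than the leakage inductance.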

Transformers may also lack a magnetic core ("air-cored transformer"). In this case, the "core" does not provide a path of lower magnetic reluctance for the flow of flux than any other path, so most flux does not travel through the core. Air-cored transformers therefore have large leakage flux and resultant large leakage inductance. A transformer may be air-cored when strict linearity is more important than mutual inductance, such as for radio-frequency small-signal amplification.

According to the definition, other inductive devices and components like chokes also have leakage inductance, i.e. leakage flux that does not flow in the principal magnetic path of the core, and inductance incident to that flux. However, such leakage inductance simply becomes part of the self-inductance of the component, and does not need to be distinguished for any practical purpose.

The leakage inductance is actually distributed throughout the winding, in a similar way to parasitic interwinding capacitance. Thus, leakage inductance depends on geometry of windings and the core. Some transformers, like coupling transformers for single-ended audio amplifiers, have an air-gap in the magnetic path of the core to raise the saturation current threshold in the presence of a DC offset current. An air-gapped core has a higher magnetic reluctance, so proportionally less flux will flow in the core and more will flow in leakage paths, resulting in increased leakage inductance.

Leakage inductance of a transformer reduces its mutual inductance and therefore also its coupling factor, the ratio of measured mutual inductance to the mutual inductance of an ideal transformer.
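In symbols (a standard definition, assuming M for the measured mutual inductance and L1, L2 for the primary and secondary self-inductances; an ideal transformer has M = sqrt(L1*L2)):

    k = \frac{M}{\sqrt{L_1 L_2}}, \qquad 0 \le k \le 1

Leakage flux reduces M below sqrt(L1*L2), so k < 1 for any real transformer.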

Lead


Inductance is a property of an electrical conductor which opposes a change in current. It does so by storing and releasing energy in a magnetic field surrounding the conductor when current flows, according to Faraday's law of induction. When current rises, energy (as magnetic flux) is stored in the field, opposing the rise and causing a drop in potential (i.e., a voltage) across the conductor; when current falls, energy is released from the field, supplying current and causing a rise in potential across the conductor.

The inductance of a conductor is a function of its geometry. A straight narrow wire has some inductance; a conductor which is configured such that sections of its magnetic field overlap (for example a coil) may have more or less inductance than a straight wire. The inductance of a conductor may be greatly increased by its adjacency to a magnetic material like iron. In this case, a magnetic field is induced in the iron, and it also stores and releases energy in response to change in current, so that the opposition to change in current from the combined geometry is much greater than that of the conductor alone.

A conductor with a fluctuating current adjacent to another conductor (or another portion of itself) will induce, via its incident magnetic field, a fluctuating current in the other conductor or portion of itself; the effect is reciprocal, and is called mutual inductance. Mutual inductance is the basis of operation of a transformer. To distinguish this from the inciting inductance, the inciting inductance is referred to as self-inductance.

Inductance is one of three fundamental properties (along with resistance and capacitance) of electric conductors and components. The circuit component intended to add inductance to a circuit is called an inductor. It is usually a coil of insulated wire, and may have a core of iron or other magnetic material. Inductors are also called electromagnets when their magnetic properties are of more concern than their electrical ones. Inductance in circuit analysis is usually represented by a related quantity called inductive reactance, which is part of the impedance of the circuit. For AC circuits, the inductive reactance of an ideal inductor is a linear function of frequency, though at high frequencies like RF, parasitic and nonlinear effects dominate.
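The standard relation, with L the inductance and f the frequency, and a small worked example:

    X_L = 2\pi f L; \qquad L = 10\ \mathrm{mH},\ f = 1\ \mathrm{kHz}:\quad
    X_L = 2\pi \cdot 1000 \cdot 0.01 \approx 62.8\ \Omega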

Inductance has a variety of functions and effects in electric circuits including filtering, energy storage and current regulation. It may be a favorable or unfavorable property: electric generators and motors depend on it for their operation; but in electric transmission lines, it reduces capacity.

Inductance is measured in units of henries, named for Joseph Henry, who discovered inductance independently of Michael Faraday in the 1830s; its customary symbol, L, is designated in honor of physicist Heinrich Lenz.

Geometry of inductance


Every physical object through which electric current may flow has inductance which is dependent on its shape. More specifically, the inductance of the object depends on the shape of the path of current through the object. Different kinds of current (i.e. different frequencies) may take different paths through the object, and those paths may (and generally will) have different associated inductances. Inductance also depends on the shape of a current-carrying conductor's magnetic field. Every current-carrying electrical conductor has an incident magnetic field; the energy of the magnetic field in the form of magnetic flux travels in an oriented direction in a magnetic circuit. Just as electric current travels from a place of high electric potential along the path of least electrical resistance to a place of low potential, magnetic flux travels from a place of high magnetic tension along the path of least magnetic reluctance to a place of low tension. Since inductance is a result of energy transfer between the electric and magnetic circuits, their relation to each other shapes the inductance of the electric circuit.

The simplest case of inductance is a straight thin wire. Electric current flows in one direction through it, and at least for low frequencies and low currents, it may be considered as a one-dimensional object whose inductance is log-linearly proportional to its length. The magnetic field around such a wire creates a magnetic circuit through the air which may be thought of as a "ring" around the wire. The magnetic field propagates at the speed of light outward uniformly in all directions from the current. It has a uniform but steep gradient (the farther from the current, the weaker it gets), so that in practice only the portion of the magnetic field close to the conductor need be taken into account. The flow of flux in the magnetic circuit is according to the right-hand rule: if one's right thumb points in the direction of current flow, and one curls the fingers of that hand, the fingers point in the direction the flux flows in the magnetic circuit.

Consider two points P1 and P2, each with magnetic charge, though not necessarily identical charges, very much like magnetic monopoles: "ideal point magnets". It may matter whether their charges are of same or opposite polarity. As a mathematical abstraction, let's assume they have very convenient linear properties: the magnetic force of attraction declines linearly with distance from either point. If we place an ideal, perfectly permeable point magnetic object on the axis between the point magnets, the magnetic moment relative to that object will vary depending on its relative distances from each point magnet. The minimum magnetic moment, the only place where the object will be at rest, is at a distance from P1 of d·f(P1)/[f(P1)+f(P2)], where d is the distance from P1 to P2 and f(P) denotes the strength of point magnet P. The simplest case is when f(P1) = f(P2): the object comes to rest midway between the points, at d1 = d2 = d/2, and the net magnetic moment is f(P1)/d1 + f(P2)/d2 = 2·f(P1)/d1.

But if the magnetic force declines as the square of the distance from its source, the rest position instead satisfies f(P1)/d1² = f(P2)/d2², so that d1/d2 = sqrt(f(P1)/f(P2)).
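In the inverse-square case the rest position can be written out explicitly (a small worked step, using the same notation):

    \frac{f(P_1)}{d_1^2} = \frac{f(P_2)}{d_2^2}, \quad d_1 + d_2 = d
    \;\Rightarrow\; d_1 = d\,\frac{\sqrt{f(P_1)}}{\sqrt{f(P_1)} + \sqrt{f(P_2)}}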

Applicability of derived units


The derived units of the SI are ultimately definable in terms of the seven base units. However, for computation, as well as comprehensibility, their definition is often by convention, in terms of higher-order derived units. For example, a pascal, the unit of pressure, is usually defined as a newton/metre², where the newton is itself a derived unit.
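Reducing the pascal all the way to base units makes the layering visible:

    1\ \mathrm{Pa} = 1\ \mathrm{N}/\mathrm{m}^2
    = 1\ \frac{\mathrm{kg\cdot m/s^2}}{\mathrm{m}^2}
    = 1\ \mathrm{kg\,m^{-1}\,s^{-2}}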

Even though a unit may have a usable definition in terms of base units, it may conventionally be applied only to specific situations.

Force and energy

joule

newton

pascal

Electromagnetic

coulomb

volt

farad

ohm

A siemens is what used to be called a 'mho' in the age of tube electronics and for decades after. It is an inverse ohm, a measure of transconductance. In the age of tubes, emission and transmission of electrons was one-way across the void of the tube; nothing could be transmitted or conducted backwards through it. While the internal components of tubes had at least marginal resistance measurable in ohms, the measure of a tube was its facility in conducting electrons across the void, a "one-way" inverse-resistance quantity. Transconductance was the inverse of resistance, so its units were ohm⁻¹, rather conceptually called 'mhos'. Today, siemens are useful in an analogous way for describing the properties of semiconductor components like transistors. Siemens are not ordinarily used for describing inverse ohmic resistances as "conductance".

watt

henry

weber

tesla

Radiation and luminance

lumen

lux

gray

sievert

becquerel

Other units

katal

A degree Centigrade (Celsius) is the same size as a kelvin, but we do need to distinguish, when referring to absolute temperature, whether the scale is Celsius or Kelvin. The degree Centigrade is an anomaly in that, as a temperature increment, it is interchangeable in all circumstances with a kelvin.

A radian is a scalar measure of length properly considered as a proportion of the circumference of a circle. Analogously, a steradian is a scalar measure of area properly considered as a proportion of the surface of a sphere. It is not proper to refer to an arbitrary scalar (i.e. a dimensionless constant) as a "radian" if it is not associated with a cyclical phenomenon which can rationally be considered as tracing the circumference of a circle, the way a point on the circumference of a rolling circle traces a sine wave in the plane of the circle. This is especially confounding because measurement scales are calibrated in alternate units of degrees, radians, gradians, etc. The radian scale defines a real number line whose values may be used like any other real number with the same result.
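In symbols, these are the standard ratio definitions, with s an arc length, A a spherical surface area, and r the radius:

    \theta = \frac{s}{r}\ \mathrm{(radians)}, \qquad \Omega = \frac{A}{r^2}\ \mathrm{(steradians)}

A full circle subtends 2π radians (s = 2πr), and a full sphere subtends 4π steradians (A = 4πr²).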

A hertz is defined as second⁻¹, but we do not refer to the time taken by a clock tick (1 second) as 1 Hz⁻¹ even though it is a periodic phenomenon. A hertz is used to refer to something that is periodic and associated with a wave, like sound in air, waves in water, alternating current in an electric circuit or electromagnetic waves in a vacuum or other medium.


This section is a replacement for the existing sections Electrical units and A coherent system. I think I have a much better focus and presentation than the originals. The originals focused on electrical units, how many, derived from what, named what, even the persons they were named for. They were buckets of bolts - it doesn't matter how many nuts and bolts or what sizes they were. If you can't fit them together, you can't build anything. What are we talking about here? It's how we got from three to four dimensions of measurement, and also how to elegantly integrate that fourth dimension. Nice properties fall out of that. Things like the International system and how the volt, ohm, etc got named are dead-end detours - they contribute nothing to the development and should probably be moved into footnotes. I might suggest that the preceding level 2 section, if we get to it, be titled something like The Gaussian second and mechanical systems. Sbalfour (talk) 23:50, 3 January 2018 (UTC)

History of the metric system: The fourth dimension of measurement

Electric and magnetic constants

Every physical medium, like air, water and any substance has properties that determine how well it conducts electricity, and how readily it may be magnetized. These properties are constants which depend on how readily the material's atoms 'lend' electrons and how easily its atoms' electrons may be magnetically aligned. The electric constant is called permittivity and the magnetic constant, permeability; they are for practical purposes independent.

In an everyday way, the electric constant may be experienced as the ability of static electricity to leap a small gap in the air when there is no apparent conductor for the charge, and the magnetic constant is seen as iron filings approaching a magnet through the air when there is no mechanical force propelling them.

Analogously, the electric and magnetic forces act across a vacuum. But a vacuum lacks electrons, matter and energy, so these constants of permittivity and permeability with respect to a vacuum are irreducible properties of space itself, properly referred to as spacetime, the fabric of existence.[1] Because an electric charge cannot easily leap across a vacuum, nor can a magnetic force, these constants for a vacuum are tiny.

The electric and magnetic constants of a medium which determine its useful electrical and magnetic properties, in particular, those of a vacuum, were discovered, measured and given defined magnitudes as part of the definition of electrical and magnetic quantities in the early 19th century.

The non-extensibility of mechanical systems


In the mid 19th century, contingent on the discovery of Ampère's law in 1823, a system of Electro-Magnetic Units (EMU) was defined embodying the magnetic constant. A unit of electrical power was derived, and equated to an established unit of mechanical power which we now call a watt. At about the same time, another system of Electro-Static Units (ESU), based on the 18th century Coulomb's law, was defined which embodied the electric constant. A unit of electrical energy was derived, and equated to an established unit (erg) of mechanical energy. The units of the EMU system, also called the 'absolute' system, were designated the abvolt, abohm, etc. The units of the ESU system were designated statvolt, statohm, etc., and they had somewhat different magnitudes than those in the EMU system.

In 1856, a remarkable relationship between the electric and magnetic constants was discovered by Weber and Kohlrausch:[2] the inverse of the product of their (at that time non-dimensional) magnitudes was the speed of light squared:[3][4] c² = 1/(ε₀μ₀), where c is the speed of light, ε₀ is the electric constant and μ₀ is the magnetic constant. This simply reflects the fact that light itself is an electromagnetic wave that propagates through space as both a magnetic and electric field. So there are not two independent properties, but one, with two manifestations.
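A numeric check using the classical SI values of the constants:

    c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}}
    = \frac{1}{\sqrt{(8.854\times10^{-12}\ \mathrm{F/m})(4\pi\times10^{-7}\ \mathrm{H/m})}}
    \approx 2.998\times10^{8}\ \mathrm{m/s}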

Another system of electrical and magnetic units embodying the unified electric and magnetic constants was defined, termed the 'gaussian' system. It used EMU units for magnetic quantities and ESU units elsewhere. Though the gaussian system united the EMU and ESU systems' units, so that accurate theoretic conversions between them were possible, the more fundamental problem remained: the gaussian units, as well as the predecessor EMU and ESU units, could not be plausibly expressed in mechanical dimensions. Attempts to do so based on the analogies of mechanical work and energy resulted in anomalies like resistance having units of velocity in one system and inverse velocity in another; volts in one system had the same units as amperes in the other; and self-inductance was in units of length.

A fourth system, with electrical units adopted from the EMU system, but scaled for use in telegraphy, was also defined late in the 19th century, termed the "practical" system of units. It went on to become formalized as the International system, and its units were called the international volt, international ohm, etc. It was the most applicable of the early electromagnetic systems, and destined to live on into the middle of the 20th century. The magnitudes of its electrical units were adopted for the SI in 1948.

The electric and magnetic constants in these systems were dimensionless, yet they had magnitudes which must apparently represent something, and that was identified as the root of the problem.

The dimensions and properties of a coherent system


In the early 20th century, an Italian professor, Giovanni Giorgi,[5] demonstrated that there must be another dimension of measure, independent of the established system of mechanical measures, and that that dimension was an electromagnetic one. The inclusion of this fourth dimension gave the electric and magnetic constants magnitudes in this dimension, and brought the EMU, ESU and mechanical systems together. The system so defined had the property of coherence, in that derived quantities had ratios of unity, so that awkward conversions and constants of proportionality were unnecessary. The system was based on the metre and kilogram defined by the 1875 Treaty of the Metre and the resulting 1889 physical standards, and Gaussian second along with an unspecified electromagnetic unit, and became known as the MKS or MKSX system.

[Heaviside-Lorentz system as basis of rationalization]

[decimalization and rationalization need to be covered]

[that MKS borrowed decimalization and base/derived unit structure from earlier cgs system needs covered]

The electromagnetic unit of measure

The duality of electric and magnetic constants of spacetime is the dimensional manifestation of the identity of electromagnetism as one of the four fundamental forces of nature.[6] The magnitudes of these constants are not in mechanical dimensions, but in an orthogonal electromagnetic dimension. That dimension may be quantified in a base unit among any of the electric or magnetic units available (volt, ohm, amp, gauss, maxwell, etc); all other electric and magnetic units are then determined according to the laws of physics, particularly Ohm's law. The units are symmetric in that respect; no one will serve better than any other, and no unit is more fundamental than another. But only one is needed to define the dimension.

The dimension of electromagnetism together with the mechanical dimensions of mass, length and time forms an integral system of electromechanical dimensions whose units are capable of measuring any known quantity.

In the mid nineteenth century, telegraphy was the major application of electrical technology, and the principal impediment to its extension, due to the low available electromotive force of voltaic cells, was the resistance of some length of copper cable. So a unit of resistance, the ohm, was defined in terms of a fixed length of cable and the voltage of a Daniell cell, and was regarded as "fundamental". Unit quantities of the other units were derived according to Ohm's law, etc. Early MKS systems made the implicit assumption that the ohm was the unit of electromagnetism. Much later, in the mid-20th century, the ampere, a unit of electric current, was selected as the base electromagnetic unit for the SI.

  1. ^ spacetime incorporates notions of three spacial dimensions and one time dimension; these dimensions do not describe the same phenomena as the dimensions of measurement
  2. ^ Weber, W., and Kohlrausch, R., “Elektrodynamische Maassbestimmungen insbesondere Zurueckfuehrung der Stroemintensitaetsmessungen auf mechanisches Maass”, Treatises of the Royal Saxon Scientific Society, Volume 5, Leipzig, S. Hirzel, (1856)
  3. ^ Kirchhoff, G., “On the Motion of Electricity in Conductors”, Annalen der Physik, Volume 102, p.529 (1857)
  4. ^ Clerk-Maxwell, J., “On Physical Lines of Force”, Philosophical Magazine, Volume XXI, Fourth Series, London, (1861)
  5. ^ G. Giorgi, Unità razionali di elettromagnetismo, Atti dell'Assoc. elettr. Ital. 5, 402-418, 1901
  6. ^ along with gravitation and the strong and weak nuclear forces

Timeline of metric systems

Timeline of metric systems[1]
Date Event
3K BC origin of sexagesimal second based on astronomical observations
1656 Christian Huygens invents pendulum clock
1670 Mouton's use of fractional arcs of the earth as unit of length
1687 Newton's gravitational oblate spheroid model of earth
1740 Lacaille's survey of the French meridian
1742 Centigrade scale defined by Anders Celsius
1792+ Mechain and Delambre's survey of earth's meridian (thru 1798)
1799 French metric system adopted by law; first kilogram and metre artefacts
1832 Gaussian second and 1st system of 3-dimensional 'mechanical' units
1830s evolution of EMU system
1848 Lord Kelvin sets bottom of Centigrade scale at −273° or 0° Kelvin
1860 candlepower defined by English law based on whale wax candle
1861 definition of ESU system by BAAS
1873 definition of CGS by BAAS based upon Maxwell & Kelvin's proposals
1875 Treaty of the Metre and birth of CGPM
1889 standard metre and kilogram artefacts fashioned
1893+ heyday of International electrical units (thru 1948)
1895 Jean Baptiste Perrin defines Avogadro's number of units as gram-molecule (mole)
1901 electromagnetic dimension and definition of Giorgi's MKS system
1948 Practical System of Units working draft of SI
1960 Introduction of the SI by CGPM; standard metre artefact retired
1968 ephemeris second redefined in terms of Cesium 133 microwave frequency
1983 speed of light defined exactly in terms of meter
2019 kilogram redefined by watt-balance and standard kilogram artefact retired

Brief history of the metric system


[proposed lead] or A brief history of the metric system


In the Age of Enlightenment following the Renaissance, natural philosophers defined units of measure in terms of physical phenomena: the length of a pendulum with a "swing" of one second was the proposed unit of length. The second as a fraction of a fraction of the day, they borrowed from astronomical observations of the ancient Sumerians of Mesopotamia, and their sexagesimal (base 60) counting system. One tenth of a unit length - a ratio that would ever after characterize the decimal metric system - cubed, defined a volume, the litre. The weight of that volume of water was the kilogram. A degree of temperature was a centi-grade, a hundredth of the gradient of the temperature range of liquid water, and the Celsius scale in units of it was named for the 18th century Swedish astronomer who devised it. Later, in pre-industrial England to facilitate uniform gaslamp lighting, a law defined the brightness of a then-common whale wax candle as a standard of illumination, a candlepower.

In the French revolution which initiated the Napoleonic Republic, men of science brought order to the chaos which was the French Ancien régime by defining the metre as a fraction of Earth's circumference, and casting the metre and kilogram in immutable precious metal to provide a durable and accessible standard for the sciences and trades. With these, the first metric system became the law of France at the turn of the century. Thence, Spain, in order to facilitate trade, adopted the French system, and soon thereafter, most of the European community.

But the metre/kilogram units were inconveniently large for the sciences, so another system grew up in the latter half of the nineteenth century, which used a hundredth of the metre - a centimetre - and a thousandth of the kilogram - a gram - along with the sexagesimal second as the units; it was called the centimetre/gram/second or cgs system of units. Many derived units were defined for use in the sciences, such as those for power and energy. That was a time of great advancement in electromagnetism due to the theories of Faraday, Maxwell and others, and necessary units were defined in different ways, so that chaos again reigned upon the system of measures.

At the dawn of the industrial era, an ersatz unit of electric current, the ampere, related to the potential of a Daniell voltaic cell and unit resistance of a length of telegraph cable, was defined so that ratios between the various electromagnetic as well as other units were unity. A coherent system of units based on the French artefacts which had since been refashioned in the Treaty of the Metre, along with the second and ampere, came into being, and was the Metre/Kilogram/Second or MKS system of measures. The system lay fallow for half a century, while systems in use were a mixture of customary, like the British Imperial system and U.S common measures, cgs systems in the sciences, and a few MKS systems in international trade.

In mid-century, the international organization responsible for maintaining standards of measurement drafted a new proposal based on the 1901 MKS units, plus degree Kelvin - a degree centigrade of a scale starting at absolute zero - and the nineteenth century candlepower - now renamed the candela - as well as a comprehensive set of 16 derived units of electromagnetism, irradiance, power, energy and others. The system was finalized in 1960, called the International System of Units or SI, and was the birth of the modern metric system. The system was subsequently adopted by most of the industrialized world except the United States.

In the more than half century since the birth of the SI, a seventh base unit, the mole - shorthand for a quantity "molecular weight in grams" long used by chemists to titrate reactions - and six additional derived units have been added to the roster. The base units, except the kilogram, have been defined and redefined in terms of physical constants of nature like the speed of light, which are invariant to a high degree and can be measured with great accuracy. Several relevant constants of nature in addition to the speed of light, have been determined very accurately for this purpose, and may be assigned exact values in the near future. The result will be that the standards of measure so defined provide precision tools for the sciences on both large and small scales, and for industry, trade, and general use.

History of the metric system: overview


The history of the metric system is the nearly four-hundred-year saga, since the Renaissance, of devising and deploying integrated and extensible systems of measurement for useful natural quantities. The story includes theories of electromagnetism, mechanics and thermodynamics, actions of standards organizations and governments, and fortuitous events. Science, commerce and useful measures evolved together; revolution and war, and cultural zeitgeist as well as lethargy, shaped them. The meaningful events centered around integration of disparate units into rational systems and the realization of standards for them. Over that history, the first intuitive dimensions of mass and length became, of convenience and necessity, the current system of electromechanical and human-perceptual dimensions, realized as a set of coherent[2] base and derived units.

  1. ^ The table lists scientific experiments or events directly defining units or systems and their realizations. Events relating to adoption of systems and incorporation of units into formal definitions (like incorporation of the mole into the SI in 1971) are omitted; these are administrative or socio-cultural events, not scientific ones. Some dates may be interpolative because the event spanned a period around that time.
  2. ^ ratios of 1 between the units

Overview of history of the metric system


The history of the metric system is the history of efforts since the Renaissance to quantify useful dimensions of our physical world and to define unit measures of them, to bring the units together as practical systems, and get the systems into general use to facilitate science, industry and trade.

The early units, into the 18th century, were taken from natural phenomena, like the properties of water, the rotation of day and night, and distances upon the earth. For convenience, man-made objects were fashioned to act as references for some of them: metre bars, weights, and clocks.

Advances of science and the demands of commerce during the 19th century and into the early 20th century brought a deeper understanding of the relationships between diverse types of quantity and the need for a more integral and extensible system of units. This led to a small set of units and rules to combine them, with units of length, mass and time describing all "mechanical" quantities, including the derivation of units like acceleration, power and momentum. The development of electromagnetism resulted in new units being defined, but which did not share in the simple relationships of the system of mechanical dimensions. In the late 19th and early 20th centuries, the need for an electromagnetic unit was recognised. Decimalisation and a concept called coherence allowed the derivation of units of any quantity known, all with regularity and with 1 as the only conversion factor.

Adoption of the new system was slowed by the human resistance to change despite its evident advantages, and took over a half century. A body, the Conférence générale des poids et mesures (CGPM), was created and tasked with standardising the system of units, which became known as the International System of Units (SI) in 1960. It has become the dominant system of weights and measures in use today.

In the half century since adoption, defining precise measures without the vagaries of natural phenomena and man-made artefacts has become possible using only invariant relations between dimensions of the physical world, like the speed of light which relates distance and time, to produce units defined with unprecedented precision and reproducibility, which are realised as everyday units such as seconds and kilograms for science, trade, and general use.

An alternate history of the metric system


The history of the metric system is the history of calibrating our world in terms of our heritage as sentient beings in a world of sight and sound and touch.

In the dark times, the dimensions that mattered were the dimensions of life rather than world, those of earth, air, fire and water: that which we breathed and drank, were warmed by, and which provided food and shelter. When the darkness lifted, men looked outward, to the world. Men needed a divorce from alchemy and divinity as witnesses of that world. We wanted to see and hear and reason. For in the first instance, these are the dimensions of our senses and of our minds. And we experience the dimensions of our world in terms of these. And so it was sought, in our bodies and our world, units. The period between heartbeats, and the length of our reach or stride (or a little more); the weight of a 'handful'. Water, the content of our bodies and of our lakes, oceans and rivers, froze and boiled. Carriable volumes of it sustained us away from the great bodies of it.

Science grew around these things; and depended on explaining so much we didn't know in terms of the little we did know so very well. Reason was brought to bear on the relatedness of units, but reason could not stray so very far from that which we knew. When a unit[1] related to the new-fangled invisible substance[2] was defined in terms like a horse runs,[3] it could bear no more. A new force of the physical world was recognized,[4] one whose dimension[5] was a property of the aether of the ancients.[6] Men sighed, but the 'volt' today is a measure of the least potential we can feel on our moist skin. Our tools divine the world ever more finely, but they only matter if we can see, hear or feel what they mean. Our senses reveal the only world we will ever know. The dimensions of that world, rationally related, are properties of those senses - not vice versa - and that is why they serve us so uniformly[7] today. Sbalfour (talk) 05:47, 7 January 2018 (UTC)

  1. ^ ohm in the 19th century EMU system of electrical units
  2. ^ electricity
  3. ^ meters/second
  4. ^ electromagnetism as one of the four fundamental forces of nature
  5. ^ the familiar dimensions of length, weight and time are sensory; but electromagnetism does not have a sensory dimension; its dimension is ethereal
  6. ^ aether today we call spacetime; the dimensional manifestation of electromagnetism is a dual of properties, permeability and permittivity of spacetime, whose MKS units are respectively, henries/meter and farads/meter; they represent intuitively, the difficulty of a magnetic field or an electric charge to jump across empty space, i.e. a vacuum
  7. ^ the metric system is more universal today than the Spanish language or the U.S. dollar.

Essay Structure for wiki articles


A sentence should average 10-20 words, and never be longer than twice that or about 30 words. A novel has sentences about that long. Shorter sentences mean easier to read. They also mean lower level. Early school children write in 3-word sentences. PhD dissertations may have sentences that span a whole paragraph; at the end we go, "gulp!" Given the level of readership, the encyclopedia should lean more toward the novel structure than the PhD structure.

A paragraph should have a 'handful' of sentences. A handful is about countably many or five, because we have five fingers to count them on. Six, or a half dozen is also countably many. One sentence can never be a paragraph. Two sentences should never be a paragraph. That may be a difference without a distinction - don't make 2 sentence paragraphs. A paragraph should not be more than twice that many sentences, or about 10-12. Most paragraphs are 4-8 sentences and average 5-6. A paragraph that's substantially too large often contains some kind of list, be it only a string of loosely related ideas. Often, an overview of the list is more important than its individual items. In that case, the items if they matter, can become a bullet list, effectively removing them from the readable text of the paragraph. If the items are not notable, they can be moved into a footnote, and the text retains only a terse summary of them. Even if an overly long paragraph can't be semantically split, it may still be advantageous to split it textually to increase readability. Divide the paragraph arbitrarily into two or even three textually equal sized paragraphs. Newspapers have small paragraphs because newspapers must be very readable. They often split ideas across multiple paragraphs and we don't even notice.

Structuring subsections, usually level three and four, should never contain less text than 3 modest paragraphs of 5-6 lines or two substantive paragraphs of 7-8 lines - about 150 words of text. Each subsection on average should contain a handful of subunits, be they paragraphs or sections, and not more than twice that number.

Finally, any article should not contain more than a handful of exceptions to the rules (no exceptions). A good article will not contain any.

Metric system#History Picard & Cassini surveys


Jean-Félix Picard (1620-1682): Picard's and Giovanni Cassini's toise. In 1668 these two distinguished scientists made a toise standard for their survey of the length of the meridian passing through Paris (the 1669-1670 survey).

French-Italian astronomer Jacques Cassini (1677-1756), also known as Cassini II, was head of the Paris Observatory starting in 1712. Between 1713 and 1718 he measured the arc of the meridian (longitude line) between Dunkerque and Perpignan. In his De la grandeur et de la figure de la terre (1720; “Concerning the Size and Shape of the Earth”), he supported the theory that the Earth is an elongated sphere, rather than flattened. His two separate calculations for a degree of meridian arc were 57,097 toises de Paris (111.282 km) and 57,061 toises (111.211 km), giving results for Earth's radius of 3,271,420 toises (6,375.998 km) and 3,269,297 toises (6,371.860 km), respectively.

A toise de Paris, from the fathom family of anatomical measurements, was defined for surveying in 1668 as the length of an iron bar standard; it was approximately 1.949 m. The fathom originated as the distance from the middle fingertip of one hand to the middle fingertip of the other hand of a large man holding his arms fully extended. A fathom was originally used to measure depths of water. The metre was originally defined in terms of the toise de Paris and its subunits, lignes de Paris or Paris lines.

Somewhere near Barcelona, at the very start of his triangulations, Méchain made a small error of computation. Once this error had entered his system, it was impossible to eradicate it.

Genesis of scientific measure


The year 1792 was a watershed in the history of science and the science of measurement. In June of that year, an experiment in measurement, a survey of earth's meridian, was undertaken to ascertain the length of a fractional arc of it for use as a unit of length, which would later become part of the metric system as the metre. No artefact or unit of measure before that time, except the sexagesimal second, ever became part of the future system. From that date forward runs a continuous evolution of the metric system known today as the International System of Units.

But the true genesis of that system was a confluence of specific advances in mathematics and the sciences during the Enlightenment, and a cultural backdrop of decadence since the time of Charlemagne among feudal states and kingdoms that were once the Roman Empire.

The ancient Sumerians of Mesopotamia in the third millennium BC had a calendar based on astronomical observations, and an early form of duodecimal counting using the thumb to point to knuckles of the fingers in cycles of 3, hence 12, and counting 12s on the fingers of the other hand, so that 12- or 60-unit divisions of the calendar became units of time; today we still have sexagesimal seconds and minutes and duodecimal hours.

Before the Kingdom of France, in the ninth century, the Holy Roman Emperor Charlemagne had imposed on the realm a uniform system of measures based on the Carolingian system, which came to be known as the Ancien régime. By the dawn of the Revolution in the late 18th century, the system had fragmented into more than 800 unit names and as many as a quarter million definitions that varied from town to town and even from merchant to merchant, impeding the trade of emerging merchant-states and giving rise to corruption.

The introduction of Arabic numerals to the future sea-faring nations of Europe by the Moorish invasion of the Iberian peninsula in the 10th century displaced the inherited Roman system. It was followed in the early 13th century by the development of the decimal system of counting and the use of decimal notation. The decimal ratios later displaced the duodecimal ones in common use for, among other things, physical measures.

In the 17th and early 18th centuries of the Age of Enlightenment, several notable advances in the sciences pertaining to measurement occurred: in 1656, Dutch scientist Christiaan Huygens invented the pendulum clock, realizing for the first time the unit of the ancient astronomical second; in 1669, French astronomers Jean Picard and Giovanni Cassini made a great survey of the earth's meridian, and in 1718 Giovanni's son Jacques Cassini made another; later these were used to provisionally define the metre; in 1687 English mathematician Sir Isaac Newton's model of earth's gravitation showed earth to be an oblate spheroid, and consequently that gravitational acceleration varied with latitude. This discovery meant the pendulum length of clocks varied among geographical locales, and extrapolation of surveyed arcs of the earth to its meridional length was imprecise.

Together, scientific advances in the Age of Enlightenment, the cultural zeitgeist of revolutionary France, and the chaos of the existing system of measures gave birth to a new system, one with but few units and simple rules for combining them into any needed measures.

Measuring a meter


Science is so simple that a child should be able to do it, because that is how we teach it to them (and adults, too). If we told a child that we are going to measure a meter using light (the speed of light), he will have no understanding at all of what should be done. He will not be able to do the experiment, but an older child should be able to understand that it is meaningful and reasonable to perform such an experiment. So we will give him that understanding.

The child can do another experiment, which is a meaningful part of that understanding. If given a distance on a long footpath, and a long string of indeterminate length, he is instructed to return a length of string of one meter. He is told the footpath is a kilometer long. The string, by fortunate selection is 10 meters long (but he doesn't know that, and it doesn't matter for the purpose how long the string is). Even if he is unsure what to do, he will soon enough figure out that by laying the string end-to-end along the path, he can divide up the path in lengths of the string, so that by a ratio, he can determine the length of the string in meters. He lays the string down exactly 100 times, so he knows the string is ten meters. He divides it in half by folding it over, and has a five meter string, then lays it out in a zigzag of five segments until the segments are all equal, cuts off one of them, and pronounces it a meter. If he has done his measurements carefully, it will be recognizably a meter long. The procedure is a little more elaborate, but essentially the same, if the string is not such a nice divisor of the footpath.
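The child's procedure can be sketched in a few lines of code (the numbers are the ones assumed in the story: a 1 km footpath and a string that happens to be 10 m long):

    # Sketch of the string-measuring procedure from the story above.
    path_m = 1000.0      # known: the footpath is a kilometer
    layings = 100        # counted: the string fits along the path 100 times
    string_m = path_m / layings    # so the string is 10 m
    half_m = string_m / 2          # fold it in half: 5 m
    segment_m = half_m / 5         # zigzag into 5 equal segments: 1 m each
    print(segment_m)               # 1.0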

But it's not so clear what is done when the distance to some celestial object is triangulated to be 3*10^8 meters, and light takes a second to get there. And of course, part of the understanding would be how do we know how far away that object is, and how do we know light takes a second to get there?


History of definition

How an atomic clock works

When exposed to certain frequencies of radiation, such as microwaves, an atom's electrons will "jump" back and forth between energy states. Clocks based on this jumping within atoms can therefore provide an extremely precise way to count seconds.

Atoms of the element cesium, particularly the 133 isotope, have some favorable qualities for this application. Other elements like rubidium and hydrogen are also used in atomic clocks.

When bombarded with radiation of frequency 9,192,631,770 cycles per second (~9.2 GHz), an atom of cesium vibrates between two energy states. This is a frequency in the microwave region, between radio waves and the infrared region of the spectrum. It's about 4 times higher than the frequency of a microwave oven.

Inside a cesium atomic clock, cesium atoms are channeled down a tube where they pass through microwaves. If this frequency is just right, 9,192,631,770 cycles per second, then the cesium atoms "resonate" and change their energy state. The bombarded cesium atoms are sorted magnetically, and those that have changed state are sent on to a detector. The detector counts the number of cesium atoms reaching it.

The more finely tuned the microwave frequency is to 9,192,631,770 cycles per second, the more cesium atoms reach the detector. The detector feeds information back into the microwave generator. It synchronizes the frequency of the microwaves with the peak number of cesium atoms striking it. Other electronics in the atomic clock count the cycles of this frequency. A second is ticked off every time the defined cycle count is met.
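The counting step is simple enough to sketch (the constant is the SI definition of the second; the function name is illustrative):

    # An atomic clock "ticks" one second per 9,192,631,770 counted cycles
    # of the cesium resonance.
    CESIUM_HZ = 9_192_631_770

    def seconds_elapsed(cycles_counted: int) -> float:
        return cycles_counted / CESIUM_HZ

    print(seconds_elapsed(9_192_631_770))    # 1.0
    print(seconds_elapsed(27_577_895_310))   # 3.0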

The second is, and has always been, an interval of clock or astronomical calendar time, a division of the day (one rotation of the earth) into successively smaller units.[1] The second has always been the smallest of such units, though it has not always represented the same size unit. The sexagesimal second originated in the 3rd millennium B.C.E. with the Sumerians of Mesopotamia, who had a calendar based on astronomical observation and a duodecimal and sexagesimal counting system, so such divisions of the calendar became units of time. By the time of the Babylonian Empire, a few hundred years B.C.E., sexagesimal divisions of the day were well established.

A second is a very small amount of time, and until late in the 16th century, it was infeasible to count time in seconds. By the time mechanical clocks became available which were able to count seconds at least crudely, the division of the day into duodecimal hours and hours into sexagesimal minutes and seconds[2] was nearly universal in Europe. Clocks realized the second for the purpose of timing events, but the second was defined by the division of the day. When clocks ticked seconds that were too fast or too slow, so that by the end of the day more or fewer than 86,400 seconds had passed, clocks were sped up or slowed down to synchronize them with the period of earth's rotation.

In 1956, metrologists knew that earth's rotation was slowing, and that continuing to define a second in terms of a fraction of the solar day meant that a second would not have constant value. Furthermore, very accurate atomic clocks became available about that time, so the difference in earth's rate of rotation could now be measured accurately. Metrologists decided to define a second as 1/31,556,925.9747 of the tropical year 1900 to give it a constant value, even though this value was shorter than the current solar second. This unit was called an ephemeris second. In 1960, this definition of the second became part of the International system of units.
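The number in that definition is just the tropical year expressed in seconds; dividing by the 86,400 seconds of a mean solar day recovers the familiar year length:

    \frac{31{,}556{,}925.9747\ \mathrm{s}}{86{,}400\ \mathrm{s/day}}
    \approx 365.2422\ \mathrm{days}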

But even the best mechanical, electric motorized and quartz crystal-based clocks develop discrepancies, and virtually none were good enough to realize an ephemeris second and keep time better than the earth. Far better for timekeeping is the natural and exact "vibration" in an energized atom. The frequency of vibration (i.e., radiation) is very specific depending on the type of atom and how it is excited. Since 1967, the official definition of a second is 9,192,631,770 cycles of the radiation that gets an atom of cesium to vibrate between two energy states. This length of a second was selected to correspond exactly to the length of the ephemeris second previously defined. Atomic clocks use such a frequency to measure seconds by counting cycles per second at that frequency. Radiation of this kind is one of the most stable and reproducible phenomena of nature. The current generation of atomic clocks is accurate to within one second in a few hundred million years.

Atomic clocks now set the length of a second and the time standard for the world.[3] A day is still 86,400 seconds plus a tiny fraction, and ordinary clocks still count seconds like they always used to. But over time, those fractions add up, so every few years since 1972, time for all clocks - at least all those calibrated to a timekeeping standard - is adjusted by adding a leap second to clock time to resynchronize clocks to the period of earth's rotation. The second remains constant, but every day gets minutely longer, by a millisecond or two on average.
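The arithmetic behind that cadence, taking the middle of the paragraph's millisecond-or-two figure: a day running about 1.5 ms long accumulates roughly half a second of discrepancy per year, hence an added leap second every few years:

    1.5\ \mathrm{ms/day} \times 365\ \mathrm{days} \approx 0.55\ \mathrm{s\ per\ year}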

Short version


Since seconds weren't actually realizable until the advent of mechanical clocks in the late 16th century, they were a figurative division of calendar time until then. Sexagesimal divisions of both calendar time and arcs were common since the 3rd millennium B.C. and were adopted as divisions of clock time when clocks became available. So a second, called a mean solar second, was effectively defined as 1/60 minute, which was 1/60 hour, which was 1/24 day: 1/86,400 of a day, the period of earth's rotation. A second was used and recognized as an essential unit of measurement for scientific purposes starting about 1830. In 1952, it was recognized that earth's rotation wasn't so constant as believed, but that earth's orbit around the sun was relatively constant, so a second, called an ephemeris second, was defined as 1/31,556,925.9747 of the year 1900, which was a little shorter than 1/86,400 of the day in 1952. This definition became the base unit of time in the metric system in 1960. Soon after, atomic clocks became available which kept time better than either, and in 1967, a second of "atomic" time[4] was defined as 9,192,631,770 cycles of a natural vibration frequency of the caesium atom, which became the timekeeping mechanism of an atomic clock. Today, atomic clocks keep time accurate to within a second in several hundred million years. While precise timekeeping standards are available where needed, common clock time today still measures seconds as 1/86,400 of the day.

Lake Okeechobee Basin

The Lake Okeechobee drainage basin, including the Kissimmee River drainage basin which feeds it, in south central Florida was naturally, or hydrographically, an endorheic basin: one which does not have outflow to another body of water like a river or ocean. Such a basin may form a swamp when water collects. It was altered by anthropogenic activity, specifically the construction in 1937 of the Okeechobee canal, which connected the Atlantic Ocean, the lake and the Gulf of Mexico. Nonetheless, it is not considered by hydrologists to be part of either the Gulf of Mexico watershed or the Atlantic seaboard watershed. The combined drainage basin is the southern terminus of the Eastern Continental Divide.

Saponin structure: glossary

Glycoside

A glycoside is a sugar in which a hydroxyl (-OH) group has been replaced with an organic molecule.

Terpene

A terpene (short for terpentine, an obsolete spelling of turpentine) is a smelly, oily hydrocarbon built of joined units called isoprenes, each with 5 carbon atoms with alternating single and double bonds and attached hydrogens. The isoprenes may be linear or joined as rings. The simplest terpene is a pair of isoprenes. A “triterpene” is composed of 3 such pairs. A terpenoid is a terpene with oxygen-containing groups substituted for hydrogen.

Steroid

A steroid is a cyclic or ring-structured triterpenoid derived from the triterpene squalene by oxidation and cyclization. The basic structure consists of 17 carbon atoms in four “fused” rings. One of the most basic such steroids is cholesterol.

Saccharide

A saccharide or sugar is a carbohydrate, an organic molecule containing oxygen, usually in the form of a chain or ring of carbon atoms decorated with hydrogen (-H) and hydroxyl (-OH) groups and a double-bonded oxygen (=O) at one position. An oligosaccharide is a short molecular chain of sugars.

Glycoalkaloid

A glycoalkaloid is an alkaloid base molecule to which a sugar has been attached. An alkaloid is formed from an organic base molecule by substituting a nitrogen atom for one or more carbon atoms.

Ester

An ester is an acid, usually a carboxylic acid, where the H of the carboxyl group (COOH) is replaced with an organic molecule.

The vast heterogeneity of structures underlying this class of compounds makes generalizations fuzzy; they’re a subclass of terpenoids, derivatives of a smelly, oily cyclic hydrocarbon, terpene (the alternate steroid base is a terpene missing a few carbon atoms). The derivatives are formed by substituting other (usually oxygen-containing) groups for some of the hydrogens. In the case of most saponins, one of these substituents is a sugar, so the compound is a glycoside of the base molecule. Specifically, the base or fat-soluble portion of a saponin can be a triterpene, a steroid (spirostanol or furostanol) or a steroidal alkaloid (in which nitrogen atoms replace one or more carbon atoms). Another possible base structure is an open (acyclic) side chain instead of the ring structure in the steroid base. One or two (rarely three) water-soluble monosaccharide (simple sugar) chains may bind to the base via hydroxyl (OH) groups, and sometimes there are other substituents such as hydroxyl, hydroxymethyl, carboxyl and acyl groups. The chains may be from 1 to 11 sugars long, but are usually 2-5, and may include branched chains. The most common such sugars are dietary simple sugars like glucose and galactose, though a wide variety of sugars occur naturally. Other kinds of molecules like organic acids and esters may also attach to the base via carboxyl (COOH) groups. In particular among these are the sugar acids, such as glucuronic acid and galacturonic acid, which are oxidized forms of the sugar.

Monkey and the coconuts: Diophantine sidebar

A Diophantine problem

Diophantine analysis is the study of equations with rational coefficients requiring integer solutions. In Diophantine problems, there are fewer equations than unknowns. The "extra" information required to solve the equations is the condition that the solutions be integers. Any solution must satisfy all equations. Some Diophantine equations have no solution, some have one or a finite number, and others have infinitely many solutions.

The monkey and the coconuts reduces algebraically to a two-variable linear Diophantine equation of the form

ax + by = c, or equivalently,
(a/d)x + (b/d)y = c/d

where d is the greatest common divisor of a and b.[5] The equation is solvable if and only if d divides c. If it does, the equation has infinitely many periodic solutions of the form

x = x0 + t · (b/d),
y = y0 − t · (a/d)

where (x0,y0) is a solution and t is a parameter that can be any integer. The problem is not intended to be solved by trial-and-error; there are deterministic methods for solving for (x0,y0) in this case (see text).
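The deterministic methods in the text solve the equation directly; purely as an illustrative check, here is a brute-force search for one common version of the puzzle (Ben Ames Williams' five-sailor version, in which each sailor's night-time division leaves one coconut for the monkey and the morning split is exact):

    # Smallest starting pile for the five-sailor monkey-and-coconuts puzzle
    # (Williams' version). Brute force, for illustration only.
    def works(n: int, sailors: int = 5) -> bool:
        for _ in range(sailors):
            if n % sailors != 1:     # each division leaves 1 for the monkey
                return False
            n = (n - 1) * (sailors - 1) // sailors   # sailor hides his share
        return n % sailors == 0      # the morning division is exact

    n = 1
    while not works(n):
        n += 1
    print(n)  # 3121

The smallest solution found, 3,121 coconuts, agrees with the published answer for that version of the puzzle.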

Area of a (half) circular segment in a rectangle


Claim: a circular arc within a rectangle which intersects diagonally opposite corners encloses an area (half of a circular segment) which is close to two-thirds of the area of the rectangle if the radius of the arc is large relative to the height of the rectangle.

Given the arc radius r, the length c of a chord spanning the arc, the segment height h, and the angle Θ subtending the chord: two of these are independent and two dependent, though the choice of which two is arbitrary. The rectangular box has dimensions h × c/2.

The area of a circular sector is r²Θ/2, and the isosceles triangle formed by the chord and the two radii adjacent to Θ has height H = r − h and area (c/2)H, so the area of the segment cut by chord c is r²Θ/2 − (c/2)(r − h), and half the segment has area r²Θ/4 − (c/4)(r − h).

We don’t know Θ directly, but with t = c/2r, we know arcsin t = Θ/2.

The Taylor series for arcsin gives arcsin t = t + t³/6 + Δ for −1 < t < 1, where Δ (the discarded tail of the series) is a non-zero positive quantity that needs to be estimated in terms of the available parameters c, h and r (but not Θ), and limited to degree not higher than cubic. Substituting,

But the (1/4)c·r terms cancel, so

A = (1/4)c·h + c^3/(96r) + (1/2)r^2·Δ

; refactoring the first term as (1/2)·(c/2·h), i.e. half the area of the box,

But the relationship between c, h and r is (c/2)^2 = h·(2r − h), so c^2 ≈ 8rh when h << r; substituting into the middle term gives c^3/(96r) ≈ c·h/12,

But we cast away the remainder (5th degree and higher) terms of the arcsin approximation, whose value is comparable to the cubic correction just retained, so combining the terms crudely, we get:

A ≈ (1/3)·c·h·(1 + h/(12r))

The multiplier 12 in the denominator is a bit too large (i.e., the remainder fraction of arcsin was larger than twice h/(12r)); it should be just about 8.5.

Compare this to the approximation for the full segment, A ≈ (2/3)·c·h + h^3/(2c), which is within .1% for 0 < Θ < 2.6 and within .8% for 2.6 < Θ < π (Stocker and Harris, 1998). The area of the box enclosing the half segment is (c/2)·h.

So, the area of (half) a circular segment is 2/3 the area of the enclosing box (height × half the chord) plus a small delta which depends on the ratio between the height of the box and the radius of the arc. The maximum ratio, π/4, occurs when the radius equals the height of the box: Θ/2 is π/2, the box is square, and it encloses a quarter circle. The ratio declines asymptotically to 2/3 as the ratio of the radius to the height of the box goes to infinity. Though the substitution for arcsin is valid only for angles < 1, the ratio formula is a plausible approximation for angles up to π/2.

In the Goat problem, the radius was 117.267, the height 2.733, half the chord was 25.465, and the actual area was 45.94; the computed area was 46.40. The latter ignores the facts that the arc is not quite circular, that the radius was itself originally approximated, and that whether the assumed radius of the arc is actually a good fit is speculative.

The following table lists a range of r as a multiple of h, the corresponding ratio Q of the enclosed area to the area of the rectangle, and the relative error of the 2/3 approximation:

  r/h     Q       %r.e. of 2/3
  1       .7854   15.1 [6]
  1.5     .7292   8.6
  2.175   .7049   5.4
  5       .6813   2.1
  10      .6736   1.0
  25      .6694   .40
  50      .6680   .20
  100     .6673   .10

The r = 100·h row is included in the table only to substantially confirm that the asymptote is 2/3 and not some other rational or irrational number.
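
The table is easy to verify numerically. A short Python sketch using the exact sector-minus-triangle formula, with r normalized to 1:

  from math import acos, sin, cos

  # Q = (half-segment area) / (box area), exactly, for r/h = m
  def Q(m):
      phi = acos(1 - 1/m)                       # phi = theta/2
      area = 0.5 * (phi - sin(phi) * cos(phi))  # half segment, r = 1
      box = sin(phi) * (1 - cos(phi))           # (c/2) * h, r = 1
      return area / box

  for m in (1, 1.5, 2.175, 5, 10, 25, 50, 100):
      q = Q(m)
      print(f"r/h={m:<6} Q={q:.4f}  r.e. of 2/3 = {100*(q - 2/3)/q:.2f}%")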

We need a better approximation for θ in [5/6π, π]:

for h>.75R

For θ = .95π, we have

(r.e. .0056%)

But what we really want here is an expression in terms of c and h:

; subtracting a slice (R−h) high is no longer valid because we are using h instead of r as the height. Here ~a is larger than a; an interpolated scalar is no better than 2.2% even over the restricted range [5/6π, .98π]. A delta linear in c and h must be better, and an obvious one leaps out: (c/2 − h)/3. It turns out that π is nearly the optimal denominator. Is there a better ratio between c/2 and h than 1:1?

c/2·h is roughly R^2; I'd speculate that subtracting some fraction like kπ·c·h would get very much closer. Is the estimate actually right? At c/2 = h = R, it reduces to (π/2)·R^2, which is correct!


If computed directly,

Using the small-t approximation, the above reduces to

which is (pre-α) above.

New approx


An exceedingly good approximation is:

Starting with: and , we want to know what is.

Equate and combine R^2 and R terms:

Let :

Reorder as a polynomial in (except factors of h):

Now we need to assess the relative magnitudes of the terms (δ ranges from 0 at π to .035 at 5/6π):

Dropping the c/2 and δ² terms and refactoring,

As below, , so:

Refactor and combine terms:

Approx


Starting with: and , we want to know what is.

Equate and combine R^2 and R terms:

Let :

Reorder as a polynomial in (except factors of h):

Here we need to assess which terms matter; at , , and , so:

Dropping the c/2 and δ² terms and refactoring,

δ ranges from 0 at π to .0353 at 5/6π; effectively it is an expression ~

Separate and combine c^2, c·h and h^2 terms:

; -.07, .57, -.5; how close is this to ?

It's not pretty, unlike the delta for small angle estimates.

What if we just hypothesize a form A ≈ i·c^2 + j·c·h + k·h^2, create 3 linear equations at 5/6π, .9π, and π, then solve for i, j, k?

;

This is suspiciously close to , so that

It’s a question of what κ is in terms of, or . Moreover, the coefficients of the latter terms must total 2/3. So assume the coefficient of c·h is 2/3, and use a pair of linear equations to recompute the other coefficients over a possibly wider range, like 2/3π to π.
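
The three-equation fit is mechanical. A Python sketch, assuming the target is the exact full-segment area with r = 1 (both choices are assumptions of this sketch):

  import numpy as np
  from math import sin, cos, pi

  # c, h and exact segment area at angle theta (r = 1)
  def chA(theta):
      c = 2 * sin(theta / 2)               # chord
      h = 1 - cos(theta / 2)               # segment height
      A = 0.5 * (theta - sin(theta))       # exact segment area
      return c, h, A

  # hypothesize A ~ i*c^2 + j*c*h + k*h^2 and fit at three angles
  thetas = [5 * pi / 6, 0.9 * pi, pi]
  rows = [chA(t) for t in thetas]
  M = np.array([[c * c, c * h, h * h] for c, h, _ in rows])
  b = np.array([A for _, _, A in rows])
  i, j, k = np.linalg.solve(M, b)
  print(i, j, k)

  # check the fit at an angle not used in the system
  c, h, A = chA(0.95 * pi)
  print(i * c * c + j * c * h + k * h * h, "vs exact", A)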

New start


The plausible approximations share a common form in which a correction term evaluates to a non-zero positive number less than a fixed bound; a different setting of the term applies in each of the ranges [0, π/4], [π/4, 5/6π], and [5/6π, π].

Goat problem: computing transcendental angle



The geometry of the problem involves a reciprocal relationship between an angle and a trig function of the angle. However that’s expressed mathematically, it’s inherently transcendental. The angle is here designated Φ. While the accuracy of Φ has only a small impact on the overall solution to the problem, some way of quantifying it must be found. Φ doesn’t depend on the radius of the silo or the length of the tether, at least under the constraints of the geometry presented; it’s more or less a predefined scalar. The analytic relation is transcendental; Φ is a little less than 2/9 and a bit more than 2π/29. Substituting the first term of the Taylor series for the trig function (valid for small angles) yields a quadratic equation in Φ whose solution is .2227, within 1.7% of the actual value. The constant reflects a peculiar relationship of the problem geometry, and that relationship is not applicable outside the problem domain, so it is unlikely that it can be found in published sources. There aren’t any effective shortcuts to computing a transcendental number; they’re iteratively defined and iteratively computed; computational effort is proportional to the number of digits of precision required.


If one resorts to trial and error in the first instance, the relational definition of Φ converges linearly: the number of digits of accuracy is n (n+1 or n+2 if you guess plausibly), where n is the number of trials. Φ is rather small, so π/10 or π/20 is a plausible guess. Instead of guessing, one can use the angle whose tangent is formed by a line from the center of the silo to the circumference of the great circle of radius 2π through point F; its length is 2π − 1. The right angle it forms with the radius r gives tangent 1/(2π − 1), so Φ ≈ arctan(1/(2π − 1)) ≈ .187. That’s about π/17, not an especially close estimate, but better than a guess. You need about 4.5 decimal places of accuracy to get an answer to the nearest square yard.
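
The iteration is easily mechanized. The sketch below assembles the relations used later in this section, with r normalized to 1 and tether R = 2π (so the constant term is R/r − π/2 = 3π/2); the fixed-point form Φ = arctan(1/(3π/2 − 2·sin(Φ/2))) is a reconstruction, not a formula surviving from the original text:

  from math import atan, sin, pi

  phi = atan(1 / (2 * pi - 1))              # the geometric starting estimate
  for trial in range(8):
      x = 3 * pi / 2 - 2 * sin(phi / 2)     # x = 120/r - delta, with r = 1
      phi = atan(1 / x)                     # tan(phi) = r/x
      print(trial + 1, f"{phi:.6f}")
  # converges linearly, roughly one more correct digit per trial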


From a geometric construction,

r is a parameter with an arbitrary value, and has been equated to 1 so that it vanishes from the formula. The approximation therefore actually depends on the parameters of the problem, which are conveniently related in the instant case.

It is astonishing how accurate the approximation is: it is accurate to nearly 4 decimal places. If we ignore the −1 under the radical, the term becomes Φ = asin(2/(3π)) = .21383, an error of only 2.5%. The error amounts to only a few square yards in the final answer. Obtaining greater accuracy will involve substantially more computation, some kind of guess and iterative refinement, or transcendental difficulties. An iteratively defined quantity intrinsically means that better approximations involve progressively higher powers of some defined expression. A better estimate will almost certainly be a cubic or quartic function: the second term in the Taylor series expansion for arctan t is −t^3/3, or one could imagine plugging the iterative defining formula into itself twice, then substituting back in for arctan, resulting in a cubic.


The following analytic-geometric approach yields a sextic equation with at most 4 real roots, among which is the one of interest, if it can be properly identified; the other two roots are imaginary.

Given r and R, let Δ be the length of the chord spanning the arc subtended by Φ. This yields 3 equations in 3 variables to solve:

From the right-angled triangle whose legs are x and r, and hypotenuse y:

1)   y^2 = x^2 + r^2

From the components of R, the tether length x + Δ + π/2*r:

2)   R = x + Δ + (π/2)·r

From a trig expression for the chord subtended by Φ, as part of a triangle whose other sides are r:

where cos Φ = x/y,

3)   Δ^2 = 2r^2·(1 − x/y)

Substituting x and y into (1) yields a single equation in Δ.

r is a parameter that can be any value, and it simplifies the computation to let it be 1. 120 is really r(R/r-π/2), so factoring out r leaves a constant function of π.

where, with r = 1 (so that x = 3π/2 − Δ),

(3π/2 − Δ)^2 + 1 = 4/(Δ^2·(4 − Δ^2))

This is evidently sextic after the polynomial is put in normal form, but it is much less useful to do so. It’s usually not productive to try to solve the equation directly for Δ, since the sixth degree is not solvable algebraically (the structure of this equation is confirmation that the problem is indeed transcendental), but we know something about the imaginary roots, and possibly something about the structure of the rest of the factors. The LHS is a parabola facing up with minimum 1 at Δ = 3π/2 and y-intercept of 23.2. The RHS curve is an upward-facing double U with vertical asymptotes at Δ = 0 and Δ = ±2 and minimums of 1 at Δ = ±√2, plus two hyperbolic portions in the third and fourth quadrants with asymptotes ±2 and 0 and vertices near (−2.2, −2.2) and (2.2, −2.2) respectively. The intersection with the parabola yields 4 real roots and two imaginary ones: 3 complementary pairs of roots around ±2, around 0, and K ± i.
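
Putting the equation in normal form and extracting all six roots numerically is straightforward. The sketch below uses the normalized equation as reconstructed above, so its coefficients are an assumption of the sketch:

  import numpy as np

  # (3*pi/2 - d)^2 + 1 = 4 / (d^2 * (4 - d^2)), cleared of fractions
  a = 3 * np.pi / 2
  lhs = np.polynomial.Polynomial([a * a + 1, -2 * a, 1])  # (a - d)^2 + 1
  denom = np.polynomial.Polynomial([0, 0, 4, 0, -1])      # d^2 * (4 - d^2)
  sextic = lhs * denom - 4
  print(sextic.roots())
  # four real roots (the chord of interest is the one near 0.2185)
  # plus one complex-conjugate pair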

It is tempting to say the imaginary roots are (K ± i): that makes the LHS 0. f(RHS) can’t be 0, but the denominator goes to infinity, so the RHS approaches zero as well; they approach zero at different rates, so there will be a pair of values along the way, infinitesimally different from K ± i, where they agree. We imagine an inverted parabola intersecting the asymptotic segment of the RHS in the 4th quadrant at those abscissas to yield the pair of imaginary roots. The Cartesian coordinates of those points are infinitesimally close to (K−i, −.0297) and (K+i, −.00428). The pairs are:

The real-valued solution pairs by iteration are

They are not quite conjugate because a parabola is not a horizontal line like the x-axis. The sought-after root must be the small positive one, so the proper root is apparent.

The quadratic components of the sextic equation have the general form:

where t1 and t2 satisfy a quadratic relation that simultaneously minimizes the LHS and RHS at a pair of points. It is necessary to carefully distinguish between infinitesimal values and putative round-off errors.

There is no simple way of determining t1 and t2. t1 is larger than t2 because the curve gradients near the asymptotes ±2 are steeper than those approaching 0. t1 and t2 actually represent pairs of closely valued parameters that may be treated identically to expedite understanding the form of the solutions Δ.

The resulting sextic has the general form:

The sum-of-terms normal form of the equation is (odd powers are present due to not quite conjugate roots):

The actual coefficients are (normalizing the coefficient of the high power to 1):

The complex conjugate roots form a quadratic factor:

This computes to:

whose solutions by the quadratic formula are:

Factoring out this latter (because we know it does factor) yields a quartic with all real roots, among them the desired one:

The nearly zero coefficients of the odd powers mean that the roots are almost conjugate pairs, so, ignoring the odd power terms, the equation can be split into

so that the roots are and . But

and

Substituting and solving for ,

Because we omitted some terms, the roots of the original equation are:

and are unknown but small, possibly a few tenths ( could be negative). But we also know that

Solving, the final roots of the quartic, and the 4 real roots of the sextic, are:

which agree to three decimal places with the iterated solutions found above.

The relevant root is Δ, the length of the chord subtended by Φ. In summary, given r = 25.46: Δ = 5.565, x = 114.4, y = 117.2, Φ ≈ .2185+.

The quantity approximated by Δ is the arc subtended by Φ: rΦ = 5.57628 (1.002·Δ), with Φ = .21898. So the approximation is off in the fourth decimal place. The issue isn’t the precision of computation, but the limit of the approximation. The lesson is that despite a considered and considerably difficult analytic procedure, the approximation is not better than the geometric construction above.


The two curves here are: a very steep parabola facing up, intersecting the x-axis at (120, 0) and the y-axis at (0, 240), and a mostly horizontal S-curve between asymptotes 0 and 1, multiplied by a large constant. This curve is substantially linear, with slope −1/2, between its intercepts with the parabola. The intercepts are at approximately x = 120 ± k, where k is 5.6.

In the diagram, . But , so

Substituting from (1) and (2) above,

Δ at x=120 comes out to 5.315

Let , (r<<x here)

If we designate the RHS as ,

Then the roots of this are by inspection

But is not a constant as implied; it varies a little bit depending on itself. So, say

But where :

here.

using the same substitution above for the radical,

is negligible compared to the other substitutions, so drop it. We don't have an answer yet, because is an f(x), so substitute back in for it:

This is . But we know that and

Rearrange and solve:

has roots -5.18 and 125.2

has roots 5.67 and 114.3

The original quartic must have been:

and has 2 complementary pairs of roots and .
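
As a check, the quartic can be rebuilt from the two quadratics via their stated roots; a sketch:

  import numpy as np

  q1 = np.polynomial.polynomial.polyfromroots([-5.18, 125.2])
  q2 = np.polynomial.polynomial.polyfromroots([5.67, 114.3])
  quartic = np.polynomial.polynomial.polymul(q1, q2)
  print(quartic)                        # coefficients, low degree first
  print(-5.18 + 125.2, 5.67 + 114.3)    # each pair sums to ~120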

Stock value

Present value of a stock

The present value, or simply value, i.e., the hypothetical fair price of a stock according to the Dividend Discount Model, is the sum of the present values of all its dividends in perpetuity. The simplest version of the model assumes a constant growth rate, constant discount rate and constant dividend yield in perpetuity. Then the present value of the stock is

P = D / (r − g)

where

P is the price of the stock

D is the initial dividend amount

r is the periodic discount rate (either annual or quarterly)

g is the dividend growth rate (either annual or quarterly corresponding to r)

The requisite assumptions are hardly ever true in perpetuity, so the computed value is highly hypothetical.
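
The closed form is just the sum of a geometric series, which a brute-force check confirms. A sketch (reading D as the dividend one period out):

  # present value of dividends D, D(1+g), D(1+g)^2, ... discounted at r
  def ddm_price(D, r, g):
      assert r > g, "the series converges only for r > g"
      return D / (r - g)

  D, r, g = 2.00, 0.08, 0.03
  brute = sum(D * (1 + g)**t / (1 + r)**(t + 1) for t in range(10_000))
  print(ddm_price(D, r, g), brute)      # both ~40.00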

Approximating sin(x)


It's pretty well known that for small values of x, sin(x) ≈ x. That equivalence springs from the Taylor series evaluation of sin(x) around 0. For values around π, a similar equivalence applies: sin(x) = sin(π − x), where π − x is again a small quantity. For values around π/2, sin(x) ≈ 1. We may envision tangent lines approximating sin(x) at the specified points.

A better approximation for sin(x) around π/2 is 1 − (x − π/2)^2/2.

We can, of course, obtain similar approximations for cos(x), tan(x), etc., by use of the standard trigonometric identities.

If we limit the maximum relative error to 3%, the bounds on the tangent-line approximations are 0 to .42, π − .42 to π, and π/2 ± .24. That covers only about 42% of the range of sin(x) from 0 to π.
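
A sketch of the three-piece approximation, with the quadratic refinement near π/2; the switchover points at π/4 and 3π/4 are choices of this sketch, not from the text:

  from math import sin, pi

  def approx_sin(x):
      if x < pi / 4:
          return x                         # tangent line at 0
      if x > 3 * pi / 4:
          return pi - x                    # tangent line at pi
      return 1 - (x - pi / 2)**2 / 2       # quadratic at pi/2

  for x in (0.3, 0.42, 1.0, pi / 2, 2.0, pi - 0.42):
      e = abs(approx_sin(x) - sin(x)) / sin(x)
      print(f"x={x:.3f}  approx={approx_sin(x):.4f}  r.e.={100*e:.2f}%")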

These approximations are useful for transcendental equations like the one arising in Mrs. Miniver's problem (in the reading where the central lens equals the two outer crescents combined):

θ − sin θ = 2π/3 (Mrs. Miniver's problem)

That's a large angle, so the small-angle formula for sin θ can't be used. But if we substitute π − x for θ in the function (which leaves the result unchanged, since sin θ = sin(π − θ)), we get:

x + sin x = π/3

Then using the first two terms of the Taylor expansion of sin x around 0, sin x ≈ x − x^3/6, we get:

2x − x^3/6 = π/3

We can't solve the cubic directly, but if we omit the cubic term and solve the resulting linear equation, we get x = π/6 ≈ .5236 and θ = 5π/6 ≈ 2.618. Substituting the value of the cubic term back into the equation and solving, we get

x = π/6 + (π/6)^3/12 ≈ .5356, so θ = π − x ≈ 2.6060,

which is very close to the correct value (and the one we'd get by solving the cubic) of 2.6053, with relative error under .03%.
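
A sketch of the whole procedure, with a Newton iteration on the original equation for comparison; the equation θ − sin θ = 2π/3 is the reading reconstructed above:

  from math import sin, cos, pi

  # perturbation: theta = pi - x, and x + sin(x) ~ 2x - x**3/6 = pi/3
  x = pi / 6                       # linear solution, cubic term omitted
  for _ in range(3):
      x = pi / 6 + x**3 / 12       # feed the cubic term back in
  print(pi - x)                    # ~2.6060

  # Newton's method on theta - sin(theta) = 2*pi/3, for comparison
  t = 2.6
  for _ in range(5):
      t -= (t - sin(t) - 2 * pi / 3) / (1 - cos(t))
  print(t)                         # ~2.6053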

Hatfield-McCoy and other feuds


The Hatfield–McCoy feud involved two rural American families of the West Virginia–Kentucky border area along the Tug Fork of the Big Sandy River in the years 1878–1890. Some say the 1865 shooting of Asa McCoy as a "traitor" for serving with the Union was a precursor event.[7] There was a lapse of 13 years until the feud flared with the disputed ownership of a pig that swam across the Tug Fork in 1878, and it escalated to shootouts, assassinations, massacres, and a hanging. Approximately 60 Hatfield and McCoy family members, associates, neighbors, law enforcement and others were killed or injured.[8] Eight Hatfields went to prison for murder and other crimes. The feud ended with the hanging of Ellison Mounts, a Hatfield, in February 1890.[9]

Strategy

Simple beginner strategies

Gardner notes that a simple way to win against a beginner is to play in the center space, then extend a chain of adjacent spaces toward each of one's own sides of the board, with the argument that since at each step there is a choice of two open spaces for the next play, the chain cannot be stopped. The strategy fails because the opponent need not play adjacent to the ends of the chain. A second, better strategy is to play first in the center, then extend a chain of double-linked stones either vertically or diagonally toward one's own sides. If blocked vertically, move diagonally, and vice versa. (See diagram.) This strategy too can be countered by proper defensive play.[10]

Hex is probably a hard game.[11][12] The strategy is subtle and complex.[13] No constructive winning strategy is known except for small boards.[14][15] Simple strategies suitable for beginners have been described, though such strategies can be easily defeated by proper play (see sidebar).[16] The components of strategy have been described by one source as structural, positional, and general.[17] The first two correspond to combinative play and positional play in chess, respectively.[18] When evaluating one's position, a fundamental principle is that one's position is only as good as the weakest link in the strongest path.[19] Because there can be no draws, completing one's own chain necessarily blocks the opponent from completing a chain; therefore attacking (extending one's own chain) and defending (blocking the opponent's chain) are structurally equivalent.[20][21]

The first player has a strong advantage.[22][23][24] The "swap rule" reduces the first player's advantage.[25][26] Because of the swap rule, the first player should choose a move weak enough that he retains it (the opponent does not swap) but strong enough to retain some advantage.[27] Heuristic swap maps are available for at least the standard 11×11 board.[28] There are strong (good moves) and weak (poor moves) areas of the board initially: strong areas are on or near the short diagonal; weak areas are in or near the acute corners and along one's own edges.[29]

Early in the game, stones are played at widely separated locations.[30] As the game progresses, stones are connected together into groups, and groups are extended to connect to other groups, forming a path.[31]

References

  1. ^ While some middle-ages philosophers proposed seconds based on division of the lunar month, no timekeeping system incorporating such a second was ever adopted.
  2. ^ so that a day was 86,400 seconds
  3. ^ That time is called International Atomic Time (TAI). It is set by a consortium of atomic clocks throughout the world which "vote" on the correct time. All voting clocks are periodically adjusted to the consensus time. A set of time standards adjusted to keep in sync with Earth's rotation, called Coordinated Universal Time (UTC, UT1, and a few others), is derived from TAI, offset by the periodic addition of leap seconds.
  4. ^ which redefined the metric unit of time
  5. ^ d can be found if necessary via Euclid's algorithm
  6. ^ The arctan approx is invalid here
  7. ^ Cline, Cecil L. (1998), The Clines and Allied Families of The Tug River Valley, Gateway Press, Baltimore, Maryland.
  8. ^ "Hatfield-M'Coy Feud Has Had 60 Victims; It Started 48 Years Ago...", The New York Times, February 24, 1908, https://www.nytimes.com/1908/02/24/archives/hatfieldmcoy-feud-has-had-60-victims-it-started-48-years-ago-over-a.html
  9. ^ Alther, Lisa (2012), Blood Feud: The Hatfields and the McCoys: The Epic Story of Murder and Vengeance, Lyons Press, first edition, ISBN 978-0762779185.
  10. ^ Gardner, p.80.
  11. ^ Hayward, Ryan B. and Toft, Bjarne (2019), Hex: The Full Story, CRC Press, Boca Raton, FL, ISBN 978-0-367-14422-7, p.151.
  12. ^ Gardner, Martin (1959), The Scientific American Book of Mathematical Puzzles and Diversions, Simon and Schuster, New York, p.76.
  13. ^ Anshelevich, Vadim (2000), A Hierarchical Approach to Computer Hex, Artificial Intelligence 134, p.102.
  14. ^ Gardner, p.77.
  15. ^ Browne, Cameron (2000), Hex Strategy: Making the Right Connections, A.K. Peters, Ltd, Natick, MA, ISBN 1-56881-117-9, p.17.
  16. ^ Gardner, p.80.
  17. ^ Browne, p.43.
  18. ^ Lasker, Emanuel (1926), Lasker's Manual of Chess, p.32.
  19. ^ Browne, p.44.
  20. ^ Browne, p.50.
  21. ^ Hayward, p.155.
  22. ^ Gardner, p.79.
  23. ^ Browne, p.2.
  24. ^ Hayward, p.167.
  25. ^ Browne, p.2-3.
  26. ^ Anshelevich, p.102.
  27. ^ Browne, p.152-156.
  28. ^ Browne, p.170 (Summary).
  29. ^ Browne, p.153-155.
  30. ^ Browne, p.101 (Early Game).
  31. ^ Browne, p.53 (Groups), p.56 (Steps), p.59 (Paths).