Saturday, January 9, 2010

Electricity generation

Electricity generation is the process of creating electricity from other forms of energy. The fundamental principles of electricity generation were discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electricity is generated by the movement of a loop of wire or a disc of copper between the poles of a magnet.
For electric utilities, it is the first process in the delivery of electricity to consumers. The other processes (electric power transmission, electricity distribution, and electrical power storage and recovery using pumped storage methods) are normally carried out by the electrical power industry. Electricity is most often generated at a power station by electromechanical generators, primarily driven by heat engines fueled by chemical combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. Many other technologies can be and are used to generate electricity, such as solar photovoltaics and geothermal power.
Centralised power generation became possible when it was recognized that alternating current power lines can transport electricity at very low cost across great distances by taking advantage of the ability to raise and lower the voltage using power transformers. Electricity has been generated at central stations since 1881. The first power plants were run on water power or coal,[4] and today we rely mainly on coal, nuclear, natural gas, hydroelectric, and petroleum, with a small amount from solar energy, tidal harnesses, wind generators, and geothermal sources.
Unlike solar heat concentrators, photovoltaic panels convert sunlight directly to electricity. Although sunlight is free and abundant, solar electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost, and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems.[6] Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by about 40% per year, led by increases in Germany, Japan, California and New Jersey.
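To make the efficiency figures above concrete, here is a minimal sketch that estimates the electrical output of a panel as incident solar power times conversion efficiency. The panel area and irradiance are illustrative assumptions, not figures from the text.

# Rough photovoltaic output estimate (illustrative values, not measured data).
def pv_output_watts(area_m2, irradiance_w_per_m2, efficiency):
    """Electrical power = incident solar power x conversion efficiency."""
    return area_m2 * irradiance_w_per_m2 * efficiency

# Assumed: a 1.6 m^2 panel under 1000 W/m^2 of sunlight.
for name, eff in [("low-efficiency silicon", 0.15),
                  ("multijunction (~30%)", 0.30),
                  ("experimental (>40%)", 0.40)]:
    print(f"{name}: {pv_output_watts(1.6, 1000, eff):.0f} W")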

Friday, January 8, 2010

Heat transfer

Heat transfer is the transition of thermal energy from a hotter object to a cooler object ("object" in this sense designating a complex collection of particles capable of storing energy in many different ways). When an object or fluid is at a different temperature from its surroundings or another object, transfer of thermal energy, also known as heat transfer or heat exchange, occurs in such a way that the body and the surroundings reach thermal equilibrium, meaning that they are at the same temperature. Heat transfer always occurs from a higher-temperature object to a lower-temperature one, as described by the second law of thermodynamics (the Clausius statement). Where there is a temperature difference between objects in proximity, heat transfer between them can never be stopped; it can only be slowed.
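The approach to thermal equilibrium can be illustrated with a minimal sketch, assuming two lumped bodies that exchange heat at a rate proportional to their temperature difference; the heat capacities, exchange coefficient and starting temperatures are invented for illustration only.

# Two lumped bodies exchanging heat until they reach a common temperature.
# Heat always flows from the hotter body to the cooler one, never the reverse.
C_hot, C_cold = 500.0, 800.0     # heat capacities, J/K (assumed values)
T_hot, T_cold = 90.0, 20.0       # initial temperatures, degrees C
k, dt = 2.0, 1.0                 # exchange coefficient W/K, time step s

for step in range(5000):
    q = k * (T_hot - T_cold) * dt          # heat transferred this step, J
    T_hot -= q / C_hot
    T_cold += q / C_cold
    if T_hot - T_cold < 0.01:              # effectively at equilibrium
        break

print(f"equilibrium after about {step} s at roughly {T_hot:.1f} C")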
Conduction is the transfer of heat by direct contact of particles of matter. The energy may be transferred primarily by elastic impact, as in fluids; by free-electron diffusion, as is predominant in metals; or by phonon vibration, as is predominant in insulators. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from atom to atom. Conduction is greater in solids, where atoms are in constant contact. In liquids (except liquid metals) and gases, the molecules are usually further apart, giving a lower chance of molecules colliding and passing on thermal energy.
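Steady one-dimensional conduction is commonly quantified by Fourier's law, q = k·A·ΔT/L, where k is the material's thermal conductivity. The sketch below applies it to a slab; the conductivity values are typical textbook figures and the geometry is an assumption chosen for illustration.

# Steady one-dimensional conduction through a slab: q = k * A * dT / L
def conduction_watts(k, area_m2, delta_T, thickness_m):
    return k * area_m2 * delta_T / thickness_m

# Approximate thermal conductivities, W/(m*K)
materials = {"copper": 400.0, "window glass": 1.0, "air (still)": 0.026}

# Assumed geometry: 1 m^2 slab, 10 mm thick, 20 K temperature difference.
for name, k in materials.items():
    q = conduction_watts(k, 1.0, 20.0, 0.01)
    print(f"{name:12s}: {q:10.1f} W")

The large spread between copper and still air in this sketch reflects the points made below about metals versus gases.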
Heat conduction is directly analogous to the diffusion of particles into a fluid in the situation where there are no fluid currents. This type of heat diffusion differs from mass diffusion in behaviour only insofar as it can occur in solids, whereas mass diffusion is mostly limited to fluids.
Metals (e.g. copper, platinum, gold, iron, etc.) are usually the best conductors of thermal energy. This is due to the way that metals are chemically bonded: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons which are able to transfer thermal energy rapidly through the metal.
As density decreases, so does conduction; therefore, fluids (and especially gases) are less conductive. This is due to the large distance between atoms in a gas: fewer collisions between atoms means less conduction. Conductivity of gases increases with temperature. Conductivity also increases with pressure, from vacuum up to a critical point at which the density of the gas is such that its molecules may be expected to collide with each other before they transfer heat from one surface to another. Beyond this point, conductivity increases only slightly with increasing pressure and density.

Wednesday, December 9, 2009

Passive analogue filter development

Analogue filters are a basic building block of signal processing, much used in electronics. Amongst their many applications are the separation of an audio signal before application to bass, mid-range and tweeter loudspeakers; the combining and later separation of multiple telephone conversations onto a single channel; and the selection of a chosen radio station in a radio receiver and rejection of others. Passive linear electronic analogue filters are those filters which can be described with linear differential equations (linear); they are composed of capacitors, inductors and, sometimes, resistors (passive), and are designed to operate on continuously varying (analogue) signals. There are many linear filters which are not analogue in implementation (digital filters), and there are many electronic filters which may not have a passive topology, both of which may have the same transfer function as the filters described in this article. Analogue filters are most often used in wave filtering applications, that is, where it is required to pass particular frequency components and to reject others from analogue (continuous-time) signals.
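As a small numerical illustration of wave filtering, the sketch below evaluates the magnitude response of a first-order passive RC low-pass filter, |H(jω)| = 1/√(1 + (ωRC)²), which passes frequencies well below the cut-off 1/(2πRC) and attenuates those above it. The component values are arbitrary assumptions chosen for illustration.

import math

# First-order passive RC low-pass filter: H(jw) = 1 / (1 + j*w*R*C)
R = 1000.0        # ohms (assumed)
C = 100e-9        # farads (assumed) -> cut-off near 1.6 kHz

f_cutoff = 1.0 / (2.0 * math.pi * R * C)
print(f"cut-off frequency: {f_cutoff:.0f} Hz")

for f in (100, 1000, 10000, 100000):          # hertz
    w = 2.0 * math.pi * f
    mag = 1.0 / math.sqrt(1.0 + (w * R * C) ** 2)
    print(f"{f:>7} Hz: |H| = {mag:.3f} ({20 * math.log10(mag):6.1f} dB)")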
Analogue filters have played an important part in the development of electronics. Especially in the field of telecommunications, filters have been of crucial importance in a number of technological breakthroughs and have been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that the early development of filters was intimately connected with transmission lines. Transmission line theory gave rise to filter theory, which initially took a very similar form, and the main application of filters was for use on telecommunication transmission lines. However, the arrival of network synthesis techniques greatly enhanced the degree of control of the designer.

Today, it is often preferred to carry out filtering in the digital domain, where complex algorithms are much easier to implement, but analogue filters do still find applications, especially for low-order simple filtering tasks, and are often still the norm at higher frequencies where digital technology is still impractical, or at least less cost effective. Wherever possible, and especially at low frequencies, analogue filters are now implemented in an active filter topology in order to avoid the wound components required by a passive topology.

It is possible to design linear analogue mechanical filters using mechanical components which filter mechanical vibrations or acoustic waves. While there are few applications for such devices in mechanics per se, they can be used in electronics with the addition of transducers to convert to and from the electrical domain. Indeed, some of the earliest ideas for filters were acoustic resonators, because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in inductors, capacitors and resistors respectively.

Saturday, December 5, 2009

Transistor

A transistor is a semiconductor device commonly used to amplify or switch electronic signals. A transistor is made of a solid piece of a semiconductor material, with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current flowing through another pair of terminals. Because the controlled (output) power can be much more than the controlling (input) power, the transistor provides amplification of a signal. Some transistors are packaged individually but most are found in integrated circuits.
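A minimal numerical sketch of the amplification idea: in a bipolar transistor the controlled collector current is roughly the controlling base current multiplied by a current gain β, so a small input current sets a much larger output current. The gain and current values below are assumptions chosen purely for illustration.

# Small input (base) current controls a much larger output (collector) current.
beta = 100.0            # assumed current gain of the transistor
i_base = 20e-6          # controlling current: 20 microamps (assumed)

i_collector = beta * i_base
print(f"base current:      {i_base * 1e6:.0f} uA")
print(f"collector current: {i_collector * 1e3:.1f} mA  (x{beta:.0f} larger)")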
History:-

Physicist Julius Edgar Lilienfeld filed the first patent for a transistor in Canada in 1925, describing a device similar to a field-effect transistor, or "FET". However, Lilienfeld did not publish any research articles about his devices, and in 1934 the German inventor Oskar Heil patented a similar device. In 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States observed that when electrical contacts were applied to a crystal of germanium, the output power was larger than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors, and thus could be described as the "father of the transistor". The term itself was coined by John R. Pierce. According to physicist and historian Robert Arns, legal papers from the Bell Labs patent show that William Shockley and Gerald Pearson had built operational versions from Lilienfeld's patents, yet they never referenced this work in any of their later research papers or historical articles. The first silicon transistor was produced by Texas Instruments in 1954.[5] This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. The first MOS transistor actually built was by Kahng and Atalla at Bell Labs in 1960.
Importance:-
The transistor is considered by many to be one of the greatest inventions of the twentieth century. The transistor is the key active component in practically all modern electronics. Its importance in today's society rests on its ability to be mass produced using a highly automated process (fabrication) that achieves astonishingly low per-transistor costs. Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors produced are in integrated circuits (often shortened to ICs, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors, whereas an advanced microprocessor, as of 2006, can use as many as 1.7 billion transistors (MOSFETs). "About 60 million transistors were built this year [2002] ... for [each] man, woman, and child on Earth." The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.
Uses:-
The bipolar junction transistor, or BJT, was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as simple amplifiers, because of its greater linearity and ease of manufacture. Desirable properties of MOSFETs, such as their utility in low-power devices, usually in the CMOS configuration, allowed them to capture nearly all market share for digital circuits; more recently, MOSFETs have captured most analog and power applications as well, including modern clocked analog circuits, voltage regulators, amplifiers, power transmitters, motor drivers, etc.

Electrical impedance

Electrical impedance, or simply impedance, is a measure of opposition to alternating current (AC). Electrical impedance extends the concept of resistance to AC circuits, describing not only the relative amplitudes of the voltage and current, but also the relative phases. When the circuit is driven with direct current (DC) there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle. The symbol for impedance is usually Z, and it may be represented by writing its magnitude and phase in the form |Z|∠θ. However, complex number representation is more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886, and Arthur Kennelly was the first to represent impedance with complex numbers, in 1893. Impedance is defined as the frequency-domain ratio of the voltage to the current; in other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. In general, impedance is a complex number, but it has the same units as resistance, for which the SI unit is the ohm. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current.
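The complex-number representation is easy to demonstrate: Python's built-in complex type can combine a resistor, inductor and capacitor in series as Z = R + jωL + 1/(jωC) and report the magnitude and phase. The component values and drive frequency below are illustrative assumptions.

import cmath, math

# Series RLC branch: Z = R + jwL + 1/(jwC) at angular frequency w = 2*pi*f.
def series_rlc_impedance(R, L, C, f):
    w = 2.0 * math.pi * f
    return R + 1j * w * L + 1.0 / (1j * w * C)

# Assumed component values: 50 ohm, 10 mH, 1 uF, driven at 1 kHz.
Z = series_rlc_impedance(50.0, 10e-3, 1e-6, 1000.0)
magnitude, phase = cmath.polar(Z)
print(f"Z = {Z.real:.1f} {Z.imag:+.1f}j ohm")
print(f"|Z| = {magnitude:.1f} ohm, phase = {math.degrees(phase):.1f} degrees")

The polar form returned by cmath.polar is exactly the magnitude-and-angle representation Z∠θ described above.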

Friday, December 4, 2009

Hall effect sensor

A Hall effect sensor is a transducer that varies its output voltage in response to changes in magnetic field. Hall sensors are used for proximity switching, positioning, speed detection, and current-sensing applications. In its simplest form, the sensor operates as an analogue transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined, and using groups of sensors, the relative position of the magnet can be deduced.

Electricity carried through a conductor produces a magnetic field that varies with current, and a Hall sensor can be used to measure the current without interrupting the circuit. Typically, the sensor is integrated with a wound core or permanent magnet that surrounds the conductor to be measured. Frequently, a Hall sensor is combined with circuitry that allows the device to act in a digital (on/off) mode, and may be called a switch in this configuration. Commonly seen in industrial applications such as pneumatic cylinders, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. When high reliability is required, they are used in keyboards.

Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing or tachometers. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel carrying two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disc drives.
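A small sketch of the speed-measurement use just described, assuming the sensor gives one pulse per magnet passing (two magnets per revolution, as in the example above), so shaft speed follows directly from the pulse rate. The pulse count and gate interval are invented for illustration.

# Shaft speed from Hall-sensor pulses: two magnets -> two pulses per revolution.
def rpm_from_pulses(pulse_count, interval_s, pulses_per_rev=2):
    revolutions = pulse_count / pulses_per_rev
    return revolutions / interval_s * 60.0

# Assumed: 150 pulses counted over a 1.5 s gate interval.
print(f"shaft speed: {rpm_from_pulses(150, 1.5):.0f} rpm")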
Hall probe:-
A Hall probe contains an indium compound crystal mounted on an aluminum backing plate and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box.

When the Hall probe is held so that the magnetic field lines are passing at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect is seen when a conductor is passed through a uniform magnetic field. The natural drift of the charge carriers causes the magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these charge carriers. The result is a charge separation, with a build-up of either positive or negative charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field.

A Hall probe is sensitive enough to measure the Earth's magnetic field. It must be held so that the Earth's field lines are passing directly through it. It is then rotated quickly so the field lines pass through the sensor in the opposite direction. The change in the flux density reading is double the Earth's magnetic flux density. A Hall probe must first be calibrated against a known value of magnetic field strength. For a solenoid, the Hall probe is placed in the center.
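The Hall voltage itself follows from the charge separation described above: for a plate of thickness t carrying current I in a perpendicular field B, V_H = I·B/(n·q·t), where n is the charge-carrier density and q the elementary charge. The sketch below plugs in rough, assumed values for a thin semiconductor plate; they are illustrative only.

# Hall voltage for a thin plate: V_H = I * B / (n * q * t)
Q_E = 1.602e-19          # elementary charge, coulombs

def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m):
    return current_a * field_t / (carrier_density_m3 * Q_E * thickness_m)

# Assumed: 5 mA through a 0.5 mm thick plate with carrier density
# 1e21 per cubic metre, placed in a 0.1 T field.
v_h = hall_voltage(5e-3, 0.1, 1e21, 0.5e-3)
print(f"Hall voltage: {v_h * 1e3:.2f} mV")

The low carrier density of semiconductors compared with metals is what makes the Hall voltage large enough to measure easily, which is why probe crystals are semiconductor compounds rather than copper.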

Thursday, December 3, 2009

PID controller

A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then applying a corrective action that adjusts the process accordingly and rapidly, keeping the error to a minimum.
General:-

The PID controller calculation (algorithm) involves three separate parameters: the proportional, the integral and the derivative values. The proportional value determines the reaction to the current error, the integral value determines the reaction based on the sum of recent errors, and the derivative value determines the reaction based on the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supply of a heating element. By tuning the three constants in the PID controller algorithm, the controller can provide control action designed for specific process requirements. The response of the controller can be described in terms of the responsiveness of the controller to an error, the degree to which the controller overshoots the setpoint, and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.
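A minimal discrete-time sketch of the three-term calculation just described: at each time step the controller output is the weighted sum of the current error (proportional), the accumulated error (integral) and the error's rate of change (derivative). The gains and the toy first-order heater process below are assumptions chosen purely for illustration, not a tuned design.

# Minimal discrete PID controller: output = Kp*e + Ki*sum(e)*dt + Kd*de/dt
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order process (a heater): temperature drifts toward ambient
# and rises with applied power. All numbers are illustrative assumptions.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
temperature, setpoint = 20.0, 50.0
for step in range(300):
    power = pid.update(setpoint, temperature)
    temperature += (0.05 * power - 0.02 * (temperature - 20.0)) * 0.1

print(f"temperature after 30 s: {temperature:.1f} (setpoint {setpoint})")

Setting ki or kd to zero in this sketch gives the P, PI or PD behaviour discussed in the next paragraph.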
Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of undesired control outputs to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are particularly common, since derivative action is very sensitive to measurement noise, and the absence of an integral value may prevent the system from reaching its target value due to the control action.
Note: Due to the diversity of the field of control theory and application, many naming conventions for the relevant variables are in common use.
History:-
A familiar example of a control loop is the action taken to keep one's shower water at the ideal temperature, which typically involves the mixing of two process streams, cold and hot water. The person feels the water to estimate its temperature. Based on this measurement they perform a control action: use the cold water tap to adjust the process. The person would repeat this input-output control loop, adjusting the hot water flow until the process temperature stabilized at the desired value.

Feeling the water temperature is taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The output from the controller and input to the process (the tap position) is called the manipulated variable (MV). The difference between the measurement and the setpoint is the error (e): too hot or too cold, and by how much.

As a controller, one decides roughly how much to change the tap position (MV) after one determines the temperature (PV), and therefore the error. This first estimate is the equivalent of the proportional action of a PID controller. The integral action of a PID controller can be thought of as gradually adjusting the temperature when it is almost right. Derivative action can be thought of as noticing the water temperature is getting hotter or colder, and how fast, anticipating further change and tempering adjustments for a soft landing at the desired temperature (SP).

Making a change that is too large when the error is small is equivalent to a high-gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the oscillations increase with time the system is unstable, whereas if they decay the system is stable. If the oscillations remain at a constant magnitude the system is marginally stable. A human would not do this because we are adaptive controllers, learning from the process history, but PID controllers do not have the ability to learn and must be set up correctly. Selecting the correct gains for effective control is known as tuning the controller.

If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances. Generally controllers are used to reject disturbances and/or implement setpoint changes. Changes in feed water temperature constitute a disturbance to the shower process.

In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists. Automobile cruise control is an example of a process which utilizes automated control.
Due to their long history, simplicity, well grounded theory and simple setup and maintenance requirements, PID controllers are the controllers of choice for many of these applications.