Thursday, November 5, 2009

Digital circuit

Digital electronics are systems that represent signals as discrete levels, rather than as a continuous range. In most cases the number of states is two, and these states are represented by two voltage levels: one near to zero volts and one at a higher level depending on the supply voltage in use. These two levels are often represented as "Low" and "High."
The fundamental advantage of digital techniques stems from the fact that it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values. Digital electronics are usually made from large assemblies of logic gates, simple electronic representations of Boolean logic functions.
Advantages:-
One advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation due to noise. For example, a continuous audio signal, transmitted as a sequence of 1s and 0s, can be reconstructed without error provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s. An hour of music can be stored on a compact disc as about 6 billion binary digits.
In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain. Computer-controlled digital systems can be controlled by software, allowing new functions to be added without changing hardware. Often this can be done outside of the factory by updating the product's software. As a result, design errors can be corrected even after the product is in a customer's hands. Information storage can be easier in digital systems than in analog ones. The noise-immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly.
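To make the resolution point concrete, here is a minimal Python sketch (an illustration added here, not part of the original article): each extra bit roughly halves the worst-case quantization error, while the hardware handling each digit stays the same kind.

    # Quantize x in [0, 1) onto 2**bits levels and measure the worst-case error.
    def quantize(x, bits):
        levels = 2 ** bits
        return int(x * levels) / levels      # value snapped to the grid

    def worst_error(bits, samples=10000):
        return max(abs(i / samples - quantize(i / samples, bits))
                   for i in range(samples))

    for bits in (8, 12, 16):
        print(bits, "bits -> worst-case error about", worst_error(bits))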
Disadvantages:-
In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat. In portable or battery-powered systems this can limit use of digital systems. For example, battery-powered cellular telephones often use a low-power analog front-end to amplify and tune in the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can be easily reprogrammed to process the signals used in new cellular standards.
Digital circuits are sometimes more expensive, especially in small quantities.
The sensed world is analog, and signals from this world are analog quantities. For example, light, temperature, sound, electrical conductivity, electric and magnetic fields are analog. Most useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist-Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal. In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change. Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing.
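The Nyquist guideline can be illustrated with a small Python sketch (numbers invented for illustration): a tone sampled at less than twice its frequency "folds" down to a lower alias frequency, one way a digital system can misrepresent an analog signal.

    def apparent_frequency(f_signal, f_sample):
        # Frequency an ideal reconstructor reports after uniform sampling.
        folded = f_signal % f_sample
        return min(folded, f_sample - folded)

    print(apparent_frequency(3000, 8000))   # 3000 Hz: faithful, 8 kHz > 2 x 3 kHz
    print(apparent_frequency(3000, 4000))   # 1000 Hz: aliased, 4 kHz < 2 x 3 kHz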
Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or at least ask for a new copy of the data. In a state-machine, the state transition logic can be designed to catch unused states and trigger a reset sequence or other error recovery routine.
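A minimal sketch of the parity-bit idea in Python (illustrative only; real systems use stronger codes): the receiver can detect any single flipped bit, although it cannot tell which bit flipped or correct it.

    def add_parity(bits):
        return bits + [sum(bits) % 2]    # append a bit so the count of 1s is even

    def parity_ok(word):
        return sum(word) % 2 == 0

    word = add_parity([1, 0, 1, 1])
    print(parity_ok(word))               # True: no error
    word[2] ^= 1                         # one bit flips in transit
    print(parity_ok(word))               # False: the error is detected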
Embedded software designs can employ immunity-aware programming techniques, such as filling unused program memory with interrupt instructions that point to an error recovery routine. This helps guard against failures that corrupt the microcontroller's instruction pointer, which could otherwise cause random code to be executed. Digital memory and transmission systems can use techniques such as error detection and correction to use additional data to correct any errors in transmission and storage. On the other hand, some techniques used in digital systems make those systems more vulnerable to single-bit errors. These techniques are acceptable when the underlying bits are reliable enough that such errors are highly unlikely.
A single-bit error in audio data stored directly as linear pulse code modulation (such as on a CD-ROM) causes, at worst, a single click. By contrast, many people use audio compression to save storage space and download time, even though a single-bit error may then corrupt an entire song.
Structure of digital systems:-
Engineers use many methods to minimize logic functions, in order to reduce the circuit's complexity. When the complexity is lower, the circuit has fewer errors and less electronics, and is therefore less expensive.
The most widely used simplification is a minimization algorithm like the Espresso heuristic logic minimizer within a CAD system, although historically, binary decision diagrams, an automated Quine–McCluskey algorithm, truth tables, Karnaugh Maps, and Boolean algebra have been used. Representations are crucial to an engineer's design of digital circuits. Some analysis methods only work with particular representations.
The classical way to represent a digital circuit is with an equivalent set of logic gates. Another way, often with the least electronics, is to construct an equivalent system of electronic switches (usually transistors). One of the easiest ways is to simply have a memory containing a truth table. The inputs are fed into the address of the memory, and the data outputs of the memory become the outputs.
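As a sketch of the truth-table-in-memory idea, here is a full adder written in Python as an 8-entry lookup table (my own example, not from the original article): the three input bits form the memory address, and each stored entry is the pair of output bits.

    TABLE = [
        (0, 0), (1, 0), (1, 0), (0, 1),   # addresses 0-3: (sum, carry)
        (1, 0), (0, 1), (0, 1), (1, 1),   # addresses 4-7
    ]

    def full_adder(a, b, cin):
        return TABLE[a * 4 + b * 2 + cin]   # the inputs become the address

    print(full_adder(1, 1, 0))   # (0, 1): sum bit 0, carry bit 1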
For automated analysis, these representations have digital file formats that can be processed by computer programs. Most digital engineers are very careful to select computer programs ("tools") with compatible file formats.
To choose representations, engineers consider types of digital systems. Most digital systems divide into "combinational systems" and "sequential systems." A combinational system always presents the same output when given the same inputs. It is basically a representation of a set of logic functions, as already discussed.
A sequential system is a combinational system with some of the outputs fed back as inputs. This makes the digital machine perform a "sequence" of operations. The simplest sequential system is probably a flip flop, a mechanism that represents a binary digit or "bit".
Sequential systems are often designed as state machines. In this way, engineers can design a system's gross behavior, and even test it in a simulation, without considering all the details of the logic functions.
Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once, when a "clock" signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made of well-characterized asynchronous circuits such as flip-flops, that change only when the clock changes, and which have carefully designed timing margins.
The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a "state register." Each time a clock signal ticks, the state register captures the feedback generated from the previous state of the combinational logic, and feeds it back as an unchanging input to the combinational part of the state machine. The fastest rate of the clock is set by the most time-consuming logic calculation in the combinational logic.
The state register is just a representation of a binary number. If the states in the state machine are numbered (easy to arrange), the logic function is some combinational logic that produces the number of the next state.
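A toy Python sketch of that arrangement (an invented example): the state register is a single variable, the next-state logic is a pure function of the current state, and each loop iteration stands in for one clock edge.

    def next_state(state):
        return (state + 1) % 4        # combinational logic: number of the next state

    def rollover(state):
        return state == 3             # combinational output logic

    state = 0                         # the "state register", reset to 0
    for tick in range(8):
        if rollover(state):
            print("tick", tick, ": rollover")
        state = next_state(state)     # the register loads on the clock edge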
In comparison, asynchronous systems are very hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist, and then adjust the circuit to minimize the number of such states, and force the circuit to periodically wait for all of its parts to enter a compatible state. (This is called "self-resynchronization.") Without such careful design, it is easy to accidentally produce asynchronous logic that is "unstable," that is, real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components. Certain circuits (such as the synchronizer flip-flops, switch debouncers, and the like, which allow external unsynchronized signals to enter synchronous logic circuits) are inherently asynchronous in their design and must be analyzed as such.
As of 2005, almost all digital machines are synchronous designs because it is much easier to create and verify a synchronous design—the software currently used to simulate digital machines does not yet handle asynchronous designs. However, asynchronous logic is thought to be superior, if it can be made to work, because its speed is not constrained by an arbitrary clock; instead, it simply runs at the maximum speed permitted by the propagation rates of the logic gates from which it is constructed. Building an asynchronous circuit using faster parts implicitly makes the circuit "go" faster.
More generally, many digital systems are data flow machines. These are usually designed using synchronous register transfer logic, using hardware description languages such as VHDL or Verilog.
In register transfer logic, binary numbers are stored in groups of flip flops called registers. The outputs of each register are a bundle of wires called a "bus" that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input, so that it can store a number from any one of several buses. Alternatively, the outputs of several items may be connected to a bus through buffers that can turn off the output of all of the devices except one. A sequential state machine controls when each register accepts new data from its input.
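The following Python sketch (an illustration only; a real design would be written in VHDL or Verilog as noted above) models one such register transfer: a combinational adder drives a result bus, and a multiplexer on register A's input selects which bus A loads on each clock tick.

    reg_a, reg_b = 1, 2
    for tick in range(4):
        sum_bus = reg_a + reg_b                    # combinational logic -> output bus
        select_sum = (tick % 2 == 0)               # control signal from a state machine
        reg_a = sum_bus if select_sum else reg_b   # multiplexer + register load
        print("tick", tick, "A =", reg_a, "B =", reg_b)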
In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, an asynchronous "synchronization circuit" determines when the outputs of that step are valid, and presents a signal that says, "grab the data" to the stages that use that stage's inputs. It turns out that just a few relatively simple synchronization circuits are needed.
The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry or "word" of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself.
In this way, the complex task of designing the controls of a computer is reduced to a simpler task of programming a relatively independent collection of much simpler logic machines.
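A deliberately tiny Python sketch of the microprogram idea (the fields and the program here are invented): each microword sets the control bits for one step, and the sequencer counts unless the word supplies a jump address.

    MICROPROGRAM = [
        # (load_acc, add_acc, jump_to)
        (True,  False, None),   # word 0: load the accumulator from input
        (False, True,  None),   # word 1: add the input into the accumulator
        (False, True,  1),      # word 2: add again, then jump back to word 1
    ]

    acc, mpc = 0, 0                     # accumulator, microprogram counter
    for value in [5, 3, 2, 4, 1]:       # a stream of input values
        load, add, jump = MICROPROGRAM[mpc]
        if load:
            acc = value
        if add:
            acc += value
        mpc = jump if jump is not None else mpc + 1
        print("acc =", acc)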
Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way for some purpose. Computer architects have applied large amounts of ingenuity to computer design to reduce the cost and increase the speed and immunity to programming errors of computers. An increasingly common goal is to reduce the power used in a battery-powered computer system, such as a cell-phone. Many computer architects serve an extended apprenticeship as microprogrammers.
"Specialized computers" are usually a conventional computer with a special-purpose microprogram.
Automated design tools:-
To save costly engineering effort, much of the effort of designing large logic machines has been automated. The computer programs are called "electronic design automation tools" or just "EDA."
Simple truth table-style descriptions of logic are often optimized with EDA that automatically produces reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer. Most practical algorithms for optimizing large logic systems use algebraic manipulations or binary decision diagrams, and there are promising experiments with genetic algorithms and annealing optimizations.

To automate costly engineering processes, some EDA can take state tables that describe state machines and automatically produce a truth table or a function table for the combinatorial part of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and the associated output signals.
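For a small, concrete taste of automated minimization, the sympy Python library offers SOPform, a Quine-McCluskey-style minimizer (Espresso itself is a separate tool, so this is only an analogous demonstration, and assumes sympy is installed):

    from sympy import symbols
    from sympy.logic import SOPform

    a, b, c = symbols("a b c")
    minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # majority function
    print(SOPform([a, b, c], minterms))   # -> (a & b) | (a & c) | (b & c)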
It is common for the function tables of such computer-generated state-machines to be optimized with logic-minimization software such as Minilog.
Often, real logic systems are designed as a series of sub-projects, which are combined using a "tool flow." The tool flow is usually a "script," a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions to draw the transistors and wires on an integrated circuit or a printed circuit board.
Parts of tool flows are "debugged" by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs, and highlight discrepancies between the simulated behavior and the expected behavior.
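In miniature, such a check looks like the following Python sketch (the vectors and the "simulated" function are invented for illustration):

    test_vectors = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR

    def simulated_logic(a, b):
        return a ^ b            # stand-in for the real simulated design

    for inputs, expected in test_vectors.items():
        got = simulated_logic(*inputs)
        if got != expected:
            print("mismatch at", inputs, "- expected", expected, "got", got)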
Once the input data is believed correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, and then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.
The functional verification data are usually called "test vectors." The functional test vectors may be preserved and used in the factory to test that newly constructed logic works correctly. However, functional test patterns don't discover common fabrication faults. Production tests are often designed by software tools called "test pattern generators." These generate test vectors by examining the structure of the logic and systematically generating tests for particular faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).

Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Manufacturability software adds interference patterns to the exposure masks to eliminate open-circuits, and enhance the masks' resolution and contrast.

Tuesday, November 3, 2009

Magnetism

In physics, the term magnetism is used to describe how materials respond on the microscopic level to an applied magnetic field, and to categorize the magnetic phase of a material. For example, the most well-known form of magnetism is ferromagnetism, in which some materials produce their own persistent magnetic field. However, all materials are influenced to a greater or lesser degree by the presence of a magnetic field. Some are attracted to a magnetic field (paramagnetism); others are repelled by a magnetic field (diamagnetism); others have a much more complex relationship with an applied magnetic field. Substances that are negligibly affected by magnetic fields are known as non-magnetic substances. They include copper, aluminium, water, and gases.
The magnetic state (or phase) of a material depends on temperature (and other variables such as pressure and applied magnetic field) so that a material may exhibit more than one form of magnetism depending on its temperature, etc.
History:-
Aristotle attributes the first of what could be called a scientific discussion of magnetism to Thales, who lived from about 625 BC to about 545 BC. Around the same time in ancient India, the surgeon Sushruta was the first to make use of the magnet for surgical purposes.
In ancient China, the earliest literary reference to magnetism lies in a 4th century BC book called Book of the Devil Valley Master (鬼谷子): "The lodestone makes iron come or it attracts it." The earliest mention of the attraction of a needle appears in a work composed between AD 20 and 100 (Louen-heng): "A lodestone attracts a needle." The ancient Chinese scientist Shen Kuo (1031-1095) was the first person to write of the magnetic needle compass and noted that it improved the accuracy of navigation by employing the astronomical concept of true north (Dream Pool Essays, AD 1088), and by the 12th century the Chinese were known to use the lodestone compass for navigation.
Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the Epistola de magnete, the first extant treatise describing the properties of magnets. In 1282, the properties of magnets and the dry compass were discussed by Al-Ashraf, a Yemeni physicist, astronomer and geographer.
In 1600, William Gilbert published his De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). In this work he describes many of his experiments with his model earth called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north (previously, some believed that it was the pole star (Polaris) or a large magnetic island on the north pole that attracted the compass).
An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Oersted, a professor at the University of Copenhagen, who discovered more or less by accident that an electric current could influence a compass needle. This landmark experiment is known as Oersted's Experiment. Several other experiments followed, with André-Marie Ampère, Carl Friedrich Gauss, Michael Faraday, and others finding further links between magnetism and electricity. James Clerk Maxwell synthesized and expanded these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. In 1905, Einstein used these laws in motivating his theory of special relativity, requiring that the laws held true in all inertial reference frames.
Electromagnetism has continued to develop into the twenty-first century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model.
Sources of magnetism:-
There exists a close connection between angular momentum and magnetism, expressed on a macroscopic scale in the Einstein-de Haas effect ("rotation by magnetization") and its inverse, the Barnett effect ("magnetization by rotation"). At the atomic and sub-atomic scales, this connection is expressed by the ratio of magnetic moment to angular momentum, the gyromagnetic ratio.

Magnetism, at its root, arises from two sources: electric currents (or, more generally, moving electric charges) create magnetic fields (see Maxwell's equations), and many particles have nonzero "intrinsic" (or "spin") magnetic moments. (Just as each particle, by its nature, has a certain mass and charge, each has a certain magnetic moment, possibly zero.)

In magnetic materials, the most important sources of magnetization are, more specifically, the electrons' orbital angular motion around the nucleus, and the electrons' intrinsic magnetic moment (see electron magnetic dipole moment). The other potential sources of magnetism are much less important: for example, the nuclear magnetic moments of the nuclei in the material are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. (Nuclear magnetic moments are important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI).)
Ordinarily, the countless electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments (as a result of the Pauli exclusion principle; see electron configuration), or combining into "filled subshells" with zero net orbital motion; in both cases, the arrangement is such that the magnetic moments from the electrons exactly cancel. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions, so that the material will not be magnetic.

However, sometimes (either spontaneously, or owing to an applied external magnetic field) the electron magnetic moments will, on average, be lined up. Then the material can produce a net total magnetic field, which can potentially be quite strong.
The magnetic behavior of a material depends on its structure (particularly its electron configuration, for the reasons mentioned above), and also on the temperature (at high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment).



Monday, November 2, 2009

Laser diode

A laser diode is a laser where the active medium is a semiconductor similar to that found in a light-emitting diode. The most common and practical type of laser diode is formed from a p-n junction and powered by injected electric current. These devices are sometimes referred to as injection laser diodes to distinguish them from (optically) pumped laser diodes, which are more easily manufactured in the laboratory.
Theory of operation:-
A laser diode, like many other semiconductor devices, is formed by doping a very thin layer on the surface of a crystal wafer. The crystal is doped to produce an n-type region and a p-type region, one above the other, resulting in a p-n junction, or diode.
The many types of diode lasers known today collectively form a subset of the larger classification of semiconductor p-n junction diodes. Just as in any semiconductor p-n junction diode, forward electrical bias causes the two species of charge carrier - holes and electrons - to be "injected" from opposite sides of the p-n junction into the depletion region, situated at its heart. Holes are injected from the p-doped, and electrons from the n-doped, semiconductor. (A depletion region, devoid of any charge carriers, forms automatically and unavoidably as a result of the difference in chemical potential between n- and p-type semiconductors wherever they are in physical contact.)

As charge injection is a distinguishing feature of diode lasers as compared to all other lasers, diode lasers are traditionally and more formally called "injection lasers." (This terminology differentiates diode lasers, e.g., from flashlamp-pumped solid state lasers, such as the ruby laser. Interestingly, whereas the term "solid-state" was extremely apt in differentiating 1950s-era semiconductor electronics from earlier generations of vacuum electronics, it would not have been adequate to convey unambiguously the unique characteristics defining 1960s-era semiconductor lasers.)

When an electron and a hole are present in the same region, they may recombine or "annihilate" with the result being spontaneous emission — i.e., the electron may re-occupy the energy state of the hole, emitting a photon with energy equal to the difference between the electron and hole states involved. (In a conventional semiconductor junction diode, the energy released from the recombination of electrons and holes is carried away as phonons, i.e., lattice vibrations, rather than as photons.) Spontaneous emission gives the laser diode below lasing threshold similar properties to an LED. Spontaneous emission is necessary to initiate laser oscillation, but it is one among several sources of inefficiency once the laser is oscillating.
The difference between the photon-emitting semiconductor laser (or LED) and conventional phonon-emitting (non-light-emitting) semiconductor junction diodes lies in the use of a different type of semiconductor, one whose physical and atomic structure confers the possibility for photon emission. These photon-emitting semiconductors are the so-called "direct bandgap" semiconductors. Silicon and germanium, which are single-element semiconductors, have bandgaps that do not align in the way needed to allow photon emission and are not considered "direct." Other materials, the so-called compound semiconductors, have virtually identical crystalline structures to silicon or germanium but use alternating arrangements of two different atomic species in a checkerboard-like pattern to break the symmetry. The transition between the materials in the alternating pattern creates the critical "direct bandgap" property. Gallium arsenide, indium phosphide, gallium antimonide, and gallium nitride are all examples of compound semiconductor materials that can be used to create junction diodes that emit light.
In the absence of stimulated emission (e.g., lasing) conditions, electrons and holes may coexist in proximity to one another, without recombining, for a certain time, termed the "upper-state lifetime" or "recombination time" (about a nanosecond for typical diode laser materials), before they recombine. Then a nearby photon with energy equal to the recombination energy can cause recombination by stimulated emission. This generates another photon of the same frequency, travelling in the same direction, with the same polarization and phase as the first photon. This means that stimulated emission causes gain in an optical wave (of the correct wavelength) in the injection region, and the gain increases as the number of electrons and holes injected across the junction increases. The spontaneous and stimulated emission processes are vastly more efficient in direct bandgap semiconductors than in indirect bandgap semiconductors; therefore silicon is not a common material for laser diodes.
As in other lasers, the gain region is surrounded with an optical cavity to form a laser. In the simplest form of laser diode, an optical waveguide is made on that crystal surface, such that the light is confined to a relatively narrow line. The two ends of the crystal are cleaved to form perfectly smooth, parallel edges, forming a Fabry-Perot resonator. Photons emitted into a mode of the waveguide will travel along the waveguide and be reflected several times from each end face before they are emitted. As a light wave passes through the cavity, it is amplified by stimulated emission, but light is also lost due to absorption and by incomplete reflection from the end facets. Finally, if there is more amplification than loss, the diode begins to "lase".
Some important properties of laser diodes are determined by the geometry of the optical cavity. Generally, in the vertical direction, the light is contained in a very thin layer, and the structure supports only a single optical mode in the direction perpendicular to the layers. In the lateral direction, if the waveguide is wide compared to the wavelength of light, then the waveguide can support multiple lateral optical modes, and the laser is known as "multi-mode". These laterally multi-mode lasers are adequate in cases where one needs a very large amount of power, but not a small diffraction-limited beam; for example in printing, activating chemicals, or pumping other types of lasers.
In applications where a small focused beam is needed, the waveguide must be made narrow, on the order of the optical wavelength. This way, only a single lateral mode is supported and one ends up with a diffraction-limited beam. Such single spatial mode devices are used for optical storage, laser pointers, and fiber optics. Note that these lasers may still support multiple longitudinal modes, and thus can lase at multiple wavelengths simultaneously.
The wavelength emitted is a function of the band-gap of the semiconductor and the modes of the optical cavity. In general, the maximum gain will occur for photons with energy slightly above the band-gap energy, and the modes nearest the gain peak will lase most strongly. If the diode is driven strongly enough, additional side modes may also lase. Some laser diodes, such as most visible lasers, operate at a single wavelength, but that wavelength is unstable and changes due to fluctuations in current or temperature.
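The band-gap-to-wavelength relationship can be made concrete with a short Python sketch using the photon-energy relation lambda ≈ h·c / E_gap (the band-gap figures below are approximate and for illustration only):

    H = 6.626e-34        # Planck constant, J*s
    C = 2.998e8          # speed of light, m/s
    EV = 1.602e-19       # joules per electron-volt

    def emission_wavelength_nm(bandgap_ev):
        return H * C / (bandgap_ev * EV) * 1e9

    print(round(emission_wavelength_nm(1.42)))   # GaAs: ~873 nm, infrared
    print(round(emission_wavelength_nm(3.4)))    # GaN: ~365 nm, near-ultraviolet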
Due to diffraction, the beam diverges (expands) rapidly after leaving the chip, typically at 30 degrees vertically by 10 degrees laterally. A lens must be used in order to form a collimated beam like that produced by a laser pointer. If a circular beam is required, cylindrical lenses and other optics are used. For single spatial mode lasers, using symmetrical lenses, the collimated beam ends up being elliptical in shape, due to the difference in the vertical and lateral divergences. This is easily observable with a red laser pointer.
The simple diode described above has been heavily modified in recent years to accommodate modern technology, resulting in a variety of types of laser diodes, as described below.
Laser diode types:-
The simple laser diode structure, described above, is extremely inefficient. Such devices require so much power that they can only achieve pulsed operation without damage. Although historically important and easy to explain, such devices are not practical.
Applications of laser diodes:-
Laser diodes are numerically the most common type of laser, with 2004 sales of approximately 733 million diode lasers, as compared to 131,000 of other types of lasers.

Laser diodes find wide use in telecommunication as easily modulated and easily coupled light sources for fiber optics communication. They are used in various measuring instruments, e.g. rangefinders. Another common use is in barcode readers. Visible lasers, typically red but later also green, are common as laser pointers. Both low and high-power diodes are used extensively in the printing industry, both as light sources for scanning (input) of images and for very high-speed and high-resolution printing plate (output) manufacturing. Infrared and red laser diodes are common in CD players, CD-ROMs and DVD technology. Violet lasers are used in HD DVD and Blu-ray technology. Diode lasers have also found many applications in laser absorption spectrometry (LAS) for high-speed, low-cost assessment or monitoring of the concentration of various species in gas phase. High-power laser diodes are used in industrial applications such as heat treating, cladding, seam welding and for pumping other lasers, such as diode-pumped solid state lasers.

Applications of laser diodes can be categorized in various ways. Most applications could be served by larger solid state lasers or optical parametric oscillators, but the low cost of mass-produced diode lasers makes them essential for mass-market applications. Diode lasers can be used in a great many fields; since light has many different properties (power, wavelength and spectral quality, beam quality, polarization, etc.) it is interesting to classify applications by these basic properties.

Many applications of diode lasers primarily make use of the "directed energy" property of an optical beam. In this category one might include laser printers, bar-code readers, image scanning, illuminators, designators, optical data recording, combustion ignition, laser surgery, industrial sorting, industrial machining, and directed energy weaponry. Some of these applications are emerging while others are well-established.

Laser medicine: medicine and especially dentistry have found many new applications for diode lasers. The shrinking size of the units and their increasing user-friendliness makes them very attractive to clinicians for minor soft-tissue procedures. The 800 nm - 980 nm units are strongly absorbed by hemoglobin, which makes them ideal for soft-tissue applications where good hemostasis is necessary.

Applications which may today or in the future make use of the coherence of diode-laser-generated light include interferometric distance measurement, holography, coherent communications, and coherent control of chemical reactions.
Applications which may make use of "narrow spectral" properties of diode lasers include range-finding, telecommunications, infra-red countermeasures, spectroscopic sensing, generation of radio-frequency or terahertz waves, atomic clock state preparation, quantum key cryptography, frequency doubling and conversion, water purification (in the UV), and photodynamic therapy (where a particular wavelength of light would cause a substance such as porphyrin to become chemically active as an anti-cancer agent only where the tissue is illuminated by light).
Applications where the desired quality of laser diodes is their ability to generate ultra-short pulses of light by the technique known as "mode-locking" include clock distribution for high-performance integrated circuits, high-peak-power sources for laser-induced breakdown spectroscopy sensing, arbitrary waveform generation for radio-frequency waves, photonic sampling for analog-to-digital conversion, and optical code-division-multiple-access systems for secure communication.

Kirchhoff's circuit laws

Kirchhoff's circuit laws are two equalities that deal with the conservation of charge and energy in electrical circuits, and were first described in 1845 by Gustav Kirchhoff. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws (see also Kirchhoff's laws for other meanings of that term).
Both circuit rules can be directly derived from Maxwell's equations, but Kirchhoff preceded Maxwell and instead generalized work by Georg Ohm.
KCL (Kirchhoff's current law):-
This law is also called Kirchhoff's point rule, Kirchhoff's junction rule (or nodal rule), and Kirchhoff's first rule. The principle of conservation of electric charge implies that:
At any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. Adopting the convention that every current flowing towards the node is positive and that every current flowing away is negative (or the other way around), this principle can be stated as:

I1 + I2 + ... + In = 0

where n is the total number of branches with currents flowing towards or away from the node. This formula is also valid for complex (phasor) currents:

Ĩ1 + Ĩ2 + ... + Ĩn = 0
The law is based on the conservation of charge whereby the charge (measured in coulombs) is the product of the current (in amps) and the time (which is measured in seconds).
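Computationally, applying KCL at a node is just a signed sum, as in this small Python sketch (currents invented for illustration): currents into the node are taken as positive, currents out as negative.

    node_currents = [2.0, 3.0, -1.5, -3.5]     # amps, sign encodes direction

    assert abs(sum(node_currents)) < 1e-9, "KCL violated at this node"
    print("net current into node:", sum(node_currents))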
Changing charge density:-
Physically speaking, the restriction regarding the "capacitor plate" means that Kirchhoff's current law is only valid if the charge density remains constant in the point that it is applied to. This is normally not a problem because of the strength of electrostatic forces: the charge buildup would cause repulsive forces to disperse the charges.
However, a charge build-up can occur in a capacitor, where the charge is typically spread over wide parallel plates, with a physical break in the circuit that prevents the positive and negative charge accumulations over the two plates from coming together and cancelling. In this case, the sum of the currents flowing into one plate of the capacitor is not zero, but rather is equal to the rate of charge accumulation. However, if the displacement current dD/dt is included, Kirchhoff's current law once again holds. (This is really only required if one wants to apply the current law to a point on a capacitor plate. In circuit analyses, however, the capacitor as a whole is typically treated as a unit, in which case the ordinary current law holds since exactly the current that enters the capacitor on the one side leaves it on the other side.)
More technically, Kirchhoff's current law can be found by taking the divergence of Ampère's law with Maxwell's correction and combining with Gauss's law, yielding:

∇ · J = −∂ρ/∂t
This is simply the charge conservation equation (in integral form, it says that the current flowing out of a closed surface is equal to the rate of loss of charge within the enclosed volume (Divergence theorem)). Kirchhoff's current law is equivalent to the statement that the divergence of the current is zero, true for time-invariant ρ, or always true if the displacement current is included with J.

Coulomb's law

Coulomb's law, sometimes called the Coulomb law, is an equation describing the electrostatic force between electric charges. It was studied and first published in the 1780s by French physicist Charles Augustin de Coulomb and was essential to the development of the theory of electromagnetism. Nevertheless, the dependence of the electric force on distance (the inverse square law) had been proposed previously by Joseph Priestley, and the dependence on both distance and charge had been discovered, but not published, by Henry Cavendish, prior to Coulomb's works.
Scalar form:-
The scalar form of Coulomb's law will only describe the magnitude of the electrostatic force between two electric charges. If direction is required, then the vector form is required as well. The magnitude of the electrostatic force (F) on a charge (q1) due to the presence of a second charge (q2) is given by

F = ke·q1·q2 / r^2
where r is the distance between the two charges and ke a proportionality constant. A positive force implies a repulsive interaction, while a negative force implies an attractive interaction.
The proportionality constant ke, called Coulomb's constant (sometimes called Coulomb's force constant), is related to the properties of space and can be calculated exactly:

ke = 1/(4πε0) ≈ 8.988 × 10^9 N·m^2·C^−2
In SI units the speed of light in vacuum, denoted c0, is defined as 299,792,458 m·s^−1, and the magnetic constant (μ0) is defined as 4π × 10^−7 H·m^−1, leading to the definition of the electric constant (ε0) as ε0 = 1/(μ0·c0^2) ≈ 8.854187817 × 10^−12 F·m^−1. In cgs units, the unit of charge, the esu of charge or statcoulomb, is defined so that this Coulomb force constant is 1.
This formula says that the magnitude of the force is directly proportional to the magnitude of the charges of each object and inversely proportional to the square of the distance between them. The exponent in Coulomb's Law has been found to differ from −2 by less than one in a billion.
When measured in units that people commonly use (such as SI—see International System of Units), the electrostatic force constant (ke) is numerically much larger than the universal gravitational constant (G). This means that for objects with charge on the order of a unit charge (C) and mass on the order of a unit mass (kg), the electrostatic forces will be so much larger than the gravitational forces that the latter can be ignored. This is not the case when Planck units are used and both charge and mass are of the order of the unit charge and unit mass. However, charged elementary particles have mass that is far less than the Planck mass while their charge is about the Planck charge, so that, again, gravitational forces can be ignored. For example, the electrostatic force between an electron and a proton, which constitute a hydrogen atom, is almost 40 orders of magnitude greater than the gravitational force between them.
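That last figure is easy to check numerically; in the Python sketch below the separation r cancels out of the ratio, which comes out near 2 × 10^39 (constants rounded):

    KE = 8.988e9           # Coulomb constant, N*m^2/C^2
    G = 6.674e-11          # gravitational constant, N*m^2/kg^2
    Q = 1.602e-19          # elementary charge, C
    M_E, M_P = 9.109e-31, 1.673e-27   # electron and proton masses, kg

    ratio = (KE * Q * Q) / (G * M_E * M_P)
    print(ratio)           # about 2.3e39, roughly 39-40 orders of magnitude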
Coulomb's law can also be interpreted in terms of atomic units with the force expressed in Hartrees per Bohr radius, the charge in terms of the elementary charge, and the distances in terms of the Bohr radius.

Sunday, November 1, 2009

Eddy current

An eddy current (also known as a Foucault current) is an electrical phenomenon discovered by French physicist François Arago in 1824. It is caused when a conductor is exposed to a changing magnetic field due to relative motion of the field source and conductor, or due to variations of the field with time. This can cause a circulating flow of electrons, or a current, within the body of the conductor. These circulating eddies of current create induced magnetic fields that oppose the change of the original magnetic field due to Lenz's law, causing repulsive or drag forces between the conductor and the magnet. The stronger the applied magnetic field, the greater the electrical conductivity of the conductor, or the faster the field changes, the greater the currents that are developed and the greater the opposing field.
The term eddy current comes from analogous currents seen in water when dragging an oar breadthwise: localised areas of turbulence known as eddies give rise to persistent vortices.
Eddy currents, like all electric currents, generate heat as well as electromagnetic forces. The heat can be harnessed for induction heating. The electromagnetic forces can be used for levitation, creating movement, or to give a strong braking effect. Eddy currents can often be minimised by using thin plates, by laminating conductors, or by other details of conductor shape.
Explanation:-
When a conductor moves relative to the field generated by a source, electromotive forces (EMFs) can be generated around loops within the conductor. These EMFs, acting against the resistance of the material, generate currents around the loops, in accordance with Faraday's law of induction. These currents dissipate energy, and create a magnetic field that tends to oppose the changes in the field.
Eddy currents are created when a moving conductor experiences changes in the magnetic field generated by a stationary object, as well as when a stationary conductor encounters a varying magnetic field. Both effects are present when a conductor moves through a varying magnetic field, as is the case, for example, at the edges of a magnetized region. Eddy currents will be generated wherever a conducting object experiences a change in the intensity or direction of the magnetic field at any point within it, and not just at the boundaries.
The swirling current set up in the conductor is due to electrons experiencing a Lorentz force that is perpendicular to their motion. Hence, they veer to their right, or left, depending on the direction of the applied field and whether the strength of the field is increasing or declining. The resistivity of the conductor acts to damp the amplitude of the eddy currents, as well as straighten their paths. Lenz's law encapsulates the fact that the current swirls in such a way as to create an induced magnetic field that opposes the phenomenon that created it. In the case of a varying applied field, the induced field will always be in the opposite direction to that applied. The same will be true when a varying external field is increasing in strength. However, when a varying field is falling in strength, the induced field will be in the same direction as that originally applied, in order to oppose the decline.
An object or part of an object may experience steady field intensity and direction even where there is still relative motion of the field and the object (for example, at the center of a uniform field region), or unsteady fields where the currents cannot circulate due to the geometry of the conductor. In these situations charges collect on or within the object and these charges then produce static electric potentials that oppose any further current. Currents may be initially associated with the creation of static potentials, but these may be transitory and small.
Eddy currents generate resistive losses that transform some forms of energy, such as kinetic energy, into heat. In many devices, this Joule heating reduces efficiency of iron-core transformers and electric motors and other devices that use changing magnetic fields. Eddy currents are minimized in these devices by selecting magnetic core materials that have low electrical conductivity (e.g., ferrites) or by using thin sheets of magnetic material, known as laminations. Electrons cannot cross the insulating gap between the laminations and so are unable to circulate on wide arcs. Charges gather at the lamination boundaries, in a process analogous to the Hall effect, producing electric fields that oppose any further accumulation of charge and hence suppressing the eddy currents. The shorter the distance between adjacent laminations (i.e., the greater the number of laminations per unit area, perpendicular to the applied field), the greater the suppression of eddy currents.
The conversion of input energy to heat is not always undesirable, however, as there are some practical applications. One is in the brakes of some trains known as eddy current brakes. During braking, the metal wheels are exposed to a magnetic field from an electromagnet, generating eddy currents in the wheels. The eddy currents meet resistance as charges flow through the metal, thus dissipating energy as heat, and this acts to slow the wheels down. The faster the wheels are spinning, the stronger the effect, meaning that as the train slows the braking force is reduced, producing a smooth stopping motion.
Applications:-
In a fast varying magnetic field the induced currents, in good conductors, particularly copper and aluminium, exhibit diamagnetic-like repulsion effects on the magnetic field, and hence on the magnet, and can create repulsive effects and even stable levitation, albeit with reasonably high power dissipation due to the high currents this entails. They can thus be used to induce eddy currents in aluminum cans, which allows the cans to be separated easily from other recyclables. With a very strong handheld magnet, such as those made from neodymium, one can easily observe a very similar effect by rapidly sweeping the magnet over a coin with only a small separation. Depending on the strength of the magnet, identity of the coin, and separation between the magnet and coin, one may induce the coin to be pushed slightly ahead of the magnet - even if the coin contains no magnetic elements, such as the US penny.
Superconductors allow perfect, lossless conduction, which creates perpetually circulating eddy currents that are equal and opposite to the external magnetic field, thus allowing magnetic levitation. For the same reason, the magnetic field inside a superconducting medium will be exactly zero, regardless of the external applied field.

In coin-operated vending machines, eddy currents are used to detect counterfeit coins, or slugs. The coin rolls past a stationary magnet, and eddy currents slow its speed. The strength of the eddy currents, and thus the amount of slowing, depends on the conductivity of the coin's metal. Slugs are slowed to a different degree than genuine coins, and this is used to send them into the rejection slot.
Eddy currents are used in certain types of proximity sensors to observe the vibration and position of rotating shafts within their bearings. This technology was originally pioneered in the 1930s by researchers at General Electric using vacuum tube circuitry. In the late 1950s, solid-state versions were developed by Donald E. Bently at Bently Nevada Corporation. These sensors are extremely sensitive to very small displacements making them well suited to observe the minute vibrations (on the order of several thousandths of an inch) in modern turbomachinery. A typical proximity sensor used for vibration monitoring has a scale factor of 200 mV/mil. Widespread use of such sensors in turbomachinery has led to development of industry standards that prescribe their use and application. Examples of such standards are American Petroleum Institute (API) Standard 670 and ISO 7919.

Semiconductor

A semiconductor is a material that has an electrical conductivity between that of a conductor and an insulator, that is, generally in the range 10^3 S/cm to 10^−8 S/cm. Devices made from semiconductor materials are the foundation of modern electronics, including radio, computers, telephones, and many other devices. Semiconductor devices include the various types of transistor, solar cells, many kinds of diodes including the light-emitting diode, the silicon controlled rectifier, and digital and analog integrated circuits. Solar photovoltaic panels are large semiconductor devices that directly convert light energy into electrical energy. An external electrical field may change a semiconductor's resistivity. In a metallic conductor, current is carried by the flow of electrons. In semiconductors, current can be carried either by the flow of electrons or by the flow of positively-charged "holes" in the electron structure of the material.
Common semiconducting materials are crystalline solids but amorphous and liquid semiconductors are known, such as mixtures of arsenic, selenium and tellurium in a variety of proportions. They share with better known semiconductors intermediate conductivity and a rapid variation of conductivity with temperature but lack the rigid crystalline structure of conventional semiconductors such as silicon and so are relatively insensitive to impurities and radiation damage.
Silicon is used to create most semiconductors commercially. Dozens of other materials are used, including germanium, gallium arsenide, and silicon carbide. A pure semiconductor is often called an “intrinsic” semiconductor. The conductivity, or ability to conduct, of common semiconductor materials can be drastically changed by adding other elements, called “impurities” to the melted intrinsic material and then allowing the melt to solidify into a new and different crystal. This process is called "doping".
Energy bands and electrical conduction:-
As in other solids, the electrons in semiconductors can have energies only within certain bands (i.e., ranges of energy levels) between the energy of the ground state, corresponding to electrons tightly bound to the atomic nuclei of the material, and the free electron energy, which is the energy required for an electron to escape entirely from the material. The energy bands each correspond to a large number of discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are full, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because in semiconductors and insulators the valence band is very nearly full under usual operating conditions, so that very few electrons are available in the "conduction band," which is the band immediately above the valence band.
The ease with which electrons in a semiconductor can be excited from the valence band to the conduction band depends on the band gap between the bands, and it is the size of this energy bandgap that serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
In the picture of covalent bonds, an electron moves by hopping to a neighboring bond. Because of the Pauli exclusion principle it has to be lifted into the higher anti-bonding state of that bond. In the picture of delocalized states, for example in one dimension (that is, in a wire), for every energy there is a state with electrons flowing in one direction and another state for electrons flowing in the other. For a net current to flow, more states for one direction than for the other have to be occupied, and for this energy is needed. For a metal this can be a very small energy; in a semiconductor, the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as the temperature of a semiconductor rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and, more importantly for us, on lifting some electrons into energy states of the conduction band. The current-carrying electrons in the conduction band are known as "free electrons", although they are often simply called "electrons" if context allows this usage to be clear.

Electrons excited to the conduction band also leave behind electron holes, or unoccupied states in the valence band. Both the conduction band electrons and the valence band holes contribute to electrical conductivity. The holes themselves don't actually move, but a neighboring electron can move to fill the hole, leaving a hole at the place it has just come from; in this way the holes appear to move, and they behave as if they were actual positively charged particles.
One covalent bond between neighboring atoms in the solid is ten times stronger than the binding of the single electron to the atom, so freeing the electron does not imply destruction of the crystal structure.
Holes: electron absence as a charge carrier:-
The motion of holes, which was introduced for semiconductors, can also be applied to metals, where the Fermi level lies within the conduction band. With most metals the Hall effect reveals electrons to be the charge carriers, but some metals have a mostly filled conduction band, and the Hall effect reveals positive charge carriers, which are not the ion-cores, but holes. Contrast this to some conductors like solutions of salts, or plasma. In the case of a metal, only a small amount of energy is needed for the electrons to find other unoccupied states to move into, and hence for current to flow. Sometimes even in this case it may be said that a hole was left behind, to explain why the electron does not fall back to lower energies: it cannot find a hole. In the end, in both materials, electron-phonon scattering and defects are the dominant causes of resistance.

The energy distribution of the electrons determines which of the states are filled and which are empty. This distribution is described by Fermi-Dirac statistics. The distribution is characterized by the temperature of the electrons, and the Fermi energy or Fermi level. Under absolute zero conditions the Fermi energy can be thought of as the energy up to which available electron states are occupied. At higher temperatures, the Fermi energy is the energy at which the probability of a state being occupied has fallen to 0.5.

The dependence of the electron energy distribution on temperature also explains why the conductivity of a semiconductor has a strong temperature dependence, as a semiconductor operating at lower temperatures will have fewer available free electrons and holes able to do the work.
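The Fermi-Dirac occupation probability mentioned above, f(E) = 1 / (1 + exp((E − EF)/kT)), is easy to evaluate; this Python sketch (with an illustrative energy) shows how sharply the occupancy of states above the Fermi level grows with temperature, which is the root of that strong temperature dependence.

    import math

    K_B = 8.617e-5    # Boltzmann constant, eV/K

    def fermi_dirac(e_minus_ef_ev, temp_k):
        # Probability that a state e_minus_ef_ev above the Fermi level is occupied.
        return 1.0 / (1.0 + math.exp(e_minus_ef_ev / (K_B * temp_k)))

    for t in (150, 300, 450):                    # kelvin
        print(t, "K ->", fermi_dirac(0.56, t))   # 0.56 eV: roughly mid-gap for silicon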