Remnants of my Tesla coil

In my first year of high school, I tried to build a functioning, high-frequency Tesla coil entirely from scrap parts. This project is almost a cliché nowadays; thousands of dedicated hardware hackers have successfully created ominous and occasionally dangerous coils, and so-called "singing" Tesla coils are the new trend among hobbyists. But the project was one of my first earnest attempts to learn about something on my own and apply that knowledge to a non-scholastic project, and so I wanted to link to a few resources here that I found invaluable when I was first starting out:

Overviews

The Powerlabs Tesla coil page. This is the most "professional" Tesla coil I have found that was built by a hobbyist. The craftsmanship is impeccable, from the precision of the secondary coil winding to the care with which the capacitor bank was assembled. That care is reflected in the results: it appears to be one of the most efficient Tesla coils I've come across, regularly generating 18-inch streamers despite its compact size.

The trashy Tesla coil. I like this project because the author defiantly avoids store-bought components, building the coil from piping and wiring scavenged entirely from his local rubbish yard. This site is also home to one of my favorite anecdotes from a hobbyist:

For some funky reason every time I switched on the power, the sprinkler system in the yard turned on. I’m not kidding here. The yard gets watered every time I fire it up.

Primary and Secondary Coil

The long red coil in the image at the top of this post is the secondary coil from my own Tesla coil; winding the 28-gauge enamel-coated wire over oven-baked PVC pipe took me about a week. That the toroid is a doorknob is a good tip-off that the payload isn't resonantly coupled. The pancake spiral in the foreground is a remnant of my original primary coil design, which I based on a tutorial found on this page.

Capacitors

I first realized how attainable a homemade Tesla coil would be when I saw just how simple it can be to make high-voltage capacitors at home in the form of Leyden jars, which require only a film canister or bottle and some aluminum foil. Using a CD jewel case and some foil, I've even made capacitors that can be charged from a CRT screen and that produce 3-inch sparks upon discharge. That said, predicting the discharge rate and stability of Leyden jars against dielectric breakdown is almost an art when one is working with plastics, glass, and aluminum foil. The best intro to Leyden jars and their uses can be found here.
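To get a feel for the numbers, the capacitance of a foil-wrapped jar can be estimated with the coaxial-cylinder formula C = 2πε0εr·h / ln(b/a). Below is a minimal sketch; the radii, foil height, dielectric constant, and charging voltage are plausible assumptions for a small glass jar, not measurements of mine:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 5.0           # assumed relative permittivity of the glass wall
a = 0.035             # assumed inner foil radius, m
b = 0.037             # assumed outer foil radius (2 mm glass), m
h = 0.10              # assumed height of the foil wrap, m

C = 2 * math.pi * EPS0 * eps_r * h / math.log(b / a)
V = 10e3              # assumed charging voltage, V
E = 0.5 * C * V**2    # energy stored at that voltage

print(f"C = {C*1e9:.2f} nF, stored energy at {V/1e3:.0f} kV = {E:.3f} J")
# -> roughly half a nanofarad and a few hundredths of a joule per jar,
#    which is why coilers gang many jars together into a bank
```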

Primary transformer

Most Tesla coils use a step-up transformer before the current even reaches the primary circuit. This shields the electrical mains from sparks and shorts in the primary circuit, and it also allows one to get by with capacitors made from beer bottles, air-gap discharges, and so on, because a higher-voltage primary circuit tolerates less finicky component specifications (it would also be very difficult to use a spark gap to modulate the frequency if one were running at mains voltage alone). I originally ran my coil off car batteries using an electromagnetic buzzer and a pair of ignition coils in my primary circuit; if I were rebuilding it today, however, I would use a neon sign transformer instead, which I believe offers much more reliable and safe performance despite running on mains power. Here's a buying guide for NSTs for Tesla coils.
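For sizing the primary capacitor against an NST, a common hobbyist rule of thumb is the "resonant match": choose C so that its reactance at the mains frequency equals the transformer's output impedance, i.e. C = I / (2πfV). A quick sketch using typical NST ratings (the 15 kV / 30 mA / 60 Hz figures are assumptions, not the specs of any particular unit):

```python
import math

V = 15e3      # assumed NST output voltage, V (rms)
I = 30e-3     # assumed NST rated current, A
f = 60.0      # mains frequency, Hz

# Resonant match: capacitive reactance 1/(2*pi*f*C) equals V/I.
C = I / (2 * math.pi * f * V)
print(f"matched primary capacitance = {C*1e9:.1f} nF")
# -> about 5.3 nF for these ratings; builders often deliberately go somewhat
#    larger than this to avoid destructive resonant voltage rise in the tank.
```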

Spark Gap

When I was in high school, I always found the spark gap to be the most mysterious component in the Tesla coil primary circuit. After all, the primary circuit is already an AC circuit, and forcing the current to repeatedly jump an air gap seems like it would introduce significant power losses that reduce the efficiency of the transformer. That much is true, but the spark gap is worthwhile anyway: the AC coming out of the HV transformer oscillates at the mains frequency, which is far too slow to switch most Tesla coil designs effectively, given the dimensions and couplings of the primary and secondary coils. Instead, the spark gap lets the capacitors charge fully and then dump their energy abruptly into the primary coil, at a rate set by the circuit's time constants and by the properties of the gap itself (pointed electrodes, for instance, can create corona discharge, lowering the effective dielectric strength of the air in the gap). With each firing, the primary tank rings at its natural resonant frequency, so the spark gap acts like a high-power switch that enables effective transfer of energy between the primary and secondary coils. A good description of the idea behind using a spark gap (instead of a high-power relay and integrated circuit or other solid-state switch) can be found here and here.
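The separation of timescales is easy to see numerically: the tank formed by the primary capacitor and coil rings at f = 1/(2π√(LC)), orders of magnitude above the mains-driven firing rate. A sketch with assumed component values (the 5.3 nF from the previous sketch and a 50 µH primary are illustrative, not measurements):

```python
import math

C = 5.3e-9    # assumed primary tank capacitance, F
L = 50e-6     # assumed primary coil inductance, H

f_ring = 1 / (2 * math.pi * math.sqrt(L * C))   # ring-down frequency of the tank
f_break = 120                                   # firings per second for a static
                                                # gap on 60 Hz mains (two per cycle)

print(f"tank rings at {f_ring/1e3:.0f} kHz; gap fires about {f_break} times/s")
# -> hundreds of kHz of RF inside each firing, versus only ~120 slow
#    charge/discharge events per second from the mains side
```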

Diamond nanowires

Here is an SEM picture of some 1-µm-tall nanowires I made in diamond last summer. I learned how to make these while working for the Loncar group, which is broadly interested in using nanofabrication techniques to develop novel photonic (as opposed to electronic) devices that exploit nanoscopic phenomena like Casimir forces, quantum entanglement, and optomechanics. While these phenomena have been extensively characterized in the past, the group is working to actually use them to develop new technologies, such as single-shot spectroscopy, high-precision magnetometry, and even quantum computing.

The subgroup that I worked in focused on nitrogen-vacancy (NV) centers, which are point defects in diamond that form when a single carbon atom in the lattice is replaced by a nitrogen atom, usually by bombarding the diamond with a high-energy beam of nitrogen ions. If the diamond is then annealed (heated to a temperature below its melting point, but still high enough that defects can migrate through the lattice), vacancies (locations in the lattice where carbon atoms are missing) tend to end up alongside the nitrogen atoms. The intuition behind this pairing is that nitrogen, unlike carbon, prefers to form three bonds: it has a lone pair of electrons that usually does not participate in bonding. This lone pair prefers to spread out, so it is energetically favorable for the nitrogen to sit next to a vacancy, where it feels less electrostatic repulsion from the electrons of neighboring carbon atoms. The resulting two-point imperfection in the diamond, consisting of a nitrogen atom and an adjacent vacancy, is a remarkable quantum system that groups worldwide are now exploring for applications ranging from quantum computing to magnetometry.

A nitrogen vacancy center embedded in a diamond unit cell.


The reason that NV centers are so exciting is that they act like fluorophores: shine a green laser onto them and they re-emit red light. Incident and emitted light can therefore be easily separated using color filters, making NV centers immediately useful in many applications where synthetic quantum dots are used to similar effect. A further advantage of NV centers is that they natively have two possible ground states for their electrons, and when a magnetic field is present these two states "split," with one becoming more energetically favorable than the other. When an electron excited by the green laser relaxes via the less-favorable ground state, it tends not to emit a red photon; other, non-radiative mechanisms spirit the energy away instead. The net result of this unusual quantum structure is that the relative intensity of red fluorescence from an NV center can be used to infer the magnitude (and even direction) of local magnetic fields. All-optical measurement of magnetic fields using NV centers is currently being explored for biological applications (in which an electronic magnetometer would be too bulky or imprecise), but many groups are also exploring NV centers for quantum computing, in which time-varying magnetic fields flip a "bit" represented by the two possible ground states of the NV center, and the intensity of fluorescence is used to read out the state of the bit.
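The splitting that encodes the field is quantitative: the two spin transitions sit near f± = D ± γB, where D ≈ 2.87 GHz is the NV zero-field splitting and γ ≈ 28 GHz/T is the electron gyromagnetic ratio. A minimal sketch of reading a field strength off that splitting (the 1 mT example field along the NV axis is an assumption):

```python
D = 2.870e9        # NV zero-field splitting, Hz
GAMMA = 28.0e9     # electron gyromagnetic ratio, Hz per tesla

def resonances(B_parallel):
    """Approximate spin resonance frequencies for a field B (tesla) along the NV axis."""
    return D - GAMMA * B_parallel, D + GAMMA * B_parallel

f_minus, f_plus = resonances(1e-3)   # assumed 1 mT field
print(f"resonances at {f_minus/1e9:.3f} and {f_plus/1e9:.3f} GHz")
# -> 2.842 and 2.898 GHz: a 1 mT field opens a ~56 MHz gap, so the field
#    can be inferred from the splitting as B = (f_plus - f_minus) / (2*GAMMA)
```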

The energy level diagram of a nitrogen vacancy center. Yellow indicates non-radiative transitions.


The wires shown here are typically fabricated on diamonds known to contain a relatively high density of NV centers, so that each wire is very likely to contain at least one center. The wires essentially act as fiber-optic cables: the red light from an NV center can be emitted in many possible directions, but the optical setup used to measure fluorescence can only detect emission in one direction, so the cylindrical wires channel the emitted light toward their top faces, where the detection optics are focused. The eventual goal of this research is to use arrays of these wires to map the magnetic fields of small objects like cells, adding spatial resolution to precision field measurement.

Two dipoles radiating out of phase

I thought I’d write about one of my favorite problems from freshman year. It doesn’t require any math to understand, but it points out many of the risks and subtleties that can arise when physics problems make too many “ideal” assumptions:

Suppose that you have two simple antennae, each consisting of a single, straight length of copper wire through which a single frequency of alternating current is passing. The two antennae are positioned some fixed distance apart, and they are oriented in parallel. If a remote physicist operating the two antennae introduces an appropriate delay between their driving signals, causing the AC waveforms in the two antennae to be 180 degrees out of phase (but still at the same frequency), then the electric field in the region between the two antennae will vanish due to destructive interference. Yet the two antennae are still emitting radiation; each still draws current, and presumably the power consumed to drive that current must be transferred into the fields it emits. So when the two antennae destructively interfere, where does the energy go?

The conventional response to this question (and the one my freshman lab TA insisted upon) is that the field cancels out in some regions, such as between the two antennae, but increases by a compensatory amount in other regions where the waves constructively interfere, so that the net energy stored in the fields throughout all of space remains constant. While this is certainly a satisfactory answer for most textbook treatments of dipole radiation, it remains troubling because one can envision cases in which there are no other regions where the waves can constructively interfere, for example if mirrors were placed carefully. If, instead of antennae, one pictures two out-of-phase lasers pointed toward each other, it becomes much less clear where the compensating region between the two lasers would be. There is, however, another way of looking at the problem that sheds light on this inconsistency:

Conventional electrodynamics tells us why the two copper antennae generate radiation: the moving charges in each antenna beget changing magnetic fields, which in turn create electric fields via Faraday's law, which then create new magnetic fields as they collapse. This cycle of electric and magnetic fields taking turns forming and collapsing gives rise to self-propagating electromagnetic waves: a collapsing electric field changes quickly, inducing a magnetic field that eventually collapses to produce a new electric field, and so on. The power transmitted by the wave is determined by the amplitude of the initial magnetic field generated by the antenna, which is proportional to the current through the wire. That current is in turn set by the resistance of the wire comprising the antenna; if the wire were a perfect conductor, even the smallest voltage difference between the two ends of the antenna would generate an arbitrarily large current via Ohm's law. Thus the power put into a single antenna is determined by the resistance of its wire, and this power exits the antenna as electromagnetic radiation; so far, energy has neither been created nor destroyed.

The subtlety of the problem arises because of an additional effect that occurs when two antennae are near each other. The electrons moving back and forth inside one antenna are impeded not just by the resistance of the copper wire, but also by the electric field of the other antenna. If the other antenna is in phase (no delay), the electrons keep experiencing a force opposing the direction in which the antenna's power source pushes them, so the power source must provide more power to generate waves of the same amplitude; the current, and therefore the power, drawn by the antenna increases. When the sources are out of phase and the waves destructively interfere, the opposite occurs: the field from the other antenna actually helps the electrons along, allowing a given electron to oscillate at a given amplitude without requiring as much energy from the power source. In other words, placing the antennae out of phase reduces the effective resistance, or impedance, of the two antennae, and thus reduces their power consumption by exactly the drop in the energy of the electromagnetic field due to their destructive interference.
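This bookkeeping can be checked numerically. The sketch below integrates the far-field intensity of two parallel short dipoles over all directions as a function of their relative drive phase (the λ/8 spacing and the crude grid quadrature are arbitrary choices of mine). The total radiated power, and hence the power the sources must supply, really is smaller for out-of-phase drive when the antennae are closely spaced:

```python
import numpy as np

def total_power(phase, kd, n=400):
    """Relative radiated power of two parallel short dipoles (oriented along z,
    separated along x by kd radians of phase, i.e. kd = 2*pi*d/lambda), driven
    with a given phase difference. Unnormalized: only ratios are meaningful."""
    theta = np.linspace(0.0, np.pi, n)          # polar angle
    az = np.linspace(0.0, 2 * np.pi, n)         # azimuth
    T, A = np.meshgrid(theta, az)
    psi = kd * np.sin(T) * np.cos(A)            # path-length phase toward (T, A)
    # element pattern sin^2(theta) times the two-element array factor
    intensity = np.sin(T) ** 2 * np.abs(1 + np.exp(1j * (phase + psi))) ** 2
    return float((intensity * np.sin(T)).sum())  # crude sphere quadrature

kd = np.pi / 4   # lambda/8 spacing (assumed; the effect requires close spacing)
print(f"out-of-phase vs in-phase power: {total_power(np.pi, kd) / total_power(0, kd):.3f}")
print(f"same ratio at lambda/20 spacing: {total_power(np.pi, np.pi/10) / total_power(0, np.pi/10):.3f}")
# -> a small fraction of the in-phase power, shrinking toward zero as the
#    spacing shrinks: the sources really do draw less power when they cancel.
```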

In the laser formulation of this problem, the same explanation amounts to the light from one laser damping excitations in the lasing medium of the other, so that each draws less power from its source.

What I like about this scenario is the way a very common assumption in physics problems, that power supplies are monoliths steadily providing a fixed voltage and current to each component of a system, turns out to be the source of the ambiguity. An electrical engineer who placed an ammeter in series with one of the antennae would immediately notice the drop in input power when they are driven out of phase. But in the way the problem is usually presented, the power consumption of the antennae seems like a fixed quantity, giving rise to the supposed paradox.

Phase Transitions and Ferrofluid

When I was younger, I came across a tutorial that described a simple way to make thermite entirely from homemade ingredients. The crux of the instructions was that iron oxide, the key ingredient from which thermite's molten iron is created, can be isolated from common sand simply by dragging a magnet through a container of it repeatedly. At the time, my family happened to live near a beach, and so I resolved to gather as much iron oxide as possible and test out the recipe.

In order to collect the iron oxide, I dragged a bag full of magnets behind me every time I went to the beach. After about two weeks of regular collection missions, I had obtained enough iron oxide, in the form of magnetite (a black, crystalline solid), to successfully synthesize thermite using a recipe I've described in a previous post.

I eventually moved on to using purified, store-bought reagents for safer reactions, but I still had a large amount of magnetite left over. Another use for it occurred to me when I read this tutorial, which outlines the unusual properties of ferrofluids, or magnetic liquids. Ferrofluids consist of ordinary solvents, like gasoline or acetone, mixed with a high concentration of nanoscopic iron particles. The tiny bits of iron essentially act as bar magnets, aligning in unison with an applied magnetic field just as the needles of a collection of compasses would. But because the particles are so small, Brownian motion (the "mixing" that constantly occurs in liquids due to the chaotic thermal motion of their constituent particles) keeps them suspended within the fluid. As a result, the liquid behaves like the plain solvent in the absence of an applied field, but stiffens like a solid when a magnet is brought near.
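How small is "small enough to stay suspended" can be estimated by comparing a particle's magnetic energy to the thermal energy kT. A sketch with assumed values (the 10 nm grain size and 0.1 T field are illustrative choices, not measurements of my fluid):

```python
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
M_S = 4.8e5       # saturation magnetization of magnetite, A/m
T = 300.0         # room temperature, K

d = 10e-9         # assumed particle diameter, m
B = 0.1           # assumed field from a nearby magnet, T

volume = math.pi / 6 * d**3       # volume of a spherical grain
moment = M_S * volume             # magnetic moment of one grain, A*m^2
ratio = moment * B / (K_B * T)    # magnetic energy vs. thermal energy

print(f"magnetic/thermal energy ratio = {ratio:.1f}")
# -> around 6 for these numbers: a strong nearby magnet easily aligns the
#    grains, yet once the magnet is removed (B ~ 0) thermal jostling wins
#    again and re-disperses them through the solvent.
```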

I managed to make a very rough ferrofluid by finely grinding up my leftover magnetite and then following the recipe found on this website. The store-bought ferrofluid used in the video at the top of this post was made using precise industrial methods, and it naturally behaves much more elegantly because its iron particles are far more uniform. But my ferrofluid still exhibits two key ferrofluid behaviors: it solidifies in response to an applied field, and it tends to form small, clumped structures rather than a single lump:

The erratic behavior of the ferrofluid can be seen as a simple type of phase transition, in which a system subjected to a smoothly varying stimulus (the proximity of the magnet to the fluid) undergoes a discontinuous change in behavior (the sudden appearance of peaks and lumps in the fluid). Phase transitions are crucial in biological systems in which many autonomous parts (like blood cells, or the individual members of a school of fish) must behave as a collective entity for mutual benefit. In the ferrofluid's case, the mutual benefit is energetic: the fluid tends to arrange itself so as to minimize its internal energy. The individual particles of iron are initially independent and diffuse freely through the solvent, but when the external field is applied it becomes energetically favorable for them to align and congeal into collective aggregates.

In the first video, when the magnet is far from the fluid, large lumps tend to form with well-defined peaks and arrangements. But when the magnet is brought much closer, these peaks disassemble into many small, hairlike filaments, because such structures contain less internal energy. The reason is that the energy of the fluid is mostly stored at its surface: the collective attractions between magnetized iron particles on the surface of the liquid hold energy in the same manner as stretched springs. When the magnet is brought much closer to the liquid, the system offsets the excess energy by further increasing its surface area, resulting in a greater number of small structures.

Entropy and cellular automata

Here are a few frames from a simple simulation of the Game of Life that I wrote in MATLAB:

To me, it's pretty unintuitive that biological processes, like DNA translation or the motion of bird flocks, work so well given that they are often very far from "equilibrium" in the sense we learn in chemistry class. In high school, I was taught to think of equilibrium as the most stable, least interesting, and most likely outcome of a chemical reaction: vinegar and baking soda eventually fizzle out into brown goo, and even nuclear fusion in stars eventually stops as clumps of iron form in the stellar core.

I think the supposed intuition for the idea of unavoidable equilibration comes from the second law of thermodynamics: entropy constantly increases in the universe, and no spontaneous physical process can occur on a large enough scale to reverse this tendency. The universe is like a deck of cards: it is always easier to shuffle it than to arrange it in a particular order, and so large-scale processes tend to favor disordered outcomes over neat patterns. This idea appears throughout the sciences in various forms: one of the axioms of cosmology is that the universe at large scales is homogeneous and isotropic; it has no definite structure or patterns, but rather looks like a well-mixed soup of randomly arranged galaxies and gas clouds.

Biological systems can locally violate this rule: they exist as well-ordered clockworks within a universe otherwise characterized by collision and diffusion. While the second law still holds at large scales, the law of averages allows some leeway on the cosmically insignificant scale of the Earth; for every sequoia tree or giant squid there is a much larger disordered element, such as a cloud of gas or a stellar explosion, to compensate. But it still seems surprising that systems as orderly as living beings, with their underlying ability to replicate and evolve repeatedly over millennia, could emerge spontaneously from the noisy background of the cosmos. This raises the question of whether some fundamental physical property makes "living" things unique.

In 1970, the mathematician John Conway proposed the "Game of Life," a simple mathematical model that sheds light on how seemingly designed systems can emerge from chaos. In Conway's version of the game, a black grid is drawn on which some random pattern of squares, or tiles, is filled in with white. If these white tiles are taken to be an initial colony of "living" cells against an otherwise dead (black) backdrop, then simple rules can be defined for updating the grid to represent colony growth (a code sketch of these rules follows the list):

1. If a black tile has exactly three white tiles adjacent or immediately diagonal to it, then in the next generation it will become white (as if by reproduction).

2. If a white tile has more than three white tiles surrounding it (up to 8 total), then it will become black in the next generation, as if by overcrowding; if a white tile has fewer than two white neighbors, it will die in the next generation, as if by starvation.

3. Any white tile with exactly two or three white neighbors will persist to the next generation.
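My original simulation was in MATLAB; here is a minimal Python sketch of the same three rules, using a wrap-around grid for simplicity (the grid size and initial fill fraction are arbitrary choices):

```python
import numpy as np

def step(grid):
    """Advance Conway's Game of Life one generation (1 = white/alive, 0 = black/dead)."""
    # Count each cell's eight neighbors; np.roll makes the edges wrap around.
    neighbors = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    birth = (grid == 0) & (neighbors == 3)                          # rule 1
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))   # rule 3
    return (birth | survive).astype(int)            # everything else dies (rule 2)

rng = np.random.default_rng(0)
grid = (rng.random((64, 64)) < 0.3).astype(int)     # random initial colony
for _ in range(100):
    grid = step(grid)                               # run many generations
```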

These simple rules create an extremely efficient simulation that can be run over many time steps to produce life-like simulations of colony growth. What makes Conway's game uncanny is that even the most random initial patterns can create dynamic, predictable colonies; the second law of thermodynamics does not mean that all, or even most, runs of the game will produce a chaotic mass of cells. For example, here's the pattern that one set of initial conditions creates when the game is run for many time steps (click the image to see the animation):

Click to see the gif on Wikimedia.

The animation shows several important structures that can emerge within the game: gliders are groups of cells that continuously move across the playing field, returning to their original shape (but in a different location) within a few generations. Other cells cluster together into stable, immobile structures that persist indefinitely until they interact with a wayward glider or another structure.

Conway's game provides a simple illustration of how life-like systems can emerge from random initial conditions, implying a decrease in entropy within the limited "universe" of the simulation. The game and its variants (with different rules, tiling patterns, and so on) are collectively known as cellular automata, which form the basis of much current research in image processing and computational biology. Several noted scientists, including Turing, von Neumann, and Wolfram, have investigated the implications of these simple models; Wolfram in particular has devoted several decades of research, thousands of textbook pages, and a particularly unusual Reddit AMA to the theory that cellular automata underlie the most fundamental physical laws of the universe.

But the Game of Life also connects to more general mathematical concepts. Markov models, which characterize systems in which individuals undergo transitions, arrivals, or departures among a finite set of well-defined states, are an alternative way of representing the same kind of information as Conway's tiles. The defining principle of Markov models is that the next state is determined purely by the present state: a population ecologist using a Markov model assumes that the next change in population size can be predicted purely from information about the present population (as in exponential growth, in which the growth rate of a group of organisms is proportional to the size of the group). The ecologist keeps track of all the possible state changes using a transition matrix, which contains empirical estimates of the rate at which new individuals are born (arrivals), old individuals die (departures), and existing individuals survive (transitions). The parallel with Conway's three rules is clear, but Markov models can be represented compactly with matrices, and so they are a natural limiting case for any system in which a physical entity evolves based on a limited subset of its past states.
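As a concrete (and entirely hypothetical) version of the ecologist's bookkeeping, the sketch below builds a two-stage projection matrix of the births/deaths/survival type described above; repeated matrix multiplication plays the role of stepping the grid forward, and the dominant eigenvalue gives the long-run growth rate. All the rates are invented for illustration:

```python
import numpy as np

# Hypothetical stage-structured population: [juveniles, adults].
# Column j says what individuals in stage j contribute next year.
A = np.array([
    [0.0, 1.5],   # each adult produces 1.5 juveniles per year (birth/arrival)
    [0.5, 0.9],   # 50% of juveniles mature; 90% of adults survive (rest depart)
])

x = np.array([100.0, 10.0])   # assumed initial population
for _ in range(20):           # step the model forward, Life-style
    x = A @ x

growth = max(abs(np.linalg.eigvals(A)))  # dominant eigenvalue = long-run growth
print(f"population after 20 years: {x.round(0)}, growth factor = {growth:.2f}/year")
# -> the stage mix settles into a fixed ratio growing ~43% per year for
#    these made-up rates, regardless of the starting population
```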

If Conway's tiled grid is replaced with a continuous set of points, and the survival status of a given point depends on a weighted integral of the "brightness" of the points within a given radius of it, then the update rules for many continuous cellular automata become the solution of a differential equation in space and time. Certain types of diffusion equations, for example, use integration over neighboring points as a continuous approximation of the rules of the Game of Life. One set of differential equations that illustrates the well-defined structures that can emerge from an otherwise disordered system is the family of reaction-diffusion equations, which model the strange patterns observed when a homogeneous solution of potassium bromate and cerium sulfate is mixed with several acids:

Taken from Wikipedia.
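The flavor of these equations is easy to reproduce numerically. The sketch below runs the Gray-Scott model, a standard toy reaction-diffusion system (not the exact bromate/cerium chemistry pictured above); starting from a nearly uniform field, spots and stripes emerge on their own. The parameter values are the commonly quoted textbook ones:

```python
import numpy as np

def laplacian(Z):
    """Discrete Laplacian (the 'integral over neighbors') with wrap-around edges."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))            # chemical U fills the dish...
V = np.zeros((n, n))           # ...with a small square seed of chemical V
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065   # classic pattern-forming parameters
for _ in range(10_000):
    reaction = U * V * V                   # autocatalytic step: U + 2V -> 3V
    U += Du * laplacian(U) - reaction + F * (1 - U)
    V += Dv * laplacian(V) + reaction - (F + k) * V
# U now holds a patterned field; plotting it (e.g. with matplotlib's imshow)
# reveals the spontaneous spots and stripes.
```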

Thus diffusive differential equations, Markov models, and cellular automata all describe essentially the same process, in which local interactions cause ordered structures and patterns to emerge within an otherwise random system.

Laser Microscopy in 20 Minutes


Protozoans in a water droplet, projected with a laser pointer beam.

Using a sketchy, cheap Chinese laser pointer, a decent mirror (here provided by an old hard-drive platter), and some water from a disgusting aquarium tank, you can create a powerful projection microscope at home. The water droplet itself provides the magnifying optics; using a smaller droplet increases the magnification but makes focusing the laser a lot more frustrating. The image size can be increased simply by increasing the throw distance of the laser. Here's my setup: I used the straw to make perfectly round droplets by dipping its end in the aquarium, and I used the microfiber cloth to keep smudges off the mirror:


My materials


My setup of the laser pointer microscope. I used a hard drive platter as my mirror.
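For a rough sense of the magnification, a spherical droplet acts like a ball lens with focal length f ≈ nD / (4(n − 1)) measured from its center, and the projected magnification is roughly the throw distance divided by f. A sketch with assumed numbers (the 3 mm droplet and 2 m throw are illustrative, not measurements of my setup):

```python
n_water = 1.33   # refractive index of water
D = 3e-3         # assumed droplet diameter, m
throw = 2.0      # assumed distance from droplet to the wall, m

f = n_water * D / (4 * (n_water - 1))   # ball-lens focal length, from its center
magnification = throw / f               # small-angle projection estimate

print(f"focal length = {f*1e3:.1f} mm, magnification = {magnification:.0f}x")
# -> about 3 mm and ~660x: shrinking the droplet or lengthening the throw
#    increases the magnification, matching what you see in practice
```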

With the water from the aquarium, I can easily see amoebas and paramecia swimming around and interacting. The diffraction limit obviously constrains the image quality, but some sub-cellular structures are definitely visible within the amoebas:

A good thing to note is that some of the more geometric bodies you see moving are actually very small organisms scattering the laser light (Rayleigh and Mie scattering): the bodies themselves are too small to resolve, but they create a geometric interference pattern that appears to move with them through the water.

Scattering microscopy can also be applied to other transparent materials, such as glass and crystals, to reveal internal structures. A good one to try is a clear marble, preferably one with cracks inside. Here is a photograph of the laser shining through Iceland spar, a variety of the common mineral calcite that exhibits striking double refraction in ordinary lighting. The laser reveals the cleavage planes of the crystal quite nicely:

Calcite Interference Pattern

The "Iceland spar" variety of calcite exhibits double refraction when held against a sheet of text; the laser light reveals the orderly internal lattice structure responsible.

What is a laser?

Laser light is coherent, meaning that all the photons that comprise it march in phase, maintaining a fixed relationship to one another because they all take the same steps at the same time. This is incredibly useful because it means not only that the photons all have the same frequency (they take the same number of steps per minute), but also that their steps line up (they are in phase).

The reason this matters to scientists is that, while light is indeed broken up into little particles of energy, these photons act like waves: they can interfere with one another and overlap, just as ocean waves give rise to complex eddies and lulls. Most light sources, like incandescent bulbs, simply toss off photons with whatever phase and frequency happen to be convenient, but lasers are designed to produce barrages of photons that are coherent (same phase) and monochromatic (same frequency). This lets physicists start out with no interference at all and then introduce various substances into the beam to see how they cause interference; often the interference properties of a substance reveal fundamental details about its microscopic structure.

The basic idea of a laser is that an electric field drives many atoms in a gas into an excited state, one in which their electrons have reconfigured to hold additional energy. Most atoms prefer not to hold this extra energy for long, and after some time they decay back to their ground (normal) state, releasing the hoarded energy as a photon. The trick is that only certain arrangements of electrons around the nucleus are physically possible, and so only certain changes in energy are allowed. This predisposes a gas to emit photons with identical energies, because its atoms can absorb and emit only photons corresponding to the allowed electronic configurations. The energy of a photon is directly proportional to its frequency (and thus its color) via the Planck relation, which is why the blue (high-frequency) part of a flame corresponds to a hotter region than the yellow (low-frequency) part. Most elements have characteristic colors at which they emit light, as each element has a unique atomic structure and thus a unique set of allowed electronic configurations. The study of these characteristic colors, or spectra, is the basis of spectroscopy, which I discuss in my post on incandescence.
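The Planck relation E = hc/λ makes this concrete. A quick sketch comparing the photon energies of a green and a red laser pointer (the 532 nm and 650 nm wavelengths are typical pointer values, assumed here):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

for name, wavelength in [("green", 532e-9), ("red", 650e-9)]:
    E = H * C / wavelength   # Planck relation: photon energy from wavelength
    print(f"{name} ({wavelength*1e9:.0f} nm): {E/EV:.2f} eV per photon")
# -> green = 2.33 eV, red = 1.91 eV: higher frequency, more energy per photon
```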

So lasers already have the monochromatic part taken care of: they use a mixture of gases chosen so that atoms spontaneously absorb and emit light only at the desired color. But lasers are so powerful because they also use stimulated emission: as a photon in a laser passes an excited atom that has not yet released its energy, it can provoke the atom into releasing a photon that moves perfectly in step with it. So in addition to being monochromatic, the light emitted by a laser is coherent (in phase).

There's a reasonable explanation for how this occurs: photons are bosons, a class of particle to which the Pauli exclusion principle does not apply, and identical bosons are fundamentally indistinguishable. Unlike objects we encounter in the macroscopic world, photons can overlap like waves, so if two photons occupy the same place with all the same properties, there is no way to tell them apart. When a photon in a laser flies past an excited atom and stimulates the emission of a new photon, there are two ways for the new photon to end up in phase with the old one, but the seemingly different out-of-phase possibilities are indistinguishable and count as only a single state. This is rather unintuitive, but minutephysics provides a nice example with a quantum coin flip. The upshot, sometimes described as a consolidation of eigenstates (eigen is a German prefix that means "terrible algebra"), is that there end up being more options for the photons to stay in phase than to go out of phase, making the former statistically favorable.

As a result, the number of in-phase photons gradually builds up until the laser output is dominated by coherent light.

Centripetal Force Notes

I recently had the opportunity to give a lecture to a physics class at my high school, and I was amazed by just how difficult it is to teach a class, even on a topic you supposedly know a bit about. Even with the most organized and logical presentation of the material possible, whether a class learns anything from your lecture ultimately depends on your ability to clarify ideas and provide clear metaphors ex tempore. I think many people undervalue just how much effort excellent teachers must put into their work.

My lecture was about centripetal forces. When many of us are first taught about centripetal acceleration, it seems like a mysterious concept: the equations and formulae clearly work, but it is hard to visualize how exactly the underlying forces act. In these lecture notes, I attempt to explain circular orbits in terms of the classic parabolic projectile problem, using the conic sections as a bridge from a downward force to a central force.
Download here:
Centripetal force notes
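The bridge from projectile to orbit can also be made quantitative with Newton's cannonball: a circular orbit is just a projectile moving fast enough that the centripetal acceleration v²/r it needs is supplied entirely by gravity. A quick sketch of that threshold speed near the Earth's surface (ignoring air resistance):

```python
import math

g = 9.81        # surface gravity, m/s^2
R = 6.371e6     # Earth's radius, m

# Circular orbit condition: v^2 / R = g  =>  v = sqrt(g * R)
v = math.sqrt(g * R)
print(f"orbital speed at the surface = {v/1e3:.1f} km/s")
# -> about 7.9 km/s: any slower and the trajectory is just a projectile
#    arc that returns to the ground; the conic section only closes into
#    a circle at this speed.
```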