How Quantum Computers Work

June 18, 2008

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn’t count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.

Will we ever have the amount of computing power we need or want? If, as Moore’s Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.

Scientists have already built basic quantum computers that can perform certain calculations, but a practical quantum computer is still years away. In this article, you’ll learn what a quantum computer is and just what it’ll be used for in the next era of computing.

You don’t have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981, when he theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing machine model.

Defining the Quantum Computer

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.

The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.
Image used under the GNU Free Documentation License 1.2

Today’s computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren’t limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today’s most powerful supercomputers.

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today’s typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
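As a rough sense of scale (an illustration added here, not from the original article): describing n qubits classically takes 2^n amplitudes, which is where the claimed parallelism comes from. A minimal Python sketch:

```python
import math

# Toy illustration: an n-qubit register in equal superposition is described
# by 2**n complex amplitudes, which is why classical simulation cost explodes.
def uniform_superposition(n_qubits):
    """Return the amplitude vector for an equal superposition of n qubits."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)          # each basis state equally likely
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))                     # 8 basis states for 3 qubits
print(sum(a * a for a in state))      # probabilities sum to 1
```

Doubling the qubit count squares the number of amplitudes, so even 30 qubits already correspond to about a billion classical values.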

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system’s integrity. Entanglement provides a potential answer. In quantum physics, applying an outside force to two atoms can cause them to become entangled, so that the second atom takes on properties of the first. Left alone, an atom will spin in all directions; the instant it is disturbed, it chooses one spin, or one value, and at the same time the second entangled atom chooses the opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.

Next, we’ll look at some recent advancements in the field of quantum computing.

Qubit Control

Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices.

  • Ion traps use optical or magnetic fields (or a combination of both) to trap ions.
  • Optical traps use light waves to trap and control particles.
  • Quantum dots are made of semiconductor material and are used to contain and manipulate electrons.
  • Semiconductor impurities contain electrons by using “unwanted” atoms found in semiconductor material.
  • Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.

Can adding iron to the oceans slow global warming?

April 27, 2008

By: Jennifer Horton.

Enter forward-thinking scientists and companies like Planktos and Climos, who propose adding iron to the world’s oceans to reduce atmospheric carbon dioxide levels and, in turn, to decrease temperatures. The idea of dumping iron in the oceans to lower temperatures has been around since the late 1980s and has been known variously as carbon sinking, ocean seeding or iron fertilization.

Photo courtesy NOAA
Phytoplankton absorb CO2 and sunlight to produce energy in photosynthesis.

The premise is actually simple. Iron acts as a fertilizer for many plants, and some, like the phytoplankton that form the base of the marine food web, need it to grow. Adding iron to the water stimulates phytoplankton growth, and the phytoplankton in turn gobble up carbon dioxide through photosynthesis. The resulting decrease in carbon dioxide is supposed to help reduce temperatures, since carbon dioxide is one of the main gases responsible for trapping heat at the earth’s surface through the greenhouse effect.

Numerous iron dumping trials have been conducted since oceanographer John Martin suggested the idea more than 15 years ago [source: Haiken]. One trial conducted in 2004 indicated that each atom of iron added to the water could draw between 10,000 and 100,000 atoms of carbon out of the atmosphere by encouraging plankton growth [source: Schiermeier]. Some scientists theorize that adding iron to the Southern Ocean alone could reduce carbon dioxide levels by 15 percent [source: Schiermeier].

Scientist Oliver Wingenter suggests a more cautious approach, arguing that adding massive amounts of iron to the ocean could cause a major cooling of more than 10 degrees Celsius [source: Wingenter]. He recommends fertilizing just 2 percent of the Southern Ocean to cause a 2 degree Celsius cooling and to set back the tipping point of global warming 10 or more years [source: Wingenter].

Instead of focusing on cutting carbon dioxide levels, Wingenter’s research concentrated on increasing other gases that result from the phytoplankton blooms, namely dimethyl sulfide, or DMS. DMS is largely responsible for cloud formation in the polar region and could increase cloud reflectivity, which would in turn reduce temperatures. During his iron fertilization experiments, Wingenter found that adding iron increased the concentration of DMS five-fold [source: Wingenter].


December 30, 2007

If your morning routine includes a cup or two of coffee, you may know a few things about it. It’s a stimulant drink, it comes from beans that are roasted and ground and, for many of us, it’s a staple of life. But do you know where coffee grows and how it gets to America? How a French roast differs from an Italian roast? What a coffee cherry is? Or how decaffeinated coffee is made?

Your morning cup of Joe begins its life on a coffee plantation.

There’s much more to that morning cup o’ Joe than you may realize! In this article, we’ll look at coffee’s origins and how it spread, where it’s grown, how it’s harvested and processed and what roasting is all about. We’ll finish by learning how to make a really great cup of coffee.

Catching the Buzz

Coffee’s story begins with a goat, at least in legends. It’s said that Kaldi, an Ethiopian goatherd, noticed his goats acting very frisky after eating a certain shrub. He took some of the shrub’s berries for himself, caught the buzz and coffee’s future was secured.

Originally, coffee was a food, not a drink. Early East African tribes mixed the coffee berries (the unhulled bean, also called a coffee cherry) with animal fat, forming energy balls — something like primitive Power Bars. Coffee also grew on the Arabian Peninsula, and it was there that it was first developed into a hot drink, sometime around A.D. 1000. By the 13th century, Muslims were drinking coffee fervently. The “whirling dervishes” of early Islam may have been fueled by coffee.

As Islam spread, so did coffee. But the Arabs closely guarded the coffee plants, and no fertile seeds were found outside Arabia (with the exception of the other place where coffee grew naturally, Africa) until the 1600s. Another coffee legend states that an Indian smuggler named Baba Budan left Mecca with fertile seeds strapped to his chest. Soon, coffee plants were growing in India.

As European traders returned from exotic locales such as Turkey, they brought news of and a new-found taste for the black beverage. It was the Dutch who founded the first European coffee estate on the island of Java, then a Dutch colony (now part of Indonesia), in 1616.

Coffee crossed the Atlantic around 1727. Yet another coffee legend: Brazil’s emperor asks a spy, Lt. Col. Palheta, to smuggle seeds into the country. Palheta goes to French Guiana, exudes his considerable charm on the governor’s wife and leaves with a farewell bouquet — spiked with coffee seedlings. Brazil is now the world’s top coffee producer.

Coffee is grown in only one U.S. state, Hawaii. Its famed Kona coffee, grown on Hawaii’s volcanic mountains, is highly desired.

Photo courtesy Kona Coffee/Bay View Farm Coffees
Kona coffee beans, drying here in Hawaii, are highly prized by coffee connoisseurs.

What gives coffee its kick?

Caffeine, of course. Caffeine is trimethylxanthine (C8H10N4O2). It’s an addictive stimulant drug that operates in the brain the same way amphetamines, cocaine and heroin do (although caffeine is much milder than those drugs). Caffeine occurs naturally in a number of plants, including coffee beans. Your average 6-ounce cup of drip-brewed coffee contains 100 mg of caffeine. A 12-ounce cola soft drink contains about 50 mg of caffeine.

The Bean Belt

While you may drink coffee every day, unless you’ve lived in a coffee-producing country you may have no idea what a coffee tree looks like. A coffee tree is a woody perennial evergreen, covered with dark-green, waxy leaves growing opposite each other in pairs. They can grow 30 feet (9 m) high, but in cultivation, coffee trees are kept short for easier harvesting. It takes three or four years after planting for the tree to become productive. The tree produces fragrant white blossoms (some say the blossoms smell like jasmine), and then, nearly a year later, the coffee cherries mature. A coffee tree produces continuously: One plant can be flowering, have immature beans and mature cherries all at the same time. Each tree can produce beans that make between 1 and 1.5 pounds (0.45 and 0.68 kg) of roasted coffee every season.

Source: National Geographic
Coffee grows best in an area known as the Bean Belt — the band around the Earth in between the Tropics of Capricorn and Cancer.

A coffee plant prefers rich soil and mild temperatures, with lots of rain and shaded sun. It grows best in a band around the middle of the world, bounded by the Tropics of Capricorn and Cancer, known as the Bean Belt. Soil, climate and altitude affect the flavor of the beans.

Fibonacci Numbers

October 13, 2007

In mathematics, the Fibonacci numbers form a sequence defined by the following recurrence relation:

F(n) :=
\begin{cases}
0             & \mbox{if } n = 0; \\
1             & \mbox{if } n = 1; \\
F(n-1)+F(n-2) & \mbox{if } n > 1.
\end{cases}

That is, after two starting values, each number is the sum of the two preceding numbers. The first Fibonacci numbers (sequence A000045 in OEIS), also denoted as Fn, for n = 0, 1, … , are:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, …

(Sometimes this sequence is considered to start at F1 = 1, but in this article it is regarded as beginning with F0=0.)
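The recurrence translates directly into a few lines of code; a minimal iterative Python sketch (using the F(0) = 0 convention above):

```python
def fib(n):
    """Iteratively compute F(n) with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # slide the window (F(k), F(k+1)) forward
    return a

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```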

The Fibonacci numbers are named after Leonardo of Pisa, known as Fibonacci, although they had been described earlier in India.


The Fibonacci numbers first appeared, under the name mātrāmeru (mountain of cadence), in the work of the Sanskrit grammarian Pingala (Chandah-shāstra, the Art of Prosody, 450 or 200 BC). Prosody was important in ancient Indian ritual because of an emphasis on the purity of utterance. The Indian mathematician Virahanka (6th century AD) showed how the Fibonacci sequence arose in the analysis of metres with long and short syllables. Subsequently, the Jain philosopher Hemachandra (c.1150) composed a well known text on these. A commentary on Virahanka’s work by Gopāla in the 12th century also revisits the problem in some detail.

Sanskrit vowel sounds can be long (L) or short (S), and Virahanka’s analysis, which came to be known as mātrā-vṛtta, wishes to compute how many metres (mātrās) of a given overall length can be composed of these syllables. If the long syllable is twice as long as the short, the solutions are:

1 mora: S (1 pattern)
2 morae: SS; L (2)
3 morae: SSS, SL; LS (3)
4 morae: SSSS, SSL, SLS; LSS, LL (5)

A pattern of length n can be formed by adding S to a pattern of length n−1, or L to a pattern of length n−2; and the prosodicists showed that the number of patterns of length n is the sum of the two previous numbers in the series. Donald Knuth reviews this work in The Art of Computer Programming as equivalent formulations of the bin packing problem for items of lengths 1 and 2.
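The prosodists’ counting argument can be replayed mechanically; a small Python sketch (the function name is mine) that enumerates the S/L patterns for each number of morae:

```python
def metre_patterns(n):
    """All sequences of short (S = 1 mora) and long (L = 2 morae) syllables
    whose lengths total n morae."""
    if n == 0:
        return ['']                                   # the empty pattern
    patterns = ['S' + p for p in metre_patterns(n - 1)]
    if n >= 2:
        patterns += ['L' + p for p in metre_patterns(n - 2)]
    return patterns

for n in range(1, 5):
    print(n, len(metre_patterns(n)), metre_patterns(n))
# counts 1, 2, 3, 5 match the prosodists' table above
```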

In the West, the sequence was first studied by Leonardo of Pisa, known as Fibonacci, in his Liber Abaci (1202)[3]. He considers the growth of an idealised (biologically unrealistic) rabbit population, assuming that:

  • in the first month there is just one newly born pair,
  • new-born pairs become fertile from their second month on,
  • each month every fertile pair begets a new pair, and
  • the rabbits never die.

Let the population at month n be F(n). At this time, only rabbits who were alive at month n−2 are fertile and produce offspring, so F(n−2) pairs are added to the current population of F(n−1). Thus the total is F(n) = F(n−1) + F(n−2)

The bee ancestry code

Fibonacci numbers also appear in the description of the reproduction of a population of idealized bees, according to the following rules:

  • If an egg is laid by an unmated female, it hatches a male.
  • If, however, an egg was fertilized by a male, it hatches a female.

Thus, a male bee will always have one parent, and a female bee will have two.

If one traces the ancestry of any male bee (1 bee), he has 1 female parent (1 bee). This female had 2 parents, a male and a female (2 bees). The female had two parents, a male and a female, and the male had one female (3 bees). Those two females each had two parents, and the male had one (5 bees). This sequence of numbers of parents is the Fibonacci sequence.

This is an idealization that does not describe actual bee ancestries. In reality, some ancestors of a particular bee will always be sisters or brothers, thus breaking the lineage of distinct parents.

Relation to the golden ratio

Golden ratio defined

The golden ratio.

The golden ratio \varphi (phi), also written τ (tau), is defined as the ratio that results when a line is divided so that the whole line has the same ratio to the larger segment as the larger segment has to the smaller segment. Expressed algebraically, normalising the larger part to unit length, it is the positive solution of the equation:

\frac{x}{1}=\frac{1}{x-1} or equivalently x^2-x-1=0,\,

which is equal to:

\varphi = \frac{1 + \sqrt{5}}{2} = 0.5 + \sqrt{1.25} \approx 1.618\,033\,988\,749\,894\,848\,204\,586\,834\,366\,.

Closed form expression

Like every sequence defined by a linear recurrence, the Fibonacci numbers have a closed-form solution. It has become known as Binet's formula, even though it was already known to Abraham de Moivre:

F\left(n\right) = {{\varphi^n-(1-\varphi)^n} \over {\sqrt 5}}\, , where \varphi is the golden ratio.
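As a quick numerical check (an illustration added here), Binet’s formula agrees with the iterative definition for moderate n, despite using floating point:

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio

def fib_binet(n):
    """Binet's closed form; exact for moderate n despite floating point."""
    return round((PHI ** n - (1 - PHI) ** n) / math.sqrt(5))

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_binet(n) == fib_iter(n) for n in range(70))
print(fib_binet(20))  # 6765
```

For very large n the floating-point powers lose precision, so exact arithmetic (or the doubling identities later in the article) is preferred.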

The Fibonacci recursion

F(n+1) = F(n) + F(n-1)\,

is similar to the defining equation of the golden ratio in the form

x^2 = x + 1,\,

which is also known as the generating polynomial of the recursion.

Proof (by induction):

Any root of the equation above satisfies \begin{matrix}x^2=x+1,\end{matrix}\, and multiplying by x^{n-1}\, shows:

x^{n+1} = x^n + x^{n-1}\,

By definition \varphi is a root of the equation, and the other root is 1-\varphi\,. Therefore:

\varphi^{n+1}  = \varphi^n + \varphi^{n-1}\,


(1-\varphi)^{n+1} = (1-\varphi)^n + (1-\varphi)^{n-1}\, .

Now consider the functions:

F_{a,b}(n) = a\varphi^n+b(1-\varphi)^n defined for any real a,b\, .

All these functions satisfy the Fibonacci recursion

\begin{align}
  F_{a,b}(n+1) &= a\varphi^{n+1}+b(1-\varphi)^{n+1} \\
               &= a(\varphi^{n}+\varphi^{n-1})+b((1-\varphi)^{n}+(1-\varphi)^{n-1}) \\
               &= a\varphi^{n}+b(1-\varphi)^{n}+a\varphi^{n-1}+b(1-\varphi)^{n-1} \\
               &= F_{a,b}(n)+F_{a,b}(n-1)
\end{align}

Selecting a=1/\sqrt 5 and b=-1/\sqrt 5 gives the formula of Binet we started with. It has been shown that this formula satisfies the Fibonacci recursion. Furthermore:

F_{a,b}(0)=\frac{1}{\sqrt 5}-\frac{1}{\sqrt 5}=0\,\!


F_{a,b}(1)=\frac{\varphi}{\sqrt 5}-\frac{(1-\varphi)}{\sqrt 5}=\frac{-1+2\varphi}{\sqrt 5}=\frac{-1+(1+\sqrt 5)}{\sqrt 5}=1,

establishing the base cases of the induction, proving that

F(n)={{\varphi^n-(1-\varphi)^n} \over {\sqrt 5}} for all  n\, .

For any two starting values, a combination a,b can be found such that the function F_{a,b}(n)\, is the exact closed formula for the series.

Since \begin{matrix}|1-\varphi|^n/\sqrt 5 < 1/2\end{matrix} for all n\geq 0\, , F(n)\, is the closest integer to \varphi^n/\sqrt 5\, . For computational purposes, this is expressed using the floor function:

F(n)=\bigg\lfloor\frac{\varphi^n}{\sqrt 5} + \frac{1}{2}\bigg\rfloor.
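The floor-function form can be tried directly; a Python sketch:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib_round(n):
    """F(n) as floor(phi**n / sqrt(5) + 1/2), i.e. nearest-integer rounding."""
    return math.floor(PHI ** n / math.sqrt(5) + 0.5)

print([fib_round(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```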

Limit of consecutive quotients

Johannes Kepler pointed out that the ratio of consecutive Fibonacci numbers converges, stating that “…as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost”, and concluding that the ratio approaches the golden ratio \varphi.[5]


This convergence does not depend on the starting values chosen, excluding 0, 0.


It follows from the explicit formula that for any real a \ne 0, b \ne 0:

\begin{align}
  \lim_{n\to\infty}\frac{F_{a,b}(n+1)}{F_{a,b}(n)}
     &= \lim_{n\to\infty}\frac{a\varphi^{n+1}+b(1-\varphi)^{n+1}}{a\varphi^n+b(1-\varphi)^n} \\
     &= \lim_{n\to\infty}\frac{a\varphi+b(1-\varphi)\left(\frac{1-\varphi}{\varphi}\right)^n}{a+b\left(\frac{1-\varphi}{\varphi}\right)^n} \\
     &= \varphi
\end{align}

because \bigl|{\tfrac{1-\varphi}{\varphi}}\bigr| < 1 and thus \lim_{n\to\infty}\left(\tfrac{1-\varphi}{\varphi}\right)^n=0
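Numerically the convergence is very fast; a short Python check (added here as an illustration):

```python
import math

# Successive Fibonacci quotients F(n+1)/F(n) approach the golden ratio.
a, b = 1, 1
for _ in range(50):
    a, b = b, a + b
ratio = b / a
print(ratio)   # very close to (1 + sqrt(5)) / 2

assert abs(ratio - (1 + math.sqrt(5)) / 2) < 1e-12
```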

Matrix form

A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is

{F_{k+2} \choose F_{k+1}} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} {F_{k+1} \choose F_{k}}


\vec F_{k+1} = A \vec F_{k}.\,

The eigenvalues of the matrix A are \varphi\,\! and (1-\varphi)\,\!, and the elements of the eigenvectors of A, {\varphi \choose 1} and {1 \choose -\varphi}, are in the ratios \varphi\,\! and (1-\varphi\,\!).

This matrix has a determinant of −1, and thus it is a 2×2 unimodular matrix. This property can be understood in terms of the continued fraction representation for the golden ratio:

\varphi =1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\;\;\ddots\,}}} \;.

The Fibonacci numbers occur as the ratio of successive convergents of the continued fraction for \varphi\,\!, and the matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1.

The matrix representation gives the following closed expression for the Fibonacci numbers:

\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n =
\begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}.

Taking the determinant of both sides of this equation yields Cassini’s identity

 F_{n+1}F_{n-1} - F_n^2 = (-1)^n.\,

Additionally, since A^n A^m = A^{m+n} for any square matrix A, the following identities can be derived:

{F_n}^2 + {F_{n-1}}^2 = F_{2n-1},\,
F_{n+1}F_{m} + F_n F_{m-1} = F_{m+n}.\,
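Both the matrix identity and Cassini’s identity are easy to verify with a hand-rolled 2×2 matrix power (a sketch; the helper names are mine):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, n):
    """Compute A**n by repeated squaring."""
    result = [[1, 0], [0, 1]]            # identity matrix
    while n:
        if n & 1:
            result = mat_mult(result, A)
        A = mat_mult(A, A)
        n >>= 1
    return result

M = mat_pow([[1, 1], [1, 0]], 10)
print(M)   # [[89, 55], [55, 34]] = [[F(11), F(10)], [F(10), F(9)]]

# Cassini: F(11)*F(9) - F(10)**2 = (-1)**10 = 1
assert M[0][0] * M[1][1] - M[0][1] ** 2 == 1
```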

Recognizing Fibonacci numbers

Occasionally, the question may arise whether a positive integer z is a Fibonacci number. Since F(n) is the closest integer to \varphi^n/\sqrt{5}, the most straightforward test is the identity

F\bigg(\bigg\lfloor\log_\varphi\big(\sqrt{5}\,z\big)+\frac{1}{2}\bigg\rfloor\bigg)=z,

which is true if and only if z is a Fibonacci number.

A slightly more sophisticated test uses the fact that the convergents of the continued fraction representation of \varphi are ratios of successive Fibonacci numbers, that is, the inequality

\bigg|\varphi-\frac{p}{q}\bigg|<\frac{1}{q^2}

(with coprime positive integers p, q) is true if and only if p and q are successive Fibonacci numbers. From this one derives the criterion that z is a Fibonacci number if and only if the intersection of the closed interval

\bigg[\varphi z-\frac{1}{z},\varphi z+\frac{1}{z}\bigg]

with the positive integers \mathbb{N} is not empty.[6]
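Rather than the interval test, implementations often use an equivalent, well-known criterion: z > 0 is a Fibonacci number exactly when 5z² + 4 or 5z² − 4 is a perfect square. A Python sketch of that variant:

```python
import math

def is_fibonacci(z):
    """z > 0 is a Fibonacci number iff 5*z*z + 4 or 5*z*z - 4 is a perfect square."""
    def is_square(m):
        r = math.isqrt(m)
        return r * r == m
    return is_square(5 * z * z + 4) or is_square(5 * z * z - 4)

print([z for z in range(1, 100) if is_fibonacci(z)])
# [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```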

Identities

  1. F(n + 1) = F(n) + F(n − 1)
  2. F(0) + F(1) + F(2) + … + F(n) = F(n + 2) − 1
  3. F(1) + 2 F(2) + 3 F(3) + … + n F(n) = n F(n + 2) − F(n + 3) + 2
  4. F(0)² + F(1)² + F(2)² + … + F(n)² = F(n) F(n + 1)
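All four identities can be checked numerically before reading the proofs; a quick Python sketch:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 20):
    assert fib(n + 1) == fib(n) + fib(n - 1)                        # identity 1
    assert sum(fib(k) for k in range(n + 1)) == fib(n + 2) - 1      # identity 2
    assert (sum(k * fib(k) for k in range(1, n + 1))
            == n * fib(n + 2) - fib(n + 3) + 2)                     # identity 3
    assert sum(fib(k) ** 2 for k in range(n + 1)) == fib(n) * fib(n + 1)  # identity 4
print("all four identities hold for n = 1..19")
```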

These identities can be proven using many different methods; here we present an elegant combinatorial proof of each. In particular, F(n) can be interpreted as the number of ways of summing 1’s and 2’s to n − 1, with the convention that F(0) = 0, meaning no sum will add up to −1, and that F(1) = 1, meaning the empty sum will “add up” to 0. Here the order of the summands matters. For example, 1 + 2 and 2 + 1 are considered two different sums and are counted twice.

Proof of the first identity

Without loss of generality, we may assume n ≥ 1. Then F(n + 1) counts the number of ways summing 1’s and 2’s to n.

When the first summand is 1, there are F(n) ways to complete the counting for n − 1; and when the first summand is 2, there are F(n − 1) ways to complete the counting for n − 2. Thus, in total, there are F(n) + F(n − 1) ways to complete the counting for n.

Proof of the second identity

We count the number of ways summing 1’s and 2’s to n + 1 such that at least one of the summands is 2.

As before, there are F(n + 2) ways summing 1’s and 2’s to n + 1 when n ≥ 0. Since there is only one sum of n + 1 that does not use any 2, namely 1 + … + 1 (n + 1 terms), we subtract 1 from F(n + 2).

Equivalently, we can consider the first occurrence of 2 as a summand. If, in a sum, the first summand is 2, then there are F(n) ways to complete the counting for n − 1. If the second summand is 2 but the first is 1, then there are F(n − 1) ways to complete the counting for n − 2. Proceed in this fashion. Eventually we consider the (n + 1)th summand. If it is 2 but all of the previous n summands are 1’s, then there are F(0) ways to complete the counting for 0. If a sum contains 2 as a summand, the first occurrence of such a summand must take place in between the first and (n + 1)th position. Thus F(n) + F(n − 1) + … + F(0) gives the desired counting.

Proof of the third identity

This identity can be established in two stages. First, we count the number of ways summing 1s and 2s to −1, 0, …, or n + 1 such that at least one of the summands is 2.

By our second identity, there are F(n + 2) − 1 ways summing to n + 1; F(n + 1) − 1 ways summing to n; …; and, eventually, F(2) − 1 ways summing to 1. As F(1) − 1 = F(0) = 0, we can add up all n + 1 sums and apply the second identity again to obtain

   [F(n + 2) − 1] + [F(n + 1) − 1] + … + [F(2) − 1]
= [F(n + 2) − 1] + [F(n + 1) − 1] + … + [F(2) − 1] + [F(1) − 1] + F(0)
= F(n + 2) + [F(n + 1) + … + F(1) + F(0)] − (n + 2)
= F(n + 2) + F(n + 3) − (n + 2).

On the other hand, we observe from the second identity that there are

  • F(0) + F(1) + … + F(n − 1) + F(n) ways summing to n + 1;
  • F(0) + F(1) + … + F(n − 1) ways summing to n;


  • F(0) way summing to −1.

Adding up all n + 1 sums, we see that there are

  • (n + 1) F(0) + n F(1) + … + F(n) ways summing to −1, 0, …, or n + 1.

Since the two methods of counting refer to the same number, we have

(n + 1) F(0) + n F(1) + … + F(n) = F(n + 2) + F(n + 3) − (n + 2)

Finally, we complete the proof by subtracting the above identity from n + 1 times the second identity.

Identity for doubling n

Another identity useful for calculating Fn for large values of n is

F_{2n+k} = F_k F_{n+1}^2 + 2 F_{k-1} F_{n+1} F_n + F_{k-2} F_n^2

for all integers n and k. Dijkstra[7] points out that doubling identities of this type can be used to calculate Fn using O(log n) arithmetic operations.

(From a practical standpoint, note that the calculation involves numbers whose length (number of digits) is {\rm \Theta}(n)\,. The actual performance therefore depends mainly on the efficiency of the long-multiplication implementation, and is usually {\rm \Theta}(n \,\log n) or {\rm \Theta}(n ^{\log_2 3}).)
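The k = 0 and k = 1 cases of the identity give the standard “fast doubling” formulas F(2m) = F(m)(2F(m+1) − F(m)) and F(2m+1) = F(m)² + F(m+1)², which yield an O(log n) method; a Python sketch (function names are mine):

```python
def fib_fast_doubling(n):
    """Compute F(n) in O(log n) arithmetic steps via the doubling identities
    F(2m) = F(m)*(2*F(m+1) - F(m)) and F(2m+1) = F(m)**2 + F(m+1)**2."""
    def helper(k):
        if k == 0:
            return (0, 1)                  # (F(0), F(1))
        f, g = helper(k >> 1)              # (F(m), F(m+1)) for m = k // 2
        f2 = f * (2 * g - f)               # F(2m)
        g2 = f * f + g * g                 # F(2m+1)
        if k & 1:
            return (g2, f2 + g2)           # k odd: (F(2m+1), F(2m+2))
        return (f2, g2)                    # k even: (F(2m), F(2m+1))
    return helper(n)[0]

print(fib_fast_doubling(100))  # 354224848179261915075
```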

Other identities

Other identities include relationships to the Lucas numbers, which have the same recursive properties but start with L_0 = 2 and L_1 = 1. These properties include F_{2n} = F_n L_n.

There are also scaling identities, which take you from F_n and F_{n+1} to a variety of things of the form F_{an+b}; for instance

F_{3n} = 5F_{n}^3 + 3 (-1)^n F_{n}

F_{3n+1} = F_{n+1}^3 + 3 F_{n+1}F_n^2 - F_n^3

F_{3n+2} = F_{n+1}^3 + 3 F_{n+1}^2F_n + F_n^3

F_{4n} = 4F_nF_{n+1}(F_{n+1}^2 + 2F_n^2) - 3F_n^2(F_n^2 + 2F_{n+1}^2)

These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve, should you wish to factorize a Fibonacci number. Their existence is strongly dependent on the fact that F_n = \sqrt{1/5} \left(\phi^n - \left(-\phi\right)^{-n}\right); Fibonacci-like numbers with a less symmetrical form to the solution of the recurrence relation do not have such identities associated with them.

Power series

The generating function of the Fibonacci sequence is the power series

s(x)=\sum_{k=0}^{\infty} F_k x^k.

This series has a simple and interesting closed-form solution for x < 1/\varphi:

s(x)=\frac{x}{1-x-x^2}.
This solution can be proven by using the Fibonacci recurrence to expand each coefficient in the infinite sum defining s(x):

\begin{align}
  s(x) &= \sum_{k=0}^{\infty} F_k x^k \\
       &= F_0 + F_1 x + \sum_{k=2}^{\infty} \left( F_{k-1} + F_{k-2} \right) x^k \\
       &= x + \sum_{k=2}^{\infty} F_{k-1} x^k + \sum_{k=2}^{\infty} F_{k-2} x^k \\
       &= x + x\sum_{k=0}^{\infty} F_k x^k + x^2\sum_{k=0}^{\infty} F_k x^k \\
       &= x + x s(x) + x^2 s(x)
\end{align}

Solving the equation s(x) = x + xs(x) + x2s(x) for s(x) results in the closed form solution.

In particular, math puzzle-books note the curious value \frac{s(\frac{1}{10})}{10}=\frac{1}{89}, or more generally

\sum_{n = 1}^{\infty}{\frac {F(n)}{10^{(k + 1)(n + 1)}}} = \frac {1}{10^{2k + 2} - 10^{k + 1} - 1}

for all integers k ≥ 0.
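The k = 0 case can be checked exactly with rational arithmetic; a Python sketch using fractions:

```python
from fractions import Fraction

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# s(1/10)/10 = sum of F(n)/10**(n+1) should equal 1/89 (the k = 0 case).
partial = sum(Fraction(fib(n), 10 ** (n + 1)) for n in range(1, 60))
print(float(partial))   # ~0.0112359550561798, i.e. 1/89

assert abs(float(partial) - 1 / 89) < 1e-15
```

The partial sum converges quickly because F(n) grows like \varphi^n while the denominators grow like 10^n.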



Reciprocal sums

Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions. For example, we can write the sum of every odd-indexed reciprocal Fibonacci number as

\sum_{k=0}^\infty \frac{1}{F_{2k+1}} = \frac{\sqrt{5}}{4}\vartheta_2^2 \left(0, \frac{3-\sqrt 5}{2}\right) ,

and the sum of squared reciprocal Fibonacci numbers as

\sum_{k=1}^\infty \frac{1}{F_k^2} = \frac{5}{24} \left(\vartheta_2^4\left(0, \frac{3-\sqrt 5}{2}\right) - \vartheta_4^4\left(0, \frac{3-\sqrt 5}{2}\right) + 1 \right).

If we add 1 to each Fibonacci number in the first sum, there is also the closed form

\sum_{k=0}^\infty \frac{1}{1+F_{2k+1}} = \frac{\sqrt{5}}{2},

and there is a nice nested sum of squared Fibonacci numbers giving the reciprocal of the golden ratio,

\sum_{k=1}^\infty \frac{(-1)^{k+1}}{\sum_{j=1}^k {F_{j}}^2} = \frac{\sqrt{5}-1}{2}.

Results such as these make it plausible that a closed formula for the plain sum of reciprocal Fibonacci numbers could be found, but none is yet known. Despite that, the reciprocal Fibonacci constant

\psi = \sum_{k=1}^{\infty} \frac{1}{F_k} = 3.359885666243 \dots

has been proved irrational by Richard André-Jeannin.
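Because F(k) grows exponentially, the reciprocal Fibonacci constant can be approximated to high precision from a modest partial sum; a Python sketch:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Partial sum of 1/F(k); the tail beyond k = 79 is negligibly small.
psi = sum(1 / fib(k) for k in range(1, 80))
print(psi)   # ~3.359885666243...
```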

Right triangles

Starting with 5, every second Fibonacci number is the length of the hypotenuse of a right triangle with integer sides, or in other words, the largest number in a Pythagorean triple. The length of the longer leg of this triangle is equal to the sum of the three sides of the preceding triangle in this series of triangles, and the shorter leg is equal to the difference between the preceding bypassed Fibonacci number and the shorter leg of the preceding triangle.

The first triangle in this series has sides of length 5, 4, and 3. Skipping 8, the next triangle has sides of length 13, 12 (5 + 4 + 3), and 5 (8 − 3). Skipping 21, the next triangle has sides of length 34, 30 (13 + 12 + 5), and 16 (21 − 5). This series continues indefinitely.
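The construction described above is easy to automate; a Python sketch (the generator and its bookkeeping are my own phrasing of the rule):

```python
def fib_triangles(count):
    """Generate right triangles (hypotenuse, long leg, short leg) per the rule:
    hypotenuses are every second Fibonacci number starting at 5; the long leg
    is the previous triangle's perimeter; the short leg is the skipped
    Fibonacci number minus the previous short leg."""
    fibs = [0, 1]
    while len(fibs) < 2 * count + 7:
        fibs.append(fibs[-1] + fibs[-2])
    triangles = [(5, 4, 3)]
    skipped_index = 6                       # F(6) = 8 is skipped first
    while len(triangles) < count:
        hyp, long_leg, short_leg = triangles[-1]
        new_long = hyp + long_leg + short_leg
        new_short = fibs[skipped_index] - short_leg
        new_hyp = fibs[skipped_index + 1]   # the next odd-indexed Fibonacci
        triangles.append((new_hyp, new_long, new_short))
        skipped_index += 2
    return triangles

for h, a, b in fib_triangles(4):
    assert h * h == a * a + b * b           # each really is a right triangle
print(fib_triangles(4))  # [(5, 4, 3), (13, 12, 5), (34, 30, 16), (89, 80, 39)]
```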

Magnitude of Fibonacci numbers

The number of base-b digits of F_n\, is asymptotic to n\,\log_b\varphi.

Applications

The Fibonacci numbers are important in the run-time analysis of Euclid’s algorithm to determine the greatest common divisor of two integers: the worst case input for this algorithm is a pair of consecutive Fibonacci numbers.
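A brute-force check (an illustration, not from the text) shows that the consecutive Fibonacci pair 89, 55 is indeed the worst case among inputs of that size:

```python
def gcd_steps(a, b):
    """Count the division steps Euclid's algorithm takes on (a, b) with a >= b."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers F(11) = 89, F(10) = 55 force every quotient
# to be 1, which maximizes the step count among inputs up to 89.
worst = max(gcd_steps(a, b) for a in range(1, 90) for b in range(1, a + 1))
assert worst == gcd_steps(89, 55) == 9
print(worst)  # 9
```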

Yuri Matiyasevich was able to show that the Fibonacci numbers can be defined by a Diophantine equation, which led to his original solution of Hilbert’s tenth problem.

The Fibonacci numbers occur in the sums of “shallow” diagonals in Pascal’s triangle and Lozanić’s triangle (see Binomial coefficient).

Every positive integer can be written in a unique way as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. This is known as Zeckendorf’s theorem, and a sum of Fibonacci numbers that satisfies these conditions is called a Zeckendorf representation.

Fibonacci numbers are used by some pseudorandom number generators.
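One such family is the lagged Fibonacci generator, which extends the Fibonacci recurrence to x_n = (x_{n-j} + x_{n-k}) mod m. The lags, modulus, and ordered seed below are illustrative choices for demonstrating the mechanics, not a recommendation; a real generator would use a properly randomized seed and discard early outputs:

```python
from collections import deque

def lagged_fibonacci(seed, j=5, k=17, m=2**32):
    """Yield pseudorandom numbers via x_n = (x_{n-j} + x_{n-k}) mod m."""
    assert len(seed) >= k > j > 0
    state = deque(seed[-k:], maxlen=k)
    while True:
        x = (state[-j] + state[-k]) % m
        state.append(x)          # oldest entry falls off automatically
        yield x

gen = lagged_fibonacci(list(range(1, 18)))
first = [next(gen) for _ in range(3)]
```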

Fibonacci numbers arise in the analysis of the Fibonacci heap data structure.

A one-dimensional optimization method, called the Fibonacci search technique, uses Fibonacci numbers.[9]
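The technique narrows a bracketing interval around the minimum of a unimodal function, placing two interior probes at positions given by ratios of Fibonacci numbers so that one probe is reused at every step. A minimal sketch under those assumptions (the test function and interval are illustrative):

```python
def fibonacci_search(f, a, b, n=20):
    """Locate the minimum of a unimodal f on [a, b] in n Fibonacci steps."""
    F = [1, 1]
    while len(F) < n + 1:
        F.append(F[-1] + F[-2])
    # Two interior probes at Fibonacci-ratio positions.
    x1 = a + (F[n - 2] / F[n]) * (b - a)
    x2 = a + (F[n - 1] / F[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for i in range(1, n - 1):
        if f1 < f2:                      # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1       # reuse x1 as the new right probe
            x1 = a + (F[n - i - 2] / F[n - i]) * (b - a)
            f1 = f(x1)
        else:                            # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2       # reuse x2 as the new left probe
            x2 = a + (F[n - i - 1] / F[n - i]) * (b - a)
            f2 = f(x2)
    return (a + b) / 2

x_min = fibonacci_search(lambda x: (x - 1.0) ** 2, 0.0, 2.0)
```

After n steps the bracket has shrunk by a factor of about F(n), so n = 20 locates the minimum to roughly (b − a)/10946.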

In music, Fibonacci numbers are sometimes used to determine tunings, and, as in visual art, to determine the length or size of content or formal elements. It is commonly thought that the first movement of Béla Bartók‘s Music for Strings, Percussion, and Celesta was structured using Fibonacci numbers.

Since the conversion factor 1.609 for miles to kilometers is close to the golden ratio (denoted φ), the decomposition of distance in miles into a sum of Fibonacci numbers becomes nearly the kilometer sum when the Fibonacci numbers are replaced by their successors. This method amounts to a radix 2 number register in golden ratio base φ being shifted. To convert from kilometers to miles, shift the register down the Fibonacci sequence instead.[10][11][12]
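The shift can be demonstrated directly: write the mileage as a sum of non-consecutive Fibonacci numbers (the greedy Zeckendorf decomposition), then replace each term by its successor. A sketch, with our own function name:

```python
def miles_to_km_estimate(miles):
    """Approximate miles -> km by shifting the Zeckendorf decomposition."""
    fibs = [1, 2]
    while fibs[-1] < miles:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.append(fibs[-1] + fibs[-2])   # one extra, so every term has a successor
    remaining, km = miles, 0
    for i in range(len(fibs) - 2, -1, -1):
        if fibs[i] <= remaining:       # greedy Zeckendorf step
            remaining -= fibs[i]
            km += fibs[i + 1]          # successor term, about phi times larger
    return km

print(miles_to_km_estimate(50), 50 * 1.609)
```

For example, 50 miles = 34 + 13 + 3 shifts to 55 + 21 + 5 = 81, against an actual 80.45 km.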

Fibonacci numbers in nature

Sunflower head displaying florets in spirals of 34 and 55 around the outside

Fibonacci sequences appear in biological settings,[13] such as branching in trees, the fruitlets of a pineapple,[14] an uncurling fern and the arrangement of a pine cone.[15] In addition, numerous poorly substantiated claims of Fibonacci numbers or golden sections in nature are found in popular sources, e.g. relating to the breeding of rabbits, the spirals of shells, and the curve of waves.

Przemyslaw Prusinkiewicz advanced the idea that real instances can be in part understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars.[16]

A model for the pattern of florets in the head of a sunflower was proposed by H. Vogel in 1979.[17] This has the form

\theta = \frac{2\pi}{\phi^2} n, r = c \sqrt{n}

where n is the index number of the floret and c is a constant scaling factor; the florets thus lie on Fermat’s spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio. Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form F(j):F(j+1), the nearest neighbors of floret number n are those at n±F(j) for some index j which depends on r, the distance from the center. It is often said that sunflowers and similar arrangements have 55 spirals in one direction and 89 in the other (or some other pair of adjacent Fibonacci numbers), but this is true only of one range of radii, typically the outermost and thus most conspicuous.
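Vogel's model is straightforward to implement: floret n sits at angle n times the golden angle and radius c√n. A small sketch (c = 1 is an arbitrary scale, and the names are ours):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
GOLDEN_ANGLE = 2 * math.pi / PHI ** 2      # about 137.51 degrees, in radians

def floret_positions(count, c=1.0):
    """(x, y) positions of florets under Vogel's model (Fermat's spiral)."""
    points = []
    for n in range(1, count + 1):
        theta = GOLDEN_ANGLE * n           # divergence angle per floret
        r = c * math.sqrt(n)               # Fermat's spiral radius
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

print(round(math.degrees(GOLDEN_ANGLE), 2))  # 137.51
```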

50 years ago, Sputnik changed the world

•September 30, 2007 • Leave a Comment


The Kansas City Star

It happened the night “Leave It to Beaver” first aired across America, 50 years ago this coming Thursday.

Hundreds of miles above the TV antennas, Sputnik made its first beeps.

Just after the sun set on a nation meeting the Cleavers — that impeccable 1950s family played to fantasy proportions — many real-life families stepped outside and gazed up at a dark, disturbing reality: What newspapers called a “baby moon” roamed the skies, and it belonged to the Russians.

The gadget itself, just 184 pounds, did little but beep to ham-radio operators worldwide that the Soviets had launched the first artificial satellite. Its battery soon died. In three months, Sputnik fell into the atmosphere to burn up.

Yet the space around Earth would never be the same, and, in countless ways, neither would life on the ground.

“I remember just as well as it was yesterday,” said Blue Valley West High School biology teacher Ken Bingman, who was 18 then and living in Kansas City, Kan. “You could only see it at dusk or dawn, when the sun reflected off of it.

“As I stood outside my apartment, there came this bright, aluminum-looking ball, traveling across the sky with such … regularity.” No sound, no blinking lights, just a streak.

“It sent chills through me. One has to remember that the Cold War was really raging then. A sign posted on Southwest Trafficway said that in case of enemy attack, this was a one-way street heading south.”

The legacy of Sputnik has at least two distinct sides to it, depending on your age.

Older people recall the edgy, “space race” part of the story — a geopolitical and scientific struggle culminating in Neil Armstrong, an American, stepping onto the moon in 1969.

But for younger populations — including the U.S. majority not yet born when the race began — Sputnik’s legacy lives on in the global reach of their wireless laptops, made possible by the thousands of satellites that followed the first one into orbit.

“That’s something a lot of people take for granted today,” said Ed Thompson, an engineer familiar with satellite technology at Overland Park-based Sprint Nextel Corp.

“Nowadays you have satellites flying overhead all the time, and people don’t even give it a thought. … If you could see all of (the transmissions), it would be overwhelming.”

• • •

In Frontenac, Kan., the feelings that stirred in Jim Spigarelli and his teenage peers that week in 1957 were unmistakable:

“Disbelief that the Russians could put anything up in space before we did.”

Added Spigarelli, who today presides over the Midwest Research Institute in Kansas City: “Hands down, the U.S. and its schools were always thought to be No. 1. So at first, the reaction was shock.

“The other thing I thought? This was cool.”

It was the cool factor, spreading through his small-town high school and classrooms across the nation, that nudged students such as Spigarelli to the science tables — once the domain of the quiet, not-so-popular kids.

He was the varsity quarterback, after all, “and being very competitive, I felt we can’t settle for second place. It was the competitive factor of the space race that pushed me” to pursue science rather than law or teaching. He majored in analytical chemistry in college.

In Sputnik’s wake, Congress passed the National Defense Education Act of 1958, boosting funds for science education and providing scholarships for aspiring chemists, engineers and mathematicians.

Fears that the Soviets could arm their satellites with nuclear weapons slowly gave way to a new understanding that the Cold War would be fought with slide rules (what we used before calculators, kids).

Initially, President Dwight D. Eisenhower dismissed the significance of the Soviet satellite. Sputnik’s clunky appearance — a kind of metallic exercise ball with four javelins sticking out — lacked the scope and grandeur of the futuristic space vehicles and moon stations illustrated earlier that decade in Collier’s magazine.

“That Collier’s vision — big equipment, with lots of people — had fueled most people’s expectations of what space travel would be. But Sputnik changed it,” said Roger Launius, senior curator at the Smithsonian Air and Space Museum’s division of space history.

The first men in space (and, for Russia in 1963, the first woman in space) were thus wedged into tin cans atop rocket boosters.

For U.S. scientists, an ugly and tiny vehicle was the only way to go if they were to come close to meeting President John Kennedy’s pledge of “landing a man on the moon and returning him safely to the earth” by the end of the 1960s.

By far, the biggest challenge in Kennedy’s call was the humans-getting-there part. Soviet unmanned missions were getting there even before Kennedy was elected.

In fact, a Russian spacecraft launched in 1959 had already struck the moon. A month later, another mission provided the Soviets the first pictures of the moon’s back side.

“It’s all about framing the debate. Kennedy framed it in terms of putting a man on the moon,” said Launius. “You could send robots up there, but as someone once said, ‘Nobody does ticker-tape parades for robots.’ ”

• • •

Besides housing the largest collection of Soviet space-travel artifacts outside Moscow, the Kansas Cosmosphere and Space Center in Hutchinson offers a nostalgic ride back in time for visitors: “Jetsons” lunch boxes, early astronaut suits, even replicas of the Apollo capsules and landers — all the offspring of Sputnik.

“The schoolchildren will look at spacecraft from back then and say, ‘That just looks like foil wrapped on the outside.’ Well, that’s pretty much what it is,” said Cosmosphere spokeswoman Jill Ulrich.

“They understand the looks of the space shuttle. This cone-looking stuff is very foreign to them.”

Even at the Olathe headquarters of Garmin International Inc. — makers of Global Positioning System devices that owe their existence to satellite technology — the name “Sputnik” means different things to different people.

“Not a heck of a lot of old guys here, so to speak,” said company spokesman Ted Gartner.

“The glory days of manned space flight could be behind us. Whether it has a future, I don’t know. But as for the present, (Sputnik’s legacy) is the satellite technology that we use every day.”

We use it to track hurricanes. We use GPS devices in our cars to find the nearest store. Satellites let us watch live coverage of royal weddings and funerals, even follow the movements of endangered species.

Using mobile towers, Sprint this year signaled satellites to provide cell phone service to emergency workers after a tornado in Greensburg, Kan., ripped that area’s communications infrastructure to pieces.

All told, the U.S. Air Force counts some 6,000 satellites launched into space, at the behest of more than four dozen countries, since Sputnik. About 3,200 of those satellites continue to orbit — including two that deliver XM Satellite Radio.

Their names?

“Rock” and “Roll.”

By the numbers
5,958 Total man-made satellites — from puppy-toting capsules to the things that beam driving directions to your dashboard — that have orbited the planet

9,341 Pieces of space junk loitering above the atmosphere

1 in 3 Chances that a satellite or scrap of space junk is American-made

352 Lead in orbiting satellites the former Soviet Union has over the United States

Spawn of Sputnik
3,204 Number of satellites circling Earth today


Number of objects launched into the sky that have fallen back to Earth — including Skylab, the space shuttles and thousands of satellites you never knew were aloft

Analysis and Identification of Computer Viruses

•September 24, 2007 • Leave a Comment

Analysis and Identification of Computer Viruses

In the past decade, computer and networking technology has seen enormous growth. This growth, however, has not come without a price. With the advent of the “Information Highway,” as it has been coined, a new methodology in crime has been created. Electronic crime has been responsible for some of the most financially devastating victimizations in society.

In the recent past, society has seen malicious editing of the Justice Department web page (1), unauthorized access into classified government computer files, phone card and credit card fraud, and electronic embezzlement. All these crimes are committed in the name of “free speech.” This new breed of criminals claims that information should not be suppressed or protected and that the crimes they commit are really not crimes at all. What they choose to deny is that the nature of their actions is slowly consuming the fabric of our country’s moral and ethical trust in the information age.

Federal law enforcement agencies, as well as commercial computer companies, have been scrambling in an attempt to “educate” the public on how to prevent computer crime from happening to them. They inform us whenever there is an attack, provide us with mostly ineffective anti-virus software, and we are left feeling isolated and vulnerable. I do not feel that this defensive posture is effective because it is not pro-active. Society is still being attacked by highly skilled computer criminals about whom we know very little: their motives and their tools of the trade. Therefore, to be effective in defense, we must understand how these attacks take place from a technical standpoint. To some degree, we must learn to think like a computer criminal. Then we will be in a better position to defend against the victimizations that affect us on both the financial and emotional level. In this paper, we will explore these areas of which we know so little, and will also see that computers are really extensions of people. An attack on a computer’s vulnerabilities is really an attack on people’s vulnerabilities.

Today, computer systems are under attack from a multitude of sources. These range from malicious code, such as viruses and worms, to human threats, such as hackers and phone “phreaks.” These attacks target different characteristics of a system. This leads to the possibility that a particular system is more susceptible to certain kinds of attacks.

Malicious code, such as viruses and worms, attack a system in one of two ways, either internally or externally. Traditionally, the virus has been an internal threat (an attack from within the company), while the worm, to a large extent, has been a threat from an external source (a person attacking from the outside via modem or connecting network).

Human threats are perpetrated by individuals or groups of individuals that attempt to penetrate systems through computer networks, public switched telephone networks or other sources. These attacks generally target known security vulnerabilities of systems. Many of these vulnerabilities are simply due to configuration errors.

Malicious Code

Viruses and worms are related classes of malicious code; as a result they are often confused. Both share the primary objective of replication. However, they are distinctly different with respect to the techniques they use and their host system requirements. This distinction is due to the disjoint sets of host systems they attack. Viruses have been almost exclusively restricted to personal computers, while worms have attacked only multi-user systems.

A careful examination of the histories of viruses and worms can highlight the differences and similarities between these classes of malicious code. The characteristics shown by these histories can be used to explain the differences between the environments in which they are found. Viruses and worms have very different functional requirements; currently no class of systems simultaneously meets the needs of both.

A review of the development of personal computers and multi-tasking workstations will show that the gap in functionality between these classes of systems is narrowing rapidly. In the future, a single system may meet all of the requirements necessary to support both worms and viruses. This implies that worms and viruses may begin to appear in new classes of systems. A knowledge of the histories of viruses and worms may make it possible to predict how malicious code will cause problems in the future.

Basic Definitions

To provide a basis for further discussion, the following definitions will be used throughout the report:

• Trojan Horse – a program which performs a useful function, but also performs an unexpected action as well;

• Virus – a code segment which replicates by attaching copies to existing executables;

• Worm – a program which replicates itself and causes execution of the new copy; and

• Network Worm – a worm which copies itself to another system by using common network facilities, and causes execution of the copy on that system.

In essence, a computer program which has been infected by a virus has been converted into a “Trojan horse.” The program is expected to perform a useful function, but has the unintended side effect of viral code execution. In addition to performing the unintended task, the virus also performs the function of replication. Upon execution, the virus attempts to replicate and “attach” itself to another program. It is the unexpected and uncontrollable replication that makes viruses so dangerous. As a result, the host or victim computer can suffer unlimited damage from the virus before anyone realizes what has happened.

Viruses are currently designed to attack single platforms. A platform is defined as the combination of hardware and the most prevalent operating system for that hardware. As an example, a virus can be referred to as an IBM-PC virus, referring to the hardware, or a DOS virus, referring to the operating system. “Clones” of systems are also included with the original platform.

History of Viruses

The term “computer virus” was formally defined by Fred Cohen in 1983, while he performed academic experiments on a Digital Equipment Corporation VAX system. Viruses are classified as being one of two types: research or “in the wild.” A research virus is one that has been written for research or study purposes and has received almost no distribution to the public. On the other hand, viruses which have been seen with any regularity are termed “in the wild.” The first computer viruses were developed in the early 1980s. The first viruses found in the wild were Apple II viruses, such as Elk Cloner, which was reported in 1981 [Den90]. Viruses were found on the following platforms:

• Apple II

• Macintosh

• Atari

• Amiga

These computers made up a large percentage of the computers sold to the public at that time. As a result, many people fell prey to Elk Cloner and similar viruses. People suffered losses of data, from personal documents to financial business data, with little or no protection or recourse.

Viruses have “evolved” over the years due to efforts by their authors to make the code more difficult to detect, disassemble, and eradicate. This evolution has been especially apparent in the IBM PC viruses, since there are more distinct viruses known for the DOS operating system than for any other.

The first IBM-PC virus appeared in 1986 [Den90]; this was the Brain virus. Brain was a boot sector virus and remained resident in the computer until “cleaned out”. In 1987, Brain was followed by Alameda (Yale), Cascade, Jerusalem, Lehigh, and Miami (South African Friday the 13th). These viruses expanded the target executables to include COM and EXE files. Cascade was encrypted to deter disassembly and detection. Variable encryption appeared in 1989 with the 1260 virus. Stealth viruses, which employ various techniques to avoid detection, also first appeared in 1989, such as Zero Bug, Dark Avenger and Frodo (4096 or 4K). In 1990, self-modifying viruses, such as Whale were introduced. The year 1991 brought the GP1 virus, which is “network-sensitive” and attempts to steal Novell NetWare passwords. Since their inception, viruses have become increasingly complex and equally destructive.

Is there really a hole in the universe?

•September 3, 2007 • Leave a Comment

In August 2007, scientists from the University of Minnesota published an astonishing finding in the Astrophysical Journal. The universe, they declared, had a hole in it — a hole far bigger than anything scientists have ever seen or expected. This “hole” spans almost one billion light years and is six to 10 billion light years from Earth, in the Eridanus constellation [source: Daily Tech]. (For reference, one light year measures about six trillion miles.)

Outer Space Image Gallery

This picture depicts the spread of cosmic microwave background radiation, beginning with the universe just after the big bang (left), spreading through the universe’s many galaxies, clusters and voids (center), and ending with a recent CMB map. In the giant void, the WMAP satellite (top left) detects a cold spot while the VLA radio telescope (bottom left) sees fewer galaxies.

What makes this vast area of the universe a hole? The area shows almost no signs of cosmic matter, meaning no stars, planets, solar systems or clouds of cosmic dust. Researchers couldn’t even find dark matter, which is invisible but measurable by its gravitational pull. There were also no signs of black holes that might have gobbled up the matter once present in the region.

The hole was initially detected by a NASA program studying the spread of radiation emitted from the Big Bang, which scientists believe spawned our universe. It was then further examined using information gleaned from the Very Large Array (VLA) telescope, used in the NRAO VLA Sky Survey Project to study large sections of the visible sky.

One researcher described the find as “not normal,” going against computer simulations and past studies [source: Yahoo News]. Other such holes, also known as voids, have been found before, but this find is by far the largest. Other voids amount to around 1/1000th the size of this one, while scientists once observed a void as close as two million light years away — practically down the street in cosmic terms.

Astronomer Brent Tully told the Associated Press that galactic voids in all likelihood develop because regions of space with high mass pull matter from less massive areas. Over billions of years, a region can lose most of its mass to a massive neighbor. In the case of this giant void, further studies may reveal some matter in the region, but it would still be far less than what is found in “normal” parts of space.

Earlier we said that the void was first discovered through a NASA program examining radiation stemming from the Big Bang. On the next page, we’ll take a closer look at that program and how scientists can look far back into the universe’s history — almost to its beginnings — in order to make discoveries like this one.