Before we begin, I want to reassure people that this does not affect any of my existing licenses. Free and Open licenses are designed, after all, to be irrevocable. Otherwise, they would not be free. This means that anyone who obtained copies of my work before the license change can continue to use the old license. This includes modifying and redistributing the work under the same license. However, if you do wish to redistribute my work, please consider updating to the new license.

This does not mean that I am under any obligation to continue to distribute my work under the old license. Anyone who obtained the work under the new license will be required to comply with that license. Finally, any new material I create will only be available under the new license. The old license does not retroactively apply to any new content I create.

**So Why The Change?**

Ultimately, I want people to be able to do whatever they want with the stuff I create. I used to think this ought to include the ability to relicense my content as they see fit. However, far too often have I seen license agreements used as a means to restrict the rights of others. I don't want users of my content to be able to deny others the same freedom that I originally gave them. I want to ensure that my work remains free for as long as it is made available, whether by me or anyone else.

I truly believe in the benefits of a free and open world. When people are able to freely share and build upon each other's work, there is no limit to what they can create. There have been several projects that I have wanted to share with others, but doing so would have been illegal due to complex copyright restrictions. While I cannot blame others for wanting to protect their intellectual property, it makes me sad that there aren’t more people willing to share.

So far, my primary concern with copyleft has been with derivative works which combine content from multiple licenses. What qualifies as a derivative work is vague at best. One cannot simply combine work which is proprietary with work that is copyleft, as the two are incompatible. Therefore, one has to be extra careful when using both copyleft and proprietary content, to make sure they are not running afoul of one or the other.

This does not seem to be as big an issue with the Creative Commons Share-Alike license, which is generally used for text, images, video, and the like. There is plenty of CC content already available, and the usefulness of these items is not contingent on whether they can be used alongside proprietary content. In addition, most such works are meant to be complete unto themselves. They may be expanded upon, or referred to in other works, but normally wouldn’t be used to create brand new works.

If you were licensing content intended to be used in entirely new works, such as texture images or background music, then a simple attribution license may be more appropriate. Furthermore, if anyone does want to use my work outside the scope of my license, they can always contact me. Despite its drawbacks, the best way to support the world I want to create, is with copyleft.

**The Same Cannot Be Said For Software**

One of my principal design decisions behind the Vulpine Core Library was to make it as modular as possible. It should be possible to swap out any third-party library it interacts with for a different one with minimal effort. The end user can use any library that can interface with the core library, even providing their own if they so choose. This way, the core library can continue to be useful even if the libraries it's built upon become outdated, unobtainable, or otherwise undesirable.

Software libraries are created, by design, to interface with other software. If the Vulpine Core Library could not interface with another library, because the license of that library forbids it, that would defeat the purpose of making the library modular. That is why I cannot and will not use a “strong” copyleft license such as the GNU GPL for any of my libraries. Using such a license works well if you are making a standalone application, and you have all the rights necessary to use that license. But for libraries, it makes no sense.

Still, it would be nice if I could copyleft the library itself, without impairing its functionality. The Free Software Foundation originally had a special version of the GNU GPL to handle this, appropriately called the GNU Library GPL. However, they quickly replaced it with the GNU Lesser GPL, as if to suggest that it was somehow inferior to the GPL. They even wrote a lengthy article explaining why you should avoid using the Lesser GPL and think carefully before applying it to your own work.

The most recent version (3.0) of the Lesser GPL isn’t even a standalone license, but an extension of the ordinary GPL. Most of the terminology explaining the use cases of the license is gone, and the terms that remain are rather terse. It is almost as if they wanted to obscure the fact that you can link against such libraries from proprietary software. I would not be surprised if support for this license was dropped altogether by version 4.

It's no secret that the FSF wants to make everything copyleft. But it's this sort of disdain for more open licenses that originally made me want to stay away from copyleft. Fortunately for me, this isn't too much of a problem. Because the GNU licenses are ultimately free licenses, I can use any version, of any license, to distribute my software despite the FSF's recommendations.

**In Conclusion**

So far the best community I have found for sharing content has been Creative Commons. They have a total of 6 different licenses to choose from, based on what permissions the licensor is willing to give. The share-alike licenses are copyleft, while the rest are not. All of the licenses are treated equally, and none are considered morally superior to any other. They understand that different projects have different licensing needs.

Unfortunately, CC licenses are not recommended for software, especially open source software, as they say nothing about how the source code should be distributed. So for now software developers are forced to choose between licenses by the FSF and more open licenses, each of which has its own political agenda. That's not to say the FSF's goal isn't admirable, but I do question their methods.

I wish I could just let people use my stuff, without having to be nit-picky about the details. Unfortunately there are those who would abuse this power to restrict the freedom of others. Are the licenses I have chosen the best representation of my wishes? Probably not. But writing my own license would have its own barrel of problems, not least of which would be compatibility. Therefore I see these licenses as a sort of necessary evil.

If anyone wants or needs to use my content in a manner that is not applicable to my license, I encourage them to contact me. Depending on the circumstances I may be more than willing to grant them a special, non-exclusive, license for their particular use.


First of all, I was able to generate this function by building up a polynomial by its roots. Technically the function is a rational function, with a pole at zero, but let's not get nit-picky. I describe how I build up polynomials and rational functions in another blog post, so I won't be going over that here. For the purposes of this blog post, our function is defined as follows:

In case that wasn't entirely clear, I will try to break the function down piece by piece. First of all, our function is a rational function with roots at r0, r1, and r2, and a pole at the origin. These roots are further defined in terms of an extra parameter t. This parameter increases as our animation progresses, causing the location of the roots to move. But they don't move in any old pattern, they actually orbit the pole in the center.

Because we are exponentiating a purely imaginary number, the result is a rotation in the complex plane. That's what causes our roots to orbit the center. Multiplying by a constant value then changes the magnitude of the result, effectively setting the radius of the orbit. That way, each of our roots circles in its own orbit. Also notice that the multiplier of the first root is negative. This is equivalent to a rotation by half a turn, so it starts on the opposite side of the pole from the other two roots.

Notice also that each t value is multiplied by a constant before taking the exponential. This determines the period of each root's orbit. Originally, I tried several different ratios for the orbital periods. I even tried setting them all to be prime numbers. This led to a very long animation, since the animation only repeats for t values that are evenly divisible by all of the periods. The roots also seemed to spend a lot of time bunched up together, which led to some rather dull animations.


Taking inspiration from the moons of Jupiter, I decided to place the roots in a Laplace resonance with each other. This special configuration guarantees that no more than two moons (or roots) are ever in conjunction at the same time. That is, it keeps the roots fairly spaced out, so that they don't clump up together. That’s why the first root starts on the opposite side of the origin, hence the negative multiplier. It also made the animation substantially shorter, which was a bonus.

This is really cool, as the roots are sort of the critical points of our function. For creating fractals, the zeros become our attractor points, and in image maps the zeros correspond to the nadir of our image, effectively creating a “little planet” around each root. Take a look at some of the still frames below, to see what I mean:

The first example is a simple domain coloring, as I explained in my previous blog post on building functions. You can clearly see where the roots are as the image turns to black, while the white point in the center is our pole. The colors bend around the roots in a unique way to connect each root and pole, and fill in our image.

The next example was generated using Newton's method to find the roots of our polynomial. For those who don't know, Newton's method is a strategy for finding the roots of any function, not just polynomials. It requires an initial guess which we take to be the input of our function. Then we iterate until our output converges to one of our roots. Then we color it based on which root we found. That’s why there are only three colors in the above image, because we only have three roots.
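A minimal sketch of that coloring scheme, assuming a polynomial built from its roots as in the post; the sample roots, iteration cap, and tolerance here are my own placeholders:

```python
def newton_color(z, roots, max_iter=64, tol=1e-9):
    """Run Newton's method on p(z) = prod(z - r) from starting point z.

    Returns the index of the root the iteration converges to (used as
    the color index), or -1 if it fails to converge.
    """
    def p(z):
        out = 1
        for r in roots:
            out *= z - r
        return out

    def dp(z):
        # derivative of the product, via the product rule
        total = 0
        for i in range(len(roots)):
            term = 1
            for j, r in enumerate(roots):
                if j != i:
                    term *= z - r
            total += term
        return total

    for _ in range(max_iter):
        d = dp(z)
        if d == 0:
            return -1                 # derivative vanished; give up
        z = z - p(z) / d              # the Newton step
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i              # converged: color by this root
    return -1
```

Coloring every pixel of a grid by `newton_color` of its complex coordinate produces exactly the three-color fractal described above.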

The last example uses a mapping from a panoramic image that I took inside a cathedral in Brunswick. These are full spherical images that provide a 360 by 180 field of view. We can use stereographic projection to project our image onto the complex plane. We then use this image to domain color our function. This way the roots of our function map to the nadir, while the pole in the center maps to the zenith. This is what gives us a “little planet” around each root.
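Here is one common convention for that projection, sketched in Python. The post's exact orientation and scaling may differ; I am assuming the unit circle maps to the sphere's equator, with the origin at the south pole (nadir):

```python
import math

def plane_to_sphere(z):
    """Inverse stereographic projection onto the unit (Riemann) sphere.

    The origin maps to the south pole (nadir) and infinity to the
    north pole (zenith).
    """
    x, y = z.real, z.imag
    d = 1 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

def sphere_to_panorama(p, width, height):
    """Map a point on the sphere to (column, row) of an equirectangular
    360 x 180 panorama."""
    X, Y, Z = p
    lon = math.atan2(Y, X)            # longitude, -pi .. pi
    lat = math.asin(Z)                # latitude, -pi/2 .. pi/2
    col = (lon + math.pi) / (2 * math.pi) * (width - 1)
    row = (math.pi / 2 - lat) / math.pi * (height - 1)
    return col, row
```

Composing this lookup with the rational function before sampling is what sends each root to the bottom row of the panorama, producing the “little planet” around it.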

While looking at the images, particularly the panoramic maps, I noticed something odd. Despite the distortions, most of the image appears quite smooth, with no ripping or tearing. This is because so far we have only ever used conformal operations in our function, so the function itself is conformal. However, notice that there are certain points in the panoramic maps that appear to be “pinched” where the amount of distortion seems to be greatest.

I decided to investigate further, and as I expected, these “pinching” points correspond exactly with the roots of the derivative function. Why is this so? Consider a real-valued function, whose derivative is the slope of the tangent line at each point. This tells us the instantaneous rate of change at each point. We can tell how fast the function is changing at each point, and whether that change is positive or negative.

The complex derivative is similar, but with more degrees of freedom. The absolute value, or magnitude, of our derivative tells us how much our function is changing; while the argument indicates the direction of change. You can think of our function as performing some sort of scaling and rotation to each point in space, which gives rise to the warping we see in our image. This idea is similar to the concept of divergence and curl from multivariable calculus, but the two should not be conflated.

So what does that mean for our “pinching” points? Well, when the derivative is zero the magnitude is also zero. That in itself isn’t a problem. A car can start moving and stop moving, all while maintaining fluid motion, so our image should remain smooth as well. But what about the rotation? When the derivative is zero, the argument is undefined. That means there is no way to tell which way the image should be turning when the derivative is zero. And that's why the image appears to be “pinched”.
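To see this concretely, here is a small sketch using the toy map f(z) = z², whose derivative 2z vanishes at the origin (this is my own example, not the post's function). Sampling the derivative's argument on a tiny circle around the critical point shows it sweeping through every direction, so no single rotation can be assigned at the point itself:

```python
import cmath
import math

def df(z):
    """Derivative of the toy map f(z) = z**2."""
    return 2 * z

# Sample the argument (direction) of f' on a small circle around the
# critical point at the origin: the direction takes every value
# arbitrarily close to the point, so it is undefined at the point
# itself, and the image pinches there.
angles = [cmath.phase(df(0.01 * cmath.exp(1j * a)))
          for a in (0.0, math.pi / 2, math.pi)]
```

For this particular map the derivative's argument equals the angle at which we sampled, confirming that every direction occurs right next to the pinch.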

This goes into some pretty deep stuff, and complex analysis is a fascinating topic! Many people who study it never truly gain an intuitive understanding of these concepts. I think, in toying around with these functions, I have accidentally stumbled upon a rather novel way to visualize these concepts directly. My work may not be as rigorous, but I hope that my insights will be of use to somebody. They have certainly helped me, as I continue to explore deeper into the beautiful world of mathematics.

For sake of brevity, I will assume you have a working understanding of complex numbers. These are numbers of the form (a + b i) where (i) is defined to be the square root of negative one. Complex numbers are an entire topic to themselves. I may decide to do a crash course on them at some point, but until then there are plenty of great resources available online.

Take a look at the first function. This is a simple degree-one polynomial, also known as a linear factor, or a line. It is trivial to see where f(z) is equal to zero. When z equals r1 the terms cancel and the result is zero, thus r1 is a root of our polynomial. In the second equation we have the product of two linear factors. Because the product of two polynomials is always another polynomial, this whole expression is a polynomial. Notice also that when the first factor is zero the whole expression is zero, since anything times zero is zero. This means that both r1 and r2 are roots of our polynomial.

We can extend this idea to build polynomials with as many roots as we like. We can even include certain roots more than once, increasing the multiplicity of that root. You may remember from algebra class, having to factor polynomials to determine their roots. This is sort of the same idea, but in reverse. We know what the roots are, and we want the polynomial.

More importantly, thanks to the fundamental theorem of algebra, we know that any polynomial of degree n has exactly n complex roots, though some may be repeated. This means that the polynomials we construct will have only the roots we specify, and no others. That way we can 'paint' a complex function simply by specifying where the function should be zero.
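The construction can be sketched in a few lines of Python (function names are my own; repeating a root in the input list raises its multiplicity, just as described):

```python
def poly_from_roots(roots):
    """Expand prod(z - r) into coefficients, highest degree first."""
    coeffs = [1]                                  # the constant polynomial 1
    for r in roots:
        # multiply the running polynomial by the factor (z - r)
        shifted = coeffs + [0]                    # coefficients of z * p(z)
        scaled = [0] + [-r * c for c in coeffs]   # coefficients of -r * p(z)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

def eval_poly(coeffs, z):
    """Horner's method, matching the coefficient order above."""
    out = 0
    for c in coeffs:
        out = out * z + c
    return out
```

For roots 1 and -1 this returns [1, 0, -1], i.e. z² − 1, which vanishes exactly at the roots we specified and nowhere else.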

Here we have a complex polynomial with three roots. This image uses a simple domain coloring, where the output of the function determines the color at each point. In this case, the hue is derived from the argument of the function, while the brightness corresponds to the absolute value. The roots are the points that are completely black.
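A bare-bones version of that coloring might look as follows. The magnitude-to-brightness curve is my own choice, not necessarily the one behind the post's images, and a fuller scheme would also wash out toward white near poles:

```python
import cmath
import math
import colorsys

def domain_color(w):
    """Map a complex output value to an (r, g, b) triple in 0..1.

    Hue comes from the argument, brightness from the magnitude,
    so zeros of the function render as black.
    """
    hue = (cmath.phase(w) / (2 * math.pi)) % 1.0
    mag = abs(w)
    value = mag / (mag + 1)        # 0 at a zero, approaching 1 near a pole
    return colorsys.hsv_to_rgb(hue, 1.0, value)
```

Evaluating the polynomial at each pixel's complex coordinate and passing the result through `domain_color` reproduces the kind of image shown above.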

Because our polynomial contains only these roots and no others, they determine how the rest of our image looks. Notice how a ring of color surrounds every one of our roots. If we, for example, decided to double up on one of our roots, then the ring of color around that root would go through every hue twice. As it turns out, there is only one way to have the hue transition smoothly from one root to the next, and that determines the image we generate. Pretty neat, huh?

The astute of you might have noticed that our derived polynomial is not unique. In fact, infinitely many polynomials can be said to share the same roots. After all, we can always multiply our function by a constant, and it will still have the same roots. Once again, anything times zero is still zero. However, multiplying by a constant only changes the magnitude of the output, and not the argument. This bodes well for us, as we can adjust this extra parameter to control the 'brightness' of our image.

But why stop here? If we can build polynomials, we can also build rational functions. A rational function is any function that can be written as the ratio of two polynomials. It's a bit like a rational number, but for functions. All we have to do is specify the roots for the top and bottom polynomials, also known as the numerator and denominator, just like we normally would. But what does this mean for our combined function?

The numerator is easy enough to understand. When the numerator is zero, our function equates to zero divided by some value. This is almost always zero. This means that the roots of our top polynomial are usually roots of our combined function as well. We will get to the exception to this rule in just a little bit.

The denominator is a bit tricky. As you have probably had hammered into you from a young age, you cannot divide by zero. In the real world, we say that the function is undefined when q(z) equals zero. The roots of the denominator generally lead to asymptotes when you plot the real valued function on a graph. However, we aren’t concerned with the real world, and when you enter the complex plane interesting things start to happen.
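In code the whole construction stays just as simple: pick the roots of the top and the bottom. This is a sketch with my own naming, working over the plain complex plane (so evaluating exactly at a pole raises an error rather than returning a point at infinity):

```python
def rational_from_roots(zeros, poles):
    """Return f(z) = prod(z - zero) / prod(z - pole) as a callable."""
    def f(z):
        num = 1
        for r in zeros:
            num *= z - r          # numerator polynomial
        den = 1
        for p in poles:
            den *= z - p          # denominator polynomial
        return num / den          # ZeroDivisionError exactly at a pole
    return f
```

For example, `rational_from_roots([1j, -1j], [0])` builds a function with zeros at ±i and a single pole at the origin.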

Here is another function with the same domain coloring as before. Once again, the black spots mark where the function is equal to zero. But what about those white spots? These are the spots where our denominator equals zero. Just like the roots, they also have rings of color around them. However, these colors cycle in the opposite direction from the roots. They are sort of like anti-roots, and are referred to as 'poles' within complex analysis.

Poles and zeros share a duality. You can see that the color changes smoothly across the poles with no discontinuity. In a certain sense, one can say the value of the function at a pole is infinity, not some positive or negative infinity, but a singular point at infinity. If that sounds weird, consider wrapping the real number line back upon itself so that it forms a circle. If you do that with the complex plane you end up with a sphere, known as the Riemann sphere.

This is not as crazy as it sounds. There are several geometric reasons to support the Riemann sphere. Consider a line as a real function. The slope of the line is given as its rise over run. As you rotate the line the slope increases. When the line turns vertical the slope is infinite. When you rotate it some more, you pass infinity and the slope becomes negative. In the case of our complex function, you can invert the function and the roots and poles swap places.
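That root/pole swap is easy to check numerically. Here is a tiny example with one zero and one pole of my own choosing:

```python
def f(z):
    """Sample rational function: a zero at 1 and a pole at -1."""
    return (z - 1) / (z + 1)

def g(z):
    """The reciprocal 1/f: the zero and the pole trade places."""
    return (z + 1) / (z - 1)
```

f vanishes at 1 while g vanishes at -1; on the Riemann sphere, each function's pole sits exactly where the other's zero was.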

In this realm, division by zero always results in infinity, except for zero divided by zero, which is still indeterminate. So what happens when you have a root and a pole at the same point? Technically that point becomes a discontinuity. However, the function usually remains perfectly smooth around the point, and the point itself is infinitely small so we don't even see it. So effectively the root and the pole just cancel each other out. This is not entirely rigorous, and ideally you would check the limit.

One last note. Some may be tempted to think that because we used two polynomials in the definition of our rational function, we would have two extra constants to consider. As we have shown before, our definition of a polynomial is only unique up to a constant value. However, this results in multiplying both the top and bottom by a constant. The ratio of those two constants is itself a constant, so effectively we can treat this as a single constant.

So that's how you can build your own polynomials and rational functions, just by specifying the roots and the poles. By placing the roots and poles in strategic locations, and fiddling with the domain coloring, you can create all sorts of interesting designs. I originally got into abstract mathematics because I didn't like math teachers telling me I couldn't do something. To me math is an art form, as much about creativity as rigorous proof. That said, you have to know how the rules work before you can bend them.
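The construction described above is easy to sketch in code. The following Python fragment is my own illustration (the article's actual rendering code is not shown): it evaluates a rational function built from lists of roots and poles, and maps the argument of the result onto a hue, which is the first step of a basic domain coloring.

```python
import cmath
import math

def rational(z, roots, poles):
    """Evaluate prod(z - r) / prod(z - p) for the given roots and poles."""
    num = 1.0 + 0.0j
    for r in roots:
        num *= (z - r)
    den = 1.0 + 0.0j
    for p in poles:
        den *= (z - p)
    return num / den

def hue(z):
    """Map the argument of z onto [0, 1), as in a basic domain coloring."""
    return (cmath.phase(z) % (2 * math.pi)) / (2 * math.pi)
```

For example, `rational(2, [1], [-1])` evaluates $(z-1)/(z+1)$ at $z = 2$, giving $1/3$.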

This might sound like a rather specific case, but this problem of finding zeros can be generalized to more exotic problems. For example, computing the inverse of a given function, or finding the intersection point between two curves, can both be obtained using root finding. Furthermore, despite their practical applications, root finding methods can generate beautiful fractals when applied to the complex plane.

The problem with root finding, then, is choosing a method, because there are a lot of them, and as is typical in computer science, different methods fulfill different needs. From the research I have gathered, there appear to be two distinct groups of root finding methods: bracketing methods and non-bracketing methods.

Bracketing methods, as the name suggests, require a search bracket to be specified in order to function. The bracket in question must contain at least one zero. The primary advantage of bracketing methods is that they always converge to one of the zeros inside the bracket, guaranteed. The disadvantage is that they tend to converge rather slowly. They also require the end user to have some knowledge of the function being inverted.

The simplest root finding method is also a bracketing method, known as *Bisection*. This method divides the search space in half, keeps the half that is known to contain a root, and makes that the new search space. It repeats this process until the size of the search space becomes insignificant. That's it! It's extraordinarily simple and easy to implement, but it's also the slowest to converge. In practice, though, this is not as big an issue as it sounds. Unless you are developing a real-time system, or need to invert several different functions, there is nothing wrong with Bisection.
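A minimal Python sketch of Bisection (the parameter names and tolerances here are my own, not those of the article's library):

```python
def bisection(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("bracket must contain a sign change")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        # Keep whichever half still brackets the root.
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2.0
```

For instance, `bisection(lambda x: x*x - 2, 0, 2)` converges to the square root of 2.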

*False Position* is another bracketing method, which tries to improve upon Bisection. Instead of always dividing the search space at the midpoint, it tries to be a bit more intelligent in choosing where to split. This leads to much faster convergence than Bisection in most cases; however, it's possible to find ill-behaved functions where False Position actually performs worse. Others have proposed alterations, or hybrid methods, to try and alleviate this problem. However, even when False Position is at its best, it's still not as fast as the non-bracketing methods.
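A sketch of False Position in the same style (again my own illustration): the only change from Bisection is that the split point is where the secant line through the bracket endpoints crosses zero.

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Like bisection, but splits the bracket at the secant-line zero crossing."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("bracket must contain a sign change")
    x = a
    for _ in range(max_iter):
        # Zero crossing of the line through (a, fa) and (b, fb).
        x_new = (a * fb - b * fa) / (fb - fa)
        fx = f(x_new)
        if abs(x_new - x) < tol or fx == 0.0:
            return x_new
        x = x_new
        # Keep whichever endpoint still brackets the root.
        if fa * fx < 0:
            b, fb = x_new, fx
        else:
            a, fa = x_new, fx
    return x
```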

*Ridder's Method* is yet another bracketing method, which takes this idea even further. Where False Position uses a straight line to determine where to subdivide the search bracket, Ridder's Method uses an exponential curve. This makes Ridder's Method converge very fast, reaching speeds comparable to non-bracketing methods while still guaranteeing convergence. It also worked reasonably well for all the functions I tested it with; however, there could be some function for which it converges slowly, similar to False Position.
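A sketch of Ridder's Method following the common textbook formulation (the rebracketing details here are my own, and may differ from the article's implementation): the method evaluates the midpoint of the bracket, then applies an exponential correction to produce the next estimate.

```python
import math

def ridders(f, a, b, tol=1e-12, max_iter=100):
    """Ridder's method: fit an exponential through the bracket endpoints
    and midpoint, then use its zero crossing as the next estimate."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("bracket must contain a sign change")
    x = (a + b) / 2.0
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)  # > 0 whenever the bracket is valid
        if s == 0.0:
            return m
        # Exponential correction pushes the midpoint toward the root.
        x = m + (m - a) * math.copysign(1.0, fa - fb) * fm / s
        fx = f(x)
        if fx == 0.0 or abs(b - a) < tol:
            return x
        # Rebracket using whichever pair still straddles the root.
        if fm * fx < 0:
            a, fa, b, fb = m, fm, x, fx
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        if a > b:  # keep the bracket ordered
            a, b, fa, fb = b, a, fb, fa
    return x
```

Note that each pass through the loop performs two function evaluations (the midpoint and the new estimate), which is why the article counts Ridder's Method at even iteration numbers.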

Non-bracketing methods do not use a search bracket to find roots, instead they rely upon an initial guess or guesses to find the root. (Note that the guesses need not bracket the root). They also sometimes rely on additional information about the function to be inverted. Such methods usually converge extremely fast, when they do converge. Unfortunately, unlike the bracketed methods, convergence is not guaranteed. They can also be extremely sensitive to initial conditions, the initial guesses given at the start of the algorithm. It is this extreme sensitivity, however, that gives rise to beautiful root-finding fractals.
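The Secant Method, used in the tests later in this article, is the archetypal non-bracketing method: it repeatedly slides along the line through the last two guesses. A minimal sketch (my own):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: iterate the zero crossing of the line through
    the two most recent guesses. Convergence is not guaranteed."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # flat secant line: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1
```

Note there is no bracket check: the guesses are just starting points, and a bad pair can send the iteration off toward a different root, or off to nowhere at all.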

The *Brent-Dekker Method* is a unique case, in that it tries to be the best of both worlds, keeping the guaranteed convergence of bracketing methods while achieving speeds similar to, and sometimes exceeding, those of non-bracketing methods. The implementation is exceedingly complicated (too much to discuss here), but apart from that it appears to have no downsides. It is also the root finding method of choice for numerical packages such as MATLAB, SciPy, Boost, and R.

Thus far I have implemented all of the above methods (excluding Muller's Algorithm) in the Vulpine Core Library. I picked out six different functions of varying complexity with which to test how quickly each method converged. I explicitly excluded Newton's Method from these tests, due to the difficulty of computing the derivative. For the Secant Method, the initial guesses are the endpoints of the function's range; for all other methods, the search bracket is the entire range. If you would like to check the veracity of my work, you can find the source code I used to test all of these methods on GitHub.

I have combined the results from all six runs into a single chart for convenience (see Chart-1). Here you can see the relative error of each algorithm plotted against the number of iterations for which the algorithm has been running.

The X-coordinates indicate the number of iterations for which the algorithm has been running, and can be thought of as a measurement of time. For the purposes of these tests, one iteration corresponds to a single function evaluation. Thus algorithms like Ridder's Method, which performs two function evaluations per loop, are only plotted at every even number of iterations.

The Y-coordinates correspond to the inverse logarithm of the output error. I used the relative error between each successive iteration to evaluate how fast the function was converging. Because these values are very small (near zero) I used the inverse log function to scale them so that we could see more of the detail. Thus the steepness of the curve indicates how quickly the corresponding algorithm converges.
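The article does not give the exact formula, but one common reading of "the inverse logarithm of the relative error" is the following (an assumption on my part): take the relative change between successive iterates and negate its base-10 logarithm, so that larger values roughly correspond to more correct digits.

```python
import math

def error_score(x_prev, x_curr):
    """Scaled convergence score: the negative log of the relative change
    between successive iterates; larger means more correct digits."""
    rel = abs(x_curr - x_prev) / abs(x_curr)
    return -math.log10(rel)
```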

From these tests one can tell that the bracketing methods tend to converge in logarithmic time, while the non-bracketing methods (Secant and Brent) tend to converge much faster in log-logarithmic time. The real oddity though is Ridder's method, which seems to converge in logarithmic time for most functions, similar to the other bracketing methods, but in a few cases a log-logarithmic curve produced a better fit for the data.

The distribution of the data showed that the most stable algorithm, by far, was Bisection, despite also being the slowest. This makes sense, given that bisection always divides the search space in half at every iteration. Ridder's method also achieved a very high R-value, however due to its aforementioned unusual behavior, I am somewhat dubious about this measurement.

The next most stable algorithm was the Brent-Dekker Method. It seemed to perform extremely well for all of the functions in my test suite, while the Secant Method, which achieved comparable results for some functions, tended to vary much more dramatically. False Position performed similarly, tending to vary greatly in performance based on the function being inverted.

From these results, and my research so far, I can conclude that the Brent-Dekker Method is the best general-purpose root finding algorithm you could ask for, if you don't mind the complexity. It seems like those MATLAB and R folks were really on to something! Ridder's Method also seems to be a good choice; however, there are just a few too many oddities about it for me to wholeheartedly recommend it.

On the other hand, if speed is not a critical issue, there is nothing wrong with good old Bisection. It's safe, reliable, easy to implement, and just works.

All content on the internet, even free content, is subject to copyright. Free content works by granting you a non-exclusive license to use that content in ways that are normally reserved for the copyright holder, such as the ability to modify and redistribute the work. However with all the different types of licenses available, the subtle differences between them can be more than a bit confusing.

At this point, I would like to state that I am not a lawyer. Like most people, reading legal text makes my head spin. So you should not take any of this as legal advice. Furthermore, the ideas expressed in this article are my personal opinions only. If you disagree with me that is fine. You should use whatever license is most appropriate for your work. That said I hope you will find my insights useful.

Basically, there are two different types of free licenses: permissive licenses and so-called “Copyleft” or “Share-Alike” licenses. Permissive licenses allow you to copy, modify, recombine, and redistribute the work with minimal restrictions. Usually, only attribution is required. Copyleft provides the same permissions as a permissive license, but requires you to release any derivative works you make under the same copyleft license.

(Table: Permissive Licenses vs. Copyleft Licenses)

The core idea behind copyleft licenses is that if you use free content, anything you make with that work should be free as well. For example, if you release a project under a copyleft license, and someone else takes that project and makes improvements to it, then you can incorporate those changes back into the original project without their permission. It also prevents companies from profiting from open source content, without giving back to the open source community.

The proponents of copyleft licenses argue that free content should always be free, even if it is changed by someone else later down the line. While I agree that this is a noble and worthwhile sentiment, in practice things become much more complicated. Using a restrictive copyleft license can create unintended (often severe) consequences that make the work less free than it otherwise might have been.

For instance: if you were to release a software library under a copyleft license like the GPL, and someone else wanted to write a program that used both your library and a proprietary library (maybe they need it to read PDFs or play MP3s), they would not be allowed to do so. The GPL requires that their entire program, including any libraries, also be released under the GPL, something that the proprietary license strictly forbids.

Another example: suppose you release your stock photography under a CC-BY-SA license, and someone else wants to use your stock photography to build a website. Would it then be necessary for their entire website to be released under CC-BY-SA as well? Unfortunately, it's not entirely clear, and some people may wish to avoid using your photography even if their use of it would be entirely legitimate.

I am personally of the belief that all creative work is derivative. Everything we make is inevitably inspired by the work that came before it. Like biological creatures, ideas must be able to copy, mutate, and recombine in order to successfully evolve. Only by taking inspiration from several different sources are new ideas able to spring forth.

Copyleft licenses allow a work to be copied and mutated, but they fall short of allowing works to be recombined. Copyleft effectively erects a wall between free and proprietary content, never allowing the two to be used together. Material that uses a permissive license can always be used in a copyleft work, but the reverse is not true.

Permissive licenses also have the advantage of being short and easy to understand. The MIT license is only 158 words long, while the GNU General Public License v3 contains a whopping 5,495 words. And what's the one thing that no one ever does when installing new software? Read the license!

Proponents of copyleft often say that using a permissive license is like giving proprietary companies a free hand-out. Companies can take advantage of permissive content without contributing anything back to the open source community. However, even if they use permissive content, they still must attribute the creators of that content, making it easy to expose them for what they are.

I personally think it's better to teach by example, rather than force others to follow my ideology, as copyleft seems to do. Using copyleft would also mean restricting certain uses of my work that I would otherwise want to support. For this reason, and others, I choose to use permissive licenses wherever I am able.

Note that I do not have anything against people who prefer copyleft licenses. In fact, I hope they find my work useful and incorporate it into their own. I can see why copyleft is appealing. But for me, personally, permissive licenses are the best way to go. If you have read this far, I hope that you will give them some consideration.

Cover Photo by: Brian Turner

In order to take the log of a complex number, we must re-write the number in its polar form. However, this form is not unique. In particular, the parameter 'n' can be any integer value, and we still get a valid result. Furthermore, the result of the log function depends on which value of 'n' we choose. This means that the log function is multivalued. In fact, an infinite number of complex values satisfy the log function.
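In symbols, writing $z = r\,e^{i(\theta + 2\pi n)}$ for any integer $n$:

```latex
\log z = \ln r + i\,(\theta + 2\pi n),
\qquad n \in \mathbb{Z}
```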

Ideally, we would like to work with a single-valued function. This allows us to create mappings from the plane to itself, to see how the function warps the geometry. To do this with a function like log, we must introduce a branch cut: a discontinuity in the function which separates one leaf of the function from another. That discontinuity is important, as we will see later.

If that sounds strange, try to imagine a corkscrew. Every time the corkscrew makes one revolution, it overlaps itself. However, our function can't have overlapping regions, as overlapping regions correspond to multiple values. Thus we must make a cut after one full revolution, only considering the output values of a single revolution. Where exactly one revolution starts and ends is somewhat arbitrary.

You may be more familiar with the square root function. Every square root has two values, one positive and one negative. Typically, though, we only consider the positive value when taking square roots, and this can be thought of as the principal value. The complex logarithm also has a principal value. It is the leaf of the log function that contains the real-valued outputs, and it has its branch cut along the negative real axis.
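Python's `cmath` module returns exactly these principal values, which makes it a convenient way to experiment:

```python
import cmath
import math

# cmath uses the principal branch: the cut lies along the negative
# real axis, and the imaginary part of log lies in (-pi, pi].
print(cmath.log(-1))    # pi*i -- the log of a negative number
print(cmath.log(1j))    # (pi/2)*i
print(cmath.sqrt(-4))   # 2i -- the principal square root
```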

So that's it then? We have a single-valued version of the complex logarithm? Not so fast. Although there are good reasons to use the principal branch cut, we could have placed the branch cut on the positive imaginary axis, on one of the diagonal lines, or along any ray emanating from the origin. See (Image 1) for various possible branch cuts.
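To experiment with alternative cuts, one can shift the argument into a chosen revolution before taking the log. This helper is my own sketch (not a standard library function), with the cut along the ray at angle `cut_angle`:

```python
import cmath
import math

def log_with_cut(z, cut_angle=0.0):
    """Complex log with the branch cut along the ray at angle cut_angle,
    measuring the argument in [cut_angle, cut_angle + 2*pi)."""
    theta = cmath.phase(z)
    # Shift the argument into the chosen revolution.
    while theta < cut_angle:
        theta += 2 * math.pi
    while theta >= cut_angle + 2 * math.pi:
        theta -= 2 * math.pi
    return complex(math.log(abs(z)), theta)
```

With the default cut along the positive real axis, the argument of $-i$ becomes $3\pi/2$ rather than the principal $-\pi/2$, so the same input lands on a different leaf of the function.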

As you can see from the complex maps, the branch cuts are clearly visible. One side of the branch cut looks radically different from the other, and the colors in the image do not blend smoothly. Apart from aesthetic concerns, this also has implications for things like complex analysis and path integrals. In particular, it makes it difficult to answer our original question: what is the logarithm of a negative number? If we were to use the principal value, our answer would fall right on the branch cut!

So which branch cut you use depends a lot on your application. However, we've only been considering the natural logarithm. Things get complicated fast when we start to consider other multivalued functions. The generalized logarithm has two branch cuts, via the change of base formula. This can lead to weird things like negative-base logarithms, or log base one. The inverse trigonometric functions (arc-sine, arc-tangent, and so on) also have multiple values, and are often defined in terms of logarithms and square roots.

For the most part, you are better off using the principal value of such multivalued functions. While your teacher may be incorrect in saying you can't take the log of a negative number, in almost all cases you probably shouldn't. For those who are curious, however, experimenting with branch cuts can lead to some beautiful, if counter-intuitive, results. Just be sure, if you follow that path, not to go mad, and don't try any of this stuff on a math test if the teacher says not to.

A while back I wrote a little Lua script, just for fun, for the programmable emulator known as BizHawk. If you don't know what BizHawk is, you can read more about it and download it yourself here. Basically, the script works by tracking the coordinates of some in-game object in memory (typically the player character) and calculating both its speed and the distance it has traveled based on its change in position.
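The actual script is written in Lua for BizHawk, but the core per-frame computation is simple enough to sketch in Python (the function and variable names here are mine, not the script's):

```python
import math

def update(prev_pos, curr_pos, dist_so_far):
    """One frame of a speedometer: velocity is the distance moved this
    frame, which also accumulates into the odometer total."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    vel = math.hypot(dx, dy)
    return vel, dist_so_far + vel
```

For example, moving from (0, 0) to (3, 4) in one frame gives a velocity of 5 units per frame, added onto the running distance.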

I call it the SNES Speedometer, although technically it's also an Odometer, and it can be used for more than just SNES games. Below, I give an example of it being used in Super Mario 64. In Screenshot-1 you can see the HUD for the speedometer, rendered over the game screen.

In the bottom left corner, circled in red, you can see the stats the speedometer actually tracks. The first, 'Loc', is the actual location of the object being tracked. The second, 'Dist', shows the total distance traveled. And the third, 'Vel', shows the velocity of the object at that particular point in time.

In the top left corner, circled in blue, you can see the speedometer graph, showing changes in velocity over time. The 'peek' stat tells you what your maximum velocity was over the last 10 seconds, or basically the highest point in the graph. This view is optional though, so you can disable it while you are playing, if you don't want the graph cluttering your screen.

The script currently supports the following games:

- Bubsy Bobcat
- Earthbound
- F-Zero
- Hyper Zone
- The Legend of Zelda: A Link to the Past
- Pinball Dreams & Fantasies
- Super Mario Kart
- Super Mario World
- Super Mario 64

It is possible, however, to extend the script to support virtually any game. To do so, one simply needs to find the RAM address of the object they wish to track. Sometimes this involves following a pointer to another object. There are a few other details as well, but everything you need to know is explained in the source code.

If this sounds like something you might be interested in, download the code and give it a try! I would love to hear how people use this, or if anyone manages to extend it to other games. Whenever real world measurements are given, I have tried to be as accurate as possible, however I know not everyone will agree with me. If you think I got the scaling factor wrong, you can always change it in the source code.

snes-speedometer.lua

For the past several months, I have been watching tutorials on YouTube on how to do just that, most of them involving WordPress. I did check out a few other options though, including Joomla, Blogger, and SquareSpace. I even considered Tumblr briefly, but it turned out not to be what I was looking for. Ultimately I went with SquareSpace, for its ease of use, and the fact that it came with nearly everything I wanted right out of the box.

I really did want to use WordPress, which is free and open source. But in order to use WordPress, I had to pay for separate hosting, select the right theme, and install various plug-ins to get the functionality I needed, many of which came at a premium. It did not seem so 'free' after that.

It's interesting: I actually started to set up a WordPress website, and thought that was what I was going to use. I only gave SquareSpace a second look because my hosting was taking forever to set up, due in part to a misunderstanding over billing. But I was impatient. I did not want to wait a week to start building my website, so I decided to give SquareSpace a second chance, and I'm glad I did. By the time I was finally able to get into my WordPress dashboard, my SquareSpace website was practically already built!

Now I know the easiest solution isn't always the best solution, and that alone was not enough to convince me to use SquareSpace. I spent the next week or two deliberating whether to use WordPress or SquareSpace, testing and comparing features on both platforms. I must say that SquareSpace is far easier to use, even with plug-ins like Beaver Builder which increase WordPress's user-friendliness.

SquareSpace is also cheaper, once you add in the cost of hosting and plug-ins. While you probably could run a WordPress website for less, if you're going to be serious about designing your website, you are going to want those premium features. On top of that, SquareSpace includes everything in one package, so you know exactly what you are getting, and there are no surprise costs.

That said, there are certain things that SquareSpace simply cannot do, like hosting a membership site, starting a wiki or a forum, or using a shopping system outside the one they provide. If WordPress can't do something, you just install another plug-in. If you need to build that sort of website, then you have no choice but to go with WordPress, as SquareSpace simply won't cut it.

Thus, I had to ask myself if I really needed any of those extra features. While I must admit I found a few of them tempting, most felt like bells and whistles. Even though these extra features seemed great, it felt like they would just wind up bloating my site, distracting both visitors and myself from what I really wanted to focus on, while SquareSpace already offered all the core functionality I would need.

All I really wanted was a place to showcase my video games and the various other projects I'm working on. I also wanted a blog, where I could write about my endeavors, my thoughts and feelings, and just life in general. I thought about putting my panoramic photography here as well, but I fear that may detract attention from my games. And if I am honest with myself, I would rather be making games than panoramas; panoramas are just a side hobby, but games are my passion.

With all this in mind, it seemed like SquareSpace was the best fit. My one complaint about the platform is its lack of a proper backup system. There is an option to export your website to WordPress, but it doesn't work that well, and SquareSpace even says it should not be used for backup. I've looked at other solutions, like using web crawlers, printing to PDF, and just downloading the web pages directly. Each leaves something to be desired, and none is a true backup anyway.

I know most people will say that a backup isn't necessary with SquareSpace, as SquareSpace takes care of that for you. SquareSpace is known for being robust and secure, another point that factored into my decision. But I would still like to have a copy of my site on my hard drive, for my own records. Just having that file there gives me peace of mind, even if it's never used for anything. Although I've just started, this website and blog are very important to me, and I would hate for it all to just disappear one day.

I probably made this all sound like one giant sales pitch for SquareSpace, but it's really not. I haven't been using the service for all that long, and I hope I made the right choice. I think I did the best I could under the circumstances, and only time will tell how things play out. With any luck, my readers and I will be enjoying this blog for many years to come.