Today marks 50 years until first contact with the Vulcans.
I finally got around to reading an article which turned out to be one of the better analyses of the social media phenomenon. I call it an analysis because it doesn’t say “social media is a Bad Thing”, like a lot of the more sensational article-writers (including myself, at times) do. It talks more about how some patterns of social media are Bad Things, which is more constructive since it can lead to ways to fix those patterns.
Bad Catholic published a guest post with perhaps the single best explanation of the Catholic obsession with the Virgin Mary that I’ve ever read.
And finally, I’ve decided that I am going to start posting digests (like this one) with links to interesting articles and some remarks on them.
No, it isn’t Pi Day, or anything resembling it. I was editing some stuff, and I noticed the original “Approximating Pi” article. I’d been meaning to rewrite the code since I learned about continued fractions, and since I didn’t have anything better to do, I decided to do so. A detailed explanation follows the source code, since this is less mathematically facile than the previous version.
This version is slightly different from the previous one, in that it generates all of the best rational approximations of pi, in order. According to Wikipedia:
A best rational approximation to a real number x is a rational number n/d, d > 0, that is closer to x than any approximation with a smaller denominator.
The first 16 terms output are:

3, 13/4, 16/5, 19/6, 22/7, 179/57, 201/64, 223/71, 245/78, 267/85, 289/92, 311/99, 333/106, 355/113, 52163/16604, 52518/16717
This suggests that as far as rational approximations of pi go, 355/113 looks like it might be the best bang for your buck. (The jump in size without much improvement corresponds to an abnormally large term in the continued fraction expansion.)
View the code on Github
This code relies pretty heavily on continued fractions, which take a little bit of explaining. They’re basically a slightly arcane way of writing real numbers which has some nice properties. Let’s take 3.456 — or 432/125 — as an example. It’s kind of like 3, but a bit more. More like 3 + 1/2. But not really 2, more like 2 + 1/5. But not really 5, more like 5 + 1/5. But not really 5, more like 5 + 1/2. Actually, exactly like 5 + 1/2. This leaves us with the following monster:
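Written out in full, that is:

```latex
3.456 = \frac{432}{125} = 3 + \cfrac{1}{2 + \cfrac{1}{5 + \cfrac{1}{5 + \cfrac{1}{2}}}}
```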
But instead of writing out all of those 1s, we could just write [3; 2, 5, 5, 2]. That’s a continued fraction. If you want to make things more complicated, you could have something other than 1s in the numerators, in which case you’d have a generalized continued fraction. (The normal kind are also known as simple continued fractions, to distinguish them from the generalized kind.)
Why would you want such a horrible thing? There are a variety of reasons (some of which I don’t even pretend to understand), but in this case, we’re going to take advantage of their approximation superpowers.
You know how I said that you can make things more complicated? Well, that’s exactly what this function does. It spits out a generalized continued fraction which is equal to pi. We have to use a generalized continued fraction, because the simple continued fraction for pi has no known pattern, and appears to be more or less random.
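The actual generator is in the linked source; one possible version, using Euler’s generalized continued fraction pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))) and emitting (p, q) pairs where each tail satisfies x' = p + q/x'', might look like this (the name pi_gcf and the particular expansion are my choices, not necessarily the ones in the linked code):

```python
from fractions import Fraction
from itertools import islice

def pi_gcf():
    """Yield (p, q) terms of a generalized continued fraction for pi:
    pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...)))."""
    yield 3, 1
    k = 3
    while True:
        yield 6, k * k
        k += 2

# Evaluating a truncation back-to-front shows it converging on pi.
terms = list(islice(pi_gcf(), 30))
value = Fraction(terms[-1][0])
for p, q in reversed(terms[:-1]):
    value = p + Fraction(q) / value
print(float(value))  # approximately pi
```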
Because generalized continued fractions tend to not only be slightly inconvenient to work with, but also lack the property that we’re after (more on that later), we convert it into a simple continued fraction.
Given a continued fraction, define x to be the expansion of the whole sequence, and x' to be the part that we don’t know. Thus, before we receive any terms, we have x = x'.
In this function, the variables a through d represent the homographic function

x = (a x' + b) / (c x' + d)

This function establishes bounds on the value of x. Assume that x' is between 0 and infinity. At one end, x' = 0, so that x = b/d. At the other end, if we take the limit as x' approaches infinity, x = a/c. These represent our lower and upper bounds of x, not respectively.
Now, if we take a term (p, q), indicating that the unknown part x' = p + q/x'', where x'' is the part that is still unknown, we can substitute into our homographic form and get

x = (a(p + q/x'') + b) / (c(p + q/x'') + d)

Multiplying the numerator and denominator by x'' simplifies this to

x = ((p a + b) x'' + q a) / ((p c + d) x'' + q c)

This is equivalent to the expression a, b, c, d = p*a+b, q*a, p*c+d, q*c in the code.
We then evaluate the integer parts of the bounds, and when they are the same, we know for sure what the integer part of the number is, so we can yield that, and remove it from the total.
The reciprocation is there, of course, because we’re trying to produce a simple continued fraction, and we want it to match the general form above. This is equivalent to the expression a, b, c, d = c, d, a-t*c, b-t*d in the code.
The result is that the function takes in terms of the generalized continued fraction, until it can produce a term of the simple continued fraction, at which point it does so, removing it from the running total.
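The loop described above might be sketched as follows. This is a hypothetical reconstruction, not the linked source; pi_terms is a stand-in generator for the generalized continued fraction of pi described earlier:

```python
from itertools import islice

def canonicalize(terms):
    """Convert generalized continued fraction terms (p, q), each meaning
    x' = p + q/x'', into the terms of a simple continued fraction."""
    # Homographic state: x = (a*x' + b) / (c*x' + d); initially x = x'.
    a, b, c, d = 1, 0, 0, 1
    for p, q in terms:
        # Absorb one generalized term.
        a, b, c, d = p * a + b, q * a, p * c + d, q * c
        # While the bounds a/c (x' -> infinity) and b/d (x' = 0) agree
        # on their integer part, emit it and reciprocate the remainder.
        while c != 0 and d != 0 and a // c == b // d:
            t = a // c
            yield t
            a, b, c, d = c, d, a - t * c, b - t * d

def pi_terms():
    # pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...)))
    yield 3, 1
    k = 3
    while True:
        yield 6, k * k
        k += 2

# The simple continued fraction of pi begins [3; 7, 15, 1, 292, ...].
print(list(islice(canonicalize(pi_terms()), 5)))
```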
This function follows the same pattern as canonicalize(), but instead of spitting out the integral part as soon as it knows it, waits until it has taken all of the terms (note that it takes a finite sequence), and outputs a rational number equivalent to the simple continued fraction we started with. Note that the result is the upper bound a/c, since the term after the last term of a terminating continued fraction is always infinity.
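For a finite list of terms, the same rational can also be computed with a simpler back-to-front fold. This to_rational is my own sketch, not the homographic version the post describes, but it produces the same result:

```python
from fractions import Fraction

def to_rational(terms):
    """Evaluate a finite simple continued fraction [t0; t1, ..., tn]
    to an exact Fraction, folding from the last term backwards."""
    terms = list(terms)
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    return value

print(to_rational([3, 2, 5, 5, 2]))  # 432/125, the example from earlier
```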
This one is pretty much straight from Gosper:
If we call the integer part of a continued fraction the 0th term, then we can say that a (regular) continued fraction is an increasing function of its even numbered terms, and a decreasing function of its odd numbered terms. Thus, to compare two continued fractions, compare the terms at the first position where they differ, then reverse the decision if this position is odd numbered. If one continued fraction terminates before differing from the other, regard the shorter one to be suffixed by an infinity.

Here, I use None to represent infinity.
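With term lists and None for infinity, Gosper’s comparison rule might be sketched like this (cf_compare is my name for it, not necessarily the one in the linked code):

```python
from itertools import zip_longest

def cf_compare(xs, ys):
    """Compare two simple continued fractions given as lists of terms,
    with None standing for an infinite term.  Returns -1, 0, or 1 as the
    value of xs is less than, equal to, or greater than that of ys."""
    # zip_longest pads the shorter list with None, i.e. with infinity.
    for i, (x, y) in enumerate(zip_longest(xs, ys)):
        if x == y:
            continue
        if x is None:        # infinity is greater than any finite term
            result = 1
        elif y is None:
            result = -1
        else:
            result = 1 if x > y else -1
        # The value decreases in its odd-numbered terms, so flip there.
        return -result if i % 2 else result
    return 0

print(cf_compare([3, 7], [3, 8]))  # 1: 22/7 > 25/8
```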
This is the function which performs the main purpose of the script. It essentially takes advantage of the fact that you can use the continued fraction to generate a best rational approximation of a number by truncating the continued fraction at some point, then converting it to a regular fraction. Furthermore, the last term of the truncated sequence can be reduced by up to half of its value to produce yet more best rational approximations. There’s a borderline case for even trailing terms, since they are divisible by two exactly; in that case, the result is a best rational approximation only if it’s better than the previous one in the sequence. This is equivalent to checking whether the terms encountered so far, reversed, are a greater continued fraction than the remaining terms.
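A sketch of that procedure, for a finite truncation of the continued fraction rather than the post’s streaming version. Note one deliberate simplification: the borderline even-term case is decided here by comparing exact distances to the truncation’s value directly, instead of by the reversed-terms comparison the post describes; the two tests are meant to pick out the same semiconvergents:

```python
from fractions import Fraction

def best_approximations(terms):
    """Yield the best rational approximations of the value of the finite
    simple continued fraction `terms`, in order of increasing denominator.
    Assumes the leading term alone is a best approximation (true for pi)."""
    terms = list(terms)
    # Exact value of the truncated continued fraction, folded backwards.
    x = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        x = t + 1 / x
    # Convergent recurrence state: p0/q0 and p1/q1 are the two most
    # recent convergents.
    p0, q0, p1, q1 = 1, 0, terms[0], 1
    yield Fraction(p1, q1)
    for a in terms[1:]:
        if a % 2 == 0:
            # Borderline case: the semiconvergent with exactly half the
            # term qualifies only if it beats the previous convergent.
            t = a // 2
            s = Fraction(p0 + t * p1, q0 + t * q1)
            if abs(x - s) < abs(x - Fraction(p1, q1)):
                yield s
        # Semiconvergents with more than half the term always qualify.
        for t in range(a // 2 + 1, a + 1):
            yield Fraction(p0 + t * p1, q0 + t * q1)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0

approx = list(best_approximations([3, 7, 15, 1, 292, 1, 1, 1, 2, 1]))
print(approx[:6])  # 3, 13/4, 16/5, 19/6, 22/7, 179/57
```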
Okay, there are a couple reasons I hate politics. The one I want to talk about right this instant is one which I’m sure many engineers agree with: that the system itself is broken. Yes, the Constitution, as written, is a great basis. It’s flexible, largely balanced between the various branches, pretty fair, protects human rights, and contains provisions for its own modification: all good stuff. Then again, a lot of people feel this way, and they can argue about politics all day. What is my point?
My point, and my problem, is that I regularly find myself looking at a political question and, rather than choosing a side, find myself unasking the question. In other words, I take the position that the question itself is wrong. Not only that, but when someone presents an argument against my opinion based on another government policy, my response ends up being “actually, that’s broken too.” (This response tends to be pretty well-received.)
I’m sure many other people with much more sense than I feel this way, but for some reason I find it crippling when it actually comes down to acting on my opinions. Perhaps they see the system, realize that it’s too entrenched to change, and so decide to make the best of a bad lot. Now that I am able to vote, I find myself struggling more and more to decide what the best course of action there is. I wish to vote, but I also don’t want to vote for someone championing a suboptimal solution.
Perhaps I’m just too much of an idealist (read: hipster) for my own good? How does everyone else handle this?
A man did not stand up and say ‘I believe in Jupiter and Juno and Neptune,’ etc., as he stands up and says ‘I believe in God the Father Almighty’ and the rest of the Apostles’ Creed. Many believed in some and not the others, or more in some and less in others, or only in a very vague poetical sense in any. There was no moment when they were all collected into an orthodox order which men would fight and be tortured to keep intact. Still less did anybody ever say in that fashion: ‘I believe in Odin and Thor and Freya,’ for outside Olympus even the Olympian order grows cloudy and chaotic. It seems clear to me that Thor was not a god at all but a hero. Nothing resembling a religion would picture anything resembling a god as groping like a pigmy in a great cavern, that turned out to be the glove of a giant. That is the glorious ignorance called adventure. Thor may have been a great adventurer; but to call him a god is like trying to compare Jehovah with Jack and the Beanstalk. … Polytheism fades away at its fringes into fairy-tales or barbaric memories; it is not a thing like monotheism as held by serious monotheists. … Finally it did satisfy, or rather it partially satisfied, a thing very deep in humanity indeed; the idea of surrendering something as the portion of the unknown powers; of pouring out wine upon the ground, of throwing a ring into the sea; in a word, of sacrifice.
The Everlasting Man, by G. K. Chesterton. From “V. Man and Mythologies”, which is rapidly becoming my favourite chapter in the entire book. I had to resist the urge to quote more.
EDIT: On the next page:
Certainly the pagan does not disbelieve like an atheist, any more than he believes like a Christian. He feels the presence of powers about which he guesses and invents. St. Paul said that the Greeks had one altar to an unknown god. But in truth all their gods were unknown gods. And the real break in history did come when St. Paul declared to them whom they had ignorantly worshipped.
A clarification regarding my use of the word “simple(ish)”: I am assuming, firstly, that you are already comfortable with programming in an imperative language such as C, Java, or Python, and secondly, that you already know at least a little Haskell.
Haskell type signatures, to the uninitiated, are a little odd. Take the following simple two-argument function:
plus :: Int -> Int -> Int
plus a b = a + b
If you’re coming from an imperative language, you might be tempted to read that signature as “this function takes an Int and an Int and returns an Int”. That’ll do in most cases, but it isn’t really true. And the fact that it isn’t true is a really cool feature of Haskell.
The -> arrow is a right-associative operator. So you should read Int -> Int -> Int as Int -> (Int -> Int), which doesn’t help anything at all, because now it looks even worse. And now here is the kicker: all functions in Haskell take exactly one argument. One. Even our plus function there. Which is odd, because it sure looks like it takes two arguments. The thing is, Haskell does a little magic trick for us (which isn’t really magic). This magic is called currying.
The trick is that when you do plus 2 3, two things happen. First, 2 is applied to plus. Application is a fancy way of saying that an argument is given to a function. This application results in a new function, which also takes one argument. 3 is applied to that function, which one might notate as (plus 2), and returns an Int. In other words, plus 2 3 is the same as (plus 2) 3. So the Int -> (Int -> Int) means that plus is a function which takes an Int, returning a function which takes an Int, returning an Int.
So why is that useful? Well, consider the builtin function map, which has the signature:
map :: (a -> b) -> [a] -> [b]
Basically, it takes a function which takes a thing of type a and returns a thing of type b, and a list of things of type a. It then spits out a list of things of type b, which is generated by applying the function to every element of the list of things of type a. For example, map (\b -> plus 2 b) [3, 1, 4, 1, 6] returns [5, 3, 6, 3, 8].
Now, remember that (plus 2) is a function, right? So you could also do:

map (plus 2) [3, 1, 4, 1, 6]
and get the same result.
tl;dr: Calling a Haskell function with not enough arguments basically returns what you might think of as a half-called function. It’s got some arguments already, you just need to supply the remaining ones. And it’s just another function.