Author Archive for dy10


GitHub autoevo repo now up!

I recommend forking it, and then we’ll get started, shall we?

As commit note says, everything is vanilla and waiting to be used.

Link here:


Learning Clojure the Porting Way

This week I spent some time trying to port my Project Euler solutions, originally written in C++, to Clojure, with little success so far. I’m kind of excited that my little bit of experience with J (an array-processing language) can come back into use. On the other hand, I’m also missing J’s super useful built-in number sequences, such as i.x and the like. One thing that keeps bogging me down is forgetting that Clojure is functional, meaning that everything returns a value, or should. Also, recursion being the primary looping method is kind of frustrating, at least right now.

Here’s one that I successfully converted (Problem 1 – Find the sum of all the multiples of 3 or 5 below 1000):

(loop [sum 0 x 999]          ; start at 999: we want multiples *below* 1000
  (if (zero? x)
    sum                       ; base case: return the accumulated sum
    (recur (if (zero? (* (mod x 5) (mod x 3))) (+ sum x) sum) (dec x))))

There must be a much more elegant way to do it but that’s what I came up with. Until I get to actual list/array processing, I guess.
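For comparison, and purely as a sketch, here is a more idiomatic version using Clojure’s sequence functions instead of an explicit loop:

```clojure
;; Sum the multiples of 3 or 5 below 1000 with a lazy range and filter.
(reduce +
        (filter #(or (zero? (mod % 3)) (zero? (mod % 5)))
                (range 1000)))
;; => 233168
```

No manual recursion needed; `range`, `filter`, and `reduce` do all the looping.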

As for Git, I haven’t really done much with it. I did a bit of reading through the Git Community Book and various sites online, trying to understand the ideas driving Git. I don’t think it actually matters that it is Git that we use, as long as we use some sort of VCS. Staging might be interesting, though, and might be an incentive to actually work under a VCS without fear of idiotic commits and pushes. Workflow-wise, with us being so tiny, it won’t matter whether it’s Git or SVN or CVS or Bazaar. We just need to remember to commit.


Notes on lpector-gptp10-preprint

Fwoosh, this is a relatively hard read. Some of my thoughts:

Lots of work is being put into semi-random mutation of primitives, whether in the generational or the variational mechanisms of the programs. That makes sense, because this is, after all, genetic programming and not anything else. Maybe it’s because I’m taking a class with Lynn Miller, or maybe I’m just plain wrong, but I wonder whether there shouldn’t be some sort of “protein programming” as well: the modification or addition of code that is already known to have some usefulness. Although much energy is spent in creating functional programs, if almost everything were already functional, and there were just some cherry-picking method (or, as already exists, a tournament), a lot of computational effort could be saved. Of course there would need to be research on what constitutes “useful” (which I assume GP is supposed to discover automatically), but it’s just a thought.

ADD: I’m guessing the CODE operator/type in Push functions as a sort of protein, in that it modularizes various primitives to create a (hopefully) functional piece of code. Evolution is supposed to find which atoms make the right code; perhaps there should be some sort of optimization process to facilitate this creation? There doesn’t need to be explicit optimization code; I’m thinking more in terms of libraries/databases of “proteins” that work, or of those that don’t. The big problem is that any such library would theoretically be infinite and would probably slow the process down by a lot. No optimization to be gained there, but just another thought.

MORE ADD: The fitness process has to be run through each fresh run of a new environment, and better code is “saved” through the “mandatory improvement” implementation. Interesting. Worth mentioning is the selection process:

  • Prefer reproductively competent parents
  • Prefer parents with non-stagnant lineages
  • Prefer parents with good problem-solving performance
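To make that concrete for myself, here is a minimal Clojure sketch of how such layered preferences could be applied; all the names and the fitness threshold are my own inventions, not the paper’s:

```clojure
;; Apply each preference as a filter, but fall back to the previous pool
;; if a filter would leave no candidates at all.
(defn prefer [pool pred]
  (let [kept (filter pred pool)]
    (if (seq kept) kept pool)))

(defn select-parents [population]
  (-> population
      (prefer :reproductively-competent?)   ; preference 1
      (prefer :non-stagnant-lineage?)       ; preference 2
      (prefer #(> (:fitness %) 0.5))))      ; preference 3 (made-up cutoff)
```

The fallback in `prefer` keeps the preferences soft: a preference can narrow the pool but never empty it.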

Also, I was wondering whether there is already literature on the effect of programs on the problem they are trying to solve. As I see it, the problem is the “environment” which the programs are supposed to evolve to match, or die in. But it’s not just the environment that affects the organisms inhabiting it; the organisms change the environment too. Maybe bringing in ecology is too much, but if there is major change due to this, it seems like it should be accounted for.

One question I had is on the purpose of reproduction in real life. Obviously this is a question nobody knows the answer to, but it seems like one side effect of a well-functioning reproductive system is having more offspring. Do populations of species increase as their reproductive systems become easier to manage and thus work better? How does this relate to autoconstructive programming?

Just one clarification needed – is AutoPush the descendant of Pushpop?

This is really interesting stuff – I wonder if there is anything to be gained from existing poly/metamorphic viral code and anti-viral (specifically heuristics) literature.


All Set Up, Running regressionForClass

I got Eclipse + Counterclockwise (ccw) to run on my Debian system, after a bunch of twiddling. They really make you go out of your way to have it work properly. What a shame.

I’m on generation 25 and running for the program that was linked.

February 9th EDIT: So, I went up to generation ~125-150 twice on my computer without a definitive answer being generated. I kept the solve parameter (the target function) the same (x^6 – 2x^4 + x^2), but the problem with autogenerational evolutionary programming seems to be that there is, as of yet, not enough method to the madness. For example, I would see the “best program” line range from just a few primitives long to monsters that made my scroll bar tiny.
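For reference, the target function itself is easy to write down; here is a sketch of it and of a summed-error measure over sample points (the sample range and names are my own choices, not necessarily what regressionForClass actually uses):

```clojure
;; The regression target: x^6 - 2x^4 + x^2.
(defn target [x]
  (let [x2 (* x x)]
    (+ (* x2 x2 x2) (* -2 x2 x2) x2)))

;; Score a candidate function by total absolute error over sample inputs.
(defn total-error [candidate]
  (reduce + (map #(Math/abs (double (- (candidate %) (target %))))
                 (range -5 6))))
```

A perfect evolved program would give `(total-error perfect-program)` of 0.0, which is what the run is hunting for.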

The funny thing is that max-generations is set to 10,000, which, at the rate the program was running, would take about 200 hours on my C2D laptop just for the possibility of maxing out the generational runs.

Also, I’m not able to understand the code fully (or even partially) yet; I’m still wrapping my head around the parentheses (or they’re wrapped around my head), and I don’t know the functions of all the primitives yet. I’ll need to take a deeper look inside not just this program but the Push code itself.


Report on push-pgem-final.pdf – Part 1

I got to read Genetic Programming and Autoconstructive Evolution with the Push Programming Language. The paper’s basically an introduction to Push, PushGP, and Pushpop, detailing both how they work and why they do what they do.

Push, the Language

There are several requirements for a programming language to be effective in the evolutionary computing / genetic programming field. The first is that the language provide for autonomy: the programs must themselves “learn or evolve solutions to problems with minimal human intervention.” This requires, at the very least, that the language allow code to manipulate itself.

The second is that the language should support multiple data types, and the ability to handle “errors” arising from using more than one type.

The third is that the code should be modular, i.e., able to reuse identical groups of operations without explicitly restating the whole group each time.

Lastly, the code must be able to evolve the methods through which the code evolves. While this may seem like circular logic, this is designed to simulate the evolution of life on Earth – “evolution … presumably began without the imposition of pre-specified mechanism for reproduction and diversification.” Autoconstructive evolution is defined in the paper to be “any evolutionary computation system that adaptively constructs its own mechanisms of reproduction and diversification as it runs.”

Push General Design Goals:

  • expressive
    • multiple data types
    • modules
    • complex control structures including recursion
  • self-manipulating
  • syntactically uniform

Push fulfills all of these goals. Push has a very neat way of manipulating its own code, through the use of the CODE type; I found this to be one of the neatest things about Push. This one design choice enables modularity, recursion, and self-manipulation. Push also incorporates the other typical types (BOOLEAN, FLOAT, INTEGER) and nontypical ones as well (CHILD, TYPE) [don’t know what these are].
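Clojure, being a Lisp, gives a feel for what code-as-data makes possible; this is only an analogy to the CODE type, not Push itself:

```clojure
;; Code as data: hold an expression as a plain list, splice it into a
;; larger expression, and then execute the result.
(let [expr    '(+ 9 3)            ; code held as a data structure
      wrapped (list '* 2 expr)]   ; build new code around the old code
  (eval wrapped))
;; => 24
```

Push’s CODE stack lets evolved programs do this same kind of construction and execution on their own code at run time.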

Push code is also designed to be syntactically uniform, and it uses postfix notation. The postfix comes from Push being a stack-based language, as opposed to a register-based one, so whereas typical Lisp code would look like:

(+ 9 3)

where the operator comes first and its arguments follow, Push would instead have:

(3 9 +)

where the 3 is pushed onto the INTEGER stack, then the 9, and then the + is executed on those numbers. Syntactic uniformity is important because it allows Push to “follow ‘permissive execution’ precedent[s] established in previous stack-based genetic programming work.” In practice this means an operator that finds too few arguments on its stacks simply acts as a NOOP.
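A toy sketch of that execution model in Clojure (only integers and + are handled; this is my illustration, not Push’s actual interpreter):

```clojure
;; Walk a postfix program, pushing numbers and applying operators.
;; An operator that finds too few arguments does nothing (a NOOP).
(defn step [stack token]
  (cond
    (number? token) (conj stack token)
    (= token '+)    (if (>= (count stack) 2)
                      (let [[b a & more] stack]
                        (conj more (+ a b)))
                      stack)   ; permissive execution
    :else stack))

(reduce step '() '(3 9 +))
;; => (12)
```

Running `(reduce step '() '(3 +))` leaves `(3)` on the stack: the underflowing + quietly did nothing, which is exactly the permissive behavior described above.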

As stated before, Push has multiple data types, with a stack for each one. Operators can be typed to operate solely on the stacks they were designed for, although conversions are also possible using the CONVERT function.

[I would like to know more about the DO and DO* functions.]

Push itself is just a language devised for application in genetic programming systems, one of which is PushGP.


PushGP

Whereas Push is the language, PushGP is the framework under which programmers set the parameters that will hopefully lead to code generation and the evolution of programs that fit certain tasks.

The flow of the PushGP algorithm goes somewhat like this: random population generation -> fitness testing -> (reproduction, mutation, crossover operators[?]) applied to fitter programs -> rinse and repeat until a “fit” program is found or the maximum generation limit is reached. Quote: “In most of our experiments we use a traditional mutation operator that replaces a randomly chosen sub-expression with a new random subexpression.” [I’d like to know more about that.]
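The loop as I understand it, sketched in Clojure with stand-in function parameters (the population size and all names are mine, not PushGP’s):

```clojure
;; Generational loop: score the population, check for a solution,
;; breed the next generation, repeat until solved or out of generations.
(defn evolve [random-program fitness solved? breed max-gens]
  (loop [population (repeatedly 100 random-program)
         gen 0]
    (let [best (apply min-key fitness population)]   ; lower fitness = less error
      (if (or (solved? best) (>= gen max-gens))
        best
        (recur (breed population) (inc gen))))))
```

With numbers standing in for programs, `(evolve (constantly 5) identity zero? #(map dec %) 50)` “breeds” by decrementing everything and returns 0 once a zero-error individual appears.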

[Also, tournaments? And what’s the big deal with parity? Why was it such a big deal that Push was able to solve the even 4-parity problem?]

End of Part 1. Pushpop will be Part 2.