Ubuntu 14.04 Unity 3D RAM

My AMD server seemed to be running out of RAM this morning. Checking the processes, it appeared that Unity 3D was using approximately 18G of 32G (i.e. with no virtual machines running, the OS was still using 18G). I don’t know why Unity 3D freaked out, but ‘compiz’ was chewing up 1.5G all by itself. A quick check showed that Unity 2D is no longer available in Ubuntu 14.04.

So, I installed gnome-session-flashback.

After it was installed, and after the logout and login under Metacity, the baseline RAM footprint dropped to 1.5G total.

Posted in Ubuntu | Comments Off on Ubuntu 14.04 Unity 3D RAM

FreeNAS Backup Machine

The goal of this machine was to be a “small, inexpensive, bring your own HDs, standalone backup solution”.

For these purposes, that meant using a small case that still had at least 2 internal 3.5″ bays.

For FreeNAS, I used the latest version where the .img file was available; 9.3 is out, but only as an .iso file. Another item of note when using the .img on a USB drive: on the first boot, it will appear to hang after showing “waiting up to 5 seconds for ixdiagnose to finish”. It isn’t stuck – it is just resizing the filesystem on the USB drive. Mine took about 9 minutes to finish this step. After the first boot completes, it never stalls there again.
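For completeness, writing the .img to the USB drive looks roughly like this. The image filename and the /dev/sdX device name are placeholders, and the second command is a safe stand-in that runs the same dd invocation against a scratch file instead of a real device:

```shell
# Real write (placeholders: image name and /dev/sdX -- verify the device with lsblk):
#   sudo dd if=FreeNAS-x64-RELEASE.img of=/dev/sdX bs=64k
# Safe demonstration of the same dd invocation against a scratch file:
dd if=/dev/zero of=scratch.img bs=64k count=1 2>/dev/null
stat -c %s scratch.img
```

Double-check the device name before running the real command: dd will happily overwrite the wrong disk.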

Some facts on the CPU: it is currently #471 on the PassMark chart at cpubenchmark.net, with a score of 3,777 and a “value” rating of 58.7. Intel is producing so many clones of the Xeon E5, at so many different clock speeds, that the first sub-$1000 CPU is #28 (Core i7-5930K @ 3.50GHz, $580). The only sub-$300 Core i7 is the $299 i7-4790 @ 3.6GHz at #58, with a score of 10,105 and a “value” of 32.4. It used to be fun to get a CPU in the top 50, but it looks like that will never happen again.

All product links are from the actual vendor.

Item | Product | Cost
CPU | Intel Pentium G3450 Haswell Dual-Core 3.4GHz Socket 1150 53W | $90
RAM | Corsair Vengeance 4GB (1 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 Desktop Memory Model CMZ4GX3M1A1600C9 | $44
Motherboard | GIGABYTE GA-B85M-HD3 LGA 1150 Intel B85 HDMI SATA 6Gb/s USB 3.0 Micro ATX | $71
Power Supply | TFX 275W Power Supply | included with case
Video | Intel HD Graphics | built in
Case | APEX DM-387 Black Steel Micro ATX Media Center / Slim HTPC Computer Case w/ ATX12V TFX 275W Power Supply | $57
USB Drive | Kingston Digital 8GB DataTraveler Micro USB 2.0 (DTMCK/8GB) | $6
HD Drive | BYOD | $50-$400
OS | FreeNAS 64-bit | $0
Total | | $268 + drives
Posted in Computer Builds | Comments Off on FreeNAS Backup Machine

What is AngularJS – the key is client-side

After working with AngularJS for a couple of months now, I can finally express a concise answer to “What is AngularJS?”

It is:

  1. MVC where the model is on the client side
  2. MVC where the view is a template based in the .html, and is rendered on the client side
  3. MVC where the controller is “live” – changes to the model reflect in the template immediately

The key: “on the client side”. No more complicated mappings inside your .jsp from fields to Java objects, no more complicated mappings from “post actions” to specialized controllers that track the application state. No more painting the initial page one way with .jsp and then updating it with AJAX. It replaces your .jsp template with more natural .html with embedded template variables and controls, and keeps everything straight.
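A minimal sketch of the template idea (the model field `user.name` and the markup are made up for illustration, not taken from any particular app):

```html
<!-- The view is plain .html with template markup; the model lives in the browser. -->
<div ng-app>
  <input type="text" ng-model="user.name">
  <!-- Re-rendered on the client on every keystroke; no server round trip. -->
  <p>Hello, {{user.name}}!</p>
</div>
```

Typing in the input immediately updates the greeting, with no controller code at all for this simple case.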

AngularJS throws in a couple of “neat tricks” – dependency injection, testability, separation of client and server, scope. But AngularJS’s two tag lines – “HTML enhanced for web apps” and “AngularJS — Superheroic JavaScript MVW Framework” – are not very helpful.

AngularJS (or some other library that does client-side MVC better, now that the secret is out) is the wave of the future. The productivity gains are incredible. It is literally easier to re-write your .jsp in AngularJS and implement that one new feature than it is just to extend the .jsp.

Posted in Software Engineering | Comments Off on What is AngularJS – the key is client-side

Amazon SDK broken dependencies

If you have received this error message:

java.lang.IllegalStateException: Unsupported cookie spec: default

It is because Amazon made their SDK dependency look like this:
+— com.amazonaws:aws-java-sdk
| +— org.apache.httpcomponents:httpclient:[4.1, 5.0) -> 4.4-beta1
| | +— org.apache.httpcomponents:httpcore:4.4-beta1

i.e. they made an open-ended statement that their SDK would work with all 4.x releases of httpclient.

As of 4.4-beta1, their statement became false. Somewhere down in the guts of httpclient, “default” is no longer a valid cookie specification, and now parts of the AWS SDK do not work.

See 2014/06/28/computer-science-hard-things/ for a full essay on the problems with “dependency resolution”. In this case, Amazon just messed up – there is no way any particular aws-java-sdk release can claim compatibility with the entire 4.x line of the httpclient library.

The fix (at least in gradle; it should be similar in all build systems) is to exclude httpclient on the aws-java-sdk line, and then add a specific httpclient dependency (e.g. 4.1 worked nicely for me, presumably because Amazon actually tested with that release before shipping). Since Amazon was “fuzzy” about their actual dependency requirements, you may have to try 4.2, 4.3, etc. to find an actually-compatible-with-aws version of httpclient.
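In gradle, the exclusion sketch looks roughly like this (the aws-java-sdk version shown is illustrative, not a recommendation):

```groovy
dependencies {
    compile('com.amazonaws:aws-java-sdk:1.9.0') {
        // Drop the open-ended [4.1, 5.0) transitive requirement:
        exclude group: 'org.apache.httpcomponents', module: 'httpclient'
    }
    // Pin a version Amazon presumably tested against:
    compile 'org.apache.httpcomponents:httpclient:4.1'
}
```

Run `gradle dependencies` afterward to confirm that 4.4-beta1 no longer appears in the resolved tree.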

Posted in Software Engineering | Comments Off on Amazon SDK broken dependencies

Use a DSA to implement your DSL (inspired by Cucumber)

This post was inspired by The Training Wheels Came Off by Aslak Hellesøy, author of The Cucumber Book.

TL;DR – Use a Domain Specific (testing) API to implement your Domain Specific Language

That article describes the motivation behind removing web_steps.rb — in a nutshell, they were removed because these step definitions are not at the correct level of abstraction for a properly defined Cucumber .feature file. The direct quote on the subject: “Cucumber was designed to help developers with TDD at a higher level”.

The basic idea is that your .feature file should not be written like this:

Scenario: Successful login
  Given a user "Aslak" with password "xyz"
  And I am on the login page
  And I fill in "User name" with "Aslak"
  And I fill in "Password" with "xyz"
  When I press "Log in"
  Then the http status should be 200
  Then the http session cookie should not be empty

Instead, your .feature file should look like this:

Scenario: Successful login
  Given log in succeeds with a user "Aslak" with password "xyz"

Notice at this level, there is no mention of http, http status 200, cookies, buttons or button names, etc. It describes only the high-level test.

In his article, he codes to the idea in this post, but never explicitly names it. The idea: keep your .feature definitions high-level, and implement your step definitions using a set of intermediate helper methods. This intermediate level is what I call the Domain Specific API (DSA) from my title. It looks like this:

  @Given("^log in succeeds with a user \"([^\"]*)\" with password \"([^\"]*)\"$")
  public void log_in_succeeds(String user, String password) {
     dsa.actionLogin(user, password);
  }

In essence, the approach leads to four levels of testing:

  1. .feature file
  2. step definition implementations
  3. DSA implementations
  4. technology library (e.g. HttpClient)

The extra “Domain Specific API” layer allows you to dive into the implementation-specific details without “polluting” your main .feature files with too many details.
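Under the assumption that the step definition holds a `dsa` field and that the technology layer hides behind a thin driver interface, the DSA class itself might look like the sketch below. Every name here (LoginDsa, HttpDriver, actionLogin) is illustrative, not part of Cucumber:

```java
// Sketch of the DSA layer (level 3); all names are hypothetical.
public class LoginDsa {

    // Level 4 stand-in: the technology library (e.g. Apache HttpClient)
    // hidden behind a minimal driver interface.
    public interface HttpDriver {
        void open(String page);
        void fillIn(String field, String value);
        int press(String button);   // returns the http status
    }

    private final HttpDriver http;

    public LoginDsa(HttpDriver http) {
        this.http = http;
    }

    // One high-level action that hides the page-level details the old
    // web_steps.rb used to expose directly in the .feature file.
    public void actionLogin(String user, String password) {
        http.open("login");
        http.fillIn("User name", user);
        http.fillIn("Password", password);
        int status = http.press("Log in");
        if (status != 200) {
            throw new IllegalStateException("login failed, http status " + status);
        }
    }
}
```

The step definition stays a one-liner, and swapping the technology library only touches the HttpDriver implementation.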

Reference Links:
For Java, see Cucumber-JVM.
For Ruby, see Capybara.

Posted in Software Engineering | Comments Off on Use a DSA to implement your DSL (inspired by Cucumber)

Computer Science Hard Things

There is a popular saying about Computer Science (see here and here):

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

There is a funny variation that makes it “There are only two hard problems in Computer Science: cache invalidation, naming things, and off-by-one errors.”

I propose there are actually three hard things:

  1. Naming things
  2. Cache invalidation
  3. Dependency resolution

My criteria for being a “hard thing”:

  1. Must be applicable to multiple scopes
  2. Must not be fully solved

Examined this way, it is interesting to see why these three deserve to be on the list:

  1. Naming things
    1. Applicable to every area in computer science – variable names, class names, machine names, network names, security policy names, URIs, etc. It even applies to this list: think of the difference between naming the first item “cache invalidation” versus just “caching”.
    2. Not at all solved. You can barely say we have good heuristics for this.
  2. Cache invalidation
    1. Applicable to multiple layers of computer hierarchy: CPU registers, L1, L2, L3, etc., disk caches, network resource caches, DNS caches, etc.
    2. Solved in the sense we know it is a balancing act between efficiency and correctness. Not solved for the general case, however. If there even is a “general case” at all.
  3. Dependency resolution
    1. Applicable to multiple domains: run-time (think Dependency Injection), build time (think Apache Ivy and Maven), hardware-software, distributed systems, and probably more
    2. Solved in the sense we know about topological sorting to help with transitive dependencies.
      For run-time, the entire sub-field of dependency injection has multiple solutions: Spring Framework, Guice, PicoContainer. Does anybody remember DLL Hell? That shows that “API definition” (which is a candidate for its own “Hard Thing” entry) is a sub-problem of dependency resolution.
      For build time, the better build systems make it easy to specify your dependencies and add global exclusions to get you out of transitive dependency issues.
      For hardware-software, think about the hardware requirements for running a particular application or installing a particular driver.
      For distributed systems, think about (for example) which version of which database your application requires. Provisioning has been partially solved by Chef, Puppet, and others; detection is still very much roll-your-own.
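The topological-sorting point above can be made concrete with a small sketch: Kahn's algorithm applied to build-time dependencies. The module names are made up, and this is a minimal illustration, not a production resolver:

```java
import java.util.*;

// Kahn's algorithm: order modules so every dependency builds first.
// dependsOn maps a module to the modules it requires.
public class DependencyOrder {
    public static List<String> order(Map<String, List<String>> dependsOn) {
        // in-degree = number of not-yet-built dependencies per module
        Map<String, Integer> inDegree = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (Map.Entry<String, List<String>> e : dependsOn.entrySet()) {
            inDegree.putIfAbsent(e.getKey(), 0);
            for (String dep : e.getValue()) {
                inDegree.putIfAbsent(dep, 0);
                inDegree.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((m, d) -> { if (d == 0) ready.add(m); });
        List<String> result = new ArrayList<>();
        while (!ready.isEmpty()) {
            String m = ready.remove();
            result.add(m);
            // a module becomes ready once its last dependency is built
            for (String next : dependents.getOrDefault(m, Collections.emptyList())) {
                if (inDegree.merge(next, -1, Integer::sum) == 0) {
                    ready.add(next);
                }
            }
        }
        if (result.size() != inDegree.size()) {
            throw new IllegalStateException("cycle detected");
        }
        return result;
    }
}
```

Note that this only solves the easy part (ordering); the hard parts — version conflicts, open-ended ranges, API compatibility — are exactly what the essay above is about.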

So, did I create any converts? Do you agree there are 3 Hard Things in Computer Science?

Posted in Software Engineering | Comments Off on Computer Science Hard Things

Agile non Evolutionary Stable Strategy

Starting with a quotation I saw at a rest stop while on vacation:

Dryland farming works best when in a wet year.

This was on a placard explaining that dryland farming had a string of successful years when it was wet. But when it got dry, the same techniques failed.

I’ll summarize that as: if you have a problem, and you implement a fix, and the problem goes away, you still might not know how much your fix actually contributed.

Which brings me to how Agile has “fixed” waterfall development. And whether these are just the “wet years” for Agile.

Getting back to the title – an Evolutionary Stable Strategy (ESS) describes a strategy that, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. The first thing to note is that waterfall development is positively a non-evolutionary-stable strategy – Agile started out rare, but has effectively invaded (a subjective judgement only; statistics are hard to find, and most of them like to spout “Agile is 3 times more likely to succeed” – as if a 3% chance versus a 1% chance is worth bragging about…). It is also true that waterfall itself had previously been invaded by “hybrid waterfall” long before Agile, making it doubly non-ESS, if that is even possible. Being non-ESS is no big deal in itself.

Of interest here is why I’m claiming that Agile is non-ESS. After all, Agile is still on its upswing (again, hard to find concrete statistics here). And quite possibly, no development process is stable because of the inherent fickleness of management and their desire to chase the new fad. So, if no process is stable, then it isn’t saying very much to claim that Agile is also non-ESS.

My value-add is: I think I know the reason Agile will be successfully invaded and replaced.

I believe Agile’s replacement will come about as a result of Tim’s Rule on Agile (only highly experienced developers can make Agile work) and Choose Your Path Wisely (after years of choosing not to learn, you no longer have the option of learning). My assertion: Agile does not create developers that are sufficiently capable of executing Agile successfully. It is based on observing developers with multiple years of “successful” Agile development experience who, at the same time, are lacking in critical software engineering skills. Why does it matter, you ask? If they are on a successful Agile project, and have “succeeded” without those skills, then aren’t those skills by definition not needed? My answer: the skills are needed, and they are being provided by developers (or scrum masters, or business owners, etc.) with extensive non-Agile experience.

Once that pool of people is gone, or stretched too thin, the Agile-only generation, who chose the Agile-only path, will be unable to step up and provide those critical skills (not “unwilling”, just literally “unable”). There will be much pain and frustration as the Agile consulting industry struggles to figure out what has gone wrong. And lots more finger-pointing (“You’re doing Agile wrong!”). In the end, I’m claiming that Agile is only working now because these are the “wet” years. And the “dry” years are coming.

Posted in Software Engineering | Comments Off on Agile non Evolutionary Stable Strategy

VirtualBox Window Sizing

Ran across this in my (seemingly never-ending) search for larger-resolution virtual machines running Ubuntu. This time running under VirtualBox on a server connected to a tiny monitor (1680×1050).

On the host OS running VirtualBox (Ubuntu 14.04 LTS in my case), run:

xdotool search --name "window name" windowsize 1600 1200

An actual command line where my virtual machine was called “ub1404.64 lts snap22”:

# xdotool search --name "ub1404*" windowsize 1600 1200

After that, the outer virtual machine console window resized to 1600×1200. Then, Ubuntu running in that VM resized itself to 1600×1200. Then, the remote desktop connection (using VNC viewer in my case) connected at 1600×1200.

You can install the xdotool application with (# apt-get install xdotool).

Posted in Ubuntu | Comments Off on VirtualBox Window Sizing

Secret Share in Java on Maven Central

Just completed a push of the Secret Share in Java project to Maven Central.

That is my first “officially published” open-source project.

See the link on search.maven.org.

GroupId: com.tiemens
ArtifactId: secretshare
Version: 1.3.1
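Assuming a gradle build, those coordinates translate to a one-line dependency:

```groovy
dependencies {
    compile 'com.tiemens:secretshare:1.3.1'
}
```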

Posted in Software Project | Comments Off on Secret Share in Java on Maven Central

Gradle Signing Plugin

If you’ve hit this page, it is probably because you’ve seen this error:

$ gpg --verify secretshare-1.3.1-SNAPSHOT.jar.asc secretshare-1.3.1-SNAPSHOT.jar
gpg: Signature made Wed 04 Jun 2014 03:01:00 PM CDT using DSA key ID FC76F04F
gpg: DSA key FC76F04F requires a 256 bit or larger hash
gpg: Can't check signature: general error

The problem is that internal “rules” for DSA signatures prevent gpg from verifying a signature whose hash was “too small” at creation time.

I have no idea how to convince gradle to change its “signature generation parameter”.

Instead, my fix was to generate another key with $ gpg --gen-key, and this time, when asked about the DSA key size, instead of picking the default 2048, I picked 1024 (a 1024-bit DSA key pairs with a 160-bit hash, which sidesteps the 256-bit requirement).

Everything verifies now.

Posted in Software Project | Leave a comment