Java Command Line Parsers Taxonomy

Ever wanted to see a complete list of Java command-line parser libraries? Here is the project for you.

The git project javacommandlineparsers contains a list of Java command-line parser libraries in JSON format, transformed to markup, CSV, and HTML.

It also shows how to perform Groovy templating with Gradle. The markup file is generated from a template using groovy.text.SimpleTemplateEngine, and the .csv and .html files are generated with groovy.text.markup.MarkupTemplateEngine.

The raw .json content is available here.

Posted in Uncategorized | Comments Off on Java Command Line Parsers Taxonomy

Java command-line argument parser taxonomy

Every once in a while, at the beginning of a new project, I start the search for a command-line argument parsing library in Java. This post shows the results of examining the field as of mid 2018. This post serves both as a “which to use” and as a “which have been evaluated, and found wanting” guide.

* Best So Far
JCommander v1.71
annotation, good documentation, custom parser, validation

* Honorable Mention
picocli v3.3
annotation, good documentation — documentation looks suspiciously like it is patterned after JCommander
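Both of the top picks share the annotation style. Here is a toy sketch of the idea in plain Java (`ToyParser` and `@Option` are invented names for this sketch, not JCommander's or picocli's actual API; the real libraries add validation, custom converters, and generated usage help on top of the same mechanism):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// A toy sketch of the annotation-based parsing style. All names are
// invented for illustration.
public class ToyParser {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Option {
        String name();
    }

    // Example argument holder, in the JCommander style.
    public static class Args {
        @Option(name = "--host") public String host;
        @Option(name = "--port") public int port;
    }

    // Populate annotated fields of `target` from "--name value" pairs.
    public static void parse(Object target, String[] args) {
        try {
            for (int i = 0; i + 1 < args.length; i += 2) {
                for (Field f : target.getClass().getDeclaredFields()) {
                    Option opt = f.getAnnotation(Option.class);
                    if (opt != null && opt.name().equals(args[i])) {
                        if (f.getType() == int.class) {
                            f.setInt(target, Integer.parseInt(args[i + 1]));
                        } else {
                            f.set(target, args[i + 1]);
                        }
                    }
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] argv) {
        Args a = new Args();
        parse(a, new String[] { "--host", "example.com", "--port", "8080" });
        System.out.println(a.host + ":" + a.port); // example.com:8080
    }
}
```

The point of the style: the option names, types, and destinations live in one declarative place, and the compiler checks the field types for you.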

* Pretty Good
field-based, annotation-based, unclear if “--long-Name” is supported

annotation-based, interface-based but can be instance-based with setters, has short and long names

* Also-Rans

Key-Value – the opposite of a good library. Many of the libraries below share this same deficiency.

a single “parse” call with an array of options, then a key-value “get value” where the key is the option object

Jakarta Commons CLI
like JArgs, except the key is a String, not an option object

create individual options, where options are “holders”, parse() into the holders

old (2008), not well documented

it generates .java, but still requires an additional .jar
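The key-value deficiency most of the Also-Rans share can be sketched like this (class and method names are invented for illustration, not any particular library's API):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of the key-value parsing style: one parse() call,
// then stringly-typed lookups. All names are invented for illustration.
public class KeyValueParser {

    private final Map<String, String> values = new HashMap<>();

    // Single "parse" call over the whole argument array.
    public void parse(String[] args) {
        for (int i = 0; i + 1 < args.length; i += 2) {
            values.put(args[i], args[i + 1]);
        }
    }

    // Key-value get: the caller must know the exact key spelling and do
    // its own type conversion -- typos and bad casts only surface at runtime.
    public String get(String key) {
        return values.get(key);
    }

    public static void main(String[] argv) {
        KeyValueParser p = new KeyValueParser();
        p.parse(new String[] { "--port", "8080" });
        int port = Integer.parseInt(p.get("--port")); // conversion is on you
        System.out.println(port); // 8080
    }
}
```

Compare with the annotation style: here every call site repeats the key string and the type conversion, which is exactly the deficiency the taxonomy complains about.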

Posted in Software Engineering | Comments Off on Java command-line argument parser taxonomy

Virtual Machine Server

The goal of this machine was to replace the oldest machine build recorded on this blog – the 2009 Core i7 920 build. This is the current workhorse virtual machine server.

The approach taken (refurbished) was a direct result of (1) DDR4 RAM prices being high, (2) Video card (GTX 1080) prices being way too high, and (3) Intel 8700 CPU just not being exciting.

So instead of buying new, once I ran across these refurbished Dell Precision T3600 machines, I set a budget of $400 and got the best available. After a 10% bump in the budget, the 64GB machine was the “just right” choice. It has a 6-core CPU and a decent clock speed.

Some facts on the CPU: the Xeon E5-2640@2.5GHz is currently #255 on PassMark [9,500], compared with the i7-920@2.67GHz at #683 [4,938]. There are multiple versions of the E5-2640 – v4, v3, v2, and the original.

Some facts on the system: The Dell Precision T3600 was reviewed in 2012 by AnandTech at a review price of $4,450 – but that machine only had 8GB of memory (and a better CPU and graphics card). So call it a wash – $4,500 retail price six years ago. eBay has a 4x16GB kit for sale for $340 right now.

All product links are from the actual vendor.

Item Product Cost
System Dell Precision T3600 Workstation
CPU Intel Xeon E5-2640 2.5GHz 6 cores socket 2011 incl.
RAM 4x 16GB DDR3 DRAM 1333 RDIMM ECC, 12,913 MB/s incl.
Motherboard Single CPU socket 2011, Intel C600 chipset, 2x USB 3.0 incl.
Power Supply 635 Watt externally removable toolless 80 Plus Gold incl.
Video NVIDIA Quadro 600 (96 CUDA Cores, 1 GB DDR3) DVI-I/DisplayPort “Entry 3D” incl.
DVD/CD Thin form factor DVD incl.
Case 2x 3.5″, 2x 2.5″ bays incl.
SSD Drive 128GB Vertex reused
HDD Drive 2x Hitachi Ultrastar 2TB 7200RPM HUA723020ALA641 Enterprise, $60 each $120
OS Either Ubuntu 18.04 or CentOS 7.4
Software VMWare Workstation 14 Pro $250
Total $810
Posted in Computer Builds | Comments Off on Virtual Machine Server

ZFS Case Upgrade

The original ZFS machine has grown a bit over the years. It started with two 2TB hard drives, then got two more, then finally two more. The “little” NZXT case has more than enough drive slots, but this setup violated one of my two rules for storage systems:

“Da Rules”:

  1. Just because a case has N drive bays does not mean it has enough cooling for N drive bays
  2. RAID5 is not enough

These two rules were a result of stuffing 6 drives in a nice Lian-Li aluminum case that had a 6-bay internal cage (these events predate this recorded history). One day, it lost a drive. After replacement, but during the re-silvering, it lost a second drive. And thus the entire array was gone. The first drive may or may not have been lost to heat. The second drive was definitely lost to heat.

The ZFS machine addressed rule #2 by having drives in a mirror and using active scrubs. The active scrubs make sure that both drives have a readable copy of each sector. So, when a disk is lost, you are reasonably sure the re-silver will have something valid to read. This machine has lost 2 drives (not at the same time). Yes, re-silvering was stressful.

To address rule #1, my ZFS machine finally gets the case it deserved in the first place: a Rosewill RSV-L4500 – 4U. At $116, this case has 15 drive slots and 8 fans. The three-sets-of-five bays lets me install the drives with an air gap of one drive between any two drives.

Here are the smartctl temperatures (°C) before and after:

Drive NZXT case Rosewill Case (immediate/24hrs)
1 43 31/32
2 42 31/32
3 35 33/33
4 36 34/34
5 37 34/32
6 35 35/33

Drives #3 and #4 are the original, “babied” WD 2TB black drives. What little cooling existed in the NZXT, these drives got, so there was not much movement in temperature.

Posted in Computer Builds, ZFS | Comments Off on ZFS Case Upgrade

JRebirth quick evaluation

Want to see how you can quickly tell that somebody wrote 79,695 lines of Java into a completely wasted library/framework? Watch how quickly JRebirth comes to a head-slap fail:

1) Visit the JRebirth site
2) Under Documentation, click on Installation, create the build.gradle
3) Under Documentation, click on “Create your first Application”
4) Create .java for SampleApplication – find out it depends on SampleModel. Ok, then
5) Create .java for SampleModel — find out it depends on SampleView

Hard fail.

A model that has a compile-dependency on the view? “You keep using that word. I do not think it means what you think it means.” Dig a bit further into the source code, and you see the phrase “the class type of the view managed by this model” over and over. The model managing the view?

A model never depends on a view in order to compile, for one simple reason: a correctly designed model can support multiple views simultaneously.
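A minimal sketch of that rule (all names here are invented for illustration): the model knows only a listener interface, and any number of views subscribe to it:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the model exposes a listener interface; views depend on the
// model, never the other way around. All names invented for illustration.
public class CounterModel {

    public interface Listener {          // the only "view" type the model sees
        void changed(int newValue);
    }

    private final List<Listener> listeners = new ArrayList<>();
    private int value;

    public void addListener(Listener l) { listeners.add(l); }

    public void increment() {
        value++;
        for (Listener l : listeners) l.changed(value);
    }

    public int getValue() { return value; }

    public static void main(String[] argv) {
        CounterModel model = new CounterModel();
        // Two independent "views" observing the same model simultaneously.
        model.addListener(v -> System.out.println("text view: " + v));
        model.addListener(v -> System.out.println("graph view: " + "#".repeat(v)));
        model.increment();
    }
}
```

The model compiles with zero knowledge of any concrete view, which is exactly what JRebirth's "the view managed by this model" design gives up.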

There is no reason to investigate JRebirth further – their project went off the rails at step one.

Posted in Software Engineering | Comments Off on JRebirth quick evaluation

Java Self Loathing

Oracle hates its own product so much (Java) that it actively discourages people from ever running it. This is apparent in the JNLP dialogs you get when starting an application for the first time.

Can you spot the difference?

Both are really scary, with a big yellow alert and an “I accept the risk…” checkbox.

The difference (since you probably didn’t find it) –
The first image is an “unrestricted access” dialog (which you should almost never run, no matter what the source) and the “so it is recommended not to run this application” probably understates the dangers.

The second image is a “limited access” dialog (which should be reasonably OK to run), and the “so it is recommended not to run this application” is complete overkill (unless Oracle has errors in the sandbox code, which nobody can rule out, since Oracle hates Java…)

So, Oracle hates Java so much that they pop up a dialog that looks 99% the same between two completely different cases. And, since you should not get into the habit of clicking that “I accept the risk” checkbox, even I have a difficult time recommending JNLP to anybody.

But hey, you should check out Lot Area Calculator, recently updated to have a JNLP link.

Posted in Software Engineering, Uncategorized | Comments Off on Java Self Loathing


2018 resolution: Say RIP to REST – aka “the year of RIP REST”

The specific resolution – only use “REST” casually, as a synonym for “client-server”

REST has had a pretty good run. The PhD dissertation was published in 2000. It did an awesome job describing the architecture of the web. Then some one, and then many ones, decided “the web” was a synonym for “enterprise network API”. That was a sad day, and the troubles began. Now, over a decade later, there is finally growing realization that “REST” is a terrible architecture for anything except web pages and maybe some key-value NoSQL APIs.

Section 5.1.5 “Uniform Interface” is one major failing.
Restricting a network architecture to the CRUD verbs is the opposite of “good network architecture”. This “Key-Value” design has been tried many times in the past, and rejected in almost every case. One domain where it stuck was “the web”.

Another major failing in REST (although it is not in the dissertation) is the insistence on using HTTP response status codes at the REST level. This “mixing of levels” is another known anti-pattern (can you imagine writing your REST API using EBADF, EACCES, EINVAL, etc.? Of course not, because the people who created HTTP understood protocol levels. But REST was attached at the hip to HTTP “for simplicity”).
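To make the levels complaint concrete, here is a sketch (all names invented for illustration) of keeping the application's own status vocabulary in the payload, leaving HTTP status codes to describe transport only:

```java
// Sketch: the application defines its own outcome vocabulary instead of
// overloading HTTP status codes. All names are invented for illustration.
public class ApiError {

    public enum Status { OK, NOT_FOUND, VALIDATION_FAILED, CONFLICT }

    // The HTTP layer would return 200 for any successfully transported
    // response; the application-level outcome travels in the body.
    static String toJson(Status status, String detail) {
        return "{ \"status\": \"" + status + "\", \"detail\": \"" + detail + "\" }";
    }

    public static void main(String[] argv) {
        System.out.println(toJson(Status.VALIDATION_FAILED, "age must be >= 0"));
        // prints { "status": "VALIDATION_FAILED", "detail": "age must be >= 0" }
    }
}
```

With this separation, a 404 means “the transport could not find the endpoint”, and NOT_FOUND means “the application looked and the entity does not exist” – two different levels, two different vocabularies.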

Better days lie ahead – no more creating deficient network APIs in the name of REST. No more useless debates on versioning or HATEOAS, or trying to “fix” REST. No more explaining “REST stands for REST Ein’t Soap, Tada!”. (Because that was all REST really was as a network API framework – REST was not SOAP.)

For those interested in what will replace REST, take a look at one possibility in GraphQL. GraphQL might not be the winner, but the winner will look a lot like it.

Posted in Software Engineering | Comments Off on RIP REST

Vue.js carousel State Fair

This is the announcement page for Minnesota State Fair Space Tower.

This was the “next level of difficulty” for Vue.js. It involved using vue-cli to create the webpack-simple basis for the project, then learning how to incorporate components into the project (in this project, that was Vue Carousel), and learning npm run build and how to export the result to a static web page.

Because of the (ridiculously) heavy-weight build process, it took a while to get going. And, it still has a “root” (aka “leading slash”) problem in the final build.js. It also has JavaScript that has the beginnings of a “cache-bust” technique, aka src=”dist/IMG_3107.jpg?1974a0f53e964bb24495a619408dbaf3″, but the dist directory itself only has the short-name “IMG_3107.jpg”.

Overall, still pretty simple, and holds lots of promise as an AngularJS replacement.

Posted in Software Project | Comments Off on Vue.js carousel State Fair

The future of AngularJS and Angular 2

It has been fun developing AngularJS applications. It was the first complete framework that was both a higher level of abstraction than jQuery and easy enough (not “easy”, per se) to learn and use. You can even play with my Tic-Tac-Toe AngularJS application.

Then, around September, 2014, Angular 2.0 was announced, and 2.0.0 was released September, 2016. After 20 minutes of using Angular 2.0, it became clear that “drastically different” might have been an understatement.

It made me sad. Sad like when the Java people lent their name to JavaScript. A lot of confusion resulted, and Java took a hit that took a decade to recover from.

“Angular” isn’t going to recover.

AngularJS was awesome, but Angular 2 was a classic case of “second system syndrome”. Angular 2 froze people out of AngularJS, but totally lost ground to the other libraries and frameworks. Angular 4 is out as of March, 2017, and the biggest thing they are bragging about is “Semantic Versioning”. They don’t seem to have much else to brag about, so why not? Oh yeah, ngIf now has an “else”. Yeah, that’s what is keeping Angular 2/4 down…

If there is ever a greenfield project in my future, it will use React with Redux and webpack.

Another likely alternative is Vue.js

[If the tone of this post seems weird, it is because this is a “record my current understanding and prediction” more than anything else.]

A seriously messed up post on Angular and React: here. It has the opposite prediction – that “Angular” is going to be great. Another messed up post is here. The latter confuses AngularJS with Angular 2 (claiming, for example, “Angular 2, created in 2010”).

Aurelia is another viable competitor in this field.

Posted in Software Engineering | Comments Off on The future of AngularJS and Angular 2

Should REST and microservice APIs be Versioned?

This is a very lively topic.

The starting point for these questions must be: in our world (enterprise-grade computer science), what isn’t versioned? The answer to that is: everything is versioned.

Therefore, the actual question splits into two parts:
1) What makes REST and microservices so special that they don’t get versioned? The answer here is easy – nothing. They must be versioned, just like everything else.

One post (here) tries to convince others of “versioning only as a last resort”. This book tries to convince others that “just monitoring the logs” is a sufficient solution to the problem. This is all amateur-hour stuff. Just ignore them.

Now that we know it must be versioned, the actual question arises:
2) How should the versioning be implemented in REST and microservices?

The choices seem to be:
1) URL: (e.g. http://yourapp/v1.2.3/person…)
2) URL parameter: (e.g. http://yourapp/person?Version=1.2.3)
3) Header – custom (e.g. “X-Api-Version: v1.2.3”)
4) Header – Accept (e.g. “Accept: application/yourapp.v1.2.3”)
5) Payload (e.g. inside the JSON { “version” : “v1.2.3”, “name” : “joe”, … } )
6) Hostname: (e.g. http://v1.yourapp/person)
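Option #1 is also cheap to implement server-side. A minimal sketch, assuming the /vX.Y.Z/ path layout from the examples above (the class and method names are this sketch's own):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of option #1: the server pulls the version out of the URL path
// before routing. Path layout (/v1.2.3/resource) follows the examples above.
public class UrlVersion {

    private static final Pattern VERSION =
        Pattern.compile("^/v(\\d+(?:\\.\\d+){0,2})/");

    // Returns the version segment, or null when the path carries none.
    public static String versionOf(String path) {
        Matcher m = VERSION.matcher(path);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] argv) {
        System.out.println(versionOf("/v1.2.3/person/42")); // 1.2.3
        System.out.println(versionOf("/person/42"));        // null
    }
}
```

The pattern accepts both the simple /v1/ form the majority uses and the full /v1.2.3/ form.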

People argue against #1 with strange arguments like a URL “represents a person” or “the idea of a person”. It is strange because “person” is not, and cannot be, a universally workable concept. (A quick test if you are not convinced – write client-side code that receives person JSON data and computes the total of that person’s age and height. It can’t be done, for multiple reasons: you don’t know the keys and you don’t know the units. What if “age” is actually “bornAt” as milliseconds since the epoch?) The genius Roy T. Fielding simply got it wrong for the enterprise – a resource is a universal construct only in a university classroom. And “Controls have to be learned on the fly” may eventually work once machines take over, but for now, even “age plus height” can’t be learned dynamically.
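The “age plus height” test can be made concrete (the version strings, keys, and class names below are this sketch's own, not any real API): only an explicit version tells the client how to interpret the payload:

```java
import java.util.Map;

// Sketch: without a version, the client cannot know whether it received
// "age" in years or "bornAt" in epoch millis. All names are invented.
public class PersonClient {

    static final long MILLIS_PER_YEAR = 31_556_952_000L; // average Gregorian year

    // Dispatch on an explicit version; without it, no safe interpretation exists.
    static double ageInYears(String version, Map<String, Object> person, long nowMillis) {
        switch (version) {
            case "v1": return ((Number) person.get("age")).doubleValue();
            case "v2": return (nowMillis - ((Number) person.get("bornAt")).longValue())
                              / (double) MILLIS_PER_YEAR;
            default: throw new IllegalArgumentException("unknown version " + version);
        }
    }

    public static void main(String[] argv) {
        System.out.println(ageInYears("v1", Map.of("age", 30), 0)); // 30.0
    }
}
```

Strip away the version and the switch has nothing to dispatch on – which is the whole point.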

So – there just isn’t A concept of a person. There must be some implementation behind it, and that implementation needs to be versioned. Because in the enterprise world, something always changes.

The argument against #3 and #4 is a variation of the above: the URL “http://yourapp/api/person” makes it look like “person” is universal. But it is not universal. So the URL is a lie. It must be versioned.

The argument against #5 is the “surprise!” factor. Your client application had a link to a person, but the data came back 3 versions in the future, and your client application has no mechanism available to “correct” the request unless #1-#4 is provided. It’s the “box of chocolates” style of programming – you never know what you’re going to get.

Since #6 is basically #1 with the “v1” moved so far to the left that it ends up in the DNS hostname, it shares the advantages. However, it proposes an uncomfortable use of DNS, and it ties the concept to a particular hostname (or pattern of hostnames, like “v1.yourapp”, “v2.yourapp”, etc.). In the end it comes down to a personal preference: a REST server should be able to respond to requests without needing to know its own DNS entry. (Or worse, creating option #4a: “Look for a version in the ‘Host:’ header of the request”.)

One clever variation is just: do more than one! It seems like extra work, and you have to worry about conflicts (like 3 different versions in the same request). But, it does ensure that everybody yells at you equally 🙂

There are plenty of enterprise-grade APIs that use #1. They do differ on /v1/ versus /v1.0.0/, with the majority using the simpler /v1/.
* Dropbox – #1 (“/2/”), and is too embarrassed to document it
* Amazon – #2 (“/foo?Version=2012/01/02”)
* Twitter – #1 (“/1.1/”), also too embarrassed to document it
* Best Buy Developer API – #1 (“/v1”), also too embarrassed to document it
* Facebook – #1 (“/v2.5/”) with #3 in return (“facebook-api-version:v2.0”)


Posted in Software Engineering | Comments Off on Should REST and microservice APIs be Versioned?