Ever wanted to see a complete list of Java command-line parser libraries? Here is the project for you.
The git project javacommandlineparsers contains a list of Java command-line parser libraries in JSON format, transformed to Markdown, CSV, and HTML.
It also shows how to perform Groovy templating with Gradle. The Readme.md file is generated from a template using SimpleTemplateEngine, and the .csv and .html files are generated with groovy.text.markup.MarkupTemplateEngine.
The raw .json content is available here.
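The core of that kind of README generation is simple placeholder substitution. Here is a rough sketch in plain Java standing in for Groovy's SimpleTemplateEngine (the template text, keys, and values are made up for illustration, not taken from the project):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateDemo {
    // Minimal stand-in for a template engine: replaces ${key}
    // placeholders with values from a map. A real engine (Groovy's
    // SimpleTemplateEngine) also supports embedded code and escaping.
    static String render(String template, Map<String, String> model) {
        Pattern placeholder = Pattern.compile("\\$\\{(\\w+)\\}");
        Matcher m = placeholder.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // unknown keys render as empty strings in this sketch
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // hypothetical template and values
        String template = "# ${title}\n\nThere are ${count} libraries listed.";
        String result = render(template,
                Map.of("title", "Java Command-Line Parsers", "count", "20"));
        System.out.println(result);
    }
}
```

The appeal of doing this in the build is that the generated Readme.md can never drift out of sync with the JSON source of truth.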
Every once in a while, at the beginning of a new project, I start the search for a command-line argument parsing library in Java. This post shows the results of examining the field as of mid-2018. It serves both as a “which to use” guide and as a “which have been evaluated and found wanting” guide.
* Best So Far
annotation, good documentation, custom parser, validation
* Honorable Mention
annotation, good documentation — documentation looks suspiciously like it is patterned after JCommander
* Pretty Good
field-based, annotation-based, unclear if “--long-Name” is supported
annotation-based, interface-based but can be instance-based with setters, has short and long names
Key-Value – the opposite of a good library. Many of the libraries below share this same deficiency.
single parse() call with an array of options, then Key-Value gets where the key is the option
Jakarta Commons CLI
like JArgs, except the key is a String, not an option object
create individual options, where options are “holders”, parse() into the holders
old (2008), not well documented
it generates .java files, but still requires an additional .jar
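To make the “Key-Value” deficiency concrete, here is a minimal sketch of that style of API in plain Java (hypothetical code, not from any of the libraries above): a single parse() call, then stringly-typed lookups, where every caller must redo the key spelling and type conversion that an annotation-based library handles once, in one place.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueParserDemo {
    // The "Key-Value" style: parse once into a string map.
    static Map<String, String> parse(String[] args) {
        Map<String, String> values = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("--")) {
                String key = args[i].substring(2);
                // next token is the value, unless it is another option
                if (i + 1 < args.length && !args[i + 1].startsWith("--")) {
                    values.put(key, args[++i]);
                } else {
                    values.put(key, "true"); // bare flag
                }
            }
        }
        return values;
    }

    public static void main(String[] args) {
        Map<String, String> opts = parse(new String[] {"--port", "8080", "--verbose"});
        // Every call site must remember the key spelling and do its
        // own type conversion -- the deficiency called out above.
        int port = Integer.parseInt(opts.getOrDefault("port", "80"));
        boolean verbose = Boolean.parseBoolean(opts.getOrDefault("verbose", "false"));
        System.out.println(port + " " + verbose);
    }
}
```

Compare that with the annotation-based libraries at the top of the list, where the field declaration itself carries the name, the type, and often the validation.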
The goal of this machine was to replace the oldest machine build recorded on this blog – the 2009 Core i7 920 build. This is the current workhorse virtual machine server.
The approach taken (refurbished) was a direct result of (1) DDR4 RAM prices being high, (2) video card (GTX 1080) prices being way too high, and (3) the Intel 8700 CPU just not being exciting.
So instead of buying new, once I ran across these refurbished Dell Precision T3600 machines at newegg.com, I set a budget of $400 and got the best available. After a 10% bump in the budget, the 64GB machine was the “just right” choice: a 6-core CPU and a decent clock speed.
Some facts on the CPU: the Xeon E5-2640 @ 2.5GHz is currently #255 on PassMark [9,500], compared with the Core i7 920 @ 2.66GHz at #683 [4,938]. There are multiple versions of the E5-2640 – v4, v3, v2, and the original.
Some facts on the system: the Dell Precision T3600 was reviewed in 2012 by AnandTech at a review price of $4,450 – but that machine had only 8GB of memory (and a better CPU and graphics card). So call it a wash – $4,500 retail price six years ago. eBay has a 4x 16GB kit for sale for $340 right now.
All product links are from the actual vendor.
- System: Dell Precision T3600 Workstation
- CPU: Intel Xeon E5-2640 2.5GHz, 6 cores, socket 2011
- RAM: 4x 16GB DDR3 DRAM 1333 RDIMM ECC, 12,913 MB/s
- Motherboard: single CPU socket 2011, Intel C600 chipset, 2x USB 3.0
- Power supply: 635 Watt, externally removable, tool-less, 80 Plus Gold
- Video: NVIDIA Quadro 600 (96 CUDA cores, 1 GB DDR3), DVI-I/DisplayPort, “Entry 3D”
- Optical: thin form factor DVD
- Drive bays: 2x 3.5″, 2x 2.5″
- Hard drives: 2x Hitachi Ultrastar 2TB 7200RPM HUA723020ALA641 Enterprise, $60 each ($120)
- OS: either Ubuntu 18.04 or CentOS 7.4
- Hypervisor: VMware Workstation 14 Pro
The original ZFS machine has grown a bit over the years. It started with two 2TB hard drives, then got two more, then finally two more. The “little” NZXT case has more than enough drive slots, but this setup violated one of my two rules for storage systems:
- Just because a case has N drive bays does not mean it has enough cooling for N drive bays
- RAID5 is not enough
These two rules were a result of stuffing 6 drives in a nice Lian-Li aluminum case that had a 6-bay internal cage (these events predate this recorded history). One day, it lost a drive. After replacement, but during the re-silvering, it lost a second drive. And thus the entire array was gone. The first drive may or may not have been lost to heat. The second drive was definitely lost to heat.
The ZFS machine addressed rule #2 by having drives in a mirror and using active scrubs. The active scrubs make sure that both drives have a readable copy of each sector, so when a disk is lost, you are reasonably sure the re-silver will have something valid to read. This machine has lost 2 drives (not at the same time). Yes, re-silvering was stressful.
To address rule #1, my ZFS machine finally gets the case it deserved in the first place: a Rosewill RSV-L4500 4U. At $116, this case has 15 drive slots and 8 fans. The three sets of five bays let me install the drives with an air gap of one bay between any two drives.
Here are the smartctl temperatures before (NZXT case) and after (Rosewill case, immediate/24hrs):
Drives #3 and #4 are the original, “babied” WD 2TB Black drives. What little cooling the NZXT had went to these drives, so there was not much movement in their temperatures.
Want to see how you can quickly tell that somebody spent 79,695 lines of Java on a completely wasted library/framework? Watch how quickly JRebirth comes to a head-slap fail:
1) Visit http://www.jrebirth.org/
2) Under Documentation, click on Installation, create the build.gradle
3) Under Documentation, click on “Create your first Application”
4) Create the .java for SampleApplication – find out it depends on SampleModel. OK, then
5) Create the .java for SampleModel – find out it depends on SampleView
A model that has a compile-dependency on the view? “You keep using that word. I do not think it means what you think it means.” Dig a bit further into the source code, and you see the phrase “the class type of the view managed by this model” over and over. The model managing the view?
A model never depends on a view in order to compile, for one simple reason: a correctly designed model can support multiple views simultaneously.
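A minimal sketch of the correct direction of dependency (hypothetical Java, not JRebirth code): the model exposes state plus a plain listener interface, and any number of views observe it, while the model compiles against none of them.

```java
import java.util.ArrayList;
import java.util.List;

public class ModelViewDemo {
    // The model's only outward-facing contract: a listener interface.
    // It names no view class anywhere.
    interface ModelListener { void changed(int newValue); }

    static class CounterModel {
        private int value;
        private final List<ModelListener> listeners = new ArrayList<>();
        void addListener(ModelListener l) { listeners.add(l); }
        int getValue() { return value; }
        void increment() {
            value++;
            // notify every registered observer
            for (ModelListener l : listeners) l.changed(value);
        }
    }

    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        // Two independent "views" observing the same model; the model
        // has no compile dependency on either.
        model.addListener(v -> System.out.println("text view: " + v));
        model.addListener(v -> System.out.println("log view: value=" + v));
        model.increment();
    }
}
```

Flip the arrow the way JRebirth does, and adding a second view means editing (and recompiling) the model.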
There is no reason to investigate JRebirth further – their project went off the rails at step one.
Oracle hates its own product so much (Java) that it actively discourages people from ever running it. This is apparent in the JNLP dialogs you get when starting an application for the first time.
Can you spot the difference?
Both are really scary, with a big yellow alert and an “I accept the risk…” checkbox.
The difference (since you probably didn’t find it) –
The first image is an “unrestricted access” dialog (which you should almost never run, no matter what the source), where the “so it is recommended not to run this application” warning probably understates the dangers.
The second image is a “limited access” dialog (which should be reasonably OK to run), where the same warning is complete overkill (unless Oracle has errors in the sandbox code, which nobody can rule out, since Oracle hates Java…).
So, Oracle hates Java so much that they pop up a dialog that looks 99% the same between two completely different cases. And, since you should not get into the habit of clicking that “I accept the risk” checkbox, even I have a difficult time recommending JNLP to anybody.
But hey, you should check out Lot Area Calculator, recently updated to have a JNLP link.
2018 resolution: Say RIP to REST – aka “the year of RIP REST”
The specific resolution – only use “REST” casually, as a synonym for “client-server”
REST has had a pretty good run. The PhD dissertation was published in 2000. It did an awesome job describing the architecture of the web. Then someone, and then many others, decided “the web” was a synonym for “enterprise network API”. That was a sad day, and the troubles began. Now, over a decade later, there is finally growing realization that “REST” is a terrible architecture for anything except web pages and maybe some key-value NoSQL APIs.
Section 5.1.5 “Uniform Interface” is one major failing.
Restricting a network architecture to the CRUD verbs is the opposite of “good network architecture”. This “Key-Value” design has been tried many times in the past, and rejected in almost every case. One domain where it stuck was “the web”. Another major failing in REST (although it is not in the dissertation) is the insistence on using HTTP response status codes at the REST level. This “mixing of levels” is another known anti-pattern. Can you imagine writing your REST API using EBADF, EACCES, EINVAL, etc.? Of course not, because the people who created HTTP understood protocol levels. But REST was attached at the hip to HTTP “for simplicity”.
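To illustrate keeping the levels separate, here is a hypothetical sketch (made-up status codes and method names, not any real API): the application-level result carries its own status vocabulary in the payload, and the transport's status codes are left to the transport.

```java
public class ProtocolLevelsDemo {
    // Application-level result with its own status vocabulary,
    // independent of whatever the transport (HTTP, or anything else)
    // reports at its own level.
    record ApiResult(int appCode, String appMessage, String body) {}

    static final int APP_OK = 0;
    static final int APP_INVALID_ARGUMENT = 3; // hypothetical code

    static ApiResult lookupUser(String id) {
        if (id == null || id.isBlank()) {
            // The transport can still say "request delivered fine";
            // the error belongs to the application level.
            return new ApiResult(APP_INVALID_ARGUMENT, "id must not be blank", null);
        }
        return new ApiResult(APP_OK, "ok", "{\"id\":\"" + id + "\"}");
    }

    public static void main(String[] args) {
        System.out.println(lookupUser("").appMessage());
        System.out.println(lookupUser("42").body());
    }
}
```

The point is not this particular shape, but that the API's error vocabulary is defined by the API, not borrowed from the layer underneath it.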
Better days lie ahead – no more creating deficient network APIs in the name of REST. No more useless debates on versioning or HATEOAS, or trying to “fix” REST. No more explaining “REST stands for REST Ein’t Soap, Tada!” (because that was all REST really was as a network API framework – REST was not SOAP).
For those interested in what will replace REST, take a look at one possibility: GraphQL (http://graphql.org/). GraphQL might not be the winner, but the winner will look a lot like it.
This is the announcement page for the Minnesota State Fair Space Tower.
This was the “next level of difficulty” for Vue.js. It involved using vue-cli to create the webpack-simple basis for the project, then learning how to incorporate components into the project (in this project, Vue Carousel), learning npm run build, and learning how to export the result to a static web page.
Overall, it is still pretty simple, and it holds lots of promise as an AngularJS replacement.