migrating speedment to java 9

Migrating Speedment to Java 9
Speaker: Dan Lawesson @dan_lawesson
See the list of all blog posts from the conference

Cute – Spire, their mascot on GitHub, has two years of experience in that role

Since Speedment is a library, they want to be running on Java 9 as soon as it is released.

Speedment

  • Streams API ORM – customer.stream().filter(field.equal(value)).count(); (see the sketch after this list)
  • Uses JVM memory acceleration, code generation, and modular design
  • Type safety
  • Works like streams – you don’t get any values back until the terminal operation runs
  • Has non-SQL code too, like a collector to convert the result into JSON
  • Can use findAny() to get an Optional result – generates a SQL LIMIT statement
  • Has finderBy so tables can be joined
  • Like SQL, streams are declarative – describe the what, not the how. But with SQL, you also have to describe the result set format.
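
For reference, here is the same shape with plain java.util.stream over an in-memory list – just a sketch of the stream semantics, not Speedment’s generated API; Speedment translates the pipeline into SQL instead of iterating in memory:

    import java.util.List;
    import java.util.Optional;

    public class StreamShape {
        public static void main(String[] args) {
            List<String> customers = List.of("Smith", "Jones", "Smith");

            // Nothing runs until the terminal operation (count) is reached.
            long smiths = customers.stream()
                    .filter(name -> name.equals("Smith"))
                    .count();

            // findAny() wraps the result in an Optional; in Speedment the
            // equivalent query can be rendered with a SQL LIMIT.
            Optional<String> any = customers.stream()
                    .filter(name -> name.equals("Smith"))
                    .findAny();

            System.out.println(smiths + " / " + any.orElse("none"));
        }
    }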

Jigsaw Effects/Problems

  • A package must belong to only one module. Yet it is common for two jars to have the same package; on the classpath, the first one found takes precedence. In Java 9, a package must live in exactly one place.
  • Automatic modules are for a smooth transition to Java 9. You move a Java 8 jar from the classpath to the module path and the jar automatically becomes a module (sketch below).
  • However, automatic modules can surface split packages, and you can’t have those in Java 9
  • sun.misc.Unsafe – should not be used, but it has been a key to real-world Java success
  • OSGi bundling is different from Jigsaw
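
A rough sketch of the automatic-module idea (the jar and module names are made up): a Java 8 jar named legacy-utils-1.2.jar placed on the module path becomes an automatic module called legacy.utils – the name is derived from the file name – and a real module can require it:

    // module-info.java – a sketch; com.example.app and legacy.utils are hypothetical.
    module com.example.app {
        // legacy-utils-1.2.jar on the module path becomes the automatic
        // module "legacy.utils"; it exports all of its packages.
        requires legacy.utils;
    }

If two such jars contain the same package, the split-package rule above kicks in and the module system refuses to resolve them.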

Jigsawing the Java 8 open source application

  • Running Java 8 code under the Java 9 JDK is easy
  • Created a module-info.java file
  • The brute force approach is to move all jars into dependencies of one monolithic module and, once that works, actually modularize the app. They didn’t take this approach because they already had OSGi modules
  • Modular approach: create a directory for each module and move the relevant packages into it. Add an empty module-info.java (no requires/exports). That won’t compile, so now you can incrementally add dependencies and re-compile. Since this is iterative, they wrote a script to do it (see the sketch after this list).
  • Patch abuse of non-exported JDK APIs. You can add exports of JDK packages (--add-exports) as a temporary workaround. You would also need this flag at runtime if the temporary workaround isn’t removed. The workaround is just so you can identify all the issues and TBDs.
  • Remove the OSGi bundling. Comment it out so Maven builds a jar instead of a bundle
  • Use code generation so there is no reflection
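
A sketch of what one iteration of the modular approach might produce – the module and package names are hypothetical, not Speedment’s actual modules – starting from an empty descriptor and letting compiler errors drive the requires/exports list:

    // module-info.java for one extracted module – names are made up.
    module com.example.runtime {
        // Added one at a time as compilation fails on missing packages:
        requires java.sql;
        requires com.example.common;

        // Exported only once another module actually needed the package:
        exports com.example.runtime.config;
    }
    // Temporary workaround for code that still touches non-exported JDK internals,
    // applied at both compile time and run time until the usage is removed:
    //   --add-exports java.base/sun.nio.ch=com.example.runtime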

Speedment Enterprise

  • Harder because they use sun.misc.Unsafe and have third-party dependencies with package issues

The first 20 minutes was about the Speedment library. I felt like that was a lot for a non-product talk. I wasn’t surprised because Dan was at my lunch table. And it was interesting; it just wasn’t necessary to understand the Java 9 part. Dan made a lot of references to things earlier in the day, which was nice. Also, the path Speedment took to move to Java 9 was very useful. I would have wanted to hear more about the issues in Speedment Enterprise. Are they just outstanding issues? What do they plan to do if the third-party libraries don’t release Java 9 compatible versions?

java @ speed: making the most of modern hardware – live blogging from qcon

Java @Speed – Making the most of modern hardware
Speaker: Gil Tene
See the list of all blog posts from the conference

duct tape engineering should only be done when absolutely necessary

We think of speed as a number. But it’s not a quality without a context. Are you fast when you deploy? At peak load? When the market opens? When you actually trade? How long can you be fast in a row?

In Java, speed starts out slow when the app starts and gets faster until it reaches a steady state, because the code changes over time. It starts out purely interpreted and then gets optimized after profiling. There are also GC pauses.

Modern servers

  • Number of cores per chip has tripled
  • Instruction window keeps increasing
  • More parallelism each generation
  • Cache also increasing

Compilers

  • Can reorder code
  • Can remove dead code – nobody knows if it ran the code, so the compiler can claim it did; just really fast
  • Values can be propagated – remove temporary variables
  • Can remove redundant code
  • Reads can be cached – as if you extracted a variable. Use volatile if you need to avoid this
  • Writes can be eliminated – the calculation can be skipped if the value doesn’t change
  • Can inline method calls
  • Also does clever tricks like checking for nulls only after a SEGV happens. If you turn out to throw a lot of null pointer exceptions, it deoptimizes to add a guard clause
  • Class Hierarchy Analysis (CHA) – looks at whole code base for optimizations
  • Inlining works without final because the JIT knows there is no subclass. If a new subclass shows up, it deoptimizes at that time (see the sketch after this list).
  • If it thinks there is only one subclass, it adds a guard clause and optimizes. The guard clause triggers deoptimization if that assumption breaks
  • Deoptimizations create slowdown spikes in performance even during the optimized phase. Warmup isn’t always enough because the warmup code might not hit all scenarios. “The one thing you haven’t done is trade.” So the first real trade is slow because it triggers deoptimization.
  • Azul has a product that logs optimizations and re-loads them on startup from prior runs.
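
A rough illustration of the inlining/CHA point (the class names are made up; the JIT behavior is what is described in the comments, not anything the code prints):

    public class Devirtualize {
        interface Pricer {
            double price(double base);
        }

        static class FlatPricer implements Pricer {
            public double price(double base) { return base; }
        }

        // While FlatPricer is the only loaded implementation, Class Hierarchy
        // Analysis lets the JIT inline price() into this hot loop even though
        // nothing is final. Loading a second implementation later invalidates
        // that assumption and forces a deoptimization – a latency spike.
        static double total(Pricer pricer, double[] bases) {
            double sum = 0;
            for (double base : bases) {
                sum += pricer.price(base);
            }
            return sum;
        }

        public static void main(String[] args) {
            double[] bases = new double[1_000_000];
            java.util.Arrays.fill(bases, 1.0);
            System.out.println(total(new FlatPricer(), bases));
        }
    }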

Microbenchmarking is hard because some things are optimized away (like basic math). Use jmh from OpenJDK to microbenchmark, but still suspect everything.
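
A minimal JMH sketch of the kind of pitfall this warning is about (assuming the jmh-core dependency is available); if the result isn’t returned or fed to a Blackhole, the JIT is free to remove the math entirely and the benchmark measures nothing:

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.infra.Blackhole;

    @State(Scope.Thread)
    public class MathBenchmark {

        double x = 42;

        @Benchmark
        public void deadCode() {
            // The result is never used, so the JIT can eliminate the whole body.
            Math.log(x);
        }

        @Benchmark
        public void consumed(Blackhole blackhole) {
            // Blackhole keeps the computation alive so it is actually measured.
            blackhole.consume(Math.log(x));
        }
    }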

I like that he showed the assembly code and explained the relationship to a simple for loop.

development metrics you should use (but don’t) – live blogging from qcon

Development Metrics you should use (but don’t)
Speaker: Cat Swetel @CatSwetel
See the list of all blog posts from the conference

Breakfast is good. You should eat breakfast. So Fruit Loops?
Metrics are good. You should have metrics. So bad metrics?

Metrics should fall into four baskets: quality, responsiveness, productivity and predictability. Value isn’t called out because this is about development metrics.

“The RIGHTER we do the WRONG thing, the WRONGER we become”

Definitions: (in this presentation)

  • start – when the work is pulled into the team (not when it was requested)
  • finished – when customers can use it

Metric: Time in process

  • Units of time for one unit of work
  • Display as a scatter plot to see trends over time
  • Can look at the average and the 90% line (likely worst case). Would you rather hear that 90% of the time it will take just under two months, or that it takes 20 days on average? Either way, it will be 53 days. (See the sketch after this list.)
  • Display as a bar-chart frequency distribution. See the mode (the entry with the most items). Also see whether there is a long-tail pattern. Tells a story about predictability
  • Weibull distribution – fat toward zero but the tail trickles into infinity. This is like your commute: it usually takes X minutes, but then sometimes something happens. (Remember from the phrase: Weibulls wiggle and they wobble but they don’t fall down)
  • Figure out the story from the data. In this case, they determined that they had thought all work was the same, but there were really two distinct types. They were able to detect that high-priority items were rushed and everything else waited.
  • Learned they really had a multi-modal curve, like two separate bell curves
  • This covers responsiveness and predictability
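
A small sketch of the average-versus-90%-line point, with made-up lead times in days; a skewed distribution can easily have an average around 20 days while the 90% line sits in the fifties:

    import java.util.Arrays;

    public class TimeInProcess {
        public static void main(String[] args) {
            // Hypothetical lead times in days for completed work items.
            int[] days = {3, 4, 5, 5, 6, 7, 8, 9, 12, 14, 15, 18, 21, 25, 30, 38, 45, 52, 55, 60};

            double average = Arrays.stream(days).average().orElse(0);

            // 90% line: the value that 90% of items finished at or under.
            int[] sorted = days.clone();
            Arrays.sort(sorted);
            int index = (int) Math.ceil(0.9 * sorted.length) - 1;
            int percentile90 = sorted[index];

            System.out.printf("average = %.1f days, 90%% line = %d days%n", average, percentile90);
        }
    }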

Metric: Throughput

  • Units of work per unit of time
  • Team cares about total capacity
  • Customer cares about how many new features
  • Cover the range so you can see the high and the low
  • OK to see a dip while making improvements; it goes back up afterwards. Expect productivity to drop before it normalizes around the change
  • Can display as a range (hard to read)
  • Can display a bar chart showing the probability for each number of requests (see the sketch after this list)
  • This covers productivity and predictability
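
A sketch of that frequency-distribution idea with made-up weekly counts: tallying how often each throughput value occurred gives the probabilities behind the bar chart mentioned above.

    import java.util.Map;
    import java.util.TreeMap;

    public class Throughput {
        public static void main(String[] args) {
            // Hypothetical number of work items finished in each of 12 weeks.
            int[] itemsPerWeek = {4, 6, 5, 7, 4, 5, 6, 5, 9, 5, 6, 4};

            // Frequency of each throughput value, in ascending order.
            Map<Integer, Long> frequency = new TreeMap<>();
            for (int count : itemsPerWeek) {
                frequency.merge(count, 1L, Long::sum);
            }

            frequency.forEach((throughput, weeks) ->
                    System.out.printf("%d items/week: %d of %d weeks (%.0f%%)%n",
                            throughput, weeks, itemsPerWeek.length,
                            100.0 * weeks / itemsPerWeek.length));
        }
    }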

Metric: Time in state

  • Needing to collaborate across teams means time is wasted waiting.
  • “Touch time” is a very small percentage of the total time
  • Helps determine where work is stuck
  • Good to see a trend of getting better or worse. Look at the more recent data. Are queues growing or shrinking?
  • Do not make the bars red and green. You don’t want people to skip investing in improvement because everything looks green. Also, red/green colorblindness
  • Can stack within bar to display work in different states
  • Can display a cumulative flow diagram as a line graph
  • Little’s Law – the average number of items in the system equals the arrival rate times the average time in the system (see the sketch after this list)
  • If the arrival and departure rates don’t match, that should affect expectations
  • This covers predictability
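
A tiny worked example of Little’s Law with made-up numbers – average work in progress = arrival rate × average time in system – which is why mismatched arrival and departure rates show up as growing queues:

    public class LittlesLaw {
        public static void main(String[] args) {
            // Hypothetical numbers: items arrive at 5 per week and spend
            // an average of 4 weeks in the system.
            double arrivalRatePerWeek = 5.0;
            double averageTimeInSystemWeeks = 4.0;

            // Little's Law: L = lambda * W
            double averageWorkInProgress = arrivalRatePerWeek * averageTimeInSystemWeeks;

            System.out.println("Average work in progress: " + averageWorkInProgress + " items");
        }
    }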

Provided a warning that these are numbers in context, not estimates. Statistics are just answers/numbers; a person needs to provide the story/context around them.

I really liked this talk. Going deep on why a few metrics are useful is great!