[javaone 2025] interested in where the language is going

Speaker: Brian Goetz

See the table of contents for more posts


Note: I was 15 minutes late to this session as I was doing a book signing.

Patterns

  • like expressions turned inside out
  • data flows from the environment into the expression
  • If the match succeeds, it produces a result
  • Patterns compose like expressions
  • Payoff is that Java is better at dealing with Java
  • OOP's strong suit (complex entities/boundaries) became less important. Data has become more important
  • Pattern matching allows us to sometimes recover lost details – ex: type
  • Eventually can decompose any class, not just records.
  • Will impact API design
  • Lots of work left. Started with features that are easiest to understand.
  • Will use for marshalling in future
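
To make the composition concrete, here is a small sketch (mine, not from the session) of nested record patterns in a switch using Java 21 syntax; the Shape hierarchy is invented for illustration:

```java
// Sketch: nested record patterns decompose data; bindings only exist if the match succeeds.
public class Shapes {
    sealed interface Shape permits Circle, Rectangle {}
    record Point(double x, double y) {}
    record Circle(Point center, double radius) implements Shape {}
    record Rectangle(Point topLeft, Point bottomRight) implements Shape {}

    static double area(Shape shape) {
        return switch (shape) {
            // Patterns compose: a Point pattern nests inside the Circle/Rectangle patterns.
            case Circle(Point(var x, var y), double r) -> Math.PI * r * r;
            case Rectangle(Point(var x1, var y1), Point(var x2, var y2)) ->
                    Math.abs((x2 - x1) * (y2 - y1));
            // No default needed: the sealed hierarchy makes the switch exhaustive.
        };
    }
}
```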

Valhalla

  • Goals: flatter and denser memory layouts for Java object graphs, unifying the type system (primitives vs objects), and enabling new numeric types as libraries rather than new primitives
  • Big refactoring
  • While the JVM doesn’t specify object layout in memory, in practice objects have to be accessed through pointers
  • Locked into the existing layout – objects have identity so there can be only one copy, references are nullable so they need a certain number of bits, and they kept discovering more constraints
  • Primitives were necessary in 1995 for performance but have dogged us ever since. Like original sin.
  • Rift widened in Java 5 with generics and then again in Java 8 with lambdas
  • Primitives are a performance hack for flatness and density
    • JEP 401 – value classes – not all objects need identity. If identity isn’t needed, the JVM has more flexibility and can store the object however it wants. Implicitly final. == uses state, not identity. No intrinsic monitor. (See the sketch after this list.)
    • Use as few degrees of freedom as possible. Favor private over public. Favor final over mutable. Favor value over identity
    • Flattening should be a JVM optimization. Language model should be about program semantics. Give up things that are impediments to flattening.
    • Nullability is a density and flatness hazard. Also a safety hazard. Not all code wants/needs nullability; place for bugs to hide
    • Future JEPs – null-restricted types, null-restricted value types, minimizing gaps between int and Integer, and surely more
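
As a concrete (and hedged) illustration of the value-class direction, here is a sketch using the preview syntax from JEP 401; the syntax may still change before it ships, and the Complex type is my example, not Brian's:

```java
// Sketch of a JEP 401 value class (preview feature; syntax subject to change).
// A value object has no identity: == compares state, synchronizing on it is
// disallowed, and the JVM may flatten instances into containing objects/arrays.
public value class Complex {
    private final double re;   // instance fields of a value class are final
    private final double im;

    public Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    public Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }
}
```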

My take

It was nice hearing about these features from Brian’s POV. While I didn’t learn much about the features themselves, I did learn a bunch about the philosophy, which was interesting. That was intentional: since I knew I’d be late, I needed something I could jump into without it being cumulative. This fit the bill!

[javaone 2025] java for ai

Speaker: Paul Sandoz

See the table of contents for more posts


General

  • Not going to add XML to the platform since such things change. Scala did that and he thinks it was the wrong choice
  • Goals: developer productivity and program performance
  • Many features meet demands of AI but useful for other things
  • “All web companies grow up to be Java companies”. Want to make all AI companies grow up to be Java companies as well
  • AI on Java is harder than it should be

Panama – Foreign Function and Memory (FFM) API

  • Optimal usage of foreign (off heap) memory
  • Better interoperation with foreign (native) APIs
  • Lets us map to native libraries
  • ex: used for a Matrix API written in Java (for linear algebra), backed by the native BLIS library
  • jextract gives a C-like interface in Java. Still have to deal with allocating memory
  • Zero copy since the memory is held natively, off the Java heap; the JVM is not involved
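
A minimal sketch of what the FFM API looks like in practice (my example, not the session's): calling the C library's strlen through a downcall handle, with the string allocated in off-heap memory managed by an Arena. Assumes Java 22+, where the API is final, and that size_t is 64 bits:

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Look up strlen in the default (C library) lookup and describe its signature.
        MemorySegment strlenAddr = linker.defaultLookup().find("strlen").orElseThrow();
        MethodHandle strlen = linker.downcallHandle(
                strlenAddr,
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        try (Arena arena = Arena.ofConfined()) {
            // Off-heap allocation; the arena frees it when the try block exits.
            MemorySegment cString = arena.allocateFrom("JavaOne");
            long length = (long) strlen.invoke(cString);
            System.out.println(length); // 7
        }
    }
}
```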

Panama – Vector API

  • SIMD (single instruction, multiple data) programming for optimal CPU utilization
  • Better number crunching
  • Useful for high performance data parallel algorithms
  • Incubating for a long time because it needs value classes to be released
  • Example: https://github.com/mukel/llama3.java
  • Serves as a test case for improving the runtime compiler.
  • Not as fast as a native algorithm but might be good enough one day.
  • Showed the performance improvement from using the Vector API with a demo. The “before” was so slow it was hard to watch. The “after” was essentially at the speed of reading.
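
For a sense of what the API looks like, here is a small sketch (mine, not the demo's code) that multiplies two float arrays element-wise using SIMD lanes, falling back to a scalar loop for the tail. It needs --add-modules jdk.incubator.vector while the API is incubating:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorMultiply {
    // Pick the widest vector shape the current CPU supports.
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void multiply(float[] a, float[] b, float[] result) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.mul(vb).intoArray(result, i);   // one SIMD multiply per lane group
        }
        for (; i < a.length; i++) {            // scalar tail for leftover elements
            result[i] = a[i] * b[i];
        }
    }
}
```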

Valhalla – Value classes and objects

  • Optimal utilization of on heap memory
  • Enables more kinds of numbers
  • Runtime will optimize
  • More kinds of numbers — ex: Float16, Decimal64, Complex, Interval, etc
  • Incubating

Babylon – Code Reflection

  • Interoperates with foreign programming models – ex: ONNX
  • Better use of hardware and better number crunching
  • For foreign programming models, i.e. not the one defined by the Java Language Specification. Ex: GPUs, autoparallelization of for loops
  • New Java code model — a symbolic, in-memory representation of Java code using a tree-like structure. Like the compiler’s AST but more suited to analysis and transformation

My take

I like that he was able to go into more detail than we got into at the keynote. Good demo. Some of it was too advanced for me (I’m not an expert on the matrix stuff), but I learned a lot. I missed a bunch; I went to google something I didn’t know and fell down a rabbit hole. He also showed ONNX, which we saw during the keynote.

[javaone 2025] next level features of langchain4j

Speakers: Lize Raes, Mohamed Abderrahman

See the table of contents for more posts


Concepts

  • SystemMessage – instructions
  • ContentRetriever – context
  • Tools – function calling
  • UserMessage – user to LLM
  • AiMessage – LLM to user
  • ChatMemory
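
A minimal sketch of how these concepts fit together with LangChain4j's AiServices (my example, not the session's; the model, prompt, and tool are invented, and method names follow the pre-1.0 API, so they may differ slightly in newer releases):

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class AssistantDemo {

    interface Assistant {
        @SystemMessage("You are a concise research assistant.") // SystemMessage = instructions
        String chat(String userMessage);                         // UserMessage in, AiMessage text out
    }

    static class Calculator {
        @Tool("Adds two numbers")                                // Tools = function calling
        double add(double a, double b) {
            return a + b;
        }
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(new Calculator())
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10)) // ChatMemory
                .build();

        System.out.println(assistant.chat("What is 2.5 + 3.5?"));
    }
}
```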

5 Levels towards AGI

  • Can perform the work of entire orgs of people
  • Can create new innovations
  • Can take actions on the user’s behalf
  • Can solve basic problems like a PhD, with tools
  • Current AI like ChatGPT that talks with humans

Options

  • LLM manages step transitions in the state machine – can jump states on unexpected requests; flexible, but risky
  • Code manages step transitions – any complexity possible, reliable, separation of concerns, tailored model size. However, not flexible: can’t deal with unexpected scenarios and is more work to write. (See the sketch below.)
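
A sketch of the second option, where plain Java code owns the state machine and the LLM only does the work inside each step. The steps, prompts, and the String-to-String llm function are illustrative; in practice the function would be an AiServices proxy:

```java
import java.util.function.UnaryOperator;

public class DiscoveryWorkflow {

    enum Step { COLLECT_TARGET, PROPOSE_CANDIDATES, SUMMARIZE, DONE }

    static String run(UnaryOperator<String> llm, String userInput) {
        Step step = Step.COLLECT_TARGET;
        String context = userInput;
        String result = "";

        while (step != Step.DONE) {
            switch (step) {
                case COLLECT_TARGET -> {
                    context = llm.apply("Extract the disease target from: " + context);
                    step = Step.PROPOSE_CANDIDATES;   // transition decided by code, not the LLM
                }
                case PROPOSE_CANDIDATES -> {
                    context = llm.apply("Propose antibody candidates for: " + context);
                    step = Step.SUMMARIZE;
                }
                case SUMMARIZE -> {
                    result = llm.apply("Summarize for a researcher: " + context);
                    step = Step.DONE;
                }
                case DONE -> { }                      // unreachable; keeps the switch exhaustive
            }
        }
        return result;
    }
}
```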

RAG

  • Retrieval Augmented Generation
  • Fetch info relevant to the request and send it to the LLM
  • Advanced RAG features
  • Retrieval Augmentor in addition to retriever
  • LLM writes query
  • Adds info/context
  • Need to measure performance of model. Compare across models
  • MCP (Model Context Protocol)

Steps in code:

  • Create document content retriever – can limit scope. Ex: scientific literature
  • Create web search content retriever
  • Create SQL database content retriever
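
A hedged sketch of the wiring (my code, not the session's; it follows the pre-1.0 LangChain4j API, and the scope limits are invented): a document content retriever backed by an embedding store, routed together with a second retriever through a RetrievalAugmentor. The web search and SQL database retrievers from the session plug in the same way as the second retriever here:

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.rag.DefaultRetrievalAugmentor;
import dev.langchain4j.rag.RetrievalAugmentor;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.rag.query.router.DefaultQueryRouter;
import dev.langchain4j.store.embedding.EmbeddingStore;

public class RagSetup {

    static RetrievalAugmentor buildAugmentor(
            EmbeddingStore<TextSegment> literatureStore,   // e.g. embedded scientific literature
            EmbeddingModel embeddingModel,
            ContentRetriever otherRetriever) {             // e.g. web search or SQL retriever

        // Document content retriever, limited in scope to the best matches.
        ContentRetriever literatureRetriever = EmbeddingStoreContentRetriever.builder()
                .embeddingStore(literatureStore)
                .embeddingModel(embeddingModel)
                .maxResults(5)      // only the 5 most relevant segments
                .minScore(0.6)      // drop weak matches
                .build();

        // The augmentor routes each query to the retrievers and adds the
        // retrieved content to the prompt sent to the LLM.
        return DefaultRetrievalAugmentor.builder()
                .queryRouter(new DefaultQueryRouter(literatureRetriever, otherRetriever))
                .build();
    }
}
```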

Guardrails and Moderation

  • Guardrails add limits. Ex: list examples of queries that shouldn’t be allowed
  • Moderation – checks if a message is violent, etc. Can use a different model for validations. (See the sketch after this list.)
  • LLMs are more sensitive to examples than instructions
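
A small sketch of the moderation piece (mine, not the session's; pre-1.0 LangChain4j API, with OpenAI models chosen only for illustration): a separate moderation model checks each @Moderate-annotated call, and a ModerationException is thrown if the message is flagged:

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiModerationModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.Moderate;

public class ModerationDemo {

    interface ModeratedAssistant {
        @Moderate                        // run the moderation model against the user message
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        ModeratedAssistant assistant = AiServices.builder(ModeratedAssistant.class)
                .chatLanguageModel(OpenAiChatModel.builder()
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .moderationModel(OpenAiModerationModel.builder()   // a different model just for validation
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .build();

        // Throws ModerationException if the moderation model flags the message.
        System.out.println(assistant.chat("Tell me about antibody screening."));
    }
}
```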

Testing approaches

  • Test via human evaluation (thumbs up/down)
  • AI assisted

Websites

  • swebench.com – benchmark for closing GitHub issues
  • llm-price.com – shows prices per token and per million tokens
  • JUnit Pioneer – test retry (see the sketch after this list)
  • Examples from session: https://github.com/LizeRaes/ai-drug-discovery
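
Since LLM answers are nondeterministic, JUnit Pioneer's retry support pairs nicely with the testing approaches above. A hedged sketch (the assistant call is a hypothetical placeholder, not from the session):

```java
import org.junitpioneer.jupiter.RetryingTest;

import static org.junit.jupiter.api.Assertions.assertTrue;

class AssistantRetryTest {

    @RetryingTest(3)   // replaces @Test; passes if any of up to 3 attempts passes
    void answerMentionsAntibodies() {
        String answer = callAssistant("What binds to this antigen?");
        assertTrue(answer.toLowerCase().contains("antibod"));
    }

    // Hypothetical helper; in a real test this would call the LangChain4j assistant.
    private String callAssistant(String prompt) {
        return "Several antibodies bind to this antigen.";
    }
}
```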

My take

Excellent examples. The real-world scenario of diseases/antigens/antibodies was good. Good concepts and a great demo. Showing Prometheus/Grafana was good as well.