PASSED! Scott’s Experience Taking the Java 21 Certification Exam 1Z0-830

As we mentioned earlier this week, Oracle announced their new Java 21 Certification Exam 1Z0-830. Being a glutton for punishment, I signed up as soon as it was available to take (Thursday). I passed! But with a much tighter margin than I expected. Read on for more details.

What can I say about the exam? Well, first, signing up for and taking the exam is a completely different process. It’s all remote now! We will be posting a series of articles covering the steps to take the exam (and there are many!), so I’ll leave off those details for now.

The 1Z0-830 exam itself is very different from past exams. The first hint was that they raised the exam time from 90 to 120 minutes. And I can see why! The questions (and answers) on the exam are quite long. While there was a handful of single-page questions, the vast majority required scrolling multiple pages. This was compounded by the fact that, for some questions, each multiple-choice option contained 20-30 lines of code, with 6-10 options available. Process of elimination can be a slow process if you’ve got to eliminate 9 out of 10 answers!

I don’t know about you, but reading over a hundred lines of code for a single question is really time-consuming! I ended up finishing with only 5 minutes to spare.

There’s no other way to put it: the exam was difficult. While I think all of the questions were in scope (and covered in our new Java 21 1Z0-830 book coming out later this year!), I’ve never seen so many topics mixed into a single question. As an example (not real!), a question might have a 15-line code sample and then ask you to select 2 out of 6 interface declarations (~20 lines each) that will make the code print “Hello World”.

If you expected the options to all be similar, you’d be wrong!

Some of the options were vastly different from one another, testing all sorts of things. As a further example (again, this isn’t a real question), one interface might be wrong because it includes a private instance variable, while another might be incorrect because of some inheritance issue. Furthermore, another might be wrong because it includes a pattern matching switch statement that is missing a default clause.
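To make that concrete, here’s a sketch of the kinds of broken options I mean. These are my own invented examples, not actual exam content:

// Option A: does not compile – an interface cannot declare a
// private instance variable (interface fields are implicitly
// public static final).
interface Greeter {
    private String name = "World";   // DOES NOT COMPILE
    String greet();
}

// Option B: does not compile – a pattern matching switch over
// Object must be exhaustive, and these cases don't cover every
// possible type (no default clause).
interface Printer {
    default String label(Object obj) {
        return switch (obj) {        // DOES NOT COMPILE: missing default
            case Integer i -> "number " + i;
            case String s  -> "text " + s;
        };
    }
}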

TLDR: Oftentimes, each question covered topics across multiple objectives.

There was the occasional question that limited its breadth to a single objective, but I found that to be the exception, not the rule.

As for content, I can’t give too much away but I will say this:

  • [New Java 21 Feature] Pattern matching switch was definitely on the exam (with and without records); see the sketch after this list
  • [New Java 21 Feature] I didn’t get a question about virtual threads or sequenced collections, but that’s likely just bad luck of the draw.
  • Previous exam topics like JDBC/Annotations/Security were not on the exam (as reflected in the objectives).
  • Logger.getLogger() appeared on the exam, but don’t panic. You don’t need to know anything beyond it being an alternative to System.out.println().
  • Records were definitely on the exam. Like. A lot.
  • Streams, while certainly still on the exam, weren’t as challenging or centrally focused as they had been on previous exams. In fact, when I saw a stream question, I was excited because they tended to be shorter and easier to read (the direct opposite of my previous exam experience!).
  • Modules were on the exam. I actually thought the module questions were fairer and more self-contained than other questions, in part because you can’t easily mix modules with other topics like pattern matching or threading.
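For those who haven’t seen it yet, here’s a minimal example of the pattern matching switch syntax (with records) that the first bullet refers to. This is my own illustration, not an exam question:

public class PatternDemo {
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    static double area(Shape shape) {
        return switch (shape) {
            // Record deconstruction patterns bind the components directly.
            case Circle(double radius) -> Math.PI * radius * radius;
            case Rectangle(double width, double height) -> width * height;
            // No default needed: the sealed hierarchy makes this exhaustive.
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(2)));        // 12.566...
        System.out.println(area(new Rectangle(3, 4)));  // 12.0
    }
}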

When taking the exam, you absolutely need to pace yourself! If you get a question that’s just too long, you should definitely skip it and come back later. Unfortunately for me, at least half of the questions were quite long to read. Another tip is to study the options carefully (which is really hard to do in a short time, given the 20-30 line length of each option) to try to spot differences. For example, if you can spot that 4 out of 8 options use a bad modifier or return type, you can answer the question much more reliably. Unfortunately, many of the questions bordered on being eye exams. While I could have easily spotted a private modifier in the wrong place on a short code sample, add multiple layers of inheritance and dozens of lines of code, and it blended in surprisingly well. The questions were fair, but quite difficult.

Oh, and scrap paper. You need scrap paper. There is no in-app whiteboard (like they had in the past for some exams). Since it’s proctored remotely, I asked my proctor before I started whether I could use blank sheets of paper. I held the sheets up to the camera one at a time, and they approved. I don’t know if Oracle has a policy, so it might be proctor specific. Scrap paper was critical in part because if you have to pick 2 out of 10 options, you need some way to track what you’ve eliminated (the right-click to cross out feature has been off the exam for years now). There were also a number of questions that involved doing math (order of operations, nested for() loops with lots of variables, etc.) where tracking variable states by hand was hard.
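To give a flavor of that variable-tracking math, here’s a made-up example (far shorter than the real thing):

public class LoopMath {
    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 3; i++) {
            for (int j = i; j < 3; j++) {
                total += i * j;
            }
        }
        // Track it by hand on scrap paper:
        // i=0: j=0,1,2 -> 0+0+0 = 0
        // i=1: j=1,2   -> 1+2   = 3
        // i=2: j=2     -> 4
        System.out.println(total);   // prints 7
    }
}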

On the plus side, my results were available immediately on the screen as soon as I submitted the exam, and the certificate was posted to my CertView within the hour.

We’ll have more details after Jeanne takes the exam, so stay tuned!

AWS Summit 2024

I went to AWS Summit New York today, a free one-day conference. It’s the first time I’ve gone. I didn’t live blog but am writing a summary post of my day after the fact.

Overview

AWS spent a ton of money on this event. They rented out all or most of the Javits Center in NYC (this is where NYC Comic Con is held). They gave out coffee/soft drinks and even free lunch. They also spent a lot of money on security, and with cause: there were protesters right outside the front door.

I tried to experience the major parts of the event.

Expo

The exhibit hall, on the third floor, was large, with lots of cloud-related vendors. There were also some fun activities like drone and toy car racing. Lots of space for sitting/networking.

There were also some stages in the expo for shorter (15-30 minute) talks. They had headphones for people who couldn’t filter out the background noise of the expo. It was nice because you could flit by and see if you were interested. I listened to some pieces of cert/education talks and a full one from Elastic on LLMs and summarizing security incidents.

Breakouts

There were lots of one-hour breakout sessions on the first floor. I went to two customer success stories (Venmo and Fannie Mae). It was dark in the breakout rooms, like most places have for keynotes.

Learning highlights for Venmo

Key strategies

  • Distribute load to maximize processing throughput
  • Use event-based systems for anything not in the critical path (see the sketch after this list)
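Here’s a minimal sketch of what that second strategy looks like in code. The names and the in-memory queue are stand-ins for a real event bus (Kafka, Kinesis, etc.), not Venmo’s actual implementation:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PaymentService {
    // Stand-in for a real event bus; a separate consumer drains this queue.
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    public void pay(String from, String to, long cents) {
        debit(from, cents);   // critical path: must happen synchronously
        credit(to, cents);    // critical path

        // Non-critical work (feed story, notification) goes on the queue
        // and is handled asynchronously, off the critical path.
        events.offer(from + " paid " + to + " " + cents + " cents");
    }

    private void debit(String account, long cents)  { /* ... */ }
    private void credit(String account, long cents) { /* ... */ }
}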

Other notes

  • Django app. Used Celery for async work and reader DB instances for queries that can use them
  • Then added DynamoDB, MongoDB, OpenSearch Service, a data lake, microservices, Cassandra (for the microservices), and Kafka
  • Split MySQL into Aurora MySQL-compatible, secondary MySQL, and analytics MySQL databases

Social feed data migration

  • Transactions are visible and traffic is high because it’s the home screen
  • Every transaction generates a feed story, along with certain profile operations
  • 3.6TB of data, 5.6 billion entries
  • Single-digit latency on data retrieval
  • Memory usage was at 90%
  • Switched to DynamoDB due to cost (90% less), performance (equivalent), managed service, data encryption at rest, and integration with other AWS offerings
  • Migrated via backfill followed by dual writes. This let them verify performance under production load and confirm the data was consistent. Then they started ramping up reads on the new database, starting with 1% reading from the new DynamoDB. Finally, they cut off writes to the old MongoDB (see the sketch after this list)
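A hypothetical sketch of that dual-write and ramped-read approach (all types and names are invented for illustration, not Venmo’s code):

import java.util.concurrent.ThreadLocalRandom;

public class FeedRepository {
    private final FeedStore oldStore;   // e.g., the MongoDB being retired
    private final FeedStore newStore;   // e.g., the new DynamoDB
    private volatile int readPercentOnNew = 1;   // start at 1%, ramp up

    public FeedRepository(FeedStore oldStore, FeedStore newStore) {
        this.oldStore = oldStore;
        this.newStore = newStore;
    }

    public void write(FeedStory story) {
        // Dual writes keep both stores consistent during the migration.
        oldStore.save(story);
        newStore.save(story);
    }

    public FeedStory read(String id) {
        // Route a growing percentage of reads to the new store.
        if (ThreadLocalRandom.current().nextInt(100) < readPercentOnNew) {
            return newStore.find(id);
        }
        return oldStore.find(id);
    }

    public void setReadPercentOnNew(int percent) {
        readPercentOnNew = percent;   // ramp 1% -> 100%, then drop old writes
    }
}

interface FeedStore {
    void save(FeedStory story);
    FeedStory find(String id);
}

record FeedStory(String id, String text) {}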

Offloading transaction history

  • For each payment, put a message on a Kafka queue and write to Cassandra via a microservice. Implemented as a best-effort write. They needed to guarantee 100% of the data so they could move over use cases that required full-fidelity data
  • Switched to a write-ahead log – write a log message saying you intend to perform an action and store it in DynamoDB. Then process the transaction/publish the message. Finally, delete the intended-action message now that it has completed. A background process looks for pending messages (see the sketch after this list)
  • Async payment processing using Kinesis
  • Problems: batches were huge and inconsistent for credit card usage, delays and outages were costly, they couldn’t just send a 500 error (they need to reconcile), and there was no way to replay transactions internally
  • Added Kinesis Data Streams via a thin wrapper to put a message on the stream and acknowledge success upstream. From Kinesis, consumers/Lambdas process the messages. Also using Aurora, DocumentDB, ElastiCache, DynamoDB, and SQS
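Here’s a rough sketch of that write-ahead (intent) log flow. The store and bus types are invented placeholders, not Venmo’s actual code:

import java.util.List;

public class PaymentProcessor {
    private final IntentStore intents;   // stand-in for the DynamoDB intent table
    private final MessageBus bus;        // stand-in for Kafka/Kinesis

    public PaymentProcessor(IntentStore intents, MessageBus bus) {
        this.intents = intents;
        this.bus = bus;
    }

    public void process(String payment) {
        // 1. Record the intent first, so a crash can't silently lose the action.
        Intent intent = intents.recordIntent(payment);

        // 2. Process the transaction / publish the message.
        bus.publish(payment);

        // 3. Delete the intent now that the action has completed.
        intents.deleteIntent(intent.id());
    }

    // Background process: retry anything recorded but never deleted.
    public void retryPending() {
        for (Intent pending : intents.findPending()) {
            bus.publish(pending.payment());
            intents.deleteIntent(pending.id());
        }
    }
}

interface IntentStore {
    Intent recordIntent(String payment);
    void deleteIntent(String id);
    List<Intent> findPending();
}

interface MessageBus {
    void publish(String message);
}

record Intent(String id, String payment) {}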

Key learnings for Fannie Mae

Data science research

  • compared research vs. development – ex: research has POCs, live prod data, latest tools/patterns
  • pillars of the platform:
    • data access – prod data, data usage contracts
    • governance – controlled by the business, not tech; automated integration with governance
    • operationalization – testing, validation, CI/CD
  • data science controls:
    • register research activities in the CMDB so resources can be provisioned/tagged. Automated provisioning, streamlined architect review process
    • data access/sharing contracts, permissions, ingress/egress rules, sensitive data protection rules
    • code deployment and change management – CI/CD, scanning
  • Data science platform architecture:
    • code/image repo
    • public data endpoints
    • code/package library
    • read-only access to the enterprise data lake
    • research envs –
      • collaboration – just-in-time access – read-only access to the prod enterprise data lake. Results can’t be shared; considered dev
      • validation – testing/shakeout – still read-only
      • operation – headless execution – now can write to prod, create reports, and share externally
  • data access is JIT (just in time). Fannie Mae has a patent on this (see the sketch after this list)
    • request access to data. Could be from many data sources
    • the JIT access engine checks against coarse-grained contracts
    • then it goes to the policy manager to check fine-grained access controls. A UI is used to create rules. A new role is created dynamically so a token can be used to access the data
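As a rough illustration of that flow (entirely invented names; the real, patented system is obviously more involved):

public class JitAccessEngine {
    private final ContractStore contracts;   // coarse-grained data contracts
    private final PolicyManager policies;    // fine-grained access rules

    public JitAccessEngine(ContractStore contracts, PolicyManager policies) {
        this.contracts = contracts;
        this.policies = policies;
    }

    public AccessToken requestAccess(String user, String dataSource) {
        // 1. Coarse-grained check against the data usage contracts.
        if (!contracts.allows(user, dataSource)) {
            throw new SecurityException("No contract covering " + dataSource);
        }
        // 2. Fine-grained check; the policy manager creates a role dynamically.
        Role role = policies.createDynamicRole(user, dataSource);
        // 3. The caller uses a token tied to that role to access the data.
        return new AccessToken(role);
    }
}

interface ContractStore { boolean allows(String user, String dataSource); }
interface PolicyManager { Role createDynamicRole(String user, String dataSource); }
record Role(String name) {}
record AccessToken(Role role) {}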

Building a generative AI use case

  • Used Anthropic’s Claude 3 Sonnet via Amazon Bedrock and Amazon Neptune (graph db)
  • A lot of analysis of unstructured documents; an average of 5 hours per doc and 8K docs per year
  • Deep Insight for LLM-driven knowledge extraction. Uses an ontology (schema) and an LLM to generate knowledge graphs. Human in the loop to validate. Then a knowledge utilization step to use natural language via a chatbot
  • taxonomy – linear top-down hierarchy. Ontology – interconnected network representation
  • Disambiguation is important to avoid duplication
  • graph database:
    • reduces the risk of hallucinations because of more context
    • two types –
      • Property Graph (Apache TinkerPop). Query with Gremlin or Cypher
      • RDF Graph (from W3C). Query with SPARQL
  • extraction uses Bedrock, Fargate, Lambda, Neptune, and S3
  • utilization uses Bedrock, Fargate, Neptune, and a chatbot
  • also uses LangChain – the Neptune OpenCypher QA chain (converts natural language queries into Cypher so it can run the query) – and Amazon OpenSearch
  • challenges:
    • picking an ontology framework – chose Turtle (Terse RDF Triple Language) for reusability/ease of reading
    • finding the best way to chunk – chose sections so complex tables are handled better
    • picking a graph type – chose property graph due to better OSS framework support
    • Amazon Kendra (enterprise search) did not integrate with Amazon Neptune. Used LangChain’s NeptuneOpenCypher QA Chain instead

Chalk Talks

Chalk talks were also on the first floor. They were also an hour but had less prepared content. The one I went to had 20 minutes of talking/demos. Most of the time was Q&A or discussion. They had a whiteboard with a camera to show what was on it so the speakers could write/draw in real time. This meant one projected screen was the computer and one was the physical whiteboard.

Learning highlights

  • gen AI customers want to know what model to use, how to move quickly, and how to keep data secure/private
  • Bedrock provides foundation models via a single API, model customization, RAG (Retrieval-Augmented Generation), agents for multi-step tasks, and security/privacy/safety
  • Models include Amazon’s models, Anthropic, Cohere, Meta, etc. And lots of variants/versions of each.
  • Two use cases: observability of generative AI itself, using gen AI to help with observability
  • gather metrics – ex: the number of tokens used for input/output (see the sketch after this list)
  • collected metadata/requests/responses to understand how customers use the models
  • governance/controls/guardrails
  • CloudWatch – analyze invocation logs, protect sensitive data, real-time metrics and alarms (ex: more latency on a different version of Claude), single pane of glass/dashboard
  • recorded demo #1 (while the video was recorded, he narrated live and paused periodically to say more)
  • can send model invocation logs to either S3 (if using another logging system) or CloudWatch
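A generic sketch of that kind of metric gathering. The client and sink interfaces here are invented stand-ins, not the actual Bedrock or CloudWatch APIs:

public class MeteredModelClient {
    private final ModelClient delegate;   // stand-in for the model invocation API
    private final MetricSink metrics;     // stand-in for a metrics backend

    public MeteredModelClient(ModelClient delegate, MetricSink metrics) {
        this.delegate = delegate;
        this.metrics = metrics;
    }

    public String invoke(String prompt) {
        long start = System.nanoTime();
        String response = delegate.invoke(prompt);
        long latencyMs = (System.nanoTime() - start) / 1_000_000;

        // Rough word-count token estimates; real systems take the
        // counts reported in the model's response instead.
        metrics.record("input_tokens", prompt.split("\\s+").length);
        metrics.record("output_tokens", response.split("\\s+").length);
        metrics.record("latency_ms", latencyMs);
        return response;
    }
}

interface ModelClient { String invoke(String prompt); }
interface MetricSink { void record(String name, long value); }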

Builder Sessions

Also on the first floor, these were small-group labs. I went to one on Amazon Q. They had 4 areas in the room with 10 chairs each. An instructor from AWS was allocated to each group. After a short intro, the instructor helped anyone stuck and answered questions. This was great.

The lab had an access code good for three hours, so you could continue a little longer if you wanted. In theory, there was separate wifi for the lab, but it didn’t work. The main conference wifi was fine though.

Learning highlights

  • Amazon Q Developer has a free and paid version.
  • The paid version promises not to learn from your data. It’s licensed per person but only billed if the developer uses it in a given month.
  • IDE integration for VS Code and IntelliJ.
  • Chat bar. Often gives sources/links. Data is from 2023 for the public internet; uses RAG for Amazon topics, so those are more recent
  • Can explain code, refactor code, fix code, and migrate to a later version of Java. Can also write a plan for writing code and then write the code (with some errors)
  • CodeWhisperer was folded into Q
  • It was slow, but I was on a conference network

Main dev activities

  • planning – docs, examples, design
  • creating – generate code, manage infra
  • test and secure – test cases, scan for security vulnerabilities
  • operate – identify and mitigate code issues, monitor performance and efficiency
  • maintenance and modernization – modernize and update old code languages and dependencies

Amazon Q Developer tries to help with all phases

  • plan – explain code with conversational coding (chatbot)
  • create – inline code complete, conversational coding
  • test/secure – unit test generation, OWASP top 10 security scanning
  • operate – debug/optimize code with conversational coding
  • maintenance and modernization – update legacy code with an agent

Keynote

The keynote was in a big room that wouldn’t fit everyone. They also used all the breakout rooms as overflow and streamed to the stages in the expo. I liked that, as it was easy to eat and listen. Or talk to the vendors and listen to parts. Or not.

Announcing our new OCP Java 21 1Z0-830 Exam Study Guide!


Jeanne and I are thrilled to announce that our new Java OCP 21 Study Guide is coming out soon! It is based on the recently announced Oracle 1Z0-830 Exam for Java 21. Preorders are starting now!

You can be confident when purchasing our book that it contains everything you need to pass the exam. In fact, we were part of a select group that worked with Oracle to help develop and refine the exam. We even helped kick JDBC off the exam (as it only tested obscure topics), so you have one less thing to study!

What’s new about the 1Z0-830 Java 21 exam? Obviously, new features of Java 21 like virtual threads, sequenced collections, and much broader support for pattern matching are covered. What’s the same? The format (50 questions) and passing score (68%) carry over from the previous exam, although the time has been increased from 90 to 120 minutes.

Jeanne and I are nearly done writing the book, so expect it in bookstores by the end of the year! We’ll continue to post updates on the OCP 21 Book page.