[kcdc 2025] Passkeys: The end of Passwords and the Future of Authentication

Speaker: Mateusz Zajac

For more see the table of contents


General

  • Don’t need complex passwords
  • Phishing proof
  • Public key crypto + biometrics
  • One tap sign in
  • Secure
  • Fewer breaches
  • Simpler flows
  • Lower support costs – fewer password resets/tickets
  • Lower fraud – starting to move to customer facing apps like travel. Not just finance
  • 1 billion people use daily

Problems with passwords

  • Easy to guess/steal
  • Phishing
  • Credential stuffing – if one account falls, others follow
  • Server breaches. Most common attack
  • Users have to keep track

Passwords vs Passkeys

  • Passkeys are auto-generated. Passwords have to be typed (twice at signup).
  • Passkeys can use face id
  • Passkeys don’t require resets. The password reset flow has many steps, including picking something memorable but different from the last batch of passwords. 57% of users forgot their password after resetting. 30-40% of help desk calls are password reset related
  • 81% breaches involve compromised credentials
  • 51% of people reuse passwords
  • 2.5 million passwords stolen each week
  • Passkeys synced via iCloud
  • 92% of users give up and don’t try to reset
  • 400 million Google accounts use passkeys

2FA

  • SMS codes are phishable
  • Push fatigue – attackers keep sending notifications until the user gives in and clicks approve

Passkey

  • Pair of keys
  • Private key on your device
  • Private key kept safe
  • Phone shares the public key with the website
  • Website sends a challenge that needs the private key to solve
  • Face ID unlocks the private key, which solves the challenge
  • Sign ins are four times faster than passwords

Amazon login example

  • One time setup – your device creates a private/public key pair. Amazon stores public key
  • When try to login, Amazon sends a cryptographic challenge. This avoids replay attacks.
  • Your phone uses Face ID to confirm it is you. Then the phone signs the challenge with the private key and sends it to Amazon. Amazon verifies the signature and authenticates
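The setup and login steps above can be sketched with plain JCA crypto. This is a simplified illustration, not real WebAuthn: an actual passkey signs structured data (authenticator data plus a client data hash that binds the origin), not a raw challenge, and all names below are made up.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

public class PasskeySketch {
    // Runs the whole round trip; returns whether the server accepts the login.
    static boolean roundTrip() throws Exception {
        // One-time setup: the device creates a key pair and the server
        // (Amazon in the example) stores only the public key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair deviceKeys = kpg.generateKeyPair();
        PublicKey storedAtServer = deviceKeys.getPublic();

        // Login: the server sends a fresh random challenge (prevents replay).
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);

        // Device: after Face ID confirms the user, sign the challenge with
        // the private key, which never leaves the device.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(deviceKeys.getPrivate());
        signer.update(challenge);
        byte[] assertion = signer.sign();

        // Server: verify the signature with the stored public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(storedAtServer);
        verifier.update(challenge);
        return verifier.verify(assertion);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("authenticated = " + roundTrip());
    }
}
```

Because only the public key ever reaches the server, a server breach leaks nothing a phisher can replay.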

Phishing prevention

  • Scammer tries with a fake site
  • Your phone refuses to sign because domain is wrong
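Why the fake site fails, in rough terms: each passkey is bound to a relying-party ID (the real domain), and the credential is only usable when the page’s origin matches it. The check below is an illustrative sketch with made-up names; real browsers apply stricter rules (including the Public Suffix List).

```java
import java.net.URI;

public class RpIdCheck {
    /** Simplified registrable-domain match: the origin's host must equal
     *  the relying-party ID or be a subdomain of it. */
    static boolean originMatchesRpId(String origin, String rpId) {
        String host = URI.create(origin).getHost();
        return host.equals(rpId) || host.endsWith("." + rpId);
    }

    public static void main(String[] args) {
        // The real site matches; a lookalike domain does not, so the
        // passkey is never offered to the phishing page.
        System.out.println(originMatchesRpId("https://www.amazon.com", "amazon.com"));
        System.out.println(originMatchesRpId("https://amazon.com.evil.example", "amazon.com"));
    }
}
```

The key point is that the user never makes this decision, so there is nothing to trick them into.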

iOS Code

  • WebAuthn
  • FIDO2 – gets url, challenge size, etc

Cross Device Sign in

  • Website generates a QR code
  • Scan with your phone, which uses Bluetooth to verify physical proximity
  • Single use
  • Expires quickly
  • Private key never leaves device
  • Useful if want to log in from someone else’s computer

Challenge

  • If you lose your phone
  • Cross platform sync
  • Inconsistent browser support
  • Human factors – trust, education

Good references

  • w3.org/TR/webauthn
  • fidoalliance.org
  • developer.apple.com/passkeys
  • etc

Informal Q&A

  • Two people had facial recognition not work
  • External device

My take

Great comparison and great statistics.

[kcdc 2025] AI Vulnerabilities in 2025 – Some of the darker sides of AI

Speaker: Andreas Erben

For more see the table of contents


From the news

What drives model behavior?

  • Initial training
  • Fine tuning
  • Potential customer fine tuning
  • System prompt – how the model should behave.
  • Data/prompt
  • Filters on data, prompts, output

Other links

My take

After the first example, there was a bunch of content on how LLMs work. I tuned out a little during that section, probably because it was familiar. Then it got interesting again.

[devnexus 2024] breaking ai: live coding and hacking apps with generative ai

Speaker: Micah Silverman

For more, see the 2024 DevNexus Blog Table of Contents


Changes/Trends

  • AI is not a silver bullet
  • Treat AI like a junior dev and verify everything produced
  • New iteration of copy/paste – more code like what we got from Stack Overflow
  • We don’t do thorough code reviews of library code. Look at security, number of committers, when last released
  • Different because our name is now on the commit
  • More challenging to maintain visibility
  • Backlog ever growing

Common use in Dev

  • Adding comments
  • Summarizing code
  • Writing Readme
  • Refactoring code
  • Providing templates
  • Pair programming
  • Generating code (the new stack overflow)

Stats

  • 92% software dev using AI in some form
  • Those who use AI are 57% faster
  • Those who use AI are 27% more likely to complete task
  • 40% of Copilot-generated code contains vulnerabilities. Same stat without AI, so this is what we trained it on
  • Those using AI wrote less secure code but believed it was more secure. We trust AI too much. Junior devs get more scrutiny than senior devs
  • [Some of these stats aren’t causation. Ex: early adopters]

Using AI well

  • Good starting point
  • Don’t use without review

Problems

  • Hallucinations – including supporting “evidence” even where wrong. Gave addition example
  • ChatGPT got worse at math over a few months – from 98% to 2%. Now there are some math-specific AIs
  • “ChatGPT is confidently wrong” – Eelko de Vos
  • First defamation lawsuit – ChatGPT made up case law
  • AI doesn’t know when wrong

AI and Code

  • Asked for an Express app taking name as a request parameter. Tried a bunch of times and the name parameter was never sanitized, so cross-site scripting vulnerabilities. Ideally it wouldn’t auto-generate vulnerabilities
  • Can give bad advice – asked if code was safe from NoSQL injection. GPT and Bard said safe. It was not safe.
  • Samsung put all their code in ChatGPT and leaked code, keys, and trade secrets. It became part of the training data. ChatGPT says it got better about dealing with secrets.
  • Terms of service of ChatGPT say it can use anything as training data
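The cross-site scripting bullet above is about reflected input being written into HTML unescaped. The demo itself was an Express (JavaScript) app; since the rest of this talk used Java, here is a minimal Java sketch of the fix, with made-up names: encode the reflected value before it reaches the page.

```java
public class XssEscape {
    /** HTML-encode the handful of characters that let input become markup. */
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<' -> out.append("&lt;");
                case '>' -> out.append("&gt;");
                case '&' -> out.append("&amp;");
                case '"' -> out.append("&quot;");
                case '\'' -> out.append("&#39;");
                default -> out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Reflected raw, this would execute; escaped, it is inert text.
        System.out.println(escapeHtml("<script>alert(1)</script>"));
    }
}
```

In a real app you would use a vetted encoder (or a templating engine that escapes by default) rather than rolling your own.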

Sample Conference App

  • Using CoPilot
  • Spring Boot app in IntelliJ

Basic example

  • Gave co-pilot comments/prompts to create code
  • Showed generating code to read a file that doesn’t exist due to a typo

JPA SQL injection example

  • Noted this is not the most common way to write JPA here in 2024; you usually don’t need a direct query
  • Showed Copilot offers a prompt to get the result
  • Tried adding to the prompt to protect against SQL injection. Got a naive regex sanitizer
  • Then tried requesting named parameters in the query. After that, did prompts for setting the parameter and getting the result. All was well.
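The difference at the query-string level looks like this. A hypothetical sketch, not the demo’s code: with a real EntityManager the safe version would be run as em.createQuery(safeJpql(), Person.class).setParameter("name", name).

```java
public class JpqlInjectionDemo {
    // UNSAFE: user input is pasted into the query text, so a value like
    // "x' or '1'='1" breaks out of the string literal and matches everyone.
    static String unsafeJpql(String name) {
        return "select p from Person p where p.name = '" + name + "'";
    }

    // SAFE: the query text is fixed; :name is bound later with setParameter,
    // so the input can never be parsed as query syntax.
    static String safeJpql() {
        return "select p from Person p where p.name = :name";
    }

    public static void main(String[] args) {
        System.out.println(unsafeJpql("x' or '1'='1"));
        System.out.println(safeJpql());
    }
}
```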

Example

  • Tried getting a file, then with a file separator
  • Successfully got a constant defined in the file, so some context sensitivity. But ignored the file separator request from the prompt
  • Then requested saving the file and it successfully generated code to write it
  • Requested getting a person and successfully called already written getPerson() method
  • Then set image name
  • Copilot offered to write a prompt to save the person.
  • Then requested adding the message, and it added the attribute to the model
  • However, it has a path traversal issue
  • Showed BurpSuite monitoring as run example. “Send to repeater” keeps cookies and such while letting alter request.
  • Changed the file name to “../image/snyklogo.png”. If it works, it will replace the logo with the uploaded pic
  • Showed the Snyk IDE extension, which also notes the path traversal issue
  • Tried asking to sanitize input against path traversal and got a check for two dots. Not good enough, but a first pass
  • Tried asking for a whitelist to protect against directory traversal. It checked the directory name prefix, which is better but also not a whitelist
  • Then tried requesting validation that there is no path traversal using the normalize method. It did what was requested, including the prefix check from the previous prompt, but Micah noted that’s what it did in the past
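The normalize-based check can be sketched like this (the base directory path is an assumption for illustration): resolve the user-supplied name against the base, normalize away any ".." segments, then confirm the result is still under the base.

```java
import java.nio.file.Path;

public class UploadPathCheck {
    static final Path BASE = Path.of("/var/app/uploads");

    /** Resolve against the base, normalize away ".." segments, then
     *  verify the result is still inside the base directory. */
    static boolean isSafe(String fileName) {
        Path resolved = BASE.resolve(fileName).normalize();
        return resolved.startsWith(BASE);
    }

    public static void main(String[] args) {
        System.out.println(isSafe("snyklogo.png"));          // stays in uploads
        System.out.println(isSafe("../image/snyklogo.png")); // escapes uploads
    }
}
```

Comparing normalized Path objects beats string checks: Path.startsWith matches whole path components, so a sibling directory like /var/app/uploads-evil doesn’t slip past the way a raw prefix check would.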

What can do

  • Use a tool to scan code and tell you when it makes a mistake
  • Learn. Ex: Snyk has a lesson on prompt injection – getting AI to tell info it shouldn’t

Future

  • Currently have Math aware AIs. Maybe will have security aware AI in future

My take

Good mix of background and live code. I like that Micah didn’t assume security knowledge while keeping it engaging for people who are familiar. I had never seen BurpSuite, so that was a happy bonus in the demo.