Speaker: Steve Poole
@spoole167
For more, see the 2024 DevNexus Blog Table of Contents
General
- Supply chain
- Now we are all attack vectors
Wifi
- We also use wifi
- How many use VPN?
- Easy to spoof a wifi access point
- Only needs a battery, a Raspberry Pi, and a few more things
- Would you notice a box on the wall?
Charger
- Plugging in a Mac laptop charger at a conference
- If left unattended, someone could add hardware
- Any USB connection has this problem
- A USB data cable and a power-only cable look the same
Hotel rooms
- Hidden camera
- In some countries during the Cold War, they used human choreography to influence where you'd sit
- Becoming more common
- More people are a pass-through into the company now
Phishing
- Getting better
- More targeted. Attackers can know how a company does things, or know the boss's name
- Phishing -> Spear Phishing -> Personalized Attacks
- Moving towards more organized and long term attacks
Adding AI
- Deep fakes. Takes 7 seconds of voice to impersonate you. Example of a CEO being targeted
- Speechify can clone a voice
- Video deepfake used to steal $25 million: on a video call, people can look like who you expect but be controlled by others
- TikTok with Deep Fakes of Tom Cruise
- Can’t tell if a video is real
- Better fakes if you spend more money
Bad things it can do
- Deepfake nude generators
- Deepfake phishing grew by 3,000% in 2023
Why now
- Not hard to do a reasonable fake. A USB accelerator is sixty bucks
- Huggingface.co has lots of models
- Models and data are available to you and to the bad guys
Other problems
- Businesses are looking at using AI/ChatGPT. Risk of bad info (Air Canada was held liable for its chatbot’s bad answer) or another command-injection opportunity (someone got a chatbot to run Python)
- NIST report on prompt injection
- The Hugging Face workflow has you clone a repo and run pip install to work with a model
- Python has a lot of dependency-confusion problems. Python also has more intermediate binaries on GitHub
- Example: the malicious aptX package
- Java has more protections than Python ecosystem
- Models can be poisoned to act as sleeper agents. Ex: depending on circumstances (date, IP, etc.), give a bad answer
- Mutating to avoid checks
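Dependency confusion and typosquatting both rely on package names that look almost right. A minimal sketch (the allow-list, threshold, and function name are my own illustration, not from the talk) of flagging near-miss package names with edit-distance similarity:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list: the packages your project actually depends on.
KNOWN_PACKAGES = {"requests", "numpy", "pandas"}

def suspicious(name: str, known=KNOWN_PACKAGES, threshold=0.85) -> list[str]:
    """Return known package names that `name` closely resembles but doesn't match.

    A near-match that isn't an exact match is a typosquatting red flag.
    """
    name = name.lower()
    if name in known:
        return []  # exact match: nothing suspicious
    return [k for k in known
            if SequenceMatcher(None, name, k).ratio() >= threshold]

# "reqeusts" is one transposition away from "requests", so it gets flagged.
print(suspicious("reqeusts"))  # -> ['requests']
print(suspicious("flask"))     # -> []
```

Real tooling would check this against your lockfile before install; the point is that a cheap similarity check catches the "looks almost right" names that dependency confusion depends on.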
How to protect yourself
- Paper on identifying mouth inconsistencies for lip synching
- Text/numbers wrong
- Find anomalies caused by gaps in training data. This is going to be an arms race: once the AI knows what it got wrong, it can do better next time
- Be more suspicious
- Secure supply chain – all the pieces involved in creating and delivering software
- Control AI tools in process
- Look at where models came from and decide if they’re safe. You’ll have to prove where you got them
- Consider how you train AI and when you retrain it
- Government wants an SBOM, an automated supply chain, evidence of software integrity, and regular audits
- SBOMs (software bills of materials) don’t find malicious code, but they ensure you know what you have
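An SBOM is, at heart, an inventory of what is in your build. As a minimal illustration (not a replacement for real SBOM tooling and formats like CycloneDX or SPDX, which add hashes, licenses, and component relationships), Python's standard `importlib.metadata` can enumerate the packages installed in the current environment:

```python
from importlib.metadata import distributions

def minimal_sbom() -> list[dict]:
    """Inventory of installed packages: the 'know what you have' half of an SBOM."""
    components = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in distributions()
    ]
    # Sort for a stable, diff-able inventory.
    return sorted(components, key=lambda c: (c["name"] or "").lower())

for component in minimal_sbom():
    print(f'{component["name"]} {component["version"]}')
```

Even this crude listing supports the point above: it won't spot malicious code, but a recorded, diff-able inventory tells you what you're actually shipping.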
My take
The demos were great. Security has changed a lot. Good emphasis on how fake quality depends on how much money you spend on it. It’s scary, but it’s supposed to be. I need to think about what else I can do in my own life.
Someone challenged him, saying the grandparent scam sounds fake and nothing like the person. Steve didn’t get to reply, but it’s not a fair analogy. The grandparent scam isn’t targeted (at least not much). Someone targeting you specifically will have audio/video of you to base it on. And then we are back to: 7 seconds is enough.