Back for a second year, the National Missing Persons Hackathon (NMPH) took place primarily online this time round due to COVID-19 restrictions, with only two physical locations set up: ACT and WA.

Personally, I found this year's event much more difficult in terms of finding information on the missing persons. That said, I'm proud to say that my team and I still managed to finish 3rd overall for a second year running.

If you haven't read my NMPH 2019 Key Takeaways article, I would suggest that you do, as it covers some very valuable points which will assist contestants in all Trace Labs events, not just the NMPH.

Key Takeaways

  • The first takeaway is actually quite funny, as my own advice from last year about selecting the best target to begin the investigation backfired! In my previous article I mentioned that “selecting individuals with uncommon or unique names” will benefit your investigations and make searching for the individual so much easier.

    Now this isn't entirely false, and I would still recommend this approach in future competitions. In this particular scenario, though, I began by selecting the two (what I thought were) most uniquely named individuals. Both were of Asian heritage, and I assumed there would be very few individuals with similar names, as I had personally never come across those names before. Boy was I wrong!

    It turned out that there were A LOT of people with similar names. Considering the Asian population within Australia is roughly 15–20%, this was a big oversight on my part and something that wasted a significant amount of time, resulting in only 2–4 pieces of low-scoring intel being found. So the lesson is: just because you think something is unique doesn't mean it's unique! Looking back, I should have consulted with my teammates to determine how unique the names really were.
  • Which leads us to my next takeaway… It's never too late! It took me a good 2 hours to really get going, especially after the initial setback in target selection. I struck out on a lot of the cases and struggled to find even the low-hanging fruit. Halfway through the competition my team was still only sitting around 100th place.

    The second half of the competition went much better once the team and I started getting hits and pivoting off the initial findings. Honestly, all it takes is 3–4 good cases to get the points needed to place in the top 10. Really double down on the easiest cases and don't waste your time on the others.
  • Now this takeaway is somewhat controversial and I was hesitant to even include it, but it's something that I believe needs to be brought up. I don't intend it in a malicious manner, rather as feedback with good intentions.

    After participating in this event for a second year and encountering the same issues, after speaking with a multitude of other contestants who have experienced similar issues, and with judges witnessing inconsistencies on their side, I believe contestants need to be made aware that judges are not always correct. Valid intelligence is sometimes rejected, resulting in time spent over-justifying the connection to a subject.

    It’s great to bring the open source community together, and experience in such events negates most of these issues. However, the fact that individuals who lack industry experience can become judges is concerning. I’m aware that judges undergo training before events, though it would be good to see a requirement from Trace Labs ensuring that all judges have previously competed in a number of TL CTFs or have relevant experience prior to assessing intel. Consistency in judging is key to ensuring a fair competition.

    As I mentioned before, this is not only something that contestants have brought up with me, but judges as well. Often, after this event and others, they will mention having witnessed a 50/50 split, where some judges accept certain intel while others do not.

    I know that in the end the senior judges go over pretty much all the results and clean up any inconsistencies, though it's important to note that the incorrect assessment of intelligence can ultimately impact not only the team but, even more so, the investigation into the subject(s) themselves.

All in all, the event was great. I love Trace Labs and the work they do to help find missing persons and bring closure to their families. As always, I want to give a big thank you to Trace Labs, the AFP and AustCyber for organizing this spectacular event. See you all next year.

I will be posting more articles about OSINT, cyber security, threat intelligence and investigations, so make sure you follow me here and on Twitter @CassiusXIII.
