Tech Challenge Explained

The tech challenge I’ve used at Skuola.net was designed so that any developer could take it, regardless of experience level or programming language.

So the easiest choice was an algorithm test, and among the endless list of tests you can find online, I picked one based on anagrams.

Another self-imposed constraint was simplicity: no one should have to work on it for more than a couple of days. The test could easily be solved in a couple of hours (in fact, the fastest candidate took half an hour).

Here’s the task:

Objective: Check whether an anagram of a string is contained in another string.

Task: Prepare a command-line script that accepts 2 strings as input, checks whether any anagram of a given string A is contained in a string B, and prints out “true” or “false” based on the result of that comparison.

Assume that:

  • The code will preferably be implemented in PHP.
  • A is a string no longer than 1024 characters.
  • B is a string no longer than 1024 characters.
  • No native language functions will be used to anagram a string.
  • The comparison will be case-insensitive.

Example: Given the 2 strings A = “abc” and B = “itookablackcab”, the script will print out “true”, because an anagram of A, “cab”, occurs in string B.
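For reference, here is a minimal, quick & dirty sketch of one possible approach (an illustration I’m adding here, not any candidate’s submission): instead of generating permutations of A, which is what the “no native anagram functions” rule is meant to discourage, you can compare the character counts of A against every window of B of the same length.

```php
<?php
// anagram_check.php — a minimal quick & dirty sketch, for illustration only.
// Usage: php anagram_check.php abc itookablackcab

$a = strtolower($argv[1] ?? '');
$b = strtolower($argv[2] ?? '');

// Build a frequency map of the characters in a string.
function charCounts(string $s): array
{
    $counts = [];
    foreach (str_split($s) as $char) {
        $counts[$char] = ($counts[$char] ?? 0) + 1;
    }
    return $counts;
}

$lenA = strlen($a);
$lenB = strlen($b);
$found = false;

if ($lenA > 0 && $lenA <= $lenB) {
    $target = charCounts($a);
    // A window of B is an anagram of A iff the two frequency maps match
    // (== on PHP arrays compares key/value pairs regardless of order).
    for ($i = 0; $i <= $lenB - $lenA; $i++) {
        if (charCounts(substr($b, $i, $lenA)) == $target) {
            $found = true;
            break;
        }
    }
}

echo ($found ? 'true' : 'false') . PHP_EOL;
```

Recounting every window makes this O(|A| · |B|), which is perfectly fine for 1024-character inputs; the O(n) refinement is sketched further down.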

It’s an easy one, or so I thought. Then, once the submissions started coming in, I began to see all sorts of weird things:

  • No coverage of edge cases (that was the norm)
  • No comments in the code (many, many examples)
  • Many solutions with no OOP
  • 7+ levels of nesting
  • Misunderstood requirements; I even got a simple strpos match
  • Lots of copy & paste from all over the internet (even code that didn’t do what was requested)
  • Scripts that didn’t return the expected result for the example provided
  • My laptop frozen (and subsequently hard-rebooted) by infinite loops eating all the RAM they could possibly allocate
  • Scripts that weren’t CLI at all, but an HTML form instead
  • Source code copy-pasted into the email body; one solution even arrived as a Word document!

What I was expecting instead was something more like this:

  • Source code versioned in a git repo
  • Unit tests (an example sketch follows this list)
  • Some sort of CI
  • Package management via composer
  • A simple README file
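On the unit-testing point, even a handful of PHPUnit tests covering the provided example and a couple of edge cases would have been enough. A hypothetical example (the AnagramChecker class and its contains() method are names I’m inventing here, not part of the assignment):

```php
<?php
// AnagramCheckerTest.php — hypothetical; class and method names are invented for illustration.

use PHPUnit\Framework\TestCase;

final class AnagramCheckerTest extends TestCase
{
    public function testExampleFromTheAssignment(): void
    {
        $this->assertTrue((new AnagramChecker())->contains('abc', 'itookablackcab'));
    }

    public function testComparisonIsCaseInsensitive(): void
    {
        $this->assertTrue((new AnagramChecker())->contains('ABC', 'itookablackCAB'));
    }

    public function testReturnsFalseWhenNoAnagramIsPresent(): void
    {
        $this->assertFalse((new AnagramChecker())->contains('xyz', 'itookablackcab'));
    }

    public function testReturnsFalseWhenAIsLongerThanB(): void
    {
        $this->assertFalse((new AnagramChecker())->contains('abcdef', 'abc'));
    }
}
```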

Thank God, there were a few solutions that really stood out in ways I didn’t expect at the start:

  • A quick & dirty CLI script (i.e. one file straight to the point, aka procedural code) plus a clean version (i.e. a very structured solution satisfying all the expectations mentioned earlier)
  • An additional explanation of the approach followed to reach the solution
  • Scripts with really good performance, i.e. O(n) (sketched below)

Kudos to those guys!
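As for the O(n) solutions, the trick is to keep a running character count for the current window of B and update it incrementally as the window slides, instead of recounting each window from scratch. Roughly like this (again, my own sketch, not a candidate’s code):

```php
<?php
// Sliding-window variant — a sketch of the O(n) idea, not a candidate's submission.

function containsAnagram(string $a, string $b): bool
{
    $a = strtolower($a);
    $b = strtolower($b);
    $lenA = strlen($a);
    $lenB = strlen($b);

    if ($lenA === 0 || $lenA > $lenB) {
        return false;
    }

    $need = [];    // character counts of A
    $window = [];  // character counts of the current window of B
    for ($i = 0; $i < $lenA; $i++) {
        $need[$a[$i]] = ($need[$a[$i]] ?? 0) + 1;
        $window[$b[$i]] = ($window[$b[$i]] ?? 0) + 1;
    }

    if ($window == $need) {
        return true;
    }

    for ($i = $lenA; $i < $lenB; $i++) {
        // Slide the window one position: add b[i], drop b[i - lenA].
        $window[$b[$i]] = ($window[$b[$i]] ?? 0) + 1;
        $out = $b[$i - $lenA];
        if (--$window[$out] === 0) {
            unset($window[$out]);
        }
        if ($window == $need) {
            return true;
        }
    }

    return false;
}
```

Each character of B is added and removed at most once, and each map comparison is bounded by the size of the alphabet, so the whole check runs in linear time on these inputs.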

So I began to ask myself about the efficacy of this kind of test: how high should we raise the bar (or lower it), and whether it should be adapted to the candidate. In particular, what is the bare minimum required to pass this stage?

So, over time, I’ve built a small toolkit around it. I started with a bash script to quickly validate all the submissions received against a custom suite of tests. Then we built a checklist of aspects that the source code could/should/must have in order to score a pass. We also started to expand it to cover all the possible things one could have used in the challenge: from Unicode to UML, from DDD to a dockerized version, from i18n to buffer overflows.
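The real harness was a bash script, but the idea fits in a few lines of any language; here is a rough PHP equivalent I’m sketching purely for illustration (the paths and test cases are hypothetical):

```php
<?php
// run_checks.php — a rough illustration of the validation idea, not the actual bash harness.
// Usage: php run_checks.php path/to/candidate_solution.php

$script = $argv[1] ?? '';

// A handful of input/expected-output pairs; the real suite covered many more cases.
$cases = [
    [['abc', 'itookablackcab'], 'true'],
    [['ABC', 'itookablackCAB'], 'true'],
    [['xyz', 'itookablackcab'], 'false'],
    [['abcdef', 'abc'],         'false'],
];

$failures = 0;
foreach ($cases as [$args, $expected]) {
    // Run the candidate's script as a separate CLI process and capture its output.
    $cmd = sprintf(
        'php %s %s',
        escapeshellarg($script),
        implode(' ', array_map('escapeshellarg', $args))
    );
    $output = trim((string) shell_exec($cmd));
    if ($output !== $expected) {
        $failures++;
        printf("FAIL %s -> got \"%s\", expected \"%s\"\n", implode(' ', $args), $output, $expected);
    }
}

echo $failures === 0 ? "All cases passed\n" : "{$failures} case(s) failed\n";
exit($failures === 0 ? 0 : 1);
```

A real harness would also want a timeout around each run, given the infinite-loop submissions mentioned above.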

Over time, I became more aware of the importance of a tech test: not a live, face-to-face one (to avoid pressure and blank stares), nor a mid-size project (which the candidate would have to work on after hours for a week). A simple, quick, offline test, to see how far they would stretch my assignment.

Want to jump into the challenge and give it a try? Go ahead and send me the link to your repo; I want to know how you approached it! 😉

I took the challenge myself back then, I took it again now, and you can find my version of it on GitHub.