{"id":782,"date":"2020-04-01T11:28:54","date_gmt":"2020-04-01T09:28:54","guid":{"rendered":"https:\/\/devpath.pro\/?p=782"},"modified":"2020-04-01T11:28:54","modified_gmt":"2020-04-01T09:28:54","slug":"tech-challenge-explained","status":"publish","type":"post","link":"https:\/\/fabiocicerchia.it\/career\/tech-challenge-explained","title":{"rendered":"Tech Challenge Explained"},"content":{"rendered":"
The tech challenge I used at Skuola.net was designed so that any developer could take it, regardless of experience level or programming language.

The easiest choice was an algorithm test, and among the endless list of tests you can find online, I picked one based on anagrams.

Another self-imposed constraint was simplicity: no one should have to work on it for more than a couple of days. The test could easily be solved in a couple of hours (in fact, the fastest candidate took half an hour).
Here's the task:
Objective: Check whether an anagram of a string is contained in another string.

Task: Prepare a command-line script that accepts 2 strings as input, checks whether any anagram of a given string A is contained in a string B, and prints "true" or "false" based on the result of the comparison.

Assume that:
- The code should preferably be implemented in PHP.
- A is a string no longer than 1024 characters.
- B is a string no longer than 1024 characters.
- No native language functions will be used to anagram a string.
- The comparison will be case-insensitive.
Example: given the 2 strings A = "abc" and B = "itookablackcab", the script will print "true", because an anagram of A, namely "cab", occurs in string B.
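To make the requirements concrete, here is a minimal PHP sketch of one possible solution (not taken from any candidate's submission): count the character frequencies of A and compare them against every window of the same length in B.

```php
<?php
// Usage: php anagram.php <A> <B>
// Prints "true" if any anagram of A occurs as a substring of B, "false" otherwise.

function charCounts(string $s): array
{
    $counts = [];
    foreach (str_split($s) as $ch) {
        $counts[$ch] = ($counts[$ch] ?? 0) + 1;
    }
    return $counts;
}

function containsAnagram(string $a, string $b): bool
{
    $a = strtolower($a); // the comparison must be case-insensitive
    $b = strtolower($b);
    $lenA = strlen($a);
    $lenB = strlen($b);
    if ($lenA === 0 || $lenA > $lenB) {
        return false; // the task leaves an empty A undefined; treated as a non-match here
    }

    $target = charCounts($a);
    // Compare the character counts of every window of length |A| in B.
    for ($i = 0; $i <= $lenB - $lenA; $i++) {
        if (charCounts(substr($b, $i, $lenA)) == $target) { // == ignores key order
            return true;
        }
    }
    return false;
}

echo containsAnagram($argv[1] ?? '', $argv[2] ?? '') ? 'true' : 'false', PHP_EOL;
```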
It's an easy one, or so I thought. Then, once I started receiving the source code, I began to see all sorts of weird things:
- No coverage for edge cases (that was the norm)
- No comments in the code (many, many examples)
- Many solutions with no OOP
- 7+ levels of nesting
- Misunderstood requirements, sometimes reduced to a simple `strpos` match
- Lots of copy & paste from all over the internet (even code that didn't do what was requested)
- Scripts that didn't return the expected result for the example provided
- My laptop frozen (with a subsequent hard reboot) by infinite loops sucking up all the RAM they could possibly allocate
- Scripts that weren't CLI at all, but an HTML form instead
- Source code copy & pasted into the email body, one solution even in Word!
Instead, I was expecting something more like this:
- Source code versioned in a git repo
- Unit testing
- Some sort of CI
- Package management via Composer
- A simple README file
Thank God, there were a few solutions that really stood out in a way I didn't expect at the start:
- Quick&Dirty CLI Script (i.e. one file straight to the point – aka procedural code) + Clean Version (i.e. a very structured solutions with all my expectation mentioned earlier satisifed)<\/li>\n
- Additional explanation of the approach followed to get to the solution<\/li>\n
- Scripts with really good performances, i.e. O(n)<\/li>\n<\/ul>\n
Kudos to those guys!
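For the curious, a linear-time approach looks roughly like this (again just a sketch of mine, not lifted from any submission): keep a running character-frequency delta between the current window of B and A, and update it incrementally instead of recounting at every position.

```php
<?php
// Sliding-window sketch: O(|A| + |B|) by updating character counts incrementally.
function containsAnagramLinear(string $a, string $b): bool
{
    $a = strtolower($a);
    $b = strtolower($b);
    $lenA = strlen($a);
    $lenB = strlen($b);
    if ($lenA === 0 || $lenA > $lenB) {
        return false;
    }

    // delta[c] = (occurrences of c in the current window of B) - (occurrences of c in A)
    $delta = [];
    $mismatched = 0; // number of characters whose delta is non-zero

    $bump = function (string $ch, int $by) use (&$delta, &$mismatched): void {
        $before = $delta[$ch] ?? 0;
        $after = $before + $by;
        $delta[$ch] = $after;
        if ($before === 0 && $after !== 0) {
            $mismatched++;
        } elseif ($before !== 0 && $after === 0) {
            $mismatched--;
        }
    };

    // Initialise with the first window of B and all of A.
    for ($i = 0; $i < $lenA; $i++) {
        $bump($b[$i], 1);
        $bump($a[$i], -1);
    }
    if ($mismatched === 0) {
        return true;
    }

    // Slide the window one character at a time.
    for ($i = $lenA; $i < $lenB; $i++) {
        $bump($b[$i], 1);           // character entering the window
        $bump($b[$i - $lenA], -1);  // character leaving the window
        if ($mismatched === 0) {
            return true;
        }
    }
    return false;
}
```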
So I began to question the efficacy of this kind of test: how high should we raise the bar (or lower it), and should it be adapted to the candidate? In particular, what are the bare-minimum requirements to pass this stage?
Over time I built a small process around it. I started with a bash script to quickly validate all the submissions received against a custom suite of tests. Then we built a checklist of aspects the source code could/should/must have in order to score a pass. We also kept expanding it to cover all the possible things one could have used in the challenge: from Unicode to UML, from DDD to a dockerized version, from i18n to buffer overflows.
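The original validator was a bash script, but the idea is simple enough to sketch in PHP as well (the file name and test cases below are made up for illustration): run each submission against a fixed set of inputs and compare the output with the expected value.

```php
<?php
// Hypothetical harness (the real one was a bash script): run a candidate's
// solution against a small suite of test cases and report PASS/FAIL.
$cases = [
    // [A, B, expected output]
    ['abc', 'itookablackcab', 'true'],
    ['abc', 'defghi', 'false'],
    ['AbC', 'ITOOKABLACKCAB', 'true'], // the comparison must be case-insensitive
];

$script = $argv[1] ?? 'solution.php'; // path to the candidate's script (made-up default)

foreach ($cases as [$a, $b, $expected]) {
    $cmd = sprintf('php %s %s %s', escapeshellarg($script), escapeshellarg($a), escapeshellarg($b));
    $actual = trim((string) shell_exec($cmd));
    printf("[%s] A=%s B=%s expected=%s got=%s\n",
        $actual === $expected ? 'PASS' : 'FAIL', $a, $b, $expected, $actual);
}
```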
Over time, I became more aware of the importance of a tech test: not a live face-to-face one (to avoid pressure and blank stares), nor a mid-size project (which the candidate would have to work on after hours for a week), but a simple, quick, offline test, to see how far they would stretch my assignment.
Want to jump into the challenge and give it a try? Go ahead and send me the link to your repo, I want to know how you approached it! 😉
I took the challenge myself back then, I took it again now, and you can find my version of it on GitHub.