Meta (Facebook) Machine Learning Mock Interview: Illegal Items Detection

SdĂ­let
VloĆŸit
  ‱ Added Jan 17, 2022
  • Today Zarrar talks us through this question asked by Facebook about how to use Machine Learning to flag illegal items posted on a marketplace.
    Try adding your own solution to the question here: www.interviewquery.com/questi...
    👉 Subscribe to my data science channel: bit.ly/2xYkyUM
    đŸ”„ Get 10% off machine learning interview prep: www.interviewquery.com/pricin...
    ❓ Check out our machine learning interview course: www.interviewquery.com/course...
    🔑 Get professional coaching from Zarrar here: www.interviewquery.com/coachi...
    🐩 Follow us on Twitter: / interview_query
    Attention Hiring Managers & Recruiters: Ready to find the top 1% of machine learning talent for your team? Accelerate your hiring with Outsearch.ai. Their AI-powered platform seamlessly filters the best candidates, making building your dream team easier than ever: www.outsearch.ai/?...
    More from Jay:
    Read my personal blog: datastream.substack.com/
    Follow me on Linkedin: / jay-feng-ab66b049
    Find me on Twitter: / datasciencejay
    Related Links:
    Facebook Data Science Interview Questions: www.interviewquery.com/blog-f...
    Facebook Data Science Internships: How to Land the Job: www.interviewquery.com/p/face...
  ‱ Science & Technology

Komentáƙe • 57

  • @AlexXPandian
    @AlexXPandian 1 month ago +5

    This guy has mastered the art of talking for 20 minutes about something that can be explained to a technologist in 2 minutes, and that, my friends, is what a system design interview is all about. You have to talk about every detail no matter how boring/mundane it is to you or how obvious you might think it is.

  • @anasal-tirawi2096
    @anasal-tirawi2096 2 years ago +63

    Typical end-to-end ML question:
    Understand the problem, Data collection, Feature Engineering, Build Model, Train Model, Evaluate Performance (Confusion Matrix: Precision & Recall), Deploy Model, Rebuild Model if needed
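    The steps above can be sketched end to end. A minimal toy sketch using only the standard library; the listings, keyword list, and keyword-threshold baseline standing in for a real trained model are all made up for illustration:

```python
# Toy end-to-end flow for flagging illegal listings, stdlib only.
# Data, features, and threshold are all illustrative.

# 1. Data collection: labelled listings (1 = illegal, 0 = legal).
listings = [
    ("brand new rifle for sale no questions", 1),
    ("selling my old couch cheap pickup only", 0),
    ("9mm ammo boxes great price", 1),
    ("kids bike barely used", 0),
]

# 2. Feature engineering: how many flagged keywords does the text contain?
FLAGGED = {"rifle", "ammo", "9mm", "gun"}

def features(text):
    tokens = set(text.split())
    return {"flagged_hits": len(tokens & FLAGGED)}

# 3-4. "Model" + training: a threshold rule stands in for a trained classifier.
def predict(text, threshold=1):
    return 1 if features(text)["flagged_hits"] >= threshold else 0

# 5. Evaluation: confusion matrix -> precision & recall.
tp = fp = fn = tn = 0
for text, label in listings:
    pred = predict(text)
    if pred and label:
        tp += 1
    elif pred and not label:
        fp += 1
    elif not pred and label:
        fn += 1
    else:
        tn += 1

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)
```

    On a real imbalanced marketplace dataset the keyword rule would be replaced by a trained classifier, but the collect, featurize, predict, evaluate loop stays the same.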

    • @hongliangfei3170
      @hongliangfei3170 1 year ago +2

      Good summary!

    • @sophiophile
      @sophiophile 3 months ago +5

      Decent summary, but most FAANG interviewers would probably dock points for not discussing online training, A/B testing, and exploratory analysis for model selection

  • @julianmartindelfiore7420
    @julianmartindelfiore7420 2 years ago +26

    I feel this video is a fantastic resource: not only was the explanation great and very insightful, but I think you also asked the right questions, going the extra mile in the explanation/analysis... thank you for sharing!

  • @umamiplaygroundnyc7331
    @umamiplaygroundnyc7331 7 months ago +1

    Wow, this guy is good. I really like how he starts from a model framework with a baseline model, points out the reasoning and key considerations, and shows how we can evolve from there to a more complicated model by similar reasoning

  • @sallespadua
    @sallespadua 1 year ago +3

    Amazing! As a point to improve even more, I'd add fine-tuning the model with adversarial examples as a finishing touch.

  • @ploughable
    @ploughable 3 months ago +1

    Two points I would have added for the end questions:
    1. To overcome the coded firearm words, use transformer models like BERT, since you can capture the meaning via the embeddings (e.g. cosine similarity) and filter the closest matches
    2. Computer vision on the images can be used as additional inference if the F1 score is low, but not always, as this type of inference is more expensive
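    On point 1, the embedding idea can be sketched with toy vectors. A real system would use BERT-style sentence embeddings; the three-dimensional vectors and phrases below are made up purely for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings: a coded term like "pew pew stick" should land
# near "firearm" in embedding space despite zero keyword overlap.
emb = {
    "firearm":       [0.9, 0.1, 0.0],
    "pew pew stick": [0.8, 0.2, 0.1],
    "dining table":  [0.0, 0.1, 0.9],
}

print(cosine(emb["firearm"], emb["pew pew stick"]))  # high
print(cosine(emb["firearm"], emb["dining table"]))   # low
```

    A coded listing with no shared keywords can still score close to "firearm" in embedding space, which is exactly the case a keyword filter misses.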

  • @being.jajabor2187
    @being.jajabor2187 2 years ago

    This is a fantastic video for giving an idea of an ML system design interview! Thanks for making this.

  • @sunny2253
    @sunny2253 3 months ago +1

    Should've mentioned that people try to disguise the actual product description using proxy words.
    Also, to decide whether or not to include image analysis, I'd draw multiple samples and train models in an A/B setting, then run a t-test to see whether the mean prediction metric is significantly different.
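    That comparison could be sketched like this. The per-run F1 scores are made up, and a full analysis would get a p-value from the t-distribution (e.g. via scipy.stats.ttest_ind) rather than eyeballing the statistic:

```python
import statistics

# Hypothetical F1 scores from repeated train/eval runs of two variants:
# A = text-only model, B = text + image model.
f1_a = [0.71, 0.69, 0.72, 0.70, 0.68]
f1_b = [0.76, 0.74, 0.77, 0.75, 0.73]

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    nx, ny = len(x), len(y)
    return (statistics.mean(x) - statistics.mean(y)) / ((vx / nx + vy / ny) ** 0.5)

t = welch_t(f1_a, f1_b)
print(t)  # |t| well above ~2 suggests the F1 difference is significant
```

    With these toy numbers |t| is about 5, far past the usual ~2 cutoff, so the image features would look worth keeping.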

  • @marywang8013
    @marywang8013 1 year ago +1

    Would you use a whiteboard for the ML design architecture? Is whiteboarding helpful in the interview?

  • @iqjayfeng
    @iqjayfeng  2 years ago +2

    Thanks for tuning in! If you're interested in learning more about machine learning, be sure to check out our machine learning course. It's designed to help you master the key concepts and skills needed to excel in machine-learning roles.
    www.interviewquery.com/learning-paths/modeling-and-machine-learning

  • @junweima
    @junweima 1 year ago +2

    It's also possible to use re-ranking or bagging approaches to combine the xgboost model and the vision/NLP model, which would most likely improve performance

    • @Gerald-iz7mv
      @Gerald-iz7mv 5 months ago

      you mean use a gradient boosted tree in the first stage, and in the second stage use a vision/NLP model (which is more complex and takes longer to execute)?

  • @RanjitK1
    @RanjitK1 1 year ago

    Great Interview Zarrar!

  • @mdaniels6311
    @mdaniels6311 3 days ago

    You want a system that overfits and produces lots of false positives, since false negatives can be catastrophic legally and for the reputation of the business, and could even lead to regulatory action and media scrutiny, killing sales, market cap, etc. You then have agents go through the flagged items and efficiently decide whether they are truly positive or not. This data can also help train the model. The cost of hiring people to review them is much lower than losing 5% of market cap due to negative press.

  • @robertknight9242
    @robertknight9242 2 years ago +2

    Great videos! Where do you get the sample questions from shown at the start of the video?

  • @Gerald-iz7mv
    @Gerald-iz7mv 5 months ago

    what does the following mean? TF-IDF: "We scale the values of each word based on its frequency in different postings"?
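    For what it's worth, that line is describing TF-IDF's core idea: a word's count in a posting (term frequency) is scaled by how rare the word is across postings (inverse document frequency), so words that appear in every posting get weight near zero. A hand-rolled sketch with made-up postings (real libraries add smoothing, but the idea is the same):

```python
import math

postings = [
    "cheap used couch for sale",
    "rifle for sale cheap",
    "vintage rifle for collectors",
]

def tf_idf(term, doc, docs):
    tf = doc.split().count(term)               # count in this posting
    df = sum(term in d.split() for d in docs)  # postings containing the term
    idf = math.log(len(docs) / df)             # rarer across postings -> larger
    return tf * idf

# "for" appears in every posting -> weight 0; "rifle" is rarer -> higher.
print(tf_idf("for", postings[1], postings))    # 0.0
print(tf_idf("rifle", postings[1], postings))  # higher
```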

  • @87prak
    @87prak 9 months ago +2

    Sorry, where did you discuss the label generation part? There are multiple ways to generate labels, each with pros and cons:
    1. User feedback: automatic, lots of data, but noisy.
    2. Manual annotation: accurate labels but not scalable; a very high proportion of examples would be tagged as negative.
    3. Bootstrap: train a simple model and sample more examples based on model scores to get a higher proportion of positive examples.
    4. Hybrid: manually annotate examples marked as "X" by users, where "X" can be tags like "illegal", "offensive", etc.
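    Option 3 can be sketched as score-weighted sampling for annotation; the post IDs and model scores below are hypothetical:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical scores from a weak first-pass model: higher = more likely
# illegal. Sampling proportionally to score sends annotators a much richer
# mix of positives than uniform sampling would.
candidates = {"post_a": 0.95, "post_b": 0.05, "post_c": 0.80, "post_d": 0.02}

picked = random.choices(
    population=list(candidates),
    weights=list(candidates.values()),
    k=10,  # with replacement; dedupe before sending to annotators
)
print(picked)  # dominated by the high-score posts
```

    Uniform sampling would mostly surface legal listings; weighting by the weak model's score concentrates annotator time on likely positives.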

    • @sophiophile
      @sophiophile 3 months ago

      You can also scrape for images and generate listings using LLMs for high-quality synthetic data.

  • @_seeker423
    @_seeker423 1 year ago +3

    Re: whether or not to do CV on images. Shouldn't one do error analysis to check whether the text and other features lacked predictive power and the signal was elsewhere (i.e. in the images), which is why we should invest in extracting signals from images, as opposed to building a giant model with all features and doing ablations to understand feature-class importance? The latter seems quite expensive.

    • @besimav
      @besimav 1 year ago +3

      If you are working for FB, you can afford to go for an expensive model. If a candidate didn't mention CV, I would be unhappy, since there is a good source of data you are not making use of.

  • @bhartendu_kumar
    @bhartendu_kumar 2 years ago +3

    Great insights into the sample questions

  • @ArunKumar-bp5lo
    @ArunKumar-bp5lo 2 years ago +3

    Great insights, but the text data can be in various languages. When he said to augment with some keywords to detect, can that work across languages, or do you train a different model per language? Just curious

    • @iqjayfeng
      @iqjayfeng  2 years ago +1

      Synonyms and similar words can help enrich the classifier and create new features

  • @jamessukanto8078
    @jamessukanto8078 2 years ago +7

    Hello. I think this was super helpful overall. I'm a little confused where he describes gradient boosting. For each successor tree, we should set new target labels from the predecessor's training errors, no? (and leave the weights alone)

    • @jiahuili2133
      @jiahuili2133 2 years ago +10

      I think he was talking about AdaBoost instead of gradient boosting.
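      That distinction can be made concrete: AdaBoost reweights misclassified examples between rounds, while gradient boosting leaves sample weights alone and fits each new learner to the current residuals. A deliberately degenerate sketch of the residual-fitting side, where the "weak learner" just predicts the mean residual:

```python
# Gradient boosting intuition for squared error: each stage fits the
# residuals (targets minus current ensemble prediction), not reweighted
# samples. Targets and learning rate are made up.
y = [3.0, 5.0, 8.0, 12.0]
pred = [0.0] * len(y)
lr = 0.5  # learning rate (shrinkage)

def fit_mean(residuals):
    """Degenerate 'weak learner': predicts the mean residual everywhere."""
    m = sum(residuals) / len(residuals)
    return lambda: m

for stage in range(20):
    residuals = [yi - pi for yi, pi in zip(y, pred)]
    learner = fit_mean(residuals)
    pred = [pi + lr * learner() for pi in pred]

print(pred)  # every prediction converges toward the mean of y (7.0)
```

      A real GBM uses shallow trees instead of a constant predictor, but the stage-wise fit-the-residuals loop is the same, and no sample ever gets reweighted.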

    • @Gerald-iz7mv
      @Gerald-iz7mv 5 months ago

      @@jiahuili2133 how does a gradient boosted tree work in this context? Any other models we could use here? Unsupervised machine learning?

    • @sophiophile
      @sophiophile 3 months ago

      @@Gerald-iz7mv You already have labels, though. So supervised learning is probably superior.

    • @Gerald-iz7mv
      @Gerald-iz7mv 3 months ago

      @@sophiophile but isn't labeling the data a lot of effort?

    • @sophiophile
      @sophiophile 3 months ago

      @@Gerald-iz7mv You already have labelled data in this case. They described having the historical set of previously flagged posts. Also, expecting to cluster out the gun posts in an unsupervised manner when they make up such a small proportion of the listings is unrealistic. The other thing is that feature engineering and labeling pipelines are simply part of the job when it comes to ML.
      Nowadays, you can also very easily create high-quality synthetic labelled data using generative models to help with the imbalanced set.

  • @pratikmandlecha6672
    @pratikmandlecha6672 1 year ago

    Wow this was so useful.

  • @_seeker423
    @_seeker423 1 year ago +2

    Around @12:00, the algorithm that upweights incorrect predictions is AdaBoost instead of GBM, right?

  • @alexeystysin8265
    @alexeystysin8265 1 year ago

    I can never remember what precision and recall stand for. It is clearly visible that the interviewee was also confused, and the video is edited around that point.
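    A mnemonic in code form, with made-up confusion-matrix counts: precision asks "of everything we flagged, how much really was illegal?", while recall asks "of everything illegal, how much did we catch?":

```python
# Toy confusion-matrix counts for the illegal-listing classifier.
tp, fp, fn = 80, 20, 40  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # flagged items that really were illegal
recall = tp / (tp + fn)     # illegal items that actually got flagged

print(precision)  # 0.8
print(recall)     # roughly 0.667
```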

  • @fahnub
    @fahnub 1 year ago

    thanks Zarrar

  • @fahnub
    @fahnub 1 year ago

    this is the best video ever

  • @dkshmeeks
    @dkshmeeks 1 year ago +3

    Great video. I find all the quick cuts to be a bit disorienting though.

  • @KS-df1cp
    @KS-df1cp 1 year ago +2

    I would have suggested a CNN as an alternative approach, but yeah, agreed. The listing is not only an image but also text. An edge case where they have different text and different images won't get captured. Thank you.

    • @sophiophile
      @sophiophile 3 months ago +2

      I haven't watched the video or seen his suggested solution yet, but a lot of people will dock points for over-engineering. If a really basic ensemble approach (one model for image, one for text) can achieve the goal instead of a single multimodal one, with fewer resources at every step, go for that and explain why.
      Now, to be fair, you were commenting prior to multimodal LLMs being everywhere, so that does change the considerations.

    • @KS-df1cp
      @KS-df1cp 3 months ago

      @@sophiophile thank you

  • @hasnainmamdani4534
    @hasnainmamdani4534 2 years ago +3

    Very useful! Thanks for sharing. Do they ask about data pipelines and technologies that might be useful to scale the model (for the MLE role)? Would love to know more resources on that, as well as more mock interviews :)

    • @iqjayfeng
      @iqjayfeng  2 years ago +1

      Definitely in the MLE interview loops!

  • @evanshlom1
    @evanshlom1 2 years ago

    INFORMATIVE GOOD SIR

  • @claude7222
    @claude7222 3 months ago

    @iqjayfeng I think Zarrar mistakenly mixed up false positives and false negatives around the 2:00 mark. It would be OK if customer service received false positives (model predicts true but it's really false), not false negatives.

  • @goelnikhils
    @goelnikhils 1 year ago +2

    Excellent

  • @georgezhou9211
    @georgezhou9211 1 year ago +1

    Why does he say that it is a better idea to use NN rather than gradient boosted trees if we need to continuously train/update the model with every new training label that we collect from the customer labeling team?

    • @sandeep9282
      @sandeep9282 1 year ago

      Because you can update NN weights with just the new data points by fine-tuning, unlike tree-based models, which *may* require re-training on old+new data

    • @sandeep9282
      @sandeep9282 1 year ago

      Remember, tree-based models are sensitive to changes in the data

  • @huanchenli4137
    @huanchenli4137 1 year ago

    GBM is fast to train?????

  • @scchouhansanjay
    @scchouhansanjay 2 years ago +1

    F2 score will be better here I think đŸ€”
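    For context, F2 is the F-beta score with beta = 2, which weights recall more heavily than precision; that fits this problem if missing an illegal listing costs more than a false flag. A quick check with made-up precision/recall numbers:

```python
def f_beta(precision, recall, beta):
    """General F-beta score; beta > 1 emphasizes recall over precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.5, 0.9  # hypothetical: low precision, high recall

print(f_beta(p, r, 1))  # F1 treats precision and recall equally
print(f_beta(p, r, 2))  # F2 rewards the high recall more
```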

  • @songsong2334
    @songsong2334 2 years ago +1

    If the dataset is imbalanced, why bother using accuracy as the metric to evaluate the model?

    • @mikiii880
      @mikiii880 1 year ago +2

      I believe he was using accuracy in its colloquial sense, not the actual metric. He already said he would use F1, and then referred to it as "accuracy" because it's an easier word. Probably "score" would have cleared up the confusion.

  • @lidavid6580
    @lidavid6580 1 year ago +1

    Too much of a mock, not like a real interview; everything was spoken without any drawing or writing.