[Notes] (Ir)Reproducible Machine Learning: A Case Study

I just read the (draft) paper “(Ir)Reproducible Machine Learning: A Case Study” (blog post; paper). It reviews 15 papers that predict civil war and evaluate their models using a train-test split. Of these 15 papers: 12 shared the complete code and data behind their results; 4 contain errors; and 9 perform no hypothesis testing or uncertainty quantification (including 3 of the 4 papers with errors). Three of the papers with errors share the same dataset: Muchlinski et al.[1] created it, and Colaresi and Mahmood[2] and Wang[3] reused it without noticing the critical flaw in its construction process: data leakage caused by imputing the training and test data together. ...
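The leakage pattern the paper describes can be illustrated with a toy mean-imputer (a minimal pure-Python sketch, not the reviewed papers' actual pipeline): the imputation statistics must be learned from the training split alone, or information from the test set bleeds into the features the model trains on.

```python
def mean_impute_fit(rows):
    """Learn per-column means from rows, ignoring None (missing) values."""
    means = []
    for col in zip(*rows):
        vals = [v for v in col if v is not None]
        means.append(sum(vals) / len(vals))
    return means

def mean_impute_apply(rows, means):
    """Replace each None with the mean learned for that column."""
    return [[m if v is None else v for v, m in zip(row, means)]
            for row in rows]

# Toy data: one feature column with missing entries.
train = [[1.0], [None], [3.0]]
test = [[None], [9.0]]

# Leaky: statistics computed on train + test together (test values leak in).
leaky_means = mean_impute_fit(train + test)   # mean includes the 9.0 from test
# Correct: fit on the training split only, then apply to both splits.
good_means = mean_impute_fit(train)           # mean of 1.0 and 3.0 -> 2.0
filled_test = mean_impute_apply(test, good_means)
```

The difference is not cosmetic: the leaky mean (13/3 ≈ 4.33) is pulled toward the test distribution, which is exactly the kind of contamination a held-out evaluation is supposed to rule out.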

August 27, 2021 · Ceshine Lee

[Notes] Understanding XCiT - Part 2

In Part 1, we introduced the XCiT architecture and reviewed the implementation of the Cross-Covariance Attention (XCA) block. In Part 2, we will review the implementation of the Local Patch Interaction (LPI) block and the Class Attention layer. Local Patch Interaction (LPI) Because there is no explicit communication between patches (tokens) in XCA, a layer consisting of two depth-wise 3×3 convolutional layers with Batch Normalization and a GELU non-linearity is added to enable such communication. ...
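To make the block's structure concrete, here is a minimal pure-Python sketch of one LPI pass. It is illustrative only: Batch Normalization is omitted, real implementations use framework convolutions rather than loops, and the function names are my own.

```python
import math

def gelu(v):
    # Exact GELU using the error function.
    return 0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0)))

def depthwise_conv3x3(x, w):
    """Depth-wise 3x3 convolution with zero padding.
    x: C x H x W nested lists; w: C x 3 x 3 (one kernel per channel,
    so channels do not mix -- only spatial neighbors within a channel)."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for c in range(C):
        for i in range(H):
            for j in range(W):
                s = 0.0
                for di in range(-1, 2):
                    for dj in range(-1, 2):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < H and 0 <= jj < W:
                            s += x[c][ii][jj] * w[c][di + 1][dj + 1]
                out[c][i][j] = s
    return out

def lpi(x, w1, w2):
    # LPI sketch: depth-wise conv -> GELU -> depth-wise conv
    # (Batch Normalization between the stages is omitted here.)
    h = depthwise_conv3x3(x, w1)
    h = [[[gelu(v) for v in row] for row in ch] for ch in h]
    return depthwise_conv3x3(h, w2)
```

The point of the depth-wise restriction is that each 3×3 kernel lets spatially neighboring patches exchange information within a channel, complementing XCA, which mixes channels but never patches.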

July 25, 2021 · Ceshine Lee

[Notes] Understanding XCiT - Part 1

Overview XCiT: Cross-Covariance Image Transformers[1] is a paper from Facebook AI that proposes a “transposed” version of self-attention that operates across feature channels rather than tokens. This cross-covariance attention has linear complexity in the number of tokens (the original self-attention has quadratic complexity). When applied to images as in vision transformers, this linear complexity allows the model to process higher-resolution images and split them into smaller patches, both of which are shown to improve performance. ...
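As a quick sketch of where the complexity difference comes from (using the paper's notation, with $N$ tokens, $d$ feature dimensions, $\hat{Q}$ and $\hat{K}$ the $\ell_2$-normalized query and key matrices, and $\tau$ a learnable temperature): ordinary self-attention softmaxes an $N \times N$ token-to-token matrix, while cross-covariance attention softmaxes a $d \times d$ channel-to-channel matrix, so the cost scales linearly rather than quadratically in $N$.

```latex
% Self-attention: N x N attention map, O(N^2 d) cost
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V

% Cross-covariance attention: d x d attention map, O(N d^2) cost
\mathrm{XC\text{-}Attention}(Q, K, V) = V \, \mathrm{softmax}\!\left(\frac{\hat{K}^{\top} \hat{Q}}{\tau}\right)
```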

July 24, 2021 · Ceshine Lee

How to Create a Documentation Website for Your Python Package

Motivation Sphinx is a tool that helps you create intelligent and beautiful documentation. I use it to generate documentation for the pytorch-lightning-spells project and publish it on readthedocs.io for free (free for open-source projects). Documentation is tremendously helpful to the users of your project (including yourself). As long as you maintain the good habit of writing docstrings in your code, Sphinx will convert those docstrings into webpages for you, drastically reducing the manual labor required. ...
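The docstring-to-webpage conversion is driven by Sphinx's `conf.py`. As a minimal sketch (the extension list here is illustrative, not necessarily what pytorch-lightning-spells actually configures):

```python
# conf.py -- minimal Sphinx configuration sketch (illustrative, not the
# project's actual configuration)
project = "pytorch-lightning-spells"

extensions = [
    "sphinx.ext.autodoc",    # pull documentation out of docstrings
    "sphinx.ext.napoleon",   # parse Google/NumPy-style docstring sections
    "sphinx.ext.viewcode",   # link rendered docs back to the source code
]

html_theme = "sphinx_rtd_theme"  # the Read the Docs theme
```

With this in place, `sphinx-build` (or the generated `make html`) renders each documented module into a browsable HTML page.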

June 13, 2021 · Ceshine Lee

Text Analysis using Julia

Overview I tried to conduct some exploratory analysis on the title field of the “Shopee - Price Match Guarantee” dataset. I wanted to know how similar the titles are within the same group, to get a rough idea of how useful the field would be in determining whether two listings belong to the same group. I used StringDistances.jl for raw string analysis and WordTokenizers.jl for token analysis. Instead of Jupyter Notebook, I used Pluto.jl to get reactive notebooks with a more presentable visual design right out of the box. The experience was a blast. Writing in Julia is not as hard as I expected, and the end result is very clean and blazing fast. ...
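The kind of within-group title comparison described above can be sketched in a few lines (a pure-Python stand-in for the Julia tooling; the Jaccard measure over token sets is one illustrative choice, not necessarily the distance the analysis used):

```python
def jaccard_tokens(title_a, title_b):
    """Jaccard similarity between the token sets of two listing titles:
    |intersection| / |union|, after lowercasing and whitespace splitting."""
    ta = set(title_a.lower().split())
    tb = set(title_b.lower().split())
    return len(ta & tb) / len(ta | tb)
```

Averaging this score over all title pairs within a group, and comparing against pairs drawn across groups, gives a rough signal of how discriminative the title field is for the matching task.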

May 1, 2021 · Ceshine Lee