Innovative GitHub Projects for Machine Learning in Python

Overview


Looking for machine learning projects to do right now? Here are 7 wide-ranging GitHub projects to try out


These projects cover multiple machine learning domains, including NLP, computer vision and Big Data


Add these to your machine learning skillset and expand your knowledge


 


Introduction


I have conducted tons of interviews for data science positions in the last couple of years. One thing has stood out – aspiring machine learning professionals don’t focus enough on projects that will make them stand out.


And no, I don’t mean online competitions and hackathons (though that is always a plus point to showcase). I’m talking about off-the-cuff experiments you should do using libraries and frameworks that have just been released. This shows the interviewer two broad things:


You have an unquenchable curiosity for machine learning. This is a vital aspect of being a successful data scientist


You are not afraid to experiment with new algorithms and techniques


And guess which platform has the latest machine learning developments and code? That’s right – GitHub!




So let’s look at the top seven machine learning GitHub projects that were released last month. These projects span the length and breadth of machine learning, including projects related to Natural Language Processing (NLP), Computer Vision, Big Data and more.


This is part of our monthly Machine Learning GitHub series we have been running since January 2018. Here are the links for this year so you can catch up quickly:



February


March


April


May


June


 


Top Machine Learning GitHub Projects





PyTorch-Transformers (NLP)


I’ll be honest – the power of Natural Language Processing (NLP) blows my mind. I started working in data science a few years back, and the sheer scale at which NLP has grown and transformed the way we work with text almost defies description.


PyTorch-Transformers is the latest in a long line of state-of-the-art NLP libraries. It has beaten all previous benchmarks in various NLP tasks. What I really like about PyTorch-Transformers is that it contains PyTorch implementations, pretrained model weights and other important components to get you started quickly.


You might have been frustrated previously at the ridiculous amount of computation power required to run state-of-the-art models. I know I was (not everyone has Google’s resources!). PyTorch-Transformers removes that barrier to a large degree and enables folks like us to build state-of-the-art NLP models.
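As a quick taste, here is a minimal sketch of loading a pretrained BERT model with PyTorch-Transformers and encoding a sentence (assuming the library is installed via pip install pytorch-transformers; the model name is just an example):

import torch
from pytorch_transformers import BertTokenizer, BertModel

# Download (or load from the local cache) the pretrained tokenizer and weights
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

text = "GitHub is a goldmine for machine learning projects."
input_ids = torch.tensor([tokenizer.encode(text)])

with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # shape: (batch, tokens, hidden_size)

print(last_hidden_states.shape)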



 



NeuralClassifier (NLP)


Multi-label classification on text data is quite a challenge in the real world. We typically work on single-label tasks when we’re dealing with early-stage NLP problems. The difficulty goes up several notches on real-world data.


In a multi-label classification problem, an instance/record can have multiple labels and the number of labels per instance is not fixed.
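To make that concrete, here is a tiny scikit-learn illustration (not part of NeuralClassifier) of how such variable-sized label sets are typically binarised before training:

from sklearn.preprocessing import MultiLabelBinarizer

# Each document can carry any number of labels (multi-label),
# unlike multi-class problems where exactly one label applies
labels_per_doc = [
    ["sports"],
    ["politics", "economy"],
    ["sports", "economy", "technology"],
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels_per_doc)
print(mlb.classes_)  # ['economy' 'politics' 'sports' 'technology']
print(y)
# [[0 0 1 0]
#  [1 1 0 0]
#  [1 0 1 1]]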


NeuralClassifier enables us to quickly implement neural models for hierarchical multi-label classification tasks. What I personally like about NeuralClassifier is that it provides a wide variety of text encoders we are familiar with, such as FastText, RCNN, Transformer encoder and so on.




We can perform the below classification tasks using NeuralClassifier:


Binary-class text classification


Multi-class text classification


Multi-label text classification


Hierarchical (multi-label) text classification


 


TDEngine (Big Data)


This TDEngine repository received the most stars of any new project on GitHub last month. Close to 10,000 stars in less than a month. Let that sink in for a second.


TDEngine is an open-source Big Data platform designed for:


Internet of Things (IoT)


Connected Cars


Industrial IoT


IT Infrastructure, and much more.


TDEngine essentially provides a whole suite of tasks that we associate with data engineering. And we get to do all this at super quick speed (10x faster query processing at around one-fifth of the computational cost).
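To give a feel for what that looks like in practice, here is a hedged sketch using TDEngine’s Python connector (the taos package); the connection parameters are placeholders and the connector API may differ across versions:

import taos  # assumes the TDEngine Python connector is installed and a local server is running

conn = taos.connect(host="127.0.0.1", user="root", password="taosdata")
cursor = conn.cursor()

# A typical IoT-style time-series table: one row per sensor reading
cursor.execute("CREATE DATABASE IF NOT EXISTS demo")
cursor.execute("USE demo")
cursor.execute("CREATE TABLE IF NOT EXISTS readings (ts TIMESTAMP, temperature FLOAT, humidity FLOAT)")
cursor.execute("INSERT INTO readings VALUES (NOW, 23.5, 41.0)")

cursor.execute("SELECT AVG(temperature) FROM readings")
print(cursor.fetchall())

conn.close()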


There’s a caveat (for now) – TDEngine only supports execution on Linux. This GitHub repository includes the full documentation and a getting-started guide with code.



 


Video Object Removal (Computer Vision)


Have you worked with any image data yet? Computer Vision techniques for manipulating and dealing with images are quite advanced. Object detection for images is considered a basic step to becoming a computer vision expert.


What about videos, though? The difficulty level goes up several notches when we’re asked to simply draw bounding boxes around objects in videos. The dynamic nature of objects makes the entire concept more complex.


So, imagine my delight when I came across this GitHub repository. We just need to draw a bounding box around the object in the video to remove it. It really is that easy! Here are a couple of examples of how this project works:





 


Python Autocomplete (Programming)


You’ll love this machine learning GitHub project. As data scientists, our entire role revolves around experimenting with algorithms (well, most of us). This project is about how a simple LSTM model can autocomplete Python code.


The code highlighted in grey below is what the LSTM model filled in (and the results are at the bottom of the image):




As the developers put it:


We train and predict after cleaning comments, strings and blank lines in the Python code. The model is trained after tokenizing the Python code. It seems more efficient than character-level prediction with byte-pair encoding.
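The standard-library tokenize module shows what “tokenizing Python code” means here; the LSTM below is only a minimal sketch of the general model shape, not the repository’s actual code (hyperparameters are arbitrary):

import io
import tokenize
import torch
import torch.nn as nn

# Tokenize a Python snippet the way a token-level model (rather than a
# character-level one) would see it
source = "def add(a, b):\n    return a + b\n"
tokens = [tok.string for tok in tokenize.generate_tokens(io.StringIO(source).readline)
          if tok.string.strip()]
print(tokens)  # ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']

vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}

class NextTokenLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.head(out)  # logits over the vocabulary at every position

model = NextTokenLSTM(len(vocab))
ids = torch.tensor([[vocab[t] for t in tokens]])
logits = model(ids)  # shape: (1, sequence_length, vocab_size)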


If you’ve ever spent (wasted) time on writing out mundane Python lines, this might be exactly what you’re looking for. It’s still in the very early stages so be open to a few issues.



 



tfpyth (Programming)


TensorFlow and PyTorch both have strong user communities. But the incredible adoption rate of PyTorch should see it leapfrog TensorFlow in the next year or two. Note: this isn’t a knock on TensorFlow, which is pretty solid.


So if you have written code in TensorFlow and separate code in PyTorch and want to combine the two to train a model, the tfpyth framework is for you. The best part about tfpyth is that we don’t need to rewrite the earlier code.




This GitHub repository includes a well-structured example of how you can use tfpyth. It’s definitely a refreshing take on the TensorFlow vs. PyTorch debate, isn’t it?


Installing tfpyth is this easy:


pip install tfpyth
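And here is roughly what using it looks like, based on the example in the repository (treat it as a sketch – the exact function names may differ in newer versions, and it assumes TensorFlow 1.x):

import tensorflow as tf
import torch as th
import tfpyth

session = tf.Session()

# Define a computation in TensorFlow...
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = 3 * a + 4 * b * b

# ...and wrap it as a differentiable PyTorch function
f = tfpyth.torch_from_tensorflow(session, [a, b], c).apply

x = th.tensor(1.0, requires_grad=True)
y = th.tensor(3.0, requires_grad=True)
out = f(x, y)          # 3*1 + 4*3*3 = 39
out.backward()         # gradients flow back through the TensorFlow graph
print(x.grad, y.grad)  # tensor(3.), tensor(24.)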


 



MedicalNet (Transfer Learning)


I associate transfer learning with NLP. That’s my fault – I have been so absorbed with the new developments there that I did not imagine where else transfer learning could be applied. So I was thrilled when I came across this wonderful MedicalNet project.




This GitHub repository contains a PyTorch implementation of the ‘Med3D: Transfer Learning for 3D Medical Image Analysis’ paper. This machine learning project aggregates medical datasets with diverse modalities, target organs, and pathologies to build relatively large training datasets.


And as we well know, our deep learning models (usually) require a large amount of training data. So MedicalNet, released by Tencent, is a brilliant open-source project I hope a lot of folks work on.


The developers behind MedicalNet have released four pretrained models based on 23 datasets. And here is an intuitive introduction to transfer learning if you needed one:
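If you want to see the core idea in code, here is a generic PyTorch transfer-learning sketch – illustration only, using a standard 2D torchvision ResNet rather than MedicalNet’s 3D models or weights:

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Start from a network pretrained on a large source dataset
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new (target) task,
# e.g. a hypothetical 3-class classification problem
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new head's parameters are handed to the optimiser
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)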



 


End Notes


Quite a mix of machine learning projects we have here. I have provided tutorials, guides and resources after each GitHub project.

