Although numeric data is easy to work with in Python, most knowledge created by humans is actually raw, unstructured text. By learning how to transform text into data that is usable by machine learning models, you drastically increase the amount of data that your models can learn from. In this tutorial, we'll build and evaluate predictive models from real-world text using scikit-learn. (Presented at PyCon on May 28, 2016.)
GitHub repository: https://github.com/justmarkham/pycon-2016-tutorial
Enroll in my online course: http://www.dataschool.io/learn/
== OTHER RESOURCES ==
My scikit-learn video series: https://www.youtube.com/playlist?list=PL5-da3qGB5ICeMbQuqbbCOQWcS6OYBr5A
My pandas video series: https://www.youtube.com/playlist?list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y
== LET'S CONNECT! ==
JOIN the "Data School Insiders" community and receive exclusive rewards:
Hey, you have used 2 classes for classification, right? What if I need more than 2 classes, e.g. contempt, depression, anger, joy, and many other emotions? Do I need to change any of the code here, or is providing a dataset with multiple classes enough?
And I have one more doubt: once the model is built and prepared, how can I actually find out which class a new text document supplied as input belongs to? E.g., whether the new document is ham or spam?
1. Most of the time, you don't need to modify your scikit-learn code for multi-class classification.
2. Use the predict method.
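Here's a rough sketch with toy data (all of the names below are made up) showing both points: a multi-class dataset needs no code changes, and predict classifies new documents:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# toy training data (hypothetical)
train_text = ['win a free prize now', 'are we still on for lunch',
              'free cash click here', 'see you at the meeting']
train_labels = ['spam', 'ham', 'spam', 'ham']  # more classes would work the same way

vect = CountVectorizer()
train_dtm = vect.fit_transform(train_text)  # learn the vocabulary, build the DTM
model = MultinomialNB().fit(train_dtm, train_labels)

new_docs = ['free prize inside', 'lunch at noon?']
new_dtm = vect.transform(new_docs)  # transform only, no refitting
print(model.predict(new_dtm))  # -> ['spam' 'ham']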
Hope that helps! You might be interested in my course: https://www.dataschool.io/learn/
Two questions about Bag of Words that have obsessed me for a while.
The first question: my source file has 2 columns. One is the email content (text format), and the other is the country name (3 different countries) from which the email was sent. I want to label whether the email is spam or not, and the assumption here is that the country an email is sent from also matters. So besides the bag of words, I want to add a country feature. Is there a way to implement that in sklearn?
The other question: besides Bag of Words, what if I also want to consider the position of the words? For instance, if a word appears in the first sentence, I want to lower its weight, and if a word appears in the last sentence, I want to increase its weight. Is there a way to implement that in sklearn? Thanks.
The vectorization is creating a sparse matrix, which is quite memory efficient. It sounds like the problem is that you are merging a sparse matrix with a dense matrix, which forces the sparse matrix to become dense, which would definitely create memory problems.
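For example, here's a sketch (with made-up data) of combining a sparse DTM with other numeric features without densifying anything, by converting the extra columns to sparse and stacking horizontally with SciPy:

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer

text = ['machine failed after restart', 'no issues reported', 'failure on boot']
other_features = np.array([[3.2, 1], [0.5, 0], [7.1, 1]])  # e.g. your other numeric columns

dtm = TfidfVectorizer().fit_transform(text)  # sparse matrix
combined = hstack([dtm, csr_matrix(other_features)])  # still sparse
print(combined.shape)  # rows x (vocabulary size + 2)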
One solution is to train models on the datasets separately and then ensemble them. It sounds like you might be doing this already, but aren't getting good results? If so, I don't think it's because of class imbalance.
I think that using the max_features parameter of CountVectorizer will accomplish what you are trying to do, though I don't think it's necessarily a good strategy. You will lose too much valuable data.
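For reference, a minimal sketch of max_features (toy data): it keeps only the N most frequent terms when building the vocabulary.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['the part failed', 'the part worked', 'failure in the part']
vect = TfidfVectorizer(max_features=3)
dtm = vect.fit_transform(docs)
print(vect.get_feature_names_out())  # the 3 highest-frequency terms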
My recommended strategy is not super simple, so I can't describe it briefly, but it's covered in module 5 of my online course: http://www.dataschool.io/learn/
Hope that helps!
Thanks for the suggestion. I figured if I explained the problem better, I'd get better help. I'm trying to predict whether an item will fail or not. I have a dataset with over 30 variables, one of which I'm trying to vectorize. Doing this blows that one variable up to over 7,000 features. Because of this, I run out of memory when merging them with the dataset containing the 30 other variables. Also, due to the dataset being unbalanced, the models don't train well using the two datasets independently (similar results, both about as good as random). I recently created an account on AWS and bought a powerful instance; I was able to merge the two, and still it didn't train well. My goal is to use, say, the top 20 features and merge them with the 30 other variables for training. I used dtm = fit_transform() for that one variable. Is there a way to limit the number of features to an arbitrary number, say 20, that is, the ones with the highest tf-idf scores? Or can I get them manually? Sorry for the length, and thanks for the help.
I have progressively watched your videos, from pandas to scikit-learn to this video on ML with text. All have been brilliant videos, and very nicely paced.
Kudos on that, and I hope you continue with more videos (shout out for Jupyter Notebooks ;-) ).
I have one question specific to the topic on this video.
For text analytics, the recommendation is to create a vocabulary and document-term matrix from the training data using a vectorizer (i.e. instantiate a CountVectorizer and use fit_transform).
Then use the fitted vocabulary to build a document-term matrix from the testing data (i.e. with the vectorizer fitted during training, perform a transform).
If I use TfidfVectorizer and then TruncatedSVD as shown below, is the commented step 3 the right way?
# imports needed for the steps below
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
# Step 1: perform train/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Step 2: create a tf-idf matrix and perform SVD on it.
tfidf_vectorizer = TfidfVectorizer(sublinear_tf=True, stop_words='english')
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
svd = TruncatedSVD(n_components=200, random_state=42)
X_train_svd = svd.fit_transform(tfidf_train)
# Step 3: transforming the testing data ??
# Is this the right way:
# tfidf_test = tfidf_vectorizer.transform(X_test)
# X_test_svd = svd.transform(tfidf_test)
Thanks in advance.
I have gone through a ton of videos and materials on machine learning, but this is the best: it's properly paced, makes it easy to follow and learn, and takes you inside machine learning. I am keen to know whether you will start on deep learning and TensorFlow soon? It would be really helpful for those who are confused by the overwhelming amount of material. Thanks a lot!!
Hi. It was mentioned at 1:06 that X should be 1-dimensional. What if I have 2 sets/columns of text? The 2 columns have a certain relationship, so merging them into a single column is probably not the best way.
Great question! Sometimes, merging the text columns into the same column is the best solution. Other times, you should build separate feature matrices and merge them, either using FeatureUnion or SciPy.
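As a rough sketch (the column names here are made up; ColumnTransformer is the newer scikit-learn equivalent of the FeatureUnion pattern), you can vectorize each text column separately and let scikit-learn concatenate the resulting matrices:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer

df = pd.DataFrame({'subject': ['free prize', 'lunch plans'],
                   'body': ['click here to win', 'see you at noon']})

ct = ColumnTransformer([('subject', CountVectorizer(), 'subject'),
                        ('body', CountVectorizer(), 'body')])
X = ct.fit_transform(df)  # sparse: rows x (subject vocabulary + body vocabulary)
print(X.shape)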
It is a great lecture. Even though I am new to machine learning, I understood the basics of machine learning and logistic regression. I have a doubt: can we classify into more than two groups (ham, spam, and some_other)?
I'd like to jump into the questions around 55:00 and ask:
Why don't we keep track of the order of the words in a document? The meaning of two documents containing the same words could be really different, for example "Call me "Tom"." and "Tom, call me!". Right now those two documents look exactly the same to us when vectorized as in the lecture. I thought maybe we could create a higher-dimensional matrix, represent those word combinations as vectors in space, and then fit a model on this. Would this work?
Great question! We don't keep track of word order in order to simplify the problem, and because we don't believe that word order is useful enough to justify including it. (That would add more "noise" than "signal" to our model, reducing predictive accuracy.) That being said, you can include n-grams in the model, which preserves some amount of word order and can sometimes be helpful.
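For example, a quick sketch of n-grams using the two sentences above: with ngram_range=(1, 2), the vectorizer keeps unigrams and bigrams, so the two documents no longer look identical.

from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer(ngram_range=(1, 2))
dtm = vect.fit_transform(['Call me "Tom".', 'Tom, call me!'])
print(vect.get_feature_names_out())
# ['call' 'call me' 'me' 'me tom' 'tom' 'tom call']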
Hi Kevin, that is a great video. I have one question: when I am dealing with a dataframe with a large number of rows, each row having a large text, which vectorizer will be better: TfidfVectorizer, CountVectorizer, or HashingVectorizer? I applied tf-idf, but it generates so many feature vectors that it later becomes difficult to append them to the original dataframe because of the large array size.
It's impossible to know in advance which vectorizer will work best; sometimes you just have to experiment!
Once you have generated a document-term matrix, you should not put it back in pandas. It should remain a sparse array.
Hope that helps!
Great question! The scikit-learn documentation says that "All scikit-learn classifiers are capable of multiclass classification": http://scikit-learn.org/stable/modules/multiclass.html
So yes, that should work!
Hi, does anyone know how we can extract and store the words that are thrown out during the transformation? Is there an easier way (a built-in function), other than writing Python regular expressions or string manipulation, to compare the words and feature names?
Holy crap, he can talk at a normal speed! Anyway, this series was great. I can find my way around with Python but I'm a complete beginner to data science and machine learning and I've learned a ton. I will definitely be re-watching this entire series to really grasp the material. Thanks again, keep up the good work.
THIS is some great stuff... really helpful. I am working on my final year project on the classification of cattle and wanted to use machine learning (for the facial recognition of both pets and livestock).
Never mind my previous comment; problem solved. But now I have a new one, and would be very happy if you can help me answer it! When I calculate my ham and spam frequencies, my ham counts are completely different from yours: 1.373624e-09 for very, 4.226535e-11 for nasty, 2.113267e-11 for villa, 4.226535e-11 for beloved, and 2.113267e-11 for textoperator. Any way to fix this, or has the data changed since then?
The dataset hasn't changed. Are you sure all the code you wrote was identical to my code? You can check your code here: https://github.com/justmarkham/pycon-2016-tutorial/blob/master/tutorial_with_output.ipynb
Great question! You could use a similar approach with other classification models, though the code would be a bit more complicated because you wouldn't have access to the feature_count_ and class_count_ attributes of the Naive Bayes model.
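For anyone curious, here's a toy sketch of the Naive Bayes attributes mentioned above:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ['win free cash', 'lunch today?', 'free free prize', 'meeting at noon']
labels = ['spam', 'ham', 'spam', 'ham']

vect = CountVectorizer()
nb = MultinomialNB().fit(vect.fit_transform(docs), labels)

print(nb.class_count_)  # number of training documents per class
print(nb.feature_count_)  # per-class count of each vocabulary term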
Once again, thanks a lot for the video; I've been learning a lot from this. Quick question though: can you give the full URL for the one you provided around 1:00:00? I tried both methods and neither worked! Thanks!
Here's the URL for the SMS dataset: https://raw.githubusercontent.com/justmarkham/pycon-2016-tutorial/master/data/sms.tsv
And you can find all of the code shown in the video here: https://github.com/justmarkham/pycon-2016-tutorial
Hope that helps!
Thanks for the wonderful tutorial. I just have a very basic question: we did image classification in the past and used a neural network, with a few convolutional layers and activation functions. However, I see here that you did not use any convolutional layers or activation functions. Is this because you are using a Naive Bayes classifier rather than a neural network?
Thanks in advance.
When I say that it's "150 by nothing", that really just means that it's a one-dimensional object in which the magnitude of the first dimension is 150, and there is no second dimension. That is distinct from a two-dimensional object of 150 by 1. Does that help?
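A tiny NumPy sketch of the distinction:

import numpy as np

a = np.zeros(150)  # one-dimensional: shape (150,), i.e. "150 by nothing"
b = np.zeros((150, 1))  # two-dimensional: shape (150, 1)
print(a.shape, b.shape)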
If I misunderstood your question, please let me know!
Thank you very much; I've just viewed all of your online course. I'm not really that super-duper with machine learning, but your courses certainly got me thinking and able to get scikit-learn to work, at least.
One thing I will have to research is: if your initial dataset uses classes like good/bad instead of numbers such as 1/0, how to actually get that into (I think it's) "label.map" from this video.
This video shows how to do it kind of briefly, but your "Machine learning in Python with scikit-learn" series does not cover it at all (unless I missed it somewhere).
Also, near the end of your "Machine learning in Python with scikit-learn" series, the videos become longer, which means I have to stop them more often. So maybe more breaks could help.
As I said, it's amazing what you have provided, and I'm just trying to offer some feedback, instead of just being all take.
Thanks for your feedback! Regarding your question about numeric labels, I think this video might be helpful to you: https://www.youtube.com/watch?v=P_q0tkYqvSk&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y&index=30
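For reference, here's a minimal sketch of that mapping step (hypothetical column and labels):

import pandas as pd

df = pd.DataFrame({'label': ['good', 'bad', 'good']})
df['label_num'] = df.label.map({'bad': 0, 'good': 1})  # string classes -> numbers
print(df)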
Very nice work Kevin. I suspect I did what a lot do -- jump into ML without a lot of fundamentals. My experience was after doing one of the "hello world" tutorials on ML (IRIS dataset), I immediately "wired up" my features, which were of course full of text, and crashed my model with string errors. After that crash, your video was my "back to the drawing board" trek to get some fundamentals in place and I'm now refreshed and ready to go try it again!
Question: My real-world problem is trouble tickets (documents) with a variety of "features", including some long text fields (i.e. problem description or action taken, which has sentiment in it) and some category fields that can be resolved to maybe 8 categories. I'm ultimately trying to categorize these "tickets" into about 5-6 categories (a multi-class classification problem). So, using your ham/spam email example, I have 2-3 long text fields that will need to be vectorized into DTMs (probably each with separate vocabularies), and some category feature inputs to the model. And rather than ham/spam, the model needs to predict multiple classes (i.e. 5-6 categories of tickets). I'm running into problems where the pandas dataframe has all of this, but some of it is in Object columns which don't directly produce np arrays.
Can you make any suggestions on how to approach the work? I think after spending my Saturday and Sunday with your exercise, this is how I should approach it:
1) Read the data into a pandas dataframe.
2) Count-vectorize the two long text columns into separate DTMs. Do I need to then join the arrays?
3) You mentioned that scikit-learn is not clear on whether category features have to be binarized or not. I'll figure that out. Same with the prediction classes.
4) Train the model on that.
Also, I recall in your course, you mentioned some concepts called "feature unions" and "transformers" in response to a question I could not hear. You gave some recommendations on using ensemble methods and "transformer features next to one another." This sounds like a clue to my problem. Any recommendations on how to go deeper into that area?
Of course, one of my very next steps is to sign up for your course!!
Thanks for the follow-up! Yes, I would agree that adding the categorical features to the DTM makes sense. However, you may want to append some text to the category names before adding them to the column of free-form text. For example, if the category is color, and possible values are "red" and "blue", you may want to add them to the free-form text as "colorred" and "colorblue". Why do this? Well, it's possible that seeing the word "red" in the text is a good predictor of ticket type A, and seeing the category "red" is a good predictor of ticket type B, and you want the model to be able to learn that information separately. Does that make sense?
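Here's a toy sketch of that prefixing idea (the column names are made up):

import pandas as pd

df = pd.DataFrame({'text': ['screen is red and flickering', 'wrong color shipped'],
                   'color': ['red', 'blue']})
df['combined'] = df.text + ' color' + df.color  # append a prefixed category token
print(df.combined.tolist())
# ['screen is red and flickering colorred', 'wrong color shipped colorblue']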
Wow! That is a good point, Kevin. One DTM makes a lot of sense. Would you agree even for the categorical features?
In other words, would you just mix the two text fields (one with the messy free-form text request and the other with a category field) into the same DTM, and allow the vectorization to just do its thing on two columns rather than one? I can see how that would "look" the same to the estimator, as a category is just an extension of the DTM.
Yes, I have also since found Zac Stewart's good work on feature unions and pipelines, and have even talked to him a bit about the approach. It seems like he has moved his methods on to things like the sklearn-pandas library (https://github.com/paulgb/sklearn-pandas/tree/feature_union_pipe is the PR that uses feature unions and pipelines in the code), which better supports pandas and dataframes.
In contemplating your elegantly simple approach of combining, I'm now thinking I have over-engineered this. But I did end up making this work by building parallel pipelines of features from pandas columns with multiple transformers (CountVectorizer, TfidfTransformer, and LabelBinarizer) and then feature-unioning these before inputting to the estimator. This method does simplify the learning and transforming process. But the tradeoff is that it also complicates the process of discerning which features drove the decision logic (i.e. it's hard to get features out of the complex pipeline of steps).
Your approach of combining into 1 DTM may give me the best of both worlds. Thanks for your help, and I would appreciate confirmation on putting categorical features into the single DTM.
Thanks for the detailed question! I think that for step 2, my default approach would be to combine the text fields together for each ticket before vectorizing, which would result in a single document-term matrix (DTM). In other words, you avoid combining multiple DTMs, which may not provide any additional value over a single DTM.
Regarding feature unions, here are some public resources that might be helpful to you:
Regarding my course, I think you'd get a lot of value out of it given your goals. More information is here: http://www.dataschool.io/learn/
Hope that helps, and good luck!
Anyone having audio issues: the right channel is completely out of phase with the left channel. So use something like Audio Hijack Pro and insert an audio unit between (Safari|Chrome|Firefox) and the output speakers to either duplicate the left channel or flip the right channel. Or use headphones, as your brain will sum it just fine; it just sounds left-heavy because of the Haas effect. Using speakers is a sure way to make yourself feel uncomfortable. And lastly, if you don't hear anything, it's because your device is mono, and summing the signals renders very little wave. (To the venue engineer: don't record in stereo unless you know how to record in phase.)
I ran into this problem as well. Check the ordering of X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1). If "X_train, X_test, y_train, y_test" is in a different order from this, you are most likely going to get the error.
Hi Kevin, I have several tokenized text files. I want to compare each of these text files with another text file and check the similarities or differences.
How am I able to do that using scikit-learn or NLTK?
Thanks for the great tutorial. However, several times I can't see the rightmost part of an instruction, so I can't type it, execute it, or follow the Python action. Very frustrating!
For example, at 1:06:25: from sklearn.cross_validation import train_test_split
but then I can't see the rest of the instruction, so I can't follow the next several minutes of your tutorial using Python.
Anyhow: I appreciate your tutorial... Thank you!
Hi Kevin! Thanks for that valuable presentation!
Just a question...
Is the following the right way to apply K-fold cross validation on text data?
X_train_dtm = vect.fit_transform(X_train)
scores = cross_val_score(<any classifier>, X_train_dtm, y, cv=5)
I am not totally sure if X_train_dtm and y are the correct arguments to the cross_val_score call above.
Great tutorial; really enjoyed it and loved the way you explain things :)
I have a little question. I'm working with product reviews, so using CountVectorizer I have created a binary DTM sparse matrix for each of my reviews, and created a feature vector something like <dtm-matrix, sentiment score, pos/neg tag>. I have approx 200k+ reviews and have to store the same for each of them. I have read about the "feature vector hashing" technique; how do I use that in Python, so that I can keep only a hash of the DTM rather than the actual DTM? I have no idea how to do that or how it actually works. It would be great if you could help or suggest a good tutorial.
Thanks again for this wonderful tutorial!
Thanks for your kind words! This section from the scikit-learn documentation on feature hashing might be helpful to you: http://scikit-learn.org/stable/modules/feature_extraction.html#feature-hashing
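As a quick sketch (made-up reviews): HashingVectorizer maps terms to a fixed number of columns via a hash function, so no vocabulary has to be stored at all.

from sklearn.feature_extraction.text import HashingVectorizer

vect = HashingVectorizer(n_features=2**10, alternate_sign=False)
X = vect.transform(['great product, works well', 'terrible, broke in a day'])
print(X.shape)  # (2, 1024), regardless of how many reviews you process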
It's a tricky concept! Basically, you want to simulate the real world, in which words will be seen during the testing phase that were not seen during the training phase. By splitting before vectorization, you accomplish this. Hope that helps!
I just love your videos. They are a great help, especially for a non-programmer like me trying to learn data science. They have helped me a lot in understanding all the concepts clearly in a short time, rather than reading stuff. Your videos are my go-to resource for my college work.
I want to see some content on grid search and pipelines. Also, could you please share your email? I have some more doubts.
Thanks for your kind words! I'm glad they have been helpful to you!
Regarding grid search, I cover it in video 8 of my scikit-learn series: https://www.youtube.com/watch?v=Gol_qOgRqfA&index=8&list=PL5-da3qGB5ICeMbQuqbbCOQWcS6OYBr5A
Regarding pipeline, I cover it in modules 4 and 5 of my online course: http://www.dataschool.io/learn/ (You can also find my email address on this page.)
Hope that helps!
Wonderful set of videos. I have started my ML journey with these videos. Now gonna go deeper and practise more and more.
Thanks Kevin for the best possible head start.
A beginner Data Scientist.
Yes, you could use cross-validation instead. However, to do cross-validation properly, you have to also use a pipeline so that the vectorization takes place during cross-validation, rather than before it. Hope that helps!
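Here's a minimal sketch of that pattern with toy data (names made up): the vectorizer sits inside the pipeline, so its vocabulary is re-learned on each training fold.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

docs = ['win free cash', 'lunch today?', 'free prize now',
        'meeting at noon', 'claim your free reward', 'see you tomorrow']
labels = ['spam', 'ham', 'spam', 'ham', 'spam', 'ham']

pipe = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(pipe, docs, labels, cv=3)  # raw text goes in, not a DTM
print(scores)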
You are indeed a "GURU" who can train and share knowledge in the true sense.
I'm a non-technical person, but I'm learning Python and scikit-learn for my research, and this video has taken my understanding to a higher level in just 3 hours... THANK YOU VERY MUCH Kevin!!! Can you please recommend some links where I can learn more about short-text sentiment analysis using machine learning in Python, especially the feature engineering aspect, like using POS tags or word embeddings as features... Thanks again...
You are very welcome! Regarding recommended links, I think this notebook might be helpful to you: http://nbviewer.jupyter.org/github/skipgram/modern-nlp-in-python/blob/master/executable/Modern_NLP_in_Python.ipynb
Thank you for the resource.
I have a question.
In real life, the instantiation of the CountVectorizer class can fail if the volume of input text is BIG (e.g. I want to encode a big number of text files). Did that ever happen to you?
I haven't had that happen, but if it did, it should happen during the 'fit' stage rather than during the instantiation of the class. In any case, HashingVectorizer is designed to deal with very large vocabularies: http://scikit-learn.org/stable/modules/feature_extraction.html#vectorizing-a-large-text-corpus-with-the-hashing-trick
Hope that helps!
Great video. The problem with the audio is that the channels are the inverse of each other, so on mono devices where the L and R channels are summed together, they completely nullify the output signal. I don't know of a work-around except to listen using a 2-channel system
Wow! Thanks for the explanation. How did you figure that out? I spent probably an hour with the A/V people at the conference as they tried to figure out the problem, and they never came up with any clear explanation.
Hi, thanks for the video! Do you know if it's possible to supply each article to CountVectorizer as a list of already-created features (for example, noun phrases or verb-noun combinations) rather than the raw article, from which CountVectorizer would usually extract n-grams? Thanks!
From the CountVectorizer documentation, it looks like you can define the vocabulary used by overriding the 'vocabulary' argument: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
However, it's not clear to me if that will work when using a vocabulary containing phrases rather than single words.
Try it out, and let me know if you are able to get it to work!
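Here's a toy sketch suggesting it can work, as long as ngram_range covers the phrase lengths in the vocabulary (treat this as an assumption to verify, not a guarantee):

from sklearn.feature_extraction.text import CountVectorizer

vocab = ['noun phrase', 'verb noun', 'single']
vect = CountVectorizer(vocabulary=vocab, ngram_range=(1, 2))
X = vect.fit_transform(['a noun phrase and a single verb noun here'])
print(X.toarray())  # [[1 1 1]]: one count per vocabulary entry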
Hands down the best machine learning presentation I've seen thus far. Definitely looking forward to enrolling in your course once I'm done with your other free intro material. I think what sold me is how you've focused ~3 hours on a specific ML approach (supervised learning) to a common domain (text analysis). Other ML intros try to fit classification/regression/clustering all into 3 hours, which becomes too superficial a treatment. Anyway, bravo and keep up the great work!
Wow, thank you so much! What you're describing was exactly my goal with the tutorial, so I'm glad it met your needs!
For others who are interested, here's a link to my online course: http://www.dataschool.io/learn/
Glad you liked it! Yes, that audio problem affects some devices and browsers, especially mobile devices. It's caused by the audio encoding of the original recording. I tried to fix it, but didn't come up with any solutions. I'm sorry!
Sure, LabelEncoder is useful as long as you are encoding labels (also known as "response values" or "target values") or binary categorical features. If you are using it to encode categorical features with more than 2 levels, you'll want to think carefully about whether it's an appropriate encoding strategy.
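A small sketch of the distinction (toy data): LabelEncoder for targets, one-hot encoding for multi-level categorical features, since integer codes imply an ordering the model may take literally.

import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

y = LabelEncoder().fit_transform(['ham', 'spam', 'ham'])  # -> [0 1 0]
colors = np.array([['red'], ['blue'], ['green']])
X = OneHotEncoder().fit_transform(colors)  # one column per category level
print(y)
print(X.toarray())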
Great material! I had been working on my own machine learning model before I learned about sklearn, and now I have discovered feature extraction from text, which I had coded myself :)
Happy and sad situation. Happy to find a useful tool; sad that I spent so much time coding my own feature extractor :)
Thanks for the video! Great job!