
Machine Learning with Text in scikit-learn (PyCon 2016)

Although numeric data is easy to work with in Python, most knowledge created by humans is actually raw, unstructured text. By learning how to transform text into data that is usable by machine learning models, you drastically increase the amount of data that your models can learn from. In this tutorial, we'll build and evaluate predictive models from real-world text using scikit-learn. (Presented at PyCon on May 28, 2016.)

GitHub repository: https://github.com/justmarkham/pycon-2016-tutorial
Enroll in my online course: http://www.dataschool.io/learn/

== OTHER RESOURCES ==
My scikit-learn video series: https://www.youtube.com/playlist?list=PL5-da3qGB5ICeMbQuqbbCOQWcS6OYBr5A
My pandas video series: https://www.youtube.com/playlist?list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y

== LET'S CONNECT! ==
Newsletter: https://www.dataschool.io/subscribe/
Twitter: https://twitter.com/justmarkham
Facebook: https://www.facebook.com/DataScienceSchool/
LinkedIn: https://www.linkedin.com/in/justmarkham/
YouTube: https://www.youtube.com/user/dataschool?sub_confirmation=1

JOIN the "Data School Insiders" community and receive exclusive rewards: https://www.patreon.com/dataschool
Text Comments (210)
sosscs (2 days ago)
print false positive: 1:38:01
Rahul Bhatia (1 month ago)
Is it still relevant in 2019? Thanks for letting me know
Data School (5 days ago)
Absolutely still relevant! However, there are some changes to the scikit-learn API that are useful to know about: https://www.dataschool.io/how-to-update-your-scikit-learn-code-for-2018/
Karthik Udupa (2 months ago)
Thanks a lot Kevin
Data School (2 months ago)
You're welcome!
Bin Yu (2 months ago)
This is a great, great tutorial and in depth explanation on many related topics! Thanks so much!
Data School (2 months ago)
You're very welcome!
Payal Bhatia (3 months ago)
@Data School, Again and Again you are the best Kevin. I was scared of the text analytics and web scraping. You can teach in such an intuitive and lucid way. Thanks a ton
Data School (2 months ago)
Thanks very much for your kind words!
Mohini K (3 months ago)
I need to test a Pega system build along with Python for machine learning. I am an automation tester but need to do AI testing. Can you please guide me on how to go about it?
Data School (2 months ago)
I won't be able to help, I'm sorry!
Zilong Liu (4 months ago)
1.25 speed perfect
anuja silampur (4 months ago)
In my case, the shape of X_train and X_train_dtm is different, and I'm getting "ValueError: Found input variables with inconsistent numbers of samples: [25, 153]" at fit. Please help!
Data School (4 months ago)
It's hard for me to say what is going wrong... good luck!
prakhar sahu (5 months ago)
Great video. I would like to know if you will be doing videos on tokenizing, stemming, lemmatizing, and other core NLP techniques.
Data School (4 months ago)
You might be interested in my course, Machine Learning with Text in Python: https://www.dataschool.io/learn/
Torakashi (5 months ago)
I really enjoy your structured approach to teaching these classes :)
Data School (5 months ago)
Thanks! You should check out my online course: https://www.dataschool.io/learn/
Priyanka P (6 months ago)
Hey, you have used 2 classes for classification, right? What if I need more than 2 classes, e.g. contempt, depression, anger, joy, and many such emotions? Do I need to change any of the code here, or is providing a dataset with multiple classes enough? And I have one more doubt: once the model is built and trained, how can I actually know which class a new text document supplied as input belongs to? E.g., whether the new document is ham or spam?
Priyanka P (4 months ago)
+Data School Thanks a lot. This lecture was very helpful for me. I love the way you teach. Great teacher :)
Data School (6 months ago)
1. Most of the time, you don't need to modify your scikit-learn code for multi-class classification. 2. Use the predict method. Hope that helps! You might be interested in my course: https://www.dataschool.io/learn/
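To make both points concrete, here is a minimal sketch (the emotion labels, example texts, and choice of MultinomialNB are invented for illustration, not from the video): the same fit/predict code works unchanged for three classes, and calling predict on a transformed new document answers the second question.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# toy training data with three emotion classes instead of two
train_text = ["I am so happy today", "this is making me angry",
              "I feel sad and alone", "what a joyful surprise",
              "stop it, you make me furious", "everything feels hopeless"]
train_labels = ["joy", "anger", "sadness", "joy", "anger", "sadness"]

# learn the vocabulary and build the document-term matrix
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(train_text)

# the exact same code as binary classification: no changes needed
nb = MultinomialNB()
nb.fit(X_train_dtm, train_labels)

# to classify a new document: transform (not fit_transform), then predict
new_dtm = vect.transform(["I am very happy"])
prediction = nb.predict(new_dtm)[0]
```

The only thing that changed versus the ham/spam example is the label column itself.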
Eugeny Dolgy (6 months ago)
Great video!
Data School (6 months ago)
Thanks!
Jayanth Kumar (8 months ago)
Your videos just feel so friendly and inclusive, while being really educational. Your way of teaching is great. I thank you sincerely!
Data School (7 months ago)
Thanks very much for your kind words! You are very welcome!
Anjan Gurung (8 months ago)
Thank you so much for this video. It cleared all the doubts I had. Thank you again!
Data School (8 months ago)
You're very welcome!
Rayudu yarlagadda (9 months ago)
Awesome video! Would you please make videos on performance metrics, featurization, and feature engineering?
Data School (6 months ago)
I wrote a blog post about feature engineering: https://www.dataschool.io/introduction-to-feature-engineering/
Data School (8 months ago)
Thanks for your suggestions!
jun dou (9 months ago)
Two questions about the bag of words which have obsessed me for a while. First question: my source file has 2 columns, one is the email content (text format), the other is the country name (3 different countries) from which the email was sent, and I want to label whether the email is spam or not. The assumption is that the country an email is sent from also matters for whether it is spam. So besides the bag of words, I want to add a country feature. Is there a way to implement that in sklearn? The other question: besides the bag of words, what if I also want to consider the position of the words? For instance, if a word appears in the first sentence I want to lower its weight, and if it appears in the last sentence I want to increase its weight. Is there a way to implement that in sklearn? Thanks.
Data School (8 months ago)
Great questions! 1. Use FeatureUnion, or combine the two columns together and use CountVectorizer on the combined column. 2. You would write custom code to do this.
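A tiny sketch of the second option (the column names and data are hypothetical): concatenating the country code onto the email text lets CountVectorizer treat the country as just another token in the bag of words.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical data: email text plus the sending country
df = pd.DataFrame({
    "text": ["win a free prize now", "meeting at noon tomorrow"],
    "country": ["XX", "YY"],
})

# combine the two columns so the country code becomes one more token
df["combined"] = df["text"] + " " + df["country"]

# lowercase=False keeps the country codes distinct from ordinary words
vect = CountVectorizer(lowercase=False)
dtm = vect.fit_transform(df["combined"])
```

FeatureUnion is the more structured alternative when the columns need separate preprocessing.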
A. Brantley (9 months ago)
I can't wait till I have watched enough of your content to start on your courses.
Data School (9 months ago)
Great! :)
Saurabh Singh (10 months ago)
Excellent video. Thank you so much, Kevin Sir; it really helped me a lot.
Saurabh Singh (10 months ago)
Data School Sir, I dropped an email a few days back from the ID [email protected]. Could you please go through it and let me know?
Data School (10 months ago)
You're welcome!
Drtuts com (10 months ago)
Thanks for the detailed information, Is that possible to use Multidimensional?
Data School (10 months ago)
I'm sorry, I don't understand your question. Could you clarify? Thanks!
Deepansh Nagaria (1 year ago)
Sir, the video series was a great learning experience. Can you suggest the algorithms, in descending order of their accuracies, for a model to detect emotions from text data?
Data School (1 year ago)
I don't have any resources to recommend, I'm sorry!
Data School (1 year ago)
It is impossible to know what algorithm will work best in advance of trying it out!
Watching this at 1.5x speed, and it's still understandable.
Data School (1 year ago)
Great!
Anton (1 year ago)
The audio makes this video unbearable
Data School (1 year ago)
Sorry! I wish they didn't have audio problems when recording it. Here's a shorter video with a lot of the same content: https://www.youtube.com/watch?v=vTaxdJ6VYWE
KurzedMetal (1 year ago)
Using the 1.5x speed YT feature is perfect for this video :) I'm halfway through the video so far, and I'm enjoying it a lot. Kudos to the presenter.
Nureyn A (5 months ago)
I did the same from video 1. I have just used 3 days to practice everything, and I really enjoyed the show :)
Data School (1 year ago)
Glad you are enjoying it! :)
Aparna Saripaka (1 year ago)
Great video. We have scanned table data converted into text which is not formatted. Could you give us a suggestion for extracting the required information from that data? It would be very helpful.
Data School (1 year ago)
Sorry, the solution has to be customized to fit the exact text structure. Good luck!
R J (1 year ago)
Thanks a lot for the course. Very powerful indeed. Is there a way to create a dataframe with say the top 20 features? Thanks again
R J (1 year ago)
Data School thanks a lot. I will definitely watch that recommended video and keep playing with it
Data School (1 year ago)
The vectorization is creating a sparse matrix, which is quite memory efficient. It sounds like the problem is that you are merging a sparse matrix with a dense matrix, which forces the sparse matrix to become dense, which would definitely create memory problems. One solution is to train models on the datasets separately and then ensemble them. It sounds like you might be doing this already, but aren't getting good results? If so, I don't think it's because of class imbalance. I think that using the max_features parameter of CountVectorizer will accomplish what you are trying to do, though I don't think it's necessarily a good strategy. You will lose too much valuable data. My recommended strategy is not super simple, so I can't describe it briefly, but it's covered in module 5 of my online course: http://www.dataschool.io/learn/ Hope that helps!
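For reference, a minimal illustration of the max_features parameter mentioned above (toy documents, not the SMS data): it keeps only the terms with the highest corpus-wide frequency and discards everything else.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "the cat chased the dog"]

# limit the vocabulary to the 3 most frequent terms across the corpus
vect = CountVectorizer(max_features=3)
dtm = vect.fit_transform(docs)
```

As noted above, capping the vocabulary this aggressively usually throws away valuable signal, so treat it as a memory workaround rather than a modeling strategy.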
R J (1 year ago)
Thanks for the suggestion. I figured if I explained the problem better I'd get better help. I'm trying to predict whether an item will fail or not. I have a dataset with over 30 variables, one of which I'm trying to vectorize. Doing this blows that one variable up to over 7,000 features. Because of this I run out of memory when merging them into the dataset containing the 30 other variables. Also, due to the dataset being unbalanced, the models don't train well using the two datasets independently (similar results, both about as good as random). I recently created an account on AWS and bought a powerful instance; I was able to merge the two and still it didn't train well. My goal is to use, say, the top 20 features and merge them with the 30 other variables to train. I used dtm=fit_transform() for that one variable. Is there a way to limit the number of features to an arbitrary number, say 20, that is, the ones with the highest tf-idf scores? Or can I manually get them? Sorry for the length, and thanks for the help.
Data School (1 year ago)
Glad you liked it! Regarding your question, is this what you are looking for? df = tokens.head(20).copy()
Santosh Kumar (1 year ago)
Hello Kevin, I have progressively watched your videos from pandas to scikit-learn to this video on ML with text. All have been brilliant videos, very nicely paced. Kudos on that, and I hope you continue with more videos (shout out for Jupyter Notebooks ;-) ). I have one question specific to the topic of this video. For text analytics, the recommendation is to create a vocabulary and document-term matrix of the train data using a vectorizer (i.e. instantiate a CountVectorizer and use fit_transform), then use the fitted vocabulary to build a document-term matrix from the testing data (i.e. with the vectorizer used during training, perform a transform). If I use TfidfVectorizer and then TruncatedSVD as shown below, is the commented step 3 the right way?

# Step 1: perform train/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Step 2: create a tf-idf matrix and perform SVD on it.
tfidf_vectorizer = TfidfVectorizer(sublinear_tf=True, stop_words='english')
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
svd = TruncatedSVD(n_components=200, random_state=42)
X_train_svd = svd.fit_transform(tfidf_train)

# Step 3: transforming testing data??
# Is this the right way:
# tfidf_test = tfidf_vectorizer.transform(X_test)
# X_test_svd = svd.transform(tfidf_test)

Thanks in advance.
Data School (1 year ago)
Thanks for your very kind comments, I appreciate it! Regarding your question, I'm not really familiar with TruncatedSVD, so I'm not able to say. Good luck!
Jhonatan (1 year ago)
For some reason I can't do vect.transform without doing vect.fit
Jhonatan (1 year ago)
Oh, I figured it out, haha
Anusha James (1 year ago)
I have gone through a ton of videos and materials on machine learning, but this is the best: it's properly paced, makes it easy to follow and learn, and really takes you inside machine learning. I am keen to know whether you will start on deep learning and TensorFlow soon? It would be really helpful for those who are confused by the overwhelming amount of material. Thanks a lot!!
Data School (1 year ago)
So glad to hear that my videos have been helpful to you! As far as deep learning, I don't have any upcoming videos or courses planned, but it is certainly under consideration.
im18already (1 year ago)
Hi. It was mentioned at 1:06 that X should be 1-dimensional. What if I have 2 columns of text? The 2 columns have a certain relationship, so merging them into a single column is probably not the best approach.
Data School (1 year ago)
Great question! Sometimes, merging the text columns into the same column is the best solution. Other times, you should build separate feature matrices and merge them, either using FeatureUnion or SciPy.
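A short sketch of the SciPy route mentioned above (toy data): build a separate document-term matrix per text column, then stack them horizontally with scipy.sparse.hstack, which keeps everything sparse.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer

# two hypothetical text columns for the same two documents
text_a = ["hello world", "machine learning"]
text_b = ["spam detection", "text data"]

# fit a separate vectorizer per column, so each keeps its own vocabulary
vect_a = CountVectorizer()
vect_b = CountVectorizer()
dtm_a = vect_a.fit_transform(text_a)
dtm_b = vect_b.fit_transform(text_b)

# stack the two sparse matrices side by side without densifying
combined = hstack([dtm_a, dtm_b])
```

The combined matrix has one row per document and the features of both columns, and can be passed straight to a scikit-learn model.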
Donovan Keating (1 year ago)
Really good talk. Very easy to follow. Thank you for sharing! :)
Data School (1 year ago)
You're very welcome!
Deepika Dowluri (1 year ago)
Hi Kevin, it is a great lecture. Even though I am new to machine learning, I understood the basics of machine learning and logistic regression. I have a doubt: can we classify into more than two groups (ham, spam, and some other)? Thank you.
Data School (1 year ago)
Great to hear! Regarding your question, you can classify into more than two categories - it's called multi-class classification. scikit-learn does support that. Hope that helps!
Takbir Hossain Tushar (1 year ago)
Dear sir, please tell me how I can build a prediction model for more than 2 classes (like 3 or 4) using the same approach.
Tulasi Jamun (1 year ago)
Please read up on OnevsOne and OnevsAll classifiers to answer your question.
Data School (1 year ago)
Most scikit-learn classification models inherently support multi-class prediction. So, the process is exactly the same!
bhanu lekhala (1 year ago)
Clear and awesome. Thanks for sharing
Data School (1 year ago)
You're welcome!
Slesa Adhikari (1 year ago)
So very helpful. Thanks Kev!
Data School (1 year ago)
You're welcome!
chris demchalk (1 year ago)
Any recommendation for a multi-label classification example where there is a high number (>200) of potential classes?
Data School (1 year ago)
I recommend reducing the complexity of the problem by reducing the number of classes.
Ibtsam Gujjar (1 year ago)
Just wanna thank you for the awesome series. I am new to machine learning and you are one of my first and favorite teacher in this journey :)
Data School (1 year ago)
You are very welcome! Good luck on your journey! :)
Rainer Wahnsinn (1 year ago)
I'd like to jump in on the questions around 55:00 and ask: why don't we keep track of the order of the words in a document? The meaning of two documents containing the same words could be really different, for example "Call me "Tom"." and "Tom, call me!". Right now those two documents look exactly the same to us when vectorized like in the lecture. I thought maybe we could create a higher-dimensional matrix and represent those word combinations as vectors in space, and then fit a model on this. Would this work?
Data School (1 year ago)
Great question! We don't keep track of word order in order to simplify the problem, and because we don't believe that word order is useful enough to justify including it. (That would add more "noise" than "signal" to our model, reducing predictive accuracy.) That being said, you can include n-grams in the model, which preserves some amount of word order and can sometimes be helpful.
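A quick sketch of the word-order point, using the two example sentences from the question: with unigrams alone the two vectors are identical, while adding bigrams (the n-grams mentioned above) makes them differ.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ['Call me "Tom".', "Tom, call me!"]

# unigrams alone: both sentences contain the same word counts,
# so they vectorize to identical rows
uni = CountVectorizer(ngram_range=(1, 1))
uni_dtm = uni.fit_transform(docs).toarray()

# adding bigrams preserves some word order ("call me" vs "tom call"),
# so the two rows now differ
bi = CountVectorizer(ngram_range=(1, 2))
bi_dtm = bi.fit_transform(docs).toarray()
```

The cost of n-grams is a much larger, sparser feature space, which is part of the noise-vs-signal tradeoff described above.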
vivek athilkar (1 year ago)
great learning experience
Data School (1 year ago)
Thanks!
Biswajit Patowary (1 year ago)
Hi Kevin, that is a great video. I have one question: when I am dealing with a dataframe with a large number of rows, each row having long texts, which text vectorizer will be better: TfidfVectorizer, CountVectorizer, or HashingVectorizer? I applied tf-idf, but it generates many feature vectors, which later become difficult to append to the original dataframe because of the large array size.
Data School (1 year ago)
It's impossible to know in advance which vectorizer will work best, sometimes you just have to experiment! Once you have generated a document-term matrix, you should not put it back in pandas. It should remain a sparse array. Hope that helps!
Zee Man (1 year ago)
Kev, has anyone ever told you that you remind them of Sheldon Cooper? Keep up the great work btw
Data School (1 year ago)
Ha! I have heard that a few times recently :) Glad you like the videos!
SVV (1 year ago)
Can we use Naive Bayes to classify text into more than just 2 or 3 categories (potentially 10+ categories)?
Data School (1 year ago)
Great question! The scikit-learn documentation says that "All scikit-learn classifiers are capable of multiclass classification": http://scikit-learn.org/stable/modules/multiclass.html So yes, that should work!
stefanos (1 year ago)
Hi Kevin, excellent presentation! I would like to ask you a question. How can "tokens_ratio" improve the accuracy score of Naive Bayes model?
Data School (1 year ago)
Glad you liked it! tokens_ratio was just a way to understand the model - it won't actually help the model to become better.
Pham Binh (1 year ago)
Thanks Kevin for your great presentation, as always. I think it would be great if the presentation included feature selection, i.e. the chi-squared test...
Data School (1 year ago)
Thanks for the suggestion! I'll consider that for future videos.
Amos Munezero (1 year ago)
Hi, does anyone know how we can extract and store the words that are thrown out during the transformation? Is there an easier way (a built-in function) other than writing Python regular expressions or text manipulation to compare the words and feature names? Thanks.
Data School (1 year ago)
Great question! I don't know of a simple way to do this, but perhaps someone else here knows...
Yuan Xiang (1 year ago)
That's a great tutorial. Just a quick question: if I were to apply SVM, random forest, or latent Dirichlet allocation instead of Naive Bayes, would the input data still be in document-term matrix form?
Data School (1 year ago)
I'm not sure for LDA, but for SVM and Random Forests, yes, the input format would be the same.
Spas (1 year ago)
great video and the information was very clearly presented. Good work!
Data School (1 year ago)
Thanks!
mmmBurekNJAMNJAM (1 year ago)
Holy crap, he can talk at a normal speed! Anyway, this series was great. I can find my way around with Python but I'm a complete beginner to data science and machine learning and I've learned a ton. I will definitely be re-watching this entire series to really grasp the material. Thanks again, keep up the good work.
Data School (1 year ago)
HA! Yes, that's my normal talking speed :) Glad you liked the series - I appreciate your comment!
Deyun Yin (1 year ago)
I like you and your videos very much. Hope you could develop a more detailed course on scikit-learn and Deep Learning (tensorflow)
Data School (1 year ago)
Thanks for the suggestion! I'll definitely consider it for the future! Subscribing to my newsletter is a great way to hear when I release new courses: http://www.dataschool.io/subscribe/
Phumzile Mathonsi (1 year ago)
THIS is some great stuff... really helpful. I am working on my final year project: the classification of cattle. I wanted to use machine learning (for the facial recognition of both pets and livestock).
Data School (1 year ago)
Very cool project! So glad to hear that the video was helpful to you!
cartoonjerk (1 year ago)
Nevermind my previous comment, problem solved. But now I have a new one and would be very happy if you can help me answer it! When I calculate my ham and spam frequencies, my ham count is completely different than yours. It reads: 1.373624e-09 for very, 4.226535e-11 for nasty, 2.113267e-11 for villa, 4.226535e-11 for beloved, and 2.113267e-11 for textoperator. Any way to fix this or has the data changed since then?
Data School (1 year ago)
The dataset hasn't changed. Are you sure all the code you wrote was identical to my code? You can check your code here: https://github.com/justmarkham/pycon-2016-tutorial/blob/master/tutorial_with_output.ipynb
Mac Pc (1 year ago)
Is it possible to calculate spamminess and haminess irrespective of the classifier used?
Data School (1 year ago)
Great question! You could use a similar approach with other classification models, though the code would be a bit more complicated because you wouldn't have access to the feature_count_ and class_count_ attributes of the Naive Bayes model.
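As a sketch of the Naive Bayes version described above (toy data; the +1 is added to avoid division by zero, similar in spirit to the tutorial, though the exact numbers here are only illustrative): feature_count_ and class_count_ give per-class token counts from which a per-token spamminess ratio can be computed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win cash now", "free prize win",
         "meet for lunch", "lunch tomorrow ok"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vect = CountVectorizer()
dtm = vect.fit_transform(texts)
nb = MultinomialNB().fit(dtm, labels)

# feature_count_: raw count of each token per class
# (row order follows nb.classes_, here [0, 1] = [ham, spam])
ham_counts = nb.feature_count_[0]
spam_counts = nb.feature_count_[1]

# normalize by observations per class, with +1 to avoid division by zero
ham_freq = (ham_counts + 1) / nb.class_count_[0]
spam_freq = (spam_counts + 1) / nb.class_count_[1]
spam_ratio = spam_freq / ham_freq  # > 1 means "spammier" token
```

With another classifier you would have to recompute these counts yourself from the document-term matrix and the labels, since those attributes are specific to Naive Bayes.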
cartoonjerk (1 year ago)
Once again thanks a lot for the video, been learning a lot from this. Quick question though, can you give the full url for the one you provided around 1:00:00? I tried both methods and none worked! Thanks!
Data School (1 year ago)
Here's the URL for the SMS dataset: https://raw.githubusercontent.com/justmarkham/pycon-2016-tutorial/master/data/sms.tsv And you can find all of the code shown in the video here: https://github.com/justmarkham/pycon-2016-tutorial Hope that helps!
Puneet Jain (1 year ago)
Hi Kevin, Thanks for the wonderful tutorial. I just have a very basic question - We did image classification in past and used neural network. There we used few convolutional layers and activation function. However I see here that you did not use any convolutional layers and activation function. Is this because you are using naive bayes classifier not neural network classifier algorithms?  Thanks in advance.
Data School (1 year ago)
That's correct! Naive Bayes does not involve any layers or an activation function.
Itube (1 year ago)
This is just a minor point, but how come y is 150 by nothing when it's a vector?
Data School (1 year ago)
When I say that it's "150 by nothing", that really just means that it's a one-dimensional object in which the magnitude of the first dimension is 150, and there is no second dimension. That is distinct from a two-dimensional object of 150 by 1. Does that help? If I misunderstood your question, please let me know!
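A quick illustration of that distinction in NumPy:

```python
import numpy as np

# a one-dimensional object: shape is (150,), i.e. "150 by nothing";
# there is no second dimension at all
y = np.zeros(150)

# a distinct two-dimensional object with one column: shape is (150, 1)
y_2d = y.reshape(-1, 1)
```

scikit-learn expects the target y in the first form and the feature matrix X in the second.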
Ankit Sharma (1 year ago)
I am doing a college project on machine learning. It was very helpful. Thank you
Data School (1 year ago)
You're welcome!
Bozo Jimmy (2 years ago)
tip: you can run this at 1.25X
Data School (1 year ago)
Thanks for sharing!
Yvette Kondoh (2 years ago)
I really like your systematic style of teaching. Thanks for this great resource Kevin!
Data School (1 year ago)
Thanks so much! I'm glad it's helpful to you!
Ben Ben (2 years ago)
Thank you very much, I just viewed all of your online course. I'm not really that super duper with machine learning, but your courses certainly got me thinking and able to get scikit to work at least. One thing I will have to research: if your initial dataset uses classes like good/bad etc. instead of numbers such as 1/0, how to actually get that into the (I think it's) "label.map" from this video. This video shows how to do it briefly, but your "Machine learning in Python with scikit-learn" series does not cover it at all (unless I missed it somewhere). Also, near the end of that series the videos become longer, which means I have to stop them more often, so maybe more breaks could help. As I said, it's amazing what you have provided, and I'm just trying to offer some feedback instead of just being all take.
Data School (2 years ago)
Thanks for your feedback! Regarding your question about numeric labels, I think this video might be helpful to you: https://www.youtube.com/watch?v=P_q0tkYqvSk&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y&index=30
sudhiir reddy (2 years ago)
Thanks a Lot for this resource...Hoping to see more videos like this
Data School (2 years ago)
You're welcome! Glad it was helpful to you.
Lee Prevost (2 years ago)
Very nice work Kevin. I suspect I did what a lot do: jump into ML without a lot of fundamentals. My experience was, after doing one of the "hello world" tutorials on ML (the IRIS dataset), I immediately "wired up" my features, which were of course full of text, and crashed my model with string errors. After that crash, your video was my "back to the drawing board" trek to get some fundamentals in place, and I'm now refreshed and ready to try again! Question: my real-world problem is trouble tickets (documents) with a variety of "features", including some long text fields (i.e. problem description or action taken, which has sentiment in it) and some category fields which can be resolved to maybe 8 categories. I'm ultimately trying to categorize these tickets into about 5-6 categories (a multi-class classification problem). So, using your ham/spam email example, I have 2-3 long text fields that will need to be vectorized to DTMs (probably each with separate vocabularies), plus some category feature inputs to the model. And rather than ham/spam, the model needs to predict multiple classes (i.e. 5-6 categories of tickets). I'm running into problems where the pandas frame has all this but keeps some of it in Object columns, which don't directly produce np arrays. Can you make any suggestions on how to approach the work? After spending my Saturday and Sunday with your exercise, I think this is how I should approach it: 1) Read data into a pandas dataframe. 2) CountVectorize the two long text columns into separate DTMs. Do I then need to join the arrays? 3) You mentioned that scikit is not clear on whether category features have to be binarized or not; I'll figure that out. Same with the prediction classes. 4) Train the model on that. Also, I recall in your course you mentioned some concepts called "feature unions" and "transformers" in response to a question I could not hear.
You gave some recommendations on using ensemble methods and "transformer features next to one another." This sounds like a clue to my problem. Any recommendations on how to go deeper into that area? Of course, one of my very next steps is to sign up for your course!!
Data School (2 years ago)
Thanks for the follow-up! Yes, I would agree that adding the categorical features to the DTM makes sense. However, you may want to append some text to the category names before adding them to the column of free-form text. For example, if the category is color, and possible values are "red" and "blue", you may want to add them to the free-form text as "colorred" and "colorblue". Why do this? Well, it's possible that seeing the word "red" in the text is a good predictor of ticket type A, and seeing the category "red" is a good predictor of ticket type B, and you want the model to be able to learn that information separately. Does that make sense?
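A minimal sketch of that suggestion (the ticket data and column names are hypothetical): prefixing the category value before appending it keeps the category token distinct from the same word appearing in the free-form text.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical tickets: free-form text plus a "color" category
df = pd.DataFrame({
    "text": ["the red light keeps blinking", "screen goes blank"],
    "color": ["red", "blue"],
})

# prefix the category value so "colorred" is a different token from
# the word "red" in the free-form text, letting the model learn
# their predictive value separately
df["combined"] = df["text"] + " color" + df["color"]

vect = CountVectorizer()
dtm = vect.fit_transform(df["combined"])
```

After this, both the free-form words and the category tokens live in one DTM, as suggested above.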
Lee Prevost (2 years ago)
Wow! that is a good point Kevin. One DTM makes a lot of sense. Would you agree even for the categorical features? IN other words, would you just mix the two text fields -- one with the messy free form text request and the other with a category field into the same DTM and allow the vectorization just to do it thing on two columns rather than just one? I could see how that would "look" the same to the estimator as a category is just an extension on the DTM yes, I also have since found Zac Stewart's good work on feature unions and pipelines and have even talked to him a bit about the approach. it seems like he has moved his methods onto using things like the Sklearn-pandas library (https://github.com/paulgb/sklearn-pandas/tree/feature_union_pipehttps://github.com/paulgb/sklearn-pandas/tree/feature_union_pipethe -- the PR that uses feature unions and pipelines in the code) which better supports pandas and data frames. In contemplating your elegant simple approach of combining, I'm now thinking I have over engineered this. But I did end up making this work by building parallel pipes of features from pandas columns with multiple transformers (countvectorizer, tfidftransformer, and labelbinarizer) and then feature joined these before inputting to the estimator. this method does simplify the learning and transforming process. But, the tradeoff is that it does also complicate the process of being able to discern what features drove the decision logic (ie. hard to get features from the complex pipeline of steps). Your approach of combining to 1 DTM may give me best of both worlds. thanks for your help and would appreciate confirmation on putting categorical features into the single DTM
Data School (2 years ago)
Thanks for the detailed question! I think that for step 2, my default approach would be to combine the text fields together for each ticket before vectorizing, which would result in a single document-term matrix (DTM). In other words, you avoid combining multiple DTMs, which may not provide any additional value over a single DTM. Regarding feature unions, here are some public resources that might be helpful to you: http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html Regarding my course, I think you'd get a lot of value out of it given your goals. More information is here: http://www.dataschool.io/learn/ Hope that helps, and good luck!
Navkiran Kaur (2 years ago)
ValueError: multiclass format is not supported I am getting this error when i am running the auc score
Data School (2 years ago)
Are you using the same dataset as me, or your own dataset?
Jason Cox (2 years ago)
Anyone having audio issues: the right channel is completely out of phase with the left channel. So use something like Audio Hijack Pro and insert an audio unit between (safari|chrome|firefox) and the output speakers to either duplicate the left channel or flip the right channel. Or use headphones, as your brain will sum it just fine; it only sounds left-heavy because of the Haas effect. Using speakers is a sure way to make yourself feel uncomfortable, and lastly, if you don't hear anything, it's because your device is mono and summing the signals renders very little wave. (To the venue engineer: don't record in stereo unless you know how to record in phase.)
michael akatwijuka (1 year ago)
Sad, the audio could not work... I am stranded too.
Data School (2 years ago)
You're very welcome! Thanks for your kind comments, and I'm glad you have enjoyed the videos!
Jason Cox (2 years ago)
Right on, hopefully it helps someone else. Took me a while to figure out how to flip the channel. By the way, you videos are great man. Thanks so much for them!
Data School (2 years ago)
Thanks for the suggestions and the technical explanation! I talked with the audio engineers at the conference numerous times, and they were never able to explain the source of the problem!
Stephen Iezzi (2 years ago)
To fix the audio issue on iPhone use headphones and in settings turn off mono audio (general>>>accessibility, then scroll down to hearing)
Data School (2 years ago)
Thanks for sharing that solution!
Pankaj Nayak (2 years ago)
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm.shape

AttributeError: 'numpy.int64' object has no attribute 'lower'
Yvette Kondoh (2 years ago)
I run into this problem as well. Check the ordering of X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1). If "X_train, X_test, y_train, y_test" is in a different order from this, you are most likely going to get the error.
Data School (2 years ago)
I'm not able to evaluate the cause of this error without knowing what steps took place before this line of code, and with what dataset. Good luck!
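One common cause of that error, as noted in the reply above, is unpacking train_test_split in the wrong order; a sketch with dummy data:

```python
from sklearn.model_selection import train_test_split

X = ["some text"] * 10  # features: the text column
y = [0, 1] * 5          # labels

# train_test_split returns values in this exact order; unpacking them
# differently (e.g. X_train, y_train, X_test, y_test) silently swaps
# features and labels, so the vectorizer later receives integers and
# fails with "'numpy.int64' object has no attribute 'lower'"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```

(In recent scikit-learn, train_test_split lives in sklearn.model_selection rather than the old sklearn.cross_validation shown in the video.)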
Will Beasley (2 years ago)
Are you in the works of a course to use Tensorflow for NLP?
Data School (2 years ago)
I'm not, but I appreciate the suggestion and will keep it in mind for the future!
Hamzath Anees (2 years ago)
I couldn't find any relevant video on YouTube about doing text analysis with machine learning... wow, this was a great video and an eye-opener for machine learning. Thank you so much, Kevin!
Hamzath Anees (2 years ago)
Hi Kevin... I have several tokenized text files. I want to compare each of these text files with another text file and check the similarities or differences. How am I able to do that using scikit-learn or NLTK?
Data School (2 years ago)
You're very welcome! Glad it was helpful to you!
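For the similarity question above, one common approach (not from the video; the documents here are made up and would be replaced by the contents of the text files) is to vectorize everything together and compute pairwise cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# in practice, read each file into one string per document
docs = ['the cat sat on the mat',
        'a cat sat on a mat',
        'stock prices fell sharply today']

vect = TfidfVectorizer()
dtm = vect.fit_transform(docs)

# sim[i, j] near 1 means documents i and j share a lot of vocabulary;
# near 0 means almost no overlap
sim = cosine_similarity(dtm)
print(sim.round(2))
```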
Robert Schnabel (2 years ago)
Thanks for the great tutorial. However, several times I can't see the rightmost part of an instruction, so I can't type it, execute it, and follow the Python action. Very frustrating! For example, at 1:06:25: from sklearn.cross_validation import train_test_split... but then I can't see the rest of the instruction, so I can't follow the next several minutes of your tutorial in Python. Anyhow, I appreciate your tutorial... thank you!
Data School (2 years ago)
Sorry to hear! However, all of the code is available in the GitHub repository: https://github.com/justmarkham/pycon-2016-tutorial Hope that helps!
tiflosourtisfilms (2 years ago)
Hi Kevin! Thanks for that valuable presentation! Just a question... is the following the right way to apply k-fold cross-validation to text data? X_train_dtm = vect.fit_transform(X_train); scores = cross_val_score(<any classifier>, X_train_dtm, y, cv=5). I'm not totally sure whether X_train_dtm and y are correct in the cross_val_score call above. Thanks again!
Data School (2 years ago)
Glad you liked the tutorial! Regarding your question, I actually cover this in detail in my online course: http://www.dataschool.io/learn/
tiflosourtisfilms (2 years ago)
I just saw Andrew's comment... http://bit.ly/2mXdwZ9
Srikant Mishra (2 years ago)
I wish you had a text mining course in python :(
Data School (2 years ago)
I offer an online course called "Machine Learning with Text in Python" - check it out! http://www.dataschool.io/learn/
Rahul Sripuram (2 years ago)
Awesome, I really liked it. I will do a POC. Please suggest a few datasets other than spam/ham.
Data School (2 years ago)
There are lots of great datasets here: http://archive.ics.uci.edu/ml/ https://www.kaggle.com/datasets Hope that helps!
Atul Kumar (2 years ago)
Great tutorial, really enjoyed it and loved the way you explain things :) I have a little question: I'm working with product reviews, so using CountVectorizer I have created a binary dtm sparse matrix for each of my reviews and built a feature vector like <dtm-matrix, sentiment score, pos/neg tag>. I have approximately 200k+ reviews and have to store this for each of them. I have read about the "feature vector hashing" technique; how do I use it in Python, so that I can keep only a hash of the dtm matrix rather than the actual dtm matrix? I have no idea how to do that or how it actually works. It would be great if you could help or suggest a good tutorial. Thanks again for this wonderful tutorial!
Data School (2 years ago)
Thanks for your kind words! This section from the scikit-learn documentation on feature hashing might be helpful to you: http://scikit-learn.org/stable/modules/feature_extraction.html#feature-hashing Good luck!
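As a rough illustration of the feature hashing idea from the linked documentation (the token lists here are made up), scikit-learn's FeatureHasher maps each token straight to a column index via a hash function, so no vocabulary dictionary has to be kept in memory:

```python
from sklearn.feature_extraction import FeatureHasher

# each review is a list of tokens; the hasher assigns every token to one
# of n_features columns by hashing it, so the matrix width is fixed and
# no per-corpus vocabulary needs to be stored
hasher = FeatureHasher(n_features=2**10, input_type='string')
reviews = [['great', 'battery', 'life'], ['terrible', 'screen', 'terrible']]
X = hasher.transform(reviews)
print(X.shape)  # (2, 1024)
```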
Rockefeller (2 years ago)
Hi Kevin, I didn't quite catch why we should do the train-test split before vectorization. Could you help? Rockefeller from Cameroon
Data School (2 years ago)
It's a tricky concept! Basically, you want to simulate the real world, in which words will be seen during the testing phase that were not seen during the training phase. By splitting before vectorization, you accomplish this. Hope that helps!
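A minimal sketch of that split-then-vectorize workflow (the data is made up): the vocabulary is learned from the training set only, and test-set words never seen during training are simply ignored, just as they would be in the real world:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

X = ['win a free prize', 'lunch at noon',
     'claim your prize now', 'see you at lunch']
y = [1, 0, 1, 0]

# split first, so the test documents play no part in building the vocabulary
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)  # vocabulary from training only
X_test_dtm = vect.transform(X_test)        # unseen test words are dropped
```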
lalith dupathi (2 years ago)
I am an electronics student, but your vigor and teaching skill in ML have strongly inclined me toward it. Thank you for the great head start you've given.
Data School (2 years ago)
You're very welcome! Good luck in your machine learning education.
Naveen V (2 years ago)
You said 3 documents as the explanation for the 3x6 sparse matrix (around 35:10)... where did we give the 3 documents?
Naveen V (2 years ago)
Thank you
Data School (2 years ago)
The 3 documents are the 3 elements of the 'simple_train' list, which we passed to the vectorizer during the 'fit' step. Hope that helps!
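For reference, the shape comes about like this (the strings are reproduced from memory of the tutorial's notebook, so treat them as illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

# the 3 "documents" are simply the 3 strings in this list
simple_train = ['call you tonight', 'Call me a cab', 'Please call me... PLEASE!']

vect = CountVectorizer()
simple_train_dtm = vect.fit_transform(simple_train)
print(simple_train_dtm.shape)  # (3, 6): 3 documents, 6 unique terms
```

The default tokenizer lowercases and drops single-character tokens like "a", which is why only 6 terms remain: cab, call, me, please, tonight, you.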
Jui Guram (2 years ago)
I just love your videos. They are a great help, especially for a non-programmer like me trying to learn data science. They have helped me a lot in understanding all the concepts clearly in a short time, rather than just reading. Your videos are my go-to resource for my college work. I want to see some content on grid search and pipelines. Also, could you please share your email? I have some more doubts.
Data School (2 years ago)
Jui Guram (2 years ago)
The contact information part doesn't load on my system. Can you please post your email here?
Data School (2 years ago)
Thanks for your kind words! I'm glad they have been helpful to you! Regarding grid search, I cover it in video 8 of my scikit-learn series: https://www.youtube.com/watch?v=Gol_qOgRqfA&index=8&list=PL5-da3qGB5ICeMbQuqbbCOQWcS6OYBr5A Regarding pipeline, I cover it in modules 4 and 5 of my online course: http://www.dataschool.io/learn/ (You can also find my email address on this page.) Hope that helps!
royxss (2 years ago)
This channel is so helpful. It actually helped me a lot during my semesters. Thank you so much (y)
Data School (2 years ago)
Awesome! You're very welcome!
Lingobol :) (2 years ago)
Wonderful set of videos. I have started my ML journey with these videos. Now I'm going to go deeper and practise more and more. Thanks, Kevin, for the best possible head start. Your fan, a beginner data scientist.
Data School (2 years ago)
You're very welcome! That's excellent to hear... good luck!
Andrew Hintermeier (2 years ago)
Is it possible to use KFolds cross validation instead of test train split with this method?
Data School (2 years ago)
You're very welcome, and thanks for your kind words! :)
Andrew Hintermeier (2 years ago)
Thank you so much. Your series is honestly the best I've found for learning ML, it's been so helpful for me :D
Data School (2 years ago)
Here's a nice example that includes a pipeline: http://radimrehurek.com/data_science_python/
Andrew Hintermeier (2 years ago)
Thanks! I've never used pipelines myself, but I've seen them in some example code before; I'll have to look into it.
Data School (2 years ago)
Yes, you could use cross-validation instead. However, to do cross-validation properly, you have to also use a pipeline so that the vectorization takes place during cross-validation, rather than before it. Hope that helps!
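A minimal sketch of that pipeline approach (toy data): Pipeline re-fits the vectorizer inside every fold, so vectorization happens during cross-validation rather than before it, and no vocabulary from a held-out fold leaks into training:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

X = ['win a free prize now', 'lunch at noon today', 'claim your cash prize',
     'see you at the meeting', 'free cash now', 'meeting moved to noon']
y = [1, 0, 1, 0, 1, 0]

# the pipeline chains vectorization and classification into one estimator,
# so cross_val_score can be given the raw text directly
pipe = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(pipe, X, y, cv=3)
print(scores)
```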
Neha Gupta (2 years ago)
You are indeed a "GURU" who can train and share knowledge in the true sense. I'm a non-technical person, but I'm learning Python and scikit-learn for my research, and this video has taken my understanding to a higher level in just 3 hours... THANK YOU VERY MUCH Kevin!!! Can you please recommend some links where I can learn more about short-text sentiment analysis using machine learning in Python, especially the feature engineering aspect, like using POS tags or word embeddings as features? Thanks again...
Data School (2 years ago)
You are very welcome! Regarding recommended links, I think this notebook might be helpful to you: http://nbviewer.jupyter.org/github/skipgram/modern-nlp-in-python/blob/master/executable/Modern_NLP_in_Python.ipynb Good luck!
Vishwas Garg (2 years ago)
Great videos, man... I have become your fan.
Data School (2 years ago)
Thanks very much!
Giang Lam Tung (2 years ago)
Thank you for the resource. I have a question: in real life, the instantiation of the CountVectorizer class can fail if the volume of input text is BIG (e.g. when I want to encode a large number of text files). Has that ever happened to you?
Giang Lam Tung (2 years ago)
Thank you very much. You are correct; the problem happens during the fitting stage. I will try HashingVectorizer.
Data School (2 years ago)
I haven't had that happen, but if it did, it should happen during the 'fit' stage rather than during the instantiation of the class. In any case, HashingVectorizer is designed to deal with very large vocabularies: http://scikit-learn.org/stable/modules/feature_extraction.html#vectorizing-a-large-text-corpus-with-the-hashing-trick Hope that helps!
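A quick sketch of HashingVectorizer from the linked documentation (the documents are placeholders; alternate_sign=False, available in newer scikit-learn releases, keeps the hashed counts non-negative):

```python
from sklearn.feature_extraction.text import HashingVectorizer

# stateless: there is no fit step and no in-memory vocabulary, so it
# scales to corpora too large for CountVectorizer
vect = HashingVectorizer(n_features=2**18, alternate_sign=False)
docs = ['first large document', 'second large document']
X = vect.transform(docs)
print(X.shape)  # (2, 262144)
```

The trade-off is that hashed features cannot be mapped back to the original words, so you lose interpretability.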
Zank Bennett (2 years ago)
Great video. The problem with the audio is that the channels are the inverse of each other, so on mono devices where the L and R channels are summed together, they completely nullify the output signal. I don't know of a work-around except to listen using a 2-channel system
Data School (2 years ago)
Wow! Thanks for the explanation. How did you figure that out? I spent probably an hour with the A/V people at the conference as they tried to figure out the problem, and they never came up with any clear explanation.
Casey Lickfold (2 years ago)
Hi, thanks for the video! Do you know if it's possible to supply each article to CountVectorizer as a list of already-created features (for example noun phrases or verb-noun combinations), rather than as the raw article from which CountVectorizer would usually extract n-grams? Thanks!
Data School (2 years ago)
From the CountVectorizer documentation, it looks like you can define the vocabulary used by overriding the 'vocabulary' argument: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html However, it's not clear to me if that will work when using a vocabulary containing phrases rather than single words. Try it out, and let me know if you are able to get it to work!
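One way to try it (the vocabulary and sentence are made up): the ngram_range must be wide enough that the analyzer can actually produce the multi-word vocabulary entries, otherwise their counts stay zero:

```python
from sklearn.feature_extraction.text import CountVectorizer

# a fixed, pre-chosen feature list; with ngram_range=(1, 2) the analyzer
# generates both unigrams and bigrams, so the two-word entries can match
vocab = ['machine learning', 'neural network', 'python']
vect = CountVectorizer(vocabulary=vocab, ngram_range=(1, 2))

# no fit needed: the vocabulary is already fixed
X = vect.transform(['python makes machine learning approachable'])
print(X.toarray())
```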
Jagadeesh Gajula (2 years ago)
The best tutorial i have ever watched! Kevin you have mastered both the art of machine learning and teaching :)
Nureyn A (5 months ago)
This guy is gifted.
Data School (2 years ago)
Wow! What a kind compliment... thanks so much!
Tsering Paljor (2 years ago)
Hands down the best machine learning presentation I've seen thus far. Definitely looking forward to enrolling in your course once I'm done with your other free intro material. I think what sold me is how you've focused ~3 hours on a specific ML approach (supervised learning) to a common domain (text analysis). Other ML intros try to fit classification/regression/clustering all into 3 hours, which becomes too superficial a treatment. Anyway, bravo and keep up the great work!
Data School (2 years ago)
Wow, thank you so much! What you're describing was exactly my goal with the tutorial, so I'm glad it met your needs! For others who are interested, here's a link to my online course: http://www.dataschool.io/learn/
Aykut Çayır (2 years ago)
This video is excellent. Thanks for the video, but there is a problem with the mobile version: after the opening talk, I cannot hear the voice. Did you notice that before?
Data School (2 years ago)
Glad you liked it! Yes, that audio problem affects some devices and browsers, especially mobile devices. It's caused by the audio encoding of the original recording. I tried to fix it, but didn't come up with any solutions. I'm sorry!
Gaurav Mitra (2 years ago)
Another of your fantastic videos.
Data School (2 years ago)
Thanks for your kind words!
dualphase (2 years ago)
Video request: Random Forests, Gradient Boosting, etc. I see they are very popular on Kaggle. Also an introduction to Neural Networks/Deep Learning in Python. Thank you so much :)
Data School (2 years ago)
Thanks for the suggestions, I will consider them!
Ghanemi mehdi (2 years ago)
Hi, thanks for sharing, it's very useful! I have a little question: for the label encoding I use preprocessing.LabelEncoder(). Is that OK?
Data School (2 years ago)
Sure, LabelEncoder is useful as long as you are encoding labels (also known as "response values" or "target values") or binary categorical features. If you are using it to encode categorical features with more than 2 levels, you'll want to think carefully about whether it's an appropriate encoding strategy.
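For reference, LabelEncoder in a nutshell (the ham/spam labels are chosen to match the tutorial's dataset): it sorts the unique classes and maps each one to an integer:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
labels = ['ham', 'spam', 'ham', 'spam']
y = le.fit_transform(labels)  # classes are sorted: ham -> 0, spam -> 1
print(list(y))                # [0, 1, 0, 1]
print(list(le.classes_))      # ['ham', 'spam']
```

Because the integers are arbitrary, this is fine for response values, but applying it to a multi-level categorical feature implies a false ordering, which is the caution raised above.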
Dmitrii Beliakov (2 years ago)
Great material! I had been working on my own machine learning model before I learned about sklearn, and now I have discovered feature extraction from text, which I had already coded myself :) A happy and sad situation: happy to find a useful tool, sad that I spent so much time coding my own feature extractor :) Thanks for the video! Great job!
Data School (2 years ago)
You're very welcome! Yes, scikit-learn makes a lot of tasks easier, but you probably learned a lot by first building your own feature extraction tool :)
Carlos A. PT (2 years ago)
Thanks for sharing, Kevin. Apart from the obvious, I'm also curious about how you use Evernote for your daily lecture tasks; maybe that could be another great video to follow up on...
Data School (2 years ago)
My Evernote usage is pretty simple... just storing and organizing task lists and links! :)
GCM (2 years ago)
This is a great resource. Thank you for sharing
anakwesleyan (1 year ago)
A great resource indeed. What I find extremely helpful is that it explains the small but critical aspects of the library, e.g. CountVectorizer only takes 1D, what sparse data in scipy looks like, etc.
7justfun (2 years ago)
Data School, can you help point me to a demo or material for hierarchical clustering (agglomerative preferred)? Would count vectorization work for such a scenario before we apply k-NN or mean shift?
Data School (2 years ago)
You're very welcome!