
Google Summer of Code 2017: wrap up

  • Posted on: 24 September 2017
  • By: Guy

The Google Summer of Code 2017 is officially over and we just can't believe how much work @markus and @masha_ivenskaya have done over the last couple of months. Pattern3 is close to being ready for public release, and an amazing left-right bias classifier has been finished and published on GitHub: a valuable tool in the fight against fake news! Check out their blog posts below.

THANK YOU!

We want to thank the mentors Vincent Merckx and Amra Dorjbayar for their most valuable input. We will now start localizing the fake news algorithms to Dutch and are happy to report that they have agreed to lend their continued support. Also a big shout out to Google for putting all of this together! The Google Summer of Code is an amazing project and we can't recommend joining this initiative enough!

Finally, of course, a huuuuuuge thank you to our students. It was an honor for us to work with such talented people, and we wish Masha Ivenskaya and Markus Beuckelmann all the best. We are sure that they will be very successful in their future endeavors!

Porting Pattern to Python 3: DONE!

  • Posted on: 24 September 2017
  • By: markus

The final days of this year's Google Summer of Code have arrived and I am wrapping up my project. The last three months have been full of intense coding on the Pattern library, and I'm happy to say that all milestones described in my project proposal were knocked off within the official coding period.

An exhaustive list of all my commits to the clips/pattern repository can be found here. A very nice commit-based comparison is available here (full diff and full patch). The official commit graph can be seen here as soon as the changes have been merged into the master branch. The Travis CI builds for the different branches can be viewed here, together with the automated unit test coverage reports on coveralls.io. The last official GSoC commit is ec95f97 on the development branch.

Overview & Synopsis

The official project description reads:

The purpose of this GSoC project will be to modernize Pattern, a Python library for machine learning, natural language processing (NLP) and web mining. A substantial part of this undertaking is to port a majority of the code base to Python 3. This involves porting the individual modules and sub-modules piece by piece, where the whole process will be guided by unit tests. In the beginning, I will remove all tests from the pipeline that do not pass for Python 3 and take this pared-down code base as a starting point, porting parts of the code and putting the respective unit tests back in as I go along. Missing unit tests must be added before moving on. Since porting Python 2 code to Python 3 code is a standard problem for the Python community, there are many different tools available that can help in this regard. In addition to that, I'd like to extend this project to a bit of a Hausmeister project (housekeeping for Pattern), and optimize/modernize the code base in terms of execution speed, memory usage and documentation.

At the beginning of the project in May (launch time machine), Pattern was in a position where it wasn't actively maintained due to time constraints. Many unit tests were failing, some features were deprecated (e.g. in pattern.web), but most importantly, it lacked Python 3 support, which effectively made it unavailable for a large user base.

Now, three months later, we are at a point where all of Pattern's modules (i.e. pattern.text, pattern.vector, pattern.web, pattern.graph, pattern.db, pattern.metrics and the language modules pattern.en, pattern.de, pattern.nl, pattern.fr, pattern.es, pattern.it) except for pattern.server are fully ported to Python 3. This task included working on some other major milestones, such as removing the bundled PyWordNet in favor of NLTK's WordNet interface, transitioning to BeautifulSoup 4, removing sgmllib etc. However, the biggest challenge for a joint Python 2 / Python 3 code base is always to carefully deal with unicode handling in all parts of the library, which can sometimes be tedious. Whenever possible we attempted to write forward-compatible code, i.e. code that handles Python 3 as the default and Python 2 as the exception. This required some extra effort, but will hopefully make the code more readable in the long term and make it easier to drop Python 2 support entirely at some point. The next release will drop support for Python 2.6 and earlier in favor of Python 2.7 (the last supported Python 2 version) and Python 3.6+.

Furthermore, several general maintenance tasks have been performed, such as code cleanup, documentation, refactoring of duplicate code into a new pattern.helpers module, and general PEP 8 compliance.

Roadmap & Milestones

By far the largest chunk of work was dealing with the subtle differences between Python 2 and Python 3 to ensure that the code works identically regardless of the interpreter. Moving to a joint code base is a major undertaking, since there are many differences when it comes to strings (unicode vs. byte strings), generators and iterators, package import precedence, division and even fundamental data types such as dict and set. It is more or less hopeless to obtain a joint code base for Python 2.5 and earlier alongside Python 3, but fortunately it is possible to make it work for Python 2.6+ with some precautions, even without using six.
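As a minimal illustration (a sketch of the general idiom, not Pattern's actual code), forward-compatible string handling often boils down to treating Python 3's types as the default and aliasing them under Python 2:

from __future__ import absolute_import, division, print_function, unicode_literals

import sys

if sys.version_info[0] >= 3:
    # Python 3 is the default case: str is a unicode string, bytes is raw data.
    text_type, binary_type = str, bytes
else:
    # Python 2 is the exception: map to the equivalent legacy types.
    text_type, binary_type = unicode, str

def decode_utf8(value, encoding="utf-8"):
    """Return the given value as a unicode string."""
    if isinstance(value, binary_type):
        return value.decode(encoding)
    return text_type(value)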

However, the following points on the roadmap were important milestones that aren't strictly part of the porting itself:

  • The following bundled packages and (vendorized) libraries have been removed in favor of external dependencies: feedparser, BeautifulSoup, pdfminer, simplejson, docx, cherrypy, PyWordNet. This also involved adapting Pattern's code to changes introduced in these external libraries.
  • The removal of PyWordNet came with the need for a new interface for Pattern to wrap NLTK's WordNet interface. This was quite time-consuming, since there had of course been many incompatible changes over the years that needed to be dealt with (see the sketch after this list).
  • We set up Travis CI, a continuous integration platform, to keep track of passing or failing unit tests on different branches / Python versions. This will run automatically for every PR and report changes in unit test coverage.
  • libSVM and libLINEAR have been updated to the latest versions. The pre-compiled libraries have been removed for now because they were incompatible with the newer libsvm/liblinear versions.
  • The unit tests were refactored to work with pytest. There is more work that can be done, and it might be a good idea to leave unittest behind entirely at some point in the future.
  • In the last days of the official coding period we went through a big PEP 8 (Style Guide for Python Code) cleanup which aims for a more consistent code base. However, we decided not to aggressively enforce all PEP 8 guidelines.
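For reference, here is what querying NLTK's WordNet interface looks like; this is the API the new wrapper mentioned above builds on (a minimal sketch using NLTK's own names rather than the wrapper's API, and it assumes the WordNet corpus has been fetched via nltk.download('wordnet')):

from nltk.corpus import wordnet as wn

synsets = wn.synsets("bird")                        # all senses of "bird"
first = synsets[0]
print(first.definition())                           # gloss of the first sense
print(first.hypernyms())                            # more general concepts
print([lemma.name() for lemma in first.lemmas()])   # synonyms within this sense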

The new release will introduce the following external dependencies: future, mysqlclient, beautifulsoup4, lxml, feedparser, pdfminer (or pdfminer.six for Python 3), numpy, scipy, nltk, python-docx, cherrypy. For a more in-depth discussion of each of these items, check out my detailed progress reports (phase #1, phase #2).

Statistics

Let's play the numbers game: over the course of the last three months, I have pushed over 403 commits to four different branches of the clips/pattern repository. These changes affected 238 files, with a total of 11129 insertions and 53735 deletions (git diff --stat). My first commit was 1e17011 on the python3 branch; my last commit was ec95f97 on the development branch.

Here is what the contribution graph and the heat map on GitHub look like (figure: GSoC commits).

The following panel shows deletions (left) and insertions (right) as a function of time (figure: GSoC insertions and deletions).

This graph seems to reflect the roadmap pretty accurately. The majority of the deletions in the first period correspond to the removal of vendorized libraries. As the project progressed, more and more insertions took place and new or modified lines found their way into the code base.

Future Work & Next Steps

We will do some more testing and release the next major version of Pattern in autumn. The following items are predominantly independent of my particular project, but should be tackled before the next major release:

  • The only Python 3 related issue currently remaining is a bug in pattern.vector that affects the information gain tree classifier IGTree. It's hard to debug, but it looks to me like it has something to do with order differences when iterating over dict objects (a minimal illustration follows this list). In any case, someone needs to take a closer look. This issue will be tracked on GitHub in the near future.
  • Major parts of the pattern.server module have been ported, but since there are no unit tests available for this module, it's hard to test it in a systematic way apart from running the examples. I believe it's currently not fully functional on Python 3.
  • The pattern.web module contains code to access some popular web APIs. Some of these APIs have been deprecated or changed in ways that require refactoring. Some APIs have moved to paid subscription models without free quotas.
  • The current unit test coverage seems to be around 70%. This is okay for now, but there certainly is room for improvement.
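To illustrate the suspected failure mode in the first item (a minimal sketch, not the actual IGTree code): Python 2 dicts and Python 3 dicts before 3.7 make no ordering guarantee, so any logic tied to iteration order can behave differently across interpreters.

features = {"outlook": 3, "humidity": 1, "wind": 2}

for key in features:
    pass    # iteration order here is an implementation detail

for key in sorted(features):
    pass    # deterministic on every interpreter: "humidity", "outlook", "wind"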

Resources

The following resources proved to be invaluable during the porting, especially when it comes to the more subtle differences between Python 2 and Python 3:

Acknowledgments

So this is it: the end of the official GSoC coding period, and time to sign off for a couple of days. Thank you to the Google Open Source Team for bringing this project to life, and of course special thanks to my mentors Tom and Guy for their valuable feedback and guidance throughout the entire project.

Altogether, this was a great experience and I will remain an active contributor for the foreseeable future. Happy coding!

Text-based fake news detection: DONE!

In the final phase of the Google Summer of Code, Masha fine-tuned the classifier for sensationalism detection and added a left-right bias classifier.

The system is a bit too resource-heavy to run as an on-site demo, but all of the code and data needed to run the classifier locally are available from GitHub. The Bias Classifier directory on GitHub contains the trained model, the Python code to train a model and to classify new data, the data used for training, and a sample of test data (a held-out set) with true labels and the classifier scores. In the training data, label ‘0’ corresponds to the ‘least-biased’ class, ‘1’ corresponds to ‘left’, and ‘2’ corresponds to ‘right’.

This classifier takes as input a 2-column CSV file, where the first column corresponds to the headlines and the second to the article texts.

Usage for the Python code:

python bias_classifier.py -args

The arguments are:
-t, --trainset: Path to training data (if you are training a model)
-m, --model: Path to model (if you are using a pre-trained model)
-d, --dump: Dump trained model? Default is False
-v, --verbose: Default is non-verbose
-c, --classify: Path to new inputs to classify
-s, --save: Path to the output file (default is 'output.csv')
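For example (the file names here are placeholders, not files shipped with the repository), training a model and dumping it to disk, then classifying new articles with it, might look like:

python bias_classifier.py --trainset training_data.csv --dump
python bias_classifier.py --model trained_model.pkl --classify new_articles.csv --save output.csv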

Output:

The output is a number between -1 and 1, where -1 is most left-biased, 1 is most right-biased, and 0 is least-biased.

Data:

The articles come from the crawled data: a hand-picked subset of sites that were labeled as "right", "right-center", "left", "left-center", and "least-biased" by mediabiasfactcheck.com. I used one subset of sources for the training data and a different subset of sources for the testing data in order to avoid overfitting. I also trained a separate model on all of the sources I had available; since it is trained on more data, it may perform better. This model is also available in the GitHub directory under the name “trained_model_all_sources.pkl”.

It is worth noting that articles from 'right-center' and 'left-center' sources often exhibit only a subtle bias, if any at all. This is because the bias of these sources is often not evident on a per-article basis, but only on a per-source basis. It may manifest itself, for example, through story selection rather than through loaded language. For this reason I did not include articles from 'right-center' and 'left-center' sources in the training data, but I did use them for evaluation.

Architecture:

The classifier has a two-tiered architecture: first the unbiased articles are filtered out, and then a second model distinguishes between right and left bias. Both models are logistic regressions based on lexical n-gram features, implemented with scikit-learn.

Features:

Both models rely on bag-of-words n-gram features (unigrams, bigrams, trigrams).
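A minimal scikit-learn sketch of this two-tiered setup (the vectorizer settings, variable names and toy data are assumptions for illustration, not the released configuration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def ngram_model():
    # bag-of-words unigrams, bigrams and trigrams feeding a logistic regression
    return make_pipeline(CountVectorizer(ngram_range=(1, 3)), LogisticRegression())

# Toy stand-ins for the crawled articles described in the post.
texts  = ["article one ...", "article two ...", "article three ...", "article four ..."]
biased = [1, 1, 0, 0]   # tier 1 target: biased vs. least-biased
right  = [1, 0]         # tier 2 target for the biased articles: right vs. left

tier1 = ngram_model().fit(texts, biased)                                   # bias filter
tier2 = ngram_model().fit([t for t, b in zip(texts, biased) if b], right)  # left vs. right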

Results:

As noted above, the output is a number between -1 and 1, where -1 is most left-biased, 1 is most right-biased, and 0 is least-biased. For evaluation purposes, scores below 0 are considered “left”, scores above 0 are considered “right”, and 0 is considered “least-biased”.

As previously mentioned, along with the 3 classes that are present in the training data, there are two additional in-between classes that I used for evaluation only.

In order to be counted as correct for recall, right-center can be predicted as either 'right' or 'least-biased', and left-center can be predicted as 'left' or 'least-biased'. In addition, when calculating the precision of the 'least-biased' class, the 'least-biased', 'right-center' and 'left-center' true classes all count as correct.
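These rules can be written down compactly; a sketch (the function and label names are mine, not taken from the released code):

def predicted_label(score):
    # score in [-1, 1]: below 0 counts as left, above 0 as right, exactly 0 as least-biased
    return "left" if score < 0 else "right" if score > 0 else "least-biased"

# Predictions counted as correct for each true class when computing recall.
ACCEPTED = {
    "right":        {"right"},
    "left":         {"left"},
    "least-biased": {"least-biased"},
    "right-center": {"right", "least-biased"},
    "left-center":  {"left", "least-biased"},
}

def is_correct(true_class, score):
    return predicted_label(score) in ACCEPTED[true_class]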

Class         Precision   Recall
Right         45%         82%
Left          70%         71%
Right-center  N/A         70%
Left-center   N/A         60%
Least-biased  96%         33%

Note:

Unlike the Sensationalism classifier, this classifier relies on lexical features, which may be specific to the current political climate. This means that the training data might "expire", and as a result the accuracy could decrease over time.

Phase II Completed!

  • Posted on: 3 August 2017
  • By: Guy

Phase II saw our students picking up even more steam. @markus has almost completed his Python 3 port of Pattern, and @masha_ivenskaya has developed two tools that will aid in the detection of fake news, with more to come. One of the tools is available as a demo on this site. Let us know what you think!

On to the final stage!

Text-based fake news detection: Phase II

During Phase 2 of Google Summer of Code, I continued my data-aggregation efforts, developed the Source Checker tool, and trained a model that detects sensationalist news articles.

1. Data Aggregation

Throughout Phase 2, I crawled over 200 domains daily, and continued researching news domains and adding them to my crawler. As of today, I have aggregated over 30k news articles. As I plan to use these articles for classification models, below is the breakdown by each potential class:

Sensationalism Classifier:

Sensationalist: 13k

Objective: 8.5k

Bias Classifier:

Right: 12k

Right-center: 1k

Least-biased: 3.5k

Left-center: 2k

Left: 4.5k

2. Source Checker

This is a tool that was requested by the GSoC mentors, @vincent_merckx and @amra_dorjbayar. It takes as input a snippet of text, presumably a news article or part of one. It returns a graph showing what types of domains publish the text (or parts of the text).

Example Graph:

  • The circles correspond to returned domains.

  • Circle size corresponds to amount of overlap between the input snippet and the domain.

  • Circle border color corresponds to bias: blue = left, red = right, green = neutral, grey = unknown.

  • Circle fill corresponds to unreliability: black circles are classified by one of the lists as either fake, unreliable, clickbait, questionable, or conspiracy. The blacker the circle, the more unreliable the domain.

  • Edges that connect circles correspond to overlap of statements - the thicker the edge, the bigger the overlap.

After GSoC ends, we will localize this tool for Dutch articles as well.

Architecture of the tool:

The text snippet is broken down into n-grams using the Pattern n-gram module. N-grams that consist primarily of stop words or named entities are discarded. A sample of the remaining n-grams is reconstructed into the original strings and run through the Google API as exact phrases (in quotation marks). The returned domains are then rated by the number of queries that returned that domain (more than 6 out of 10 = "high overlap", 3 to 6 = "some overlap", less than 3 = "minimal overlap") and matched against our database. The graph is rendered using the Pattern Graph module.
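A sketch of the rating step (the search function and all names here are illustrative stand-ins; the real tool queries the Google API and uses Pattern's modules):

from collections import Counter

def rate_domains(phrases, search):
    """phrases: ~10 exact-phrase strings sampled from the snippet;
    search: a stand-in function returning the domains found for one phrase."""
    hits = Counter()
    for phrase in phrases:
        for domain in set(search('"%s"' % phrase)):
            hits[domain] += 1
    # Thresholds as described above, out of 10 queries.
    return {domain: ("high overlap" if n > 6 else
                     "some overlap" if n >= 3 else
                     "minimal overlap")
            for domain, n in hits.items()}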

3. Sensationalism Classifier

I used the aforementioned crawled data to train a model that classifies a news article as either sensationalist or not. This model currently achieves an F1-score of 92% (obtained through 5-fold cross-validation).

It takes as input a 2-column CSV file, where the first column corresponds to the headlines and the second to the article texts. The output file contains a third column with the label: 1 if the input is categorized as sensationalist, 0 if not.

The classifier is an SVM, and it uses the following features (see the sketch after this list):

  • POS tags (unigrams and bigrams)

  • Punctuation

  • Sentence length

  • Number of capitalized tokens (normalized by length of text)

  • Number of words that overlap with the Pattern Profanity word list (normalized by length of text)

  • Polarity and subjectivity scores (obtained through the Pattern Sentiment module)
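A sketch of how a few of these features can be computed with Pattern (the released model's exact feature engineering may differ, and reading "capitalized" as all-caps tokens is an assumption):

from pattern.en import tag, sentiment
from pattern.en.wordlist import PROFANITY

def features(text):
    tokens = [w for w, pos in tag(text)]             # tokenization + POS tags
    n = float(len(tokens)) or 1.0                    # avoid division by zero
    polarity, subjectivity = sentiment(text)         # Pattern's sentiment module
    return {
        "capitalized":  sum(w.isupper() for w in tokens) / n,
        "profanity":    sum(w.lower() in PROFANITY for w in tokens) / n,
        "exclamations": text.count("!") / n,         # one possible punctuation cue
        "polarity":     polarity,
        "subjectivity": subjectivity,
    }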
