How Do We Keep Data Fresh?

[This is an old blog written on Thursday, December 14, 2017]

We live in a big-data era, where biological data, and the knowledge extracted from it, grow rapidly. Tools such as Metascape sit on top of various bioinformatics knowledge bases; the quality of analysis results depends heavily on the freshness of the underlying data content.

DAVID had not been updated for over ten years; as a result, Wadi et al. estimated that the 2,601 publications relying on it within the year 2015 alone captured only ~20% of the annotations that should have been captured [1]!  Given all the effort and cost that went into generating our precious data sets, losing 80% of the insights to an outdated tool is a serious issue.  Although DAVID finally updated its database after Wadi's publication, there has been no activity since; 1.5 years have gone by and counting …

At Metascape, one of our main goals is to keep our data sources sushi-fresh.  Metascape's update engine used to run once a month.  However, given the large number of data sources Metascape integrates (Figure 1) and the more than ten organisms it covers, the automated pipeline broke a few times: format changes in some sources, species-specific data missing from NCBI, data sources switching to a more protected access mode for funding reasons (OMIM), etc.  The volunteers at Metascape could no longer keep up with these changes on a monthly basis; as a result, our updates lagged this year.

Figure 1.  To bring a rich set of features to users, Metascape integrates many data sources for over ten model organisms.  Previously, when one data source broke, the update workflow halted.  In the future, the existing snapshot will be used for a problematic data source, so that the update can resume for the remaining sources and produce a release.

We have focused on polishing the data update workflow over the past few months.  Two measures are now in place:

First, when the pipeline fails to fetch a data source, the copy from the previous snapshot is used, so that computation can continue unaffected.  We are notified and take action afterwards (sometimes the fix can take a while if the issue resides on the data provider's side).  Nevertheless, we will still be able to produce a release.
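The fallback idea can be sketched roughly as follows. This is a minimal illustration, not Metascape's actual implementation; the function and directory names (`fetch_with_fallback`, `snapshot_dir`, `work_dir`) are hypothetical.

```python
import shutil
from pathlib import Path

def fetch_with_fallback(source_name, fetch_fn, snapshot_dir, work_dir):
    """Try to fetch a data source; on failure, fall back to the copy
    kept from the previous release snapshot so the update pipeline
    can keep running.  Returns (path_to_data, used_fallback)."""
    work_dir = Path(work_dir)
    work_dir.mkdir(parents=True, exist_ok=True)
    target = work_dir / source_name
    try:
        fetch_fn(target)  # download/parse the live source into target
        return target, False
    except Exception:
        # Fetch failed (format change, access restriction, outage, ...):
        # reuse the file from the previous snapshot instead of halting.
        snapshot = Path(snapshot_dir) / source_name
        shutil.copy(snapshot, target)
        return target, True
```

A caller would loop this over every integrated source, logging which ones fell back to their snapshots so maintainers can follow up, while the release build proceeds with the remaining sources.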

Second, the pipeline automatically generates a graphical report at the end, comparing the data in the new release to the previous one.  An example report is shown here.  This is critical for catching issues that do not cause the code to crash, e.g., all locus_tag entries for a certain species missing from a new NCBI release.  We review the report before triggering the official deployment of the new knowledge base.  The snapshot below (Figure 2) was compiled for A. thaliana.  It is clear that there are some additions to UniProt identifiers, highlighted in green, and some GO annotations, highlighted in orange, were removed, probably due to clean-up efforts by curators.  As these changes are minor, we can assume there is no obvious issue in the new release.  Outstanding green/orange bars would deserve our attention; in that case, the release would be held off pending a careful examination.

Figure 2.  Comparison plots are automatically generated by Metascape’s update engine; we can easily review where the changes are and the magnitude of the change between two releases.  Problems can be caught and corrected before they propagate into the release.
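The numbers behind such a comparison plot amount to a set difference between the identifiers in two releases. A minimal sketch, assuming each release is available as a collection of identifiers per annotation category (the function name `diff_release` is hypothetical):

```python
def diff_release(old_ids, new_ids):
    """Compare annotation identifiers between two releases.
    Returns the counts of added entries (green bars), removed
    entries (orange bars), and entries present in both."""
    old_set, new_set = set(old_ids), set(new_ids)
    return {
        "added": len(new_set - old_set),      # new in this release
        "removed": len(old_set - new_set),    # dropped since last release
        "unchanged": len(old_set & new_set),  # carried over
    }
```

Running this per species and per data source, and plotting the added/removed counts side by side, yields a report where a small green or orange sliver is expected churn, while an outsized bar flags a release for manual review.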

We believe that with these two new mechanisms in place, Metascape will continue to provide fresh data, so that our users can always extract the maximum value from their gene lists.

Metascape has been cited over 70 times as of this blog post [link].  Thank you for using Metascape and helping spread the word.  The best reward for the Metascape volunteers is seeing Metascape help users.

Reference

1. Wadi L, et al. Impact of outdated gene annotations on pathway enrichment analysis. Nat Methods. 2016 Aug 30;13(9):705-6. [link]
