Sunday, October 16, 2016

New paper: "XMetDB: an open access database for xenobiotic metabolism"

Back in 2013 at the OpenTox conference in Mainz I spoke with Ola, Patrik, and Nina. They were working on a database for CYP metabolism, XMetDB, which I joined on the spot. The database offers Open Data, has an Application Programming Interface (API), is Open Source, and captures a good amount of experimental detail, such as the specific enzyme involved and the actual atom mapping of the reaction. A few weeks ago, the paper describing the database was published in the Journal of Cheminformatics (doi:10.1186/s13321-016-0161-3). It's not perfect, but we hope it is a seed for more to follow.

The data, it turns out, is really hard to come by. While I was adding data to the database for best-selling drugs, it was hard to find publications where a human experiment was done (many studies use rat microsome experiments). Not only does that make it hard to identify the specific CYP enzyme, the enzyme is also not the human homologue. BTW, since the background of this paper is to create a knowledge base for computational prediction of CYP metabolism, ideally we would even have a specific protein sequence, including any missense SNPs affecting the 3D structure of the enzyme.

However, even for the (at least then) best-selling drug aripiprazole, literature was really hard to find! There is a lot of literature that just copies knowledge from other papers, and those other "papers" may in fact be the information sheet you get when you buy the actual drug. Alternatively, personal communications and conference posters get cited as if they were primary literature. All of which only stresses the importance of a database like this.

At this moment the project has stalled. None of the currently involved groups has funding for continued development. I guess collaborations are welcome! ChEMBL 22 now has metabolism data for compounds, but I have not explored yet whether it has all the details of the transformations needed for XMetDB. At the very least, it may serve as a source of primary literature references.

Spjuth, O., Rydberg, P., Willighagen, E. L., Evelo, C. T., Jeliazkova, N., Sep. 2016. XMetDB: an open access database for xenobiotic metabolism. Journal of Cheminformatics 8 (1). doi:10.1186/s13321-016-0161-3

Friday, September 30, 2016

NanoSafety Cluster presentation: Open Data & NSC Activities

Two weeks ago (already!), the NanoSafety Cluster (NSC) organized two meetings. First, on Wednesday afternoon, there was the NSC half-yearly meeting. Second, on Thursday and Friday, in beautiful Visby on Gotland, the 2nd NanoSafety Forum for Young Scientists took place. I ran an experiment there, which I will blog about later. Here, please find the slides of the presentation about Open Data I gave on Wednesday:

Oh, and I also presented a few slides about the Working Group 4 activities:

Monday, September 12, 2016

Metabolite identifier mapping databases

Caffeine metabolites. Source: Wikimedia.
If you want to map experimental data to (digital) biological pathways, you need to know which measured datum matches which metabolite in the pathways (that also applies to transcriptomics and proteomics data, of course). However, if a pathway does not use identifiers from a single database, or your analysis platform outputs data with CAS registry numbers, then you need something like identifier mapping. In Maastricht we use BridgeDb for that, and I develop the metabolite identifier mapping databases that provide the mapping data BridgeDb uses to perform the mapping.
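To make this concrete, below is a minimal Java sketch of such a mapping with the BridgeDb API. The file name and the caffeine identifier are illustrative placeholders only (point them at whichever release you downloaded); the system codes "Ch" (HMDB) and "Ce" (ChEBI) are the same ones listed further down this post.

    import java.util.Set;

    import org.bridgedb.BridgeDb;
    import org.bridgedb.DataSource;
    import org.bridgedb.IDMapper;
    import org.bridgedb.Xref;
    import org.bridgedb.bio.DataSourceTxt;

    public class MapMetabolite {
        public static void main(String[] args) throws Exception {
            DataSourceTxt.init(); // register the default BridgeDb data sources (system codes)
            Class.forName("org.bridgedb.rdb.IDMapperRdb"); // driver for the relational .bridge files

            // placeholder file name: use the release downloaded from Figshare
            IDMapper mapper = BridgeDb.connect("idmapper-pgdb:metabolites.bridge");

            // map caffeine's HMDB identifier ("Ch") to ChEBI identifiers ("Ce")
            Xref caffeine = new Xref("HMDB01847", DataSource.getBySystemCode("Ch"));
            Set<Xref> mappings = mapper.mapID(caffeine, DataSource.getBySystemCode("Ce"));
            for (Xref mapped : mappings) {
                System.out.println(caffeine.getId() + " -> " + mapped.getId());
            }
        }
    }

Swap in other system codes from the list below to get, for example, Wikidata or ChemSpider identifiers instead.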

However, identifier mapping for metabolites is non-trivial, and I won't go into details in this post. Instead, let me focus on the mapping databases that I have been releasing under the CCZero waiver on Figshare, which combine several data sources. When I took over the building of these databases, they used data from the Human Metabolome Database (doi:10.1093/nar/gks1065). They still do. However, I have added ChEBI (doi:10.1093/nar/gkv1031) and Wikidata as additional data sources. The latter I need to support, among others, KNApSAcK identifiers (doi:10.1093/pcp/pct176).

So, this weekend I released a new mapping database, based on HMDB 3.6, ChEBI 142, and Wikidata data from September 7. Here are the total numbers of identifiers and the changes compared to the June release for the supported identifier databases:

Number of ids in Kd (KEGG Drug): 2013 (unchanged)
Number of ids in Cks (KNApSAcK): 4357 (unchanged)
Number of ids in Ik (InChIKey): 52337 (unchanged)
Number of ids in Ch (HMDB): 41520 (6 added, 0 removed -> overall changed +0.0%)
Number of ids in Wd (Wikidata): 22648 (195 added, 10 removed -> overall changed +0.8%)
Number of ids in Cpc (PubChem-compound): 30699 (154 added, 36 removed -> overall changed +0.4%)
Number of ids in Lm (LIPID MAPS): 2611 (unchanged)
Number of ids in Ce (ChEBI): 131580 (4 added, 6 removed -> overall changed -0.0%)
Number of ids in Ck (KEGG Compound): 15968 (unchanged)
Number of ids in Cs (Chemspider): 24948 (10 added, 2 removed -> overall changed +0.0%)
Number of ids in Wi (Wikipedia): 4906 (unchanged)

An overview of recent releases (I'm trying to keep a monthly schedule) can be found here, and the version I released this weekend has doi:10.6084/m9.figshare.3817386.v1.
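If you want to check which data sources a downloaded release actually supports, the BridgeDb capabilities interface will tell you. Again a minimal sketch, with the file name as a placeholder:

    import org.bridgedb.BridgeDb;
    import org.bridgedb.DataSource;
    import org.bridgedb.IDMapper;
    import org.bridgedb.IDMapperCapabilities;
    import org.bridgedb.bio.DataSourceTxt;

    public class ListMappingSources {
        public static void main(String[] args) throws Exception {
            DataSourceTxt.init(); // register the default BridgeDb data sources
            Class.forName("org.bridgedb.rdb.IDMapperRdb"); // driver for the .bridge files
            IDMapper mapper = BridgeDb.connect("idmapper-pgdb:metabolites.bridge"); // placeholder file name

            // print the system code and full name of every supported source database
            IDMapperCapabilities capabilities = mapper.getCapabilities();
            for (DataSource source : capabilities.getSupportedSrcDataSources()) {
                System.out.println(source.getSystemCode() + " (" + source.getFullName() + ")");
            }
        }
    }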

Friday, September 09, 2016

Doing science has just gotten even harder

Annotation of licenses of life science databases in Wikidata.
Those following me on Twitter may have seen the discussion this afternoon. A weird law case went to the European court, which sent out its ruling today. And it's scary, very scary. The details are still unfolding, and several media outlets have already written about it. It's worth checking out for everyone doing research in Europe, particularly if you are a chem- or bioinformatician. I may be wrong in my interpretation, and I hope I am; I hope even more to be proven wrong soon, but I fear it will not be soon at all. The initial reporting I saw was in a Dutch news outlet, but Sven Kochmann pointed me to this press release from the Court of Justice of the European Union. Worth reading. I will need to write more about this soon, to work out the details of why this may turn out disastrous for European research. For now, I will quote this part of the press release:
    Furthermore, when hyperlinks are posted for profit, it may be expected that the person who posted such a link should carry out the checks necessary to ensure that the work concerned is not illegally published.
I stress this is only part of the full ruling, because the verdict rests on a combination of arguments. What this argument does, however, is turn around an important principle: you have to prove you are not violating copyright.

Now, realize that in many European Commission funded projects, with multiple partners, sharing IP is non-trivial, and establishing ownership even less so (just think about why traditional publishers require you to reassign copyright to them! BTW, never do that!), etc, etc. A lot of funding actually goes to small and medium-sized companies, which really have no appetite for more complex law, nor for more administrative work.

A second realization is that few scientists understand or want to understand copyright law. The result is hundreds of scholarly databases which do not define who owns the data, nor under what conditions you are allowed to reuse it, share it, reshare it, or modify it. Yet scientists do all of that. So, not only do these databases often not specify the copyright/license/waiver (CLW) information, they certainly don't really tell you how they populated their database, e.g. how much they copied from other websites, under the assumption that knowledge is free. Sadly, database content is not. Often you don't even need to wonder about it, as it is evident, or even proudly stated, that they used data from another database. Did they ask permission for that? Can you easily look that up? Because of the above quoted argument, you are now only allowed to link to that database once you have figured out whether the data is there legally. And believe me, that is not cheap.

Combine that, and you have this recipe for disaster.

A community that knows these issues very well is the open source community. Therefore, you will find a project like Debian to be really picky about licensing: if it is not specified, they won't have it. This is what is going to happen to data too. In fact, this is also basically why eNanoMapper is quite conservative: if it does not get clear CLW information from the rightful owner (people are more relaxed about sharing data from others than their own data!), it is not going to be included in the output.

IANAL, but I don't have to be to see that this will only complicate matters, and the last thing that will do is help the Open Data efforts of the European Commission.

I have yet to figure out what this means for my Linked Data work. Some databases do great work and have very clear CLW information; think of ChEMBL and WikiPathways, and Open PHACTS also did a wonderful job in tracking and propagating this CLW information. On the other hand, Andra Waagmeester did an analysis of the database license information of life sciences databases; note the number of 'free content' and 'proprietary' databases (top right in the figure), which are the two categories of databases where the CLW info is not really clear. How large the problem of illegal content in those databases is (e.g. text mined from literature, or screenscraped from another database), who knows, but I can tell you it is not insignificant, unless you think it's 99%.

At the same time, of course, the solution is very simple. Only use and link to websites with clear CLW information and good practices. But that rules out many of the current databases, as well as supplementary information, where, even more than in databases, the rules of copyright are ignored by scientists.

And, honestly, I cannot help but wonder what all the publishers will now do with all the articles published in the past 20 years with hyperlinks in them. I hope for them that none link to illegal material. Worse, because of the above quoted argument they will have to make sure that none(!) of those hyperlinks point to material with unclear copyright.

I'll end this post with a related Dutch law (well, at least related for the sake of this post). If you buy second-hand goods, and the price is less than something like 1/3rd of the new price, you must ask for the original receipt of the first purchase. Because if it is not provided, you are legally assumed to realize the goods are probably stolen. How will that translate to this situation? If the linked scientific database costs less than 1/3rd of the commercial alternative, you may assume it is illegal? Fortunately, that argumentation does not apply here.

Problem is, there are enough "smart" people that misuse weird laws and rulings like this to make money. Think of the patent trolls, or about this:
What can possibly go wrong?

Friday, September 02, 2016

Elsevier launches DataSearch

Elsevier (RELX Group) has seen a lot of publicity this week, again. After the patent on peer review earlier this week, today I learned from Max Kemman about the DataSearch website. This is great! Finding data (think FAIR, doi:10.1038/sdata.2016.18) is hard. Elixir Europe aims at fixing this and is working on open standards to have data explain itself, e.g. the adoption of schema.org. But an entry point that finds information is still very much welcome. Like the search interface for eNanoMapper that indexes information from multiple data sources (well, two at this moment, including the server).

For scientific information this doesn't exist yet; we have to make do with tools like Google Scholar and Google Images. Both are pretty brilliant and allow you to filter on things, besides your regular keyword search. Of course, what we really need is an ontology-backed search, which Google seamlessly integrates under the hood, e.g. using the aforementioned schema.org.

Now, particularly for my teaching roles, I am frequently looking for material for slides, to support my message. There, Google Images is great, as it allows me to filter for images that I am allowed to use, reuse, and even modify (e.g. to highlight part of the image). Now, I know that some jurisdictions (like the USA) have more elaborate rules about fair use in education, but those rights are too often challenged, and money, DRM, etc. limit them. Let alone the scary, proposed European legislation (follow Julia Reda!).

So, I very much welcome this new effort! Search engines have a better track record than catalogs, like the Open Knowledge Foundation's DataHub. Of course, some repositories are getting so large, like Figshare, to a large extent through very active population by publishers like PLOS, that they may soon become a single point of entry.

Anyway, Elsevier is looking for peer review, which I give them here for free (like I gave them free peer reviews until they crossed an internal, mental line; see The Cost of Knowledge). I can only hope that I am not violating their patent. Oh, and please don't look at the HTML of the website: you would certainly be violating their Terms of Use. They really need to talk to their lawyers; they're making a total mess of it.