A Step toward Data Interoperability? Linked Data, Databases, and Avoiding the Security Headache

May 26th, 2009 by Ken Fischer

Tim Berners-Lee's concept of linked data is clearly a way to make data more usable, whether that data is public or held within a large enterprise. Linked data promises a future in which related data is more interoperable and discoverable, and it opens the door for innovation.

But how do we take large existing data stores and apply linked data principles to achieve these benefits? We currently have massive data stores with complex security regimes, depended upon by many legacy applications. Making them available as Linked Data is a huge challenge, especially if we were to recreate these stores in XML syntax using RDF/RDFa or even simpler XML schemas. This is coupled with the fact that many of the benefits of the reconstituted data have not yet been invented, so a clear ROI argument cannot be made. Of course, they haven't been invented yet because, while many can agree the data would be more usable, those uses must be discovered by fiddling with the data in linked form and seeing what emerges. Since the linked form doesn't yet exist, we have the classic chicken-and-egg problem.

Perhaps there is a step we can take toward linked data without making large changes to the existing data stores in government and industry. Let's review the principles of Linked Data first (as paraphrased from Wikipedia for clarity):

  • Use URIs (Uniform Resource Identifiers) to identify things that you expose to the Web as resources.
  • Use HTTP URIs so that people can locate and look up (dereference) these things.
  • Provide useful information about the resource when its URI is dereferenced.
  • Include links to other, related URIs in the exposed data as a means of improving information discovery on the Web.
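The four principles above can be sketched as a simple resource description. This is only an illustration: the URIs, field names, and JSON-like shape below are hypothetical, not a real registry format.

```python
# A minimal, illustrative sketch of the four Linked Data principles.
# Every URI and field name here is a hypothetical example.

building = {
    # Principles 1 & 2: identify the thing with an HTTP URI,
    # so people (and software) can look it up.
    "@id": "http://registry.example.gov/buildings/fb-0451",
    # Principle 3: useful information returned when the URI is dereferenced.
    "name": "Example Federal Building",
    "city": "Washington, DC",
    # Principle 4: links to other, related URIs to aid discovery.
    "links": [
        "http://registry.example.gov/agencies/gsa",
        "http://registry.example.gov/buildings/fb-0452",
    ],
}

# Dereferencing the @id over HTTP would return a document like the one above.
print(building["@id"])
```

Note that nothing here dictates XML or RDF; the same structure could be served as RDFa, JSON, or plain HTML with links.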

The striking thing about these principles is that they don't mention XML, RDFa, or any particular syntax; they focus instead on linking data to definitions. So it would seem a hybrid solution between the linked data concept and existing databases is possible. We could add URIs as fields in existing databases for important elements and define a central location where we track information about each element. For instance, the US government has many federal buildings used by multiple agencies, so presumably many agencies have databases which refer to federal buildings. Why not establish a central location to define those buildings and assign each a URI? (A URI, by the way, is essentially a universal identifier for a real-world object. Think of it as a web page for each building, though the page would more likely contain data links than nice pictures. Some people also use URNs, or Uniform Resource Names, in an effort to make identifiers more human-readable, which is nice too.)
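As a minimal sketch of this hybrid approach, an existing agency database could gain a single new URI column, while a central registry assigns each building its canonical identifier. The table names, column names, and URIs below are hypothetical, chosen only for illustration:

```python
import sqlite3

# Stand-in for an existing agency database with a table referencing buildings.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE leases (id INTEGER PRIMARY KEY, building_name TEXT)")
db.execute("INSERT INTO leases (building_name) VALUES ('Main St Federal Bldg')")

# Step 1: add a URI column to the existing table -- no other schema changes.
db.execute("ALTER TABLE leases ADD COLUMN building_uri TEXT")

# Step 2: a central registry (here just another table) assigns each
# building one canonical HTTP URI.
db.execute("CREATE TABLE registry (uri TEXT PRIMARY KEY, label TEXT)")
uri = "http://registry.example.gov/buildings/fb-0451"
db.execute("INSERT INTO registry VALUES (?, ?)", (uri, "Main St Federal Bldg"))

# Step 3: link the agency's row to the canonical URI.
db.execute(
    "UPDATE leases SET building_uri = ? WHERE building_name = ?",
    (uri, "Main St Federal Bldg"),
)

row = db.execute("SELECT building_uri FROM leases").fetchone()
print(row[0])
```

The existing data and its security regime are untouched; the only change is one extra column pointing at the shared identifier.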

So each federal building would have a URI/URN, and we could of course put more information about each building in a centrally defined schema, but that would start to be real work and create instant security issues. So why not initially just have each URI contain reciprocal links to the databases which also contain that identifier? The links would carry brief, non-security-breaking descriptions of what type of data is stored in each linked database. This would remove the need to re-securitize a lot of information in order to make it cross-department/cross-agency available. And here is the other key to success for this type of solution: don't require the back links to the databases to expose data unless they already do so. If we start requiring data to be exposed at this step, it opens up the security Pandora's box. We need to avoid imposing a new security regime for centralized data, because it is a stumbling block which would create delays and costs. And if people do not clearly see the benefits of this step, then in most cases it would simply die in committee.
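Under this "links only, no data" rule, dereferencing a building URI might return something like the sketch below. Every database name, description, and contact address here is a hypothetical example; the real registry would decide its own format.

```python
# Hypothetical registry: which databases use each canonical building URI,
# with brief, non-sensitive descriptions -- but none of the data itself.
DATABASES_WITH_URI = {
    "http://registry.example.gov/buildings/fb-0451": [
        {"database": "GSA lease tracking",
         "description": "Lease and occupancy records for federal buildings",
         "contact": "gsa-data@example.gov"},
        {"database": "DHS facility security",
         "description": "Physical security assessments (access restricted)",
         "contact": "dhs-data@example.gov"},
    ],
}

def dereference(uri):
    """Return reciprocal links to the databases that use this URI.
    Each link describes what kind of data the database holds and whom
    to ask for access; no underlying records are exposed."""
    return {"@id": uri, "linked_databases": DATABASES_WITH_URI.get(uri, [])}

doc = dereference("http://registry.example.gov/buildings/fb-0451")
for link in doc["linked_databases"]:
    print(link["database"], "-", link["description"])
```

A user who discovers a relevant database this way would then request access through that database's existing security process, rather than through any new centralized regime.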

So that is fine, you say. We have URIs for important data elements and for the databases which contain those elements, but no data is exposed, so where is the benefit? I think this stripped-down version of linked data would have four definite benefits:

  • Reference.  The URIs could serve as reference documents for finding where similar information is stored. Users could then apply for security permissions on an as-needed basis when they need to link to other databases.
  • Innovation.  Users, who would now have a more complete map of available data, could begin to suggest more uses for linking the data.
  • Discoverability.  Search engines (internal or external, depending on the security decided upon for the URIs) could make existing databases more discoverable, because the engines could find important data elements in those databases. Search engines use links to determine relevance to searches and are often key to researching problems.
  • Interoperability.  The process of assigning URIs will begin to expose problems in data interoperability due to different definitions in different databases. The URI map would serve as a survey of issues in creating truly interoperable data.

So now the readers of this blog are in at least two camps:

  • Those who feel this is a half measure and would be a distraction from advocating for more completely linked data.
  • Those who are still not clear on the benefits of bothering to start the process of linking data at all.

I am hoping there is a third camp which sees this as a doable step for large enterprises such as the US government, and as the first step toward data which is more linked, and therefore more usable for both public and internal purposes, and eventually interoperable.

Let me know which camp you are in!

Ken Fischer

Ken Fischer is the Chief Innovation Officer (CIO) for ClickforHelp.com Inc, a web-based software and social media strategy company. At CFH, Ken has led over 100 software and web projects, including creating online communities, tools to measure the effectiveness of public service announcements, web-based messaging, and online collaboration tools with unique search capabilities. Ken has also led software development projects in a wide diversity of industries, from finding new ways to better deliver reliability-centered maintenance, to onsite visual inspection, to creating online communities. Ken is also the founder of Gov20Labs and Director of Gov 2.0 Events for Potomac Forum. He has been involved in the Gov 2.0 movement to create continuing education workshops, as a sponsor, and as a solutions provider for over three years. Ken is especially interested in using technology to make Government more effective, efficient, and accountable through transparency, participation, and collaboration. He actively blogs on Open Government and creates training programs for the planning and implementation of Open Government. (He does not speak on behalf of any federal, state or local governments.) Ken also blogs about the commercial side of web 2.0 at web20blog.

1 comment

  1. woddiscovery says:

    Ken,

    Good write-up with very interesting thoughts. Not sure which camp I’m in though ;)

    Maybe in the camp #4: using linked data in the simplest possible way, that is, assigning HTTP URIs to all entities of interest, which makes them available and discoverable on the Web.

    Then, leave the RDFisation and interlinking process up to tools (such as surveyed in RDB2RDF [1]) and/or piggyback CMS and social media sites, where users contribute the interlinking (manually) anyway, such as with Drupal [2].

    Cheers,
    Michael

    [1] http://www.w3.org/2005/Incubator/rdb2rdf/RDB2RDF_SurveyReport.pdf
    [2] http://www.buytaert.net/rdfa-and-drupal