Are we already practicing web 3.0?

I just finished reading an article about web 3.0 on ReadWrite Web, where web 3.0 is defined as being “about feeding you the information that you want, when you want it (in the proper context).”

Web 2.0 made it much easier to share and search information, but it also led to information overload for many users. Following the above definition, web 3.0 will be about personalization and recommendation. While it would be great if this became technically possible soon, I wonder whether many of us are not already practicing web 3.0 as knowledge brokers in our respective niches.

I, for example, play the role of a filter and aggregator of information within CAPRi: after searching, receiving and digesting all the information that might be relevant to our network (i.e. a lot of futzing), I filter it down to the bits and pieces I feel are important to share. People seem to appreciate this work, but for the system to function, I have to have credibility and people need to trust my judgment.

Will semantic tools and their algorithms ever be trusted the way we trust our friends, colleagues or a blog we love to read? Will there be a continuing need for a person’s involvement? What does this mean for the outreach of research and for others who create new information?


Measuring impact on the web

Like many development research organizations, IFPRI staff often debate how best to measure the impact of our work. Unlike some of our partner CG Centers, IFPRI focuses on policy-level interventions, which are rather different from producing new crop varieties for the purpose of increasing agricultural yields, household incomes, etc. Thus, it could be claimed (and in fact has been claimed by some) that IFPRI is primarily in the business of producing “intellectual varieties.” Defining what we produce, however, is quite different from knowing how to measure impact, which underscores the importance of choosing which indicators are worth paying attention to. Below I list a few potential indicators for measuring impact on the web, along with my own impressions of their overall value and utility.

  • Page views/visitors: Many webmasters and other managers of web statistics reports have been questioning the value of these for years, yet they continue to be reported year after year. Criticisms include the lack of proper contextualization (from what constitutes a ‘hit’ or ‘visitor session’ to how we compare our statistics with others’) as well as the lack of any clear correlation between these easily measurable indicators and others designed to measure impact.
  • Downloads: It seems to follow that the more a publication is downloaded, the more people are reading it, thereby expanding its sphere of influence. In practice, however, the same person might download a publication multiple times, and even more important than the ‘how many’ question is the ‘by whom’ question. As with page views/visitor sessions, Google Analytics’ mapping features can be highly useful in obtaining this type of data, helping ensure that research outputs are reaching audiences in the countries they’re intended to benefit.
  • Citations: Once again, context is everything. Many research organizations only report on their peer-reviewed publications, and it follows that they are only interested in citations in peer-reviewed journals, books, etc. Several online databases offer citation tracking, including Google Scholar, ISI Web of Science, CrossRef’s Forward Linking Program, and CiteSeer, among others. One might be tempted to add these citation figures up to obtain a clearer picture of overall impact, but because several of these services count the same citations, the sum would overstate the total. Thus, benchmarking your organization’s citations vis-à-vis your counterparts’ would probably prove more effective in monitoring the impact of your research publications.
  • Mentions in the media/blogs: Many research organizations have been monitoring this for years, but services like RSS feeds and automated alerts from Google News have made it much easier than in the past. The same applies to monitoring blogs via advanced search operators in Google and Google Blog Search as well as Technorati.
  • RSS feeds: Once again, many question the value of measuring ‘hits’ as an indicator of impact. Instead, they argue, we’d be much better served by paying attention to the number of feed subscribers, though this figure can also fluctuate heavily over time. For our purposes, Feedburner fits the bill rather nicely, even sticking only to their free, basic service.
  • Search engine rankings: Obviously, there’s one search engine that need not be named (at least, not for the umpteenth time in this post!) that most organizations pay attention to above all others, but many fail to take into account differences across its country-specific platforms. My own advice is to pay attention to your ranking in the search results for several keywords your organization may be targeting, and to look into advertising via the Google Grants program, which offers ad words free for non-profits.
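To make the downloads point concrete: the gap between raw download counts and unique downloaders can be sketched in a few lines of Python. The log lines, IP addresses and file name below are invented for illustration; a real analysis would parse your server’s actual access-log format (and unique IPs are themselves only a rough proxy for unique people).

```python
# Hypothetical sketch: a few raw web-server log entries for one PDF.
# Every value here is made up for illustration.
log_lines = [
    "66.249.1.10 GET /pubs/brief42.pdf",
    "66.249.1.10 GET /pubs/brief42.pdf",   # same visitor, repeat download
    "192.0.2.55 GET /pubs/brief42.pdf",
    "198.51.100.7 GET /pubs/brief42.pdf",
]

# Raw count: every request, including repeats by the same visitor.
raw_downloads = len(log_lines)

# A rough 'by whom' view: distinct client IPs (the first field of each line).
unique_visitors = {line.split()[0] for line in log_lines}

print(raw_downloads, len(unique_visitors))  # 4 requests, 3 distinct visitors
```

The same two-number comparison (raw vs. de-duplicated) is what a tool like Google Analytics is doing for you behind the scenes when it distinguishes page views from visitors.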
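The double-counting problem with citations can be illustrated the same way. Assuming each tracking service can give you a list of citing works identified by DOI (the service groupings and DOIs below are invented for illustration), a set union gives the de-duplicated figure that naive summing would overstate:

```python
# Hypothetical sketch: citing-article DOIs for one publication, as reported
# by three different citation-tracking services. All DOIs are made up.
google_scholar = {"10.1000/aaa", "10.1000/bbb", "10.1000/ccc"}
web_of_science = {"10.1000/bbb", "10.1000/ddd"}
crossref       = {"10.1000/aaa", "10.1000/ddd", "10.1000/eee"}

# Naively summing per-service counts double-counts overlapping citations.
naive_total = len(google_scholar) + len(web_of_science) + len(crossref)

# The union of citing DOIs counts each citation once.
unique_citations = google_scholar | web_of_science | crossref

print(naive_total, len(unique_citations))  # 8 vs. 5
```

In practice the hard part is matching citations that different services identify inconsistently (missing DOIs, variant titles), which is one more reason benchmarking against counterparts on a single service is often more reliable than summing across services.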
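Monitoring mentions via RSS can also be automated with very little code. As a minimal sketch using only the Python standard library, the snippet below filters the item titles of an alert-style RSS 2.0 feed for an organization’s name; the feed content here is an invented stand-in, and a real script would fetch the live feed URL instead:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: a tiny RSS 2.0 snippet standing in for a news-alert
# feed. The item titles are invented for illustration.
rss = """<rss version="2.0"><channel>
<title>News Alert</title>
<item><title>IFPRI report on food prices</title></item>
<item><title>Unrelated market news</title></item>
</channel></rss>"""

root = ET.fromstring(rss)

# Keep only item/channel titles that mention the organization.
mentions = [t.text for t in root.iter("title") if "IFPRI" in (t.text or "")]

print(mentions)  # ['IFPRI report on food prices']
```

Pointing a script like this at a saved Google News alert feed (or a Technorati search feed) and running it on a schedule is one low-effort way to turn ad-hoc media monitoring into a running count.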